Sample records for frequently applied method

  1. Calibration of decadal ensemble predictions

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe

    2017-04-01

    Decadal climate predictions are of great socio-economic interest because their time horizon matches the planning horizons of many political and economic decisions. Owing to the uncertainties of weather and climate forecasts (e.g., initial-condition uncertainty), they are issued in a probabilistic way. One issue frequently observed with probabilistic forecasts is that they tend not to be reliable, i.e., the forecast probabilities are not consistent with the relative frequencies of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted for decadal time scales and their characteristic problems, such as the climate trend and lead-time-dependent bias. We therefore propose a method to re-calibrate decadal ensemble predictions that takes the above-mentioned characteristics into account. Finally, this method is applied to, and validated on, decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction).
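    To make the lead-time-dependent bias correction mentioned above concrete, here is a minimal sketch assuming a simple linear-in-time drift model; it is not the MiKlip re-calibration method itself, and the arrays `fcst`, `obs`, and `years` are hypothetical stand-ins.

    ```python
    # Minimal sketch: remove a linear-in-time mean bias separately per lead year.
    # NOT the MiKlip re-calibration; data layout and drift model are assumptions.
    import numpy as np

    def lead_time_bias_correction(fcst, obs, years):
        """fcst[i, l]: ensemble-mean forecast started in years[i] at lead year l."""
        corrected = np.empty_like(fcst)
        for l in range(fcst.shape[1]):
            bias = fcst[:, l] - obs[:, l]
            b, a = np.polyfit(years, bias, 1)   # bias = a + b*year (trend-aware)
            corrected[:, l] = fcst[:, l] - (a + b * years)
        return corrected

    rng = np.random.default_rng(0)
    years = np.arange(1961, 2001)
    obs = 0.02 * (years - years[0])[:, None] + rng.normal(0, 0.1, (40, 5))
    fcst = obs + 0.1 * np.arange(1, 6) + 0.005 * (years - years[0])[:, None]
    print(np.abs(lead_time_bias_correction(fcst, obs, years) - obs).mean())
    ```

    In practice the correction would be estimated in cross-validation to avoid overfitting the hindcast period; the sketch only illustrates treating each lead time separately.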

  2. [Use of the Grade of Membership method to identify consumption patterns and eating behaviors among adolescents in Rio de Janeiro, Brazil].

    PubMed

    Cardoso, Letícia de Oliveira; Alves, Luciana Correia; Castro, Inês Rugani Ribeiro de; Leite, Iuri da Costa; Machado, Carla Jorge

    2011-02-01

    To identify food patterns and eating behaviors among adolescents and to describe the prevalence rates, this study applied the Grade of Membership method to data from a survey on health risk factors among adolescent students in Rio de Janeiro, Brazil (N = 1,632). The four profiles generated were: "A" (12.1%) more frequent consumption of all foods labeled as healthy, less frequent consumption of unhealthy foods, and healthy eating behaviors; "B" (45.8%) breakfast and three meals a day as a habit, less frequent consumption of fruits and vegetables and of five markers of unhealthy diet; "C" (22.8%) lack of healthy eating behaviors, less frequent consumption of vegetables, fruit, milk, cold cuts, cookies, and soft drinks; and "D" (19.3%) more frequent consumption of all unhealthy foods and less frequent consumption of fruits and vegetables. The results indicate the need for interventions to promote healthy eating in this age group.

  3. Bayesian-information-gap decision theory with an application to CO2 sequestration

    DOE PAGES

    O'Malley, D.; Vesselinov, V. V.

    2015-09-04

    Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
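    As a toy illustration of the combination described (Bayesian sampling for parametric uncertainty, an info-gap horizon for model inadequacy), here is a hedged sketch; the leakage model, numbers, and decision rule are invented, not those of the paper.

    ```python
    # Toy sketch of Bayesian sampling + info-gap robustness; all models and
    # thresholds are hypothetical illustrations, not the paper's implementation.
    import numpy as np

    rng = np.random.default_rng(1)
    posterior_k = rng.lognormal(0.0, 0.5, 2000)      # posterior parameter samples

    def predicted_leakage(k, model_error=0.0):
        return 0.01 * k + model_error                # hypothetical leakage model

    def robustness(threshold, step=0.001):
        """Largest model-inadequacy horizon h such that, at the worst case
        within h, the posterior probability of exceeding the leakage
        threshold stays below 5%."""
        h = 0.0
        while np.mean(predicted_leakage(posterior_k, h) > threshold) < 0.05:
            h += step
        return h

    print("robustness horizon:", robustness(threshold=0.05))
    ```

    A larger robustness horizon for one candidate site than for another would favor that site, which is the qualitative shape of the site-selection decision the abstract describes.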

  4. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, representing biological network data with a probability model better reflects its authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs, whose probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed; it combines the analysis of circuit topology with related physical properties of voltage to evaluate the probability isomorphism between probability subgraphs, and thereby avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: Experimental results on Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method efficiently discovers the frequent probability subgraphs. The subgraphs discovered in our study contain all probability motifs reported in experiments published in other related papers. Conclusions: The circuit-simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277
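    For context, the possible-world baseline that the circuit-simulation approach is designed to avoid can be stated compactly: under independent edge probabilities, the expected support of a pattern is the sum, over embeddings, of the product of edge probabilities. A naive sketch (hypothetical edge probabilities; automorphic images are counted separately):

    ```python
    # Naive possible-world expected support in an uncertain graph -- the costly
    # baseline the paper's method avoids. Edge probabilities are hypothetical.
    from itertools import permutations

    edges = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.5, ("c", "d"): 0.7}

    def expected_support(pattern, nodes):
        """Sum over injective node mappings of the probability that every
        pattern edge exists (automorphic embeddings counted separately)."""
        pat_nodes = sorted({u for e in pattern for u in e})
        total = 0.0
        for image in permutations(nodes, len(pat_nodes)):
            m = dict(zip(pat_nodes, image))
            p = 1.0
            for u, v in pattern:
                p *= edges.get((m[u], m[v]), edges.get((m[v], m[u]), 0.0))
            total += p
        return total

    triangle = [("x", "y"), ("y", "z"), ("x", "z")]
    print(expected_support(triangle, ["a", "b", "c", "d"]))
    ```

    The factorial growth of the embedding enumeration is what makes approaches that prune non-isomorphic candidates early, such as the voltage-mismatch filter described above, attractive.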

  5. Some Inspection Methods for Quality Control and In-service Inspection of GLARE

    NASA Astrophysics Data System (ADS)

    Sinke, J.

    2003-07-01

    Quality control of materials and structures is an important issue, and this holds for GLARE as well. During the manufacturing stage, the processes and materials should be monitored and checked frequently in order to obtain a qualified product. During operation of the aircraft, frequent monitoring and inspections are performed to maintain the quality at a prescribed level; to this end, in-service inspection methods are applied and, when necessary, repair activities are conducted. For quality control of GLARE panels and components during manufacturing, the C-scan method proves to be an effective tool. For in-service inspection, the eddy current method is one of the suitable options. This paper presents a brief overview of both methods and their application to GLARE products.

  6. Teacher Portfolios: An Effective Way to Assess Teacher Performance and Enhance Learning

    ERIC Educational Resources Information Center

    Gelfer, Jeff; O'Hara, Katie; Krasch, Delilah; Nguyen, Neal

    2015-01-01

    Often administrators seek alternative methods of evaluating staff while staff are frequently searching for methods to represent the breadth and quality of their efforts. One method proving to be effective for gathering and organising products of teacher activity is the portfolio. This article will discuss the procedures that teachers can apply in…

  7. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks.

    PubMed

    He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei

    2014-01-01

    Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, representing biological network data with a probability model better reflects its authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has a relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs, whose probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed; it combines the analysis of circuit topology with related physical properties of voltage to evaluate the probability isomorphism between probability subgraphs, and thereby avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experimental results on Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method efficiently discovers the frequent probability subgraphs. The subgraphs discovered in our study contain all probability motifs reported in experiments published in other related papers. The circuit-simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies.

  8. Management of Dentin Hypersensitivity by National Dental Practice-Based Research Network practitioners: results from a questionnaire administered prior to initiation of a clinical study on this topic.

    PubMed

    Kopycka-Kedzierawski, Dorota T; Meyerowitz, Cyril; Litaker, Mark S; Chonowski, Sidney; Heft, Marc W; Gordan, Valeria V; Yardic, Robin L; Madden, Theresa E; Reyes, Stephanie C; Gilbert, Gregg H

    2017-01-13

    Dentin hypersensitivity (DH) is a common problem encountered in clinical practice. The purpose of this study was to identify the management approaches for DH among United States dentists. One hundred eighty-five National Dental Practice-Based Research Network clinicians completed a questionnaire regarding their preferred methods to diagnose and manage DH in the practice setting, and their beliefs about DH predisposing factors. Almost all dentists (99%) reported using more than one method to diagnose DH. Most frequently, they reported using spontaneous patient reports coupled with excluding other causes of oral pain by direct clinical examination (48%), followed by applying an air blast (26%), applying cold water (12%), and obtaining patient reports after the dentist's query (6%). In managing DH, the most frequent first choice was desensitizing, over-the-counter (OTC) potassium nitrate toothpaste (48%), followed by fluorides (38%) and glutaraldehyde/HEMA (3%). A total of 86% of respondents reported using a combination of products when treating DH, most frequently fluoride varnish and desensitizing OTC potassium nitrate toothpaste (70%). The most frequent predisposing factor leading to DH, as reported by the practitioners, was recessed gingiva (66%), followed by abrasion, erosion, and abfraction/attrition lesions (59%) and bruxism (32%). The majority of network practitioners use multiple methods to diagnose and manage DH. Desensitizing OTC potassium nitrate toothpaste and fluoride formulations are the most widely used products to manage DH in the dental practice setting.

  9. The Effectiveness of Andragogically Oriented Teaching Method to Improve the Male Students' Achievement of Teaching Practice

    ERIC Educational Resources Information Center

    Rismiyanto; Saleh, Mursid; Mujiyanto, Januarius; Warsono

    2018-01-01

    Students at universities are still frequently found to have low independence in learning. Moreover, lecturers still tend to treat students as if they were young learners; in other words, they still use pedagogically oriented teaching methods (POTM), although they claim to have applied methods of teaching…

  10. Systematic text condensation: a strategy for qualitative analysis.

    PubMed

    Malterud, Kirsti

    2012-12-01

    To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies. Giorgi's psychological phenomenological analysis is the point of departure and inspiration for systematic text condensation. The basic elements of Giorgi's method and the elaboration of these in systematic text condensation are presented, followed by a detailed description of procedures for analysis according to systematic text condensation. Finally, similarities and differences compared with other frequently applied methods for qualitative analysis are identified, as the foundation of a discussion of strengths and limitations of systematic text condensation. Systematic text condensation is a descriptive and explorative method for thematic cross-case analysis of different types of qualitative data, such as interview studies, observational studies, and analysis of written texts. The method represents a pragmatic approach, although inspired by phenomenological ideas, and various theoretical frameworks can be applied. The procedure consists of the following steps: 1) total impression - from chaos to themes; 2) identifying and sorting meaning units - from themes to codes; 3) condensation - from code to meaning; 4) synthesizing - from condensation to descriptions and concepts. Similarities and differences comparing systematic text condensation with other frequently applied qualitative methods regarding thematic analysis, theoretical methodological framework, analysis procedures, and taxonomy are discussed. Systematic text condensation is a strategy for analysis developed from traditions shared by most of the methods for analysis of qualitative data. The method offers the novice researcher a process of intersubjectivity, reflexivity, and feasibility, while maintaining a responsible level of methodological rigour.

  11. Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data

    PubMed Central

    Király, András; Abonyi, János

    2014-01-01

    During the last decade, various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields of this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper, we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers. PMID:24616651
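    The core bit-table trick is easy to state: with a Boolean transaction matrix, the support of an itemset is one vectorized AND-and-count. A minimal sketch (hypothetical data; not the authors' MATLAB implementation):

    ```python
    # Bit-table support counting: itemset support as a vectorized AND-and-mean.
    import numpy as np

    # Rows = transactions (or samples), columns = items (or expressed genes).
    B = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [1, 1, 0, 0]], dtype=bool)

    def support(itemset):
        """Fraction of rows containing every item (columns given by index)."""
        return B[:, list(itemset)].all(axis=1).mean()

    print(support({0, 1}))     # items 0 and 1 co-occur in 3 of 4 rows -> 0.75
    print(support({1, 2, 3}))  # -> 0.25
    ```

    A bicluster in this picture is a pair (row set, column set) whose submatrix is all ones, which is why frequent closed itemsets and biclusters can be mined with the same matrix machinery.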

  12. Focus Group Study Exploring Factors Related to Frequent Sickness Absence

    PubMed Central

    van Rhenen, Willem

    2016-01-01

    Introduction Research investigating frequent sickness absence (3 or more episodes per year) is scarce and qualitative research from the perspective of frequent absentees themselves is lacking. The aim of the current study is to explore awareness, determinants of and solutions to frequent sickness absence from the perspective of frequent absentees themselves. Methods We performed a qualitative study of 3 focus group discussions involving a total of 15 frequent absentees. Focus group discussions were audiotaped and transcribed verbatim. Results were analyzed with the Graneheim method using the Job Demands Resources (JD–R) model as theoretical framework. Results Many participants were not aware of their frequent sickness absence and the risk of future long-term sickness absence. As determinants, participants mentioned job demands, job resources, home demands, poor health, chronic illness, unhealthy lifestyles, and diminished feeling of responsibility to attend work in cases of low job resources. Managing these factors and improving communication (skills) were regarded as solutions to reduce frequent sickness absence. Conclusions The JD–R model provided a framework for determinants of and solutions to frequent sickness absence. Additional determinants were poor health, chronic illness, unhealthy lifestyles, and diminished feeling of responsibility to attend work in cases of low job resources. Frequent sickness absence should be regarded as a signal that something is wrong. Managers, supervisors, and occupational health care providers should advise and support frequent absentees to accommodate job demands, increase both job and personal resources, and improve health rather than express disapproval of frequent sickness absence and apply pressure regarding work attendance. PMID:26872050

  13. Numerical Simulations Using the Immersed Boundary Technique

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Balaras, Elias

    1997-01-01

    The immersed-boundary method can be used to simulate flows around complex geometries within a Cartesian grid. This method has been used quite extensively in low-Reynolds-number flows and is now being applied to turbulent flows more frequently. The technique will be discussed, and three applications of increasing complexity will be presented to illustrate the potential and limitations of the method and some directions for future work.
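    The central idea can be written down in one step of a common variant, direct forcing (an illustrative formulation, not necessarily the specific scheme presented here):

    ```latex
    % Direct-forcing immersed boundary: add a body force only in cells on the
    % immersed surface so the velocity there matches the body velocity V.
    u^{n+1} = u^{n} + \Delta t\left(\mathrm{RHS}^{n} + f^{n}\right),
    \qquad
    f^{n} =
    \begin{cases}
      \dfrac{V^{n+1} - u^{n}}{\Delta t} - \mathrm{RHS}^{n}, & \text{on boundary cells},\\
      0, & \text{elsewhere},
    \end{cases}
    ```

    so that u^{n+1} = V^{n+1} on the boundary cells without requiring a body-fitted grid.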

  14. Experience with the selection method in pine stands in the southern United States, with implications for future application

    Treesearch

    James M. Guldin

    2011-01-01

    The selection method applied in shade-intolerant pine stands in the southern United States has been shown to be an effective method of uneven-aged silviculture, but it is becoming less frequently practiced for a variety of reasons. Economically, the high value of standing timber puts fully stocked uneven-aged pine stands at risk of liquidation if the timberland is sold...

  15. METHOD DEVELOPMENT FOR ALACHLOR ESA AND OTHER ACETANILIDE HERBICIDE DEGRADATION PRODUCTS

    EPA Science Inventory

    Introduction: Acetanilide herbicides are frequently applied in the U.S. on crops (corn, soybeans, popcorn, etc.) to control broadleaf and annual weeds. The acetanilide and acetamide herbicides currently registered for use in the U.S. are alachlor, acetochlor, metolachlor, propa...

  16. Automatically Detect and Track Multiple Fish Swimming in Shallow Water with Frequent Occlusion

    PubMed Central

    Qian, Zhi-Ming; Cheng, Xi En; Chen, Yan Qiu

    2014-01-01

    Due to its universality, swarm behavior in nature attracts much attention from scientists in many fields. Fish schools are examples of biological communities that demonstrate swarm behavior, and the detection and tracking of fish in a school are of great significance for quantitative research on swarm behavior. However, unlike other biological communities, fish schools pose three problems for detection and tracking: variable appearance, complex motion, and frequent occlusion. To solve these problems, we propose an effective method of fish detection and tracking. In this method, first, the fish head region is located through extremum detection and ellipse fitting; second, Kalman filtering and feature matching are used to track targets in complex motion; finally, using the feature information obtained during detection and tracking, the tracking problems caused by frequent occlusion are handled through trajectory linking. We apply this method to track swimming fish schools of different densities. The experimental results show that the proposed method is both accurate and reliable. PMID:25207811
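    The tracking core named here, a Kalman filter that keeps predicting through occlusions, can be sketched minimally; the constant-velocity model and noise levels below are assumptions, not the authors' tuned values.

    ```python
    # Constant-velocity Kalman filter sketch; z is the detected head position,
    # or None when the fish is occluded (predict-only, as in trajectory linking).
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    Q, R = 0.01 * np.eye(4), 1.0 * np.eye(2)   # process / measurement noise

    def kalman_step(x, P, z):
        x, P = F @ x, F @ P @ F.T + Q          # predict next head position
        if z is not None:                      # correct only when detected
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
    for z in [np.array([1.1, 0.4]), None, np.array([3.0, 1.6])]:  # None = occluded
        x, P = kalman_step(x, P, z)
        print(np.round(x, 2))
    ```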

  17. ANALYTICAL METHOD DEVELOPMENT FOR ALACHLOR ESA AND OTHER ACETANILIDE PESTICIDE TRANSFORMATION PRODUCTS

    EPA Science Inventory

    Acetanilide herbicides are frequently applied in the U.S. on crops (corn, soybeans, popcorn, etc.) to control broadleaf and annual weeds. The acetanilide herbicides currently registered for use in the U.S. are: alachlor, acetochlor, metolachlor, propachlor, dimethenamid and fluf...

  18. EMRlog method for computer security for electronic medical records with logic and data mining.

    PubMed

    Martínez Monterrubio, Sergio Mauricio; Frausto Solis, Juan; Monroy Borja, Raúl

    2015-01-01

    The proper functioning of a hospital computer system is arduous work for managers and staff. Moreover, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method, named EMRlog, for computer security systems in hospitals. EMRlog focuses on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information, such as databases, applications, and medical records. First, a syntactic verification step is applied using predicate logic. Then, data mining techniques are used to detect which security policies have actually been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition, these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed, achieving a safer computer system.

  19. EMRlog Method for Computer Security for Electronic Medical Records with Logic and Data Mining

    PubMed Central

    Frausto Solis, Juan; Monroy Borja, Raúl

    2015-01-01

    The proper functioning of a hospital computer system is arduous work for managers and staff. Moreover, inconsistent policies are frequent and can produce enormous problems, such as stolen information, frequent failures, and loss of all or part of the hospital data. This paper presents a new method, named EMRlog, for computer security systems in hospitals. EMRlog focuses on two kinds of security policies: directive and implemented policies. Security policies are applied to computer systems that handle huge amounts of information, such as databases, applications, and medical records. First, a syntactic verification step is applied using predicate logic. Then, data mining techniques are used to detect which security policies have actually been implemented by the computer systems staff. Subsequently, consistency is verified in both kinds of policies; in addition, these subsets are contrasted and validated. This is performed by an automatic theorem prover. Thus, many kinds of vulnerabilities can be removed, achieving a safer computer system. PMID:26495300

  20. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  1. Neglected waterborne parasitic protozoa and their detection in water.

    PubMed

    Plutzer, Judit; Karanis, Panagiotis

    2016-09-15

    Outbreak incidents raise the question of whether the less frequent aetiological agents of outbreaks are really less frequent in water. Alternatively, waterborne transmission could be relevant, but the lack of attention and of rapid, sensitive methods to recover and detect the exogenous stages in water may keep them under-recognized. High-quality information on the prevalence and detection of less frequent waterborne protozoa, such as Cyclospora cayetanensis, Toxoplasma gondii, Isospora belli, Balantidium coli, Blastocystis hominis, Entamoeba histolytica and free-living amoebae (FLA), is not available. The present paper discusses the detection tools applied for water surveillance of the neglected waterborne protozoa mentioned above and provides future perspectives.

  2. A new method for motion capture of the scapula using an optoelectronic tracking device: a feasibility study.

    PubMed

    Šenk, Miroslav; Chèze, Laurence

    2010-06-01

    Optoelectronic tracking systems are rarely used in 3D studies examining shoulder movements that include the scapula. Among the reasons is the considerable slippage of skin markers with respect to the scapula. Methods using electromagnetic tracking devices are validated and frequently applied. The aim of this study was therefore to develop a new method for in vivo optoelectronic scapular motion capture that meets the accepted accuracy of the validated methods. Eleven arm positions in three anatomical planes were examined in five subjects in static mode. The method was based on local optimisation, with recalculation procedures using a set of five scapular surface markers. The scapular rotations derived from the recalculation-based method yielded RMS errors comparable with those of the frequently used electromagnetic scapular methods (RMS up to 12.6° for 150° arm elevation). The results indicate that the present method can be used, with careful consideration, for 3D kinematic studies examining different shoulder movements.

  3. Using Quenching to Detect Corrosion on Sculptural Metalwork: A Real-World Application of Fluorescence Spectroscopy

    ERIC Educational Resources Information Center

    Hensen, Cory; Clare, Tami Lasseter; Barbera, Jack

    2018-01-01

    Fluorescence spectroscopy experiments are frequently taught as part of upper-division teaching laboratories. To expose undergraduate students to an applied fluorescence technique, a corrosion detection method using quenching was adapted from authentic research for an instrumental analysis laboratory. In the experiment, students acquire…
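    Quenching data of this kind are conventionally analyzed with the Stern-Volmer relation (a standard result; the article's exact treatment is not shown in the abstract):

    ```latex
    % Stern-Volmer: the ratio of unquenched to quenched intensity grows
    % linearly with quencher concentration [Q].
    \frac{F_0}{F} = 1 + K_{\mathrm{SV}}\,[Q]
    ```

    Here F_0 is the fluorescence intensity without quencher, F the intensity at quencher concentration [Q], and K_SV the Stern-Volmer constant; corrosion products acting as quenchers raise [Q] and therefore depress F.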

  4. Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Boker, Steven M.

    2010-01-01

    Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…

  5. SIMCA T 1.0: A SAS Computer Program for Simulating Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    2006-01-01

    Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
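    A generic version of such a simulation is short; the sketch below uses a Rasch model and a nearest-difficulty item-selection rule as stand-ins (it is not SIMCAT's SAS code).

    ```python
    # Monte Carlo sketch of adaptive testing: simulate responses under a Rasch
    # model and track the proficiency estimate. Item bank and rules are assumed.
    import numpy as np

    rng = np.random.default_rng(42)
    item_b = rng.uniform(-2, 2, 300)        # item difficulty bank
    true_theta = 0.7                        # simulated examinee proficiency

    def p_correct(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    theta, used, resp = 0.0, [], []
    for _ in range(30):
        free = np.setdiff1d(np.arange(item_b.size), used)
        i = free[np.argmin(np.abs(item_b[free] - theta))]  # most informative item
        used.append(i)
        resp.append(rng.random() < p_correct(true_theta, item_b[i]))
        for _ in range(5):                  # Newton-Raphson for the Rasch MLE
            ps = p_correct(theta, item_b[used])
            grad = np.sum(np.array(resp) - ps)
            hess = -np.sum(ps * (1 - ps))
            theta = float(np.clip(theta - grad / hess, -4, 4))
    print("estimated:", round(theta, 2), "true:", true_theta)
    ```

    Repeating this loop over many simulated examinees yields the sampling distribution of the proficiency estimate that such studies examine.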

  6. Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.

    PubMed

    Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian

    2017-04-01

    Fibrotic remodeling of the heart is a frequent condition linked to various diseases and to cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared, using a porcine model of early-stage heart failure with preserved ejection fraction as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM) stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. PicroSirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM) stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, the section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM combined with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM stereology and automated image analysis are appropriate for quantifying fibrotic changes, the latter depending on careful control of the algorithm and comparable section staining. NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed that the measured collagen content depends directly on the quantification method used, as well as on section thickness. Besides EM stereology, which was precise and sensitive, LM stereology and automated image analysis proved appropriate for collagen quantification. Moreover, consideration of collagen localization might be important for revealing minor fibrotic changes.
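    The semiautomated threshold approach criticized above reduces to a one-line area fraction once a mask is chosen, which is also why it is fragile: any bright non-collagen structure enters the numerator. A sketch on synthetic data (threshold values are assumptions):

    ```python
    # Threshold-based collagen area fraction on a synthetic "stained section".
    # Illustrates the method's simplicity and its sensitivity to the threshold.
    import numpy as np

    rng = np.random.default_rng(3)
    img = rng.normal(0.3, 0.05, (256, 256))   # background tissue signal
    img[60:90, :] += 0.4                      # a bright collagen-like band

    collagen_mask = img > 0.55                # stain-intensity threshold (assumed)
    tissue_mask = img > 0.1                   # tissue vs. empty background
    print(f"collagen area fraction: {collagen_mask.sum() / tissue_mask.sum():.1%}")
    ```

    Stereology, by contrast, estimates the same fraction by point counting on systematically sampled fields, which is part of why the stereological estimates above proved more sensitive.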

  7. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  8. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal, and an integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which avoids frequency-domain disturbance. In the experimental work, a lidar echo was simulated, assumed to come from a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to represent the random noise from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise; in the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
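    Because the abstract specifies the algorithm fairly completely (3Nσ jump detection, half-segment windows, two to three averaging iterations), it can be sketched directly; the edge padding used to tame window boundaries is our own choice, not stated in the abstract.

    ```python
    # ASSM sketch per the abstract: split the signal where adjacent points jump
    # by more than 3*N*sigma, then iterated moving-average smoothing per segment.
    import numpy as np

    def assm(signal, sigma, N=3, iterations=2):
        diffs = np.abs(np.diff(signal))
        ends = [0] + list(np.where(diffs > 3 * N * sigma)[0] + 1) + [len(signal)]
        out = signal.astype(float).copy()
        for a, b in zip(ends[:-1], ends[1:]):
            if b - a < 2:
                continue
            seg, win = out[a:b], max(1, (b - a) // 2)   # window = half segment
            kernel = np.ones(win) / win
            for _ in range(iterations):                 # curb end-point aberration
                padded = np.pad(seg, win // 2, mode="edge")  # padding: our choice
                seg = np.convolve(padded, kernel, "same")[win // 2: win // 2 + (b - a)]
            out[a:b] = seg
        return out

    rng = np.random.default_rng(7)
    clean = np.concatenate([np.full(200, 1.0), np.full(200, 5.0)])  # sharp edge
    noisy = clean + rng.normal(0, 0.2, clean.size)
    print(np.abs(assm(noisy, sigma=0.2) - clean).mean())  # edge survives smoothing
    ```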

  9. [Not Available].

    PubMed

    Bernard, A M; Burgot, J L

    1981-12-01

    The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.

  10. Conservation of Shannon's redundancy for proteins. [information theory applied to amino acid sequences

    NASA Technical Reports Server (NTRS)

    Gatlin, L. L.

    1974-01-01

    Concepts of information theory are applied to examine various proteins in terms of their redundancy in natural organisms such as animals and plants. The Monte Carlo method is used to derive information parameters for random protein sequences, and real protein sequence parameters are compared with the standard parameters of protein sequences of the same length. The tendency of a chain to contain some amino acids more frequently than others, and the tendency of a chain to contain certain amino acid pairs more frequently than other pairs, are used as randomness measures of individual protein sequences. Non-periodic proteins are generally found to have random Shannon redundancies, except where constrained by short chain length or the genetic code. Redundant characteristics of highly periodic proteins are discussed, and a degree-of-periodicity parameter is derived.
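    The single-letter measure involved is standard enough to state: Shannon entropy of the amino-acid usage and redundancy R = 1 - H/H_max. A sketch (sequences are hypothetical; the paper's notation and its pair-frequency measure are not reproduced):

    ```python
    # Shannon redundancy of an amino-acid sequence: R = 1 - H / log2(20).
    from collections import Counter
    from math import log2

    def redundancy(seq, alphabet_size=20):
        counts, n = Counter(seq), len(seq)
        h = -sum((c / n) * log2(c / n) for c in counts.values())  # entropy, bits
        return 1 - h / log2(alphabet_size)

    print(redundancy("MKVLAAGLLALLLAAGPAAA"))  # repetitive peptide -> high R
    print(redundancy("ACDEFGHIKLMNPQRSTVWY"))  # all 20 residues once -> 0.0
    ```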

  11. Characteristics of women who frequently under-report their energy intake: a doubly labelled water study.

    PubMed

    Scagliusi, F B; Ferriolli, E; Pfrimer, K; Laureano, C; Cunha, C S F; Gualano, B; Lourenço, B H; Lancha, A H

    2009-10-01

    We applied three dietary assessment methods, aiming to obtain a set of physical, social and psychological variables that can discriminate individuals who did not under-report ('never under-reporters'), those who under-reported in one dietary assessment method ('occasional under-reporters') and those who under-reported in two or three dietary assessment methods ('frequent under-reporters'). Sixty-five women aged 18-57 years were recruited for this study. Total energy expenditure was determined by doubly labelled water, and energy intake was estimated by three 24-h diet recalls, 3-day food records and a food frequency questionnaire. A multiple discriminant analysis was used to identify which of the following variables best discriminated the three groups: body mass index (BMI), income, education, social desirability, nutritional knowledge, dietary restraint, physical activity practice, body dissatisfaction and binge-eating symptoms. Twenty-three participants were 'never under-reporters', twenty-four were 'occasional under-reporters' and eighteen were 'frequent under-reporters'. Four variables entered the discriminant model: income, BMI, social desirability and body dissatisfaction. According to potency indices, income contributed the most to the total discriminant power, followed in decreasing order by social desirability score, BMI and body dissatisfaction. Income, social desirability and BMI were the characteristics that mainly separated the 'never under-reporters' from the under-reporters (occasional or frequent), while body dissatisfaction best discriminated the 'occasional under-reporters' from the 'frequent under-reporters'. 'Frequent under-reporters' have a greater BMI, higher social desirability and body dissatisfaction scores, and lower income. These four variables seem able to discriminate individuals who are more prone to systematic under-reporting.
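    The discriminant step can be illustrated on synthetic stand-ins for the four retained variables; effect directions follow the abstract, but the data and coefficients are invented.

    ```python
    # Linear discriminant analysis sketch on synthetic income / BMI / social
    # desirability / body dissatisfaction; NOT the study's data or results.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)
    g = rng.integers(0, 3, 65)        # 0 never, 1 occasional, 2 frequent
    X = np.column_stack([
        5000 - 800 * g + rng.normal(0, 500, 65),  # income (lower if frequent)
        23 + 2.0 * g + rng.normal(0, 2, 65),      # BMI
        10 + 1.5 * g + rng.normal(0, 2, 65),      # social desirability
        20 + 3.0 * g + rng.normal(0, 4, 65),      # body dissatisfaction
    ])
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X, g)
    print("training accuracy:", round(lda.score(X, g), 2))
    ```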

  12. A Comparison of Four Approaches to Account for Method Effects in Latent State-Trait Analyses

    ERIC Educational Resources Information Center

    Geiser, Christian; Lockhart, Ginger

    2012-01-01

    Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…

  13. Frequent statistics of link-layer bit stream data based on AC-IM algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Chenghong; Lei, Yingke; Xu, Yiming

    2017-08-01

    At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics over link-layer bit stream data. This paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm, since classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data. The method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3, without missing any matches. This paper first analyzes the principle of the algorithm's construction theoretically; the experimental results then show that the algorithm can adapt to the binary bit stream environment and extract frequent sequences accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.

  14. Systemic and topical drugs for aging skin.

    PubMed

    Kockaert, Michael; Neumann, Martino

    2003-08-01

    The rejuvenation of aging skin is a common desire for our patients, and several options are available. Although there are some systemic methods, the most commonly used treatments for rejuvenation of the skin are applied topically. The most frequently used topical drugs include retinoids, alpha hydroxy acids (AHAs), vitamin C, beta hydroxy acids, anti-oxidants, and tocopherol. Combination therapy is frequently used; particularly common is the combination of retinoids and AHAs. Systemic therapies available include oral retinoids and vitamin C. Other available therapies such as chemical peels, face-lifts, collagen, and botulinum toxin injections are not discussed in this article.

  15. [Modeling in value-based medicine].

    PubMed

    Neubauer, A S; Hirneiss, C; Kampik, A

    2010-03-01

    Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of predictive properties. The uncertainty inherent in any modeling should be clearly stated; this is true for economic modeling in VBM as well as for disease risk models used to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better-informed decisions than would be possible without this additional information.
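    Of the model types named, the Markov cohort model is the easiest to show end to end; the states, probabilities, costs, and utilities below are invented for illustration.

    ```python
    # Minimal Markov cohort model with discounting; all numbers hypothetical.
    import numpy as np

    # States: 0 = stable, 1 = progressed, 2 = dead (absorbing).
    P = np.array([[0.90, 0.08, 0.02],
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])       # annual transition probabilities
    cost = np.array([1000.0, 5000.0, 0.0])   # annual cost per state
    utility = np.array([0.85, 0.50, 0.0])    # quality-of-life weights

    cohort = np.array([1.0, 0.0, 0.0])       # everyone starts "stable"
    total_cost = total_qaly = 0.0
    for year in range(20):                   # 20-year horizon, 3% discount rate
        disc = 1.03 ** -year
        total_cost += disc * cohort @ cost
        total_qaly += disc * cohort @ utility
        cohort = cohort @ P
    print(f"expected cost {total_cost:.0f}, expected QALYs {total_qaly:.2f}")
    ```

    Varying P, cost, or utility over plausible ranges and re-running the loop is exactly the uni- or multivariate sensitivity analysis summarized by tornado plots and cost-effectiveness acceptability curves.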

  16. Focus Group Study Exploring Factors Related to Frequent Sickness Absence.

    PubMed

    Notenbomer, Annette; Roelen, Corné A M; van Rhenen, Willem; Groothoff, Johan W

    2016-01-01

    Research investigating frequent sickness absence (3 or more episodes per year) is scarce and qualitative research from the perspective of frequent absentees themselves is lacking. The aim of the current study is to explore awareness, determinants of and solutions to frequent sickness absence from the perspective of frequent absentees themselves. We performed a qualitative study of 3 focus group discussions involving a total of 15 frequent absentees. Focus group discussions were audiotaped and transcribed verbatim. Results were analyzed with the Graneheim method using the Job Demands Resources (JD-R) model as theoretical framework. Many participants were not aware of their frequent sickness absence and the risk of future long-term sickness absence. As determinants, participants mentioned job demands, job resources, home demands, poor health, chronic illness, unhealthy lifestyles, and diminished feeling of responsibility to attend work in cases of low job resources. Managing these factors and improving communication (skills) were regarded as solutions to reduce frequent sickness absence. The JD-R model provided a framework for determinants of and solutions to frequent sickness absence. Additional determinants were poor health, chronic illness, unhealthy lifestyles, and diminished feeling of responsibility to attend work in cases of low job resources. Frequent sickness absence should be regarded as a signal that something is wrong. Managers, supervisors, and occupational health care providers should advise and support frequent absentees to accommodate job demands, increase both job and personal resources, and improve health rather than express disapproval of frequent sickness absence and apply pressure regarding work attendance.

  17. A computational method for the coupled solution of reaction-diffusion equations on evolving domains and manifolds: Application to a model of cell migration and chemotaxis.

    PubMed

    MacDonald, G; Mackenzie, J A; Nolan, M; Insall, R H

    2016-03-15

    In this paper, we devise a moving mesh finite element method for the approximate solution of coupled bulk-surface reaction-diffusion equations on an evolving two dimensional domain. Fundamental to the success of the method is the robust generation of bulk and surface meshes. For this purpose, we use a novel moving mesh partial differential equation (MMPDE) approach. The developed method is applied to model problems with known analytical solutions; these experiments indicate second-order spatial and temporal accuracy. Coupled bulk-surface problems occur frequently in many areas; in particular, in the modelling of eukaryotic cell migration and chemotaxis. We apply the method to a model of the two-way interaction of a migrating cell in a chemotactic field, where the bulk region corresponds to the extracellular region and the surface to the cell membrane.

  18. Method and apparatus for using magneto-acoustic remanence to determine embrittlement

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G. (Inventor); Namkung, Min (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)

    1992-01-01

    A method and apparatus for testing steel components for temperature embrittlement, using magneto-acoustic emission to nondestructively evaluate the component, are presented. Acoustic emission signals occur more frequently and at higher levels in embrittled components. A pair of electromagnets is used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an AC current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a DC current to the electromagnets, then turning the magnets off and observing the residual magnetic induction.

  19. Validating Automated Essay Scoring: A (Modest) Refinement of the "Gold Standard"

    ERIC Educational Resources Information Center

    Powers, Donald E.; Escoffery, David S.; Duchnowski, Matthew P.

    2015-01-01

    By far, the most frequently used method of validating (the interpretation and use of) automated essay scores has been to compare them with scores awarded by human raters. Although this practice is questionable, human-machine agreement is still often regarded as the "gold standard." Our objective was to refine this model and apply it to…
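    The human-machine agreement usually meant by this "gold standard" is quadratically weighted kappa; here is a sketch of the statistic itself (the paper's refinement of the criterion is not reproduced).

    ```python
    # Quadratically weighted kappa between two raters' integer scores in 0..k-1.
    import numpy as np

    def quadratic_weighted_kappa(a, b, k):
        O = np.zeros((k, k))
        for i, j in zip(a, b):
            O[i, j] += 1                                    # observed score pairs
        E = np.outer(np.bincount(a, minlength=k),
                     np.bincount(b, minlength=k)) / len(a)  # chance expectation
        W = np.array([[(i - j) ** 2 for j in range(k)]
                      for i in range(k)]) / (k - 1) ** 2    # quadratic penalties
        return 1 - (W * O).sum() / (W * E).sum()

    human = [3, 2, 4, 1, 3, 2, 4, 0]
    machine = [3, 2, 3, 1, 4, 2, 4, 1]
    print(round(quadratic_weighted_kappa(human, machine, k=5), 3))  # 0.875
    ```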

  20. Incomplete Detection of Nonclassical Phase-Space Distributions

    NASA Astrophysics Data System (ADS)

    Bohmann, M.; Tiedau, J.; Bartley, T.; Sperling, J.; Silberhorn, C.; Vogel, W.

    2018-02-01

    We implement the direct sampling of negative phase-space functions via unbalanced homodyne measurement using click-counting detectors. The negativities significantly certify nonclassical light in the high-loss regime using a small number of detectors which cannot resolve individual photons. We apply our method to heralded single-photon states and experimentally demonstrate the most significant certification of nonclassicality for only two detection bins. By contrast, the frequently applied Wigner function fails to directly indicate such quantum characteristics for the quantum efficiencies present in our setup without applying additional reconstruction algorithms. Therefore, we realize a robust and reliable approach to characterize nonclassical light in phase space under realistic conditions.

  1. Friendly Extensible Transfer Tool Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William P.; Gutierrez, Kenneth M.; McRee, Susan R.

    2016-04-15

    Often, data transfer software is designed to meet specific requirements or to apply to specific environments, and adding functionality frequently requires source code integration. An extensible data transfer framework is needed to incorporate new capabilities more easily, in modular fashion. Using the FrETT framework, functionality may be incorporated (in many cases without the need for source code changes) to handle new platform capabilities: I/O methods (e.g., platform-specific data access), network transport methods, and data processing (e.g., data compression).

  2. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.

    PubMed

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs, as the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every pair of row sequences, so that no deep OPSM is missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were implemented on gene and synthetic datasets. The results demonstrate the effectiveness and efficiency of this method.

  3. Educational and evaluation strategies in the training of physician specialists

    PubMed

    Gaona-Flores, Verónica Alejandra; Campos-Navarro, Luz Arcelia; Arenas-Osuna, Jesús; Alcalá-Martínez, Enrique

    2017-01-01

    Teaching strategies have been defined as procedures, means or resources that teachers use to promote meaningful learning. The aim was to identify the teaching and evaluation strategies used by professors with residents in tertiary-care hospitals. This is a cross-sectional study conducted with full, associate and assistant professors of various medical specialties. A questionnaire was applied to evaluate the strategies used by professors to teach and to evaluate students. We included a sample of 90 professors in 35 medical specialties. The most frequent teaching activities were organizing students to develop presentations on specific subjects, followed by asking questions on previously reviewed subjects. Among the strategies employed, the option 'always' was most frequently given for case analyses. The most frequent methods used for the evaluation of theoretical knowledge were participation in class, topic presentation and exams. Teaching activities were primarily based on the presentation of specific topics by the residents. The most commonly used educational strategies were clinical case analyses, followed by problem-based learning and the use of illustrations. Evaluation of the residents' theoretical knowledge hinged on class participation, presentation of assigned topics and exams.

  4. Human Health Risk Assessment Applied to Rural Populations Dependent on Unregulated Drinking Water Sources: A Scoping Review.

    PubMed

    Ford, Lorelei; Bharadwaj, Lalita; McLeod, Lianne; Waldner, Cheryl

    2017-07-28

    Safe drinking water is a global challenge for rural populations dependent on unregulated water. A scoping review of research on human health risk assessments (HHRA) applied to this vulnerable population may be used to improve assessments applied by government and researchers. This review aims to summarize and describe the characteristics of HHRA methods, publications, and current literature gaps of HHRA studies on rural populations dependent on unregulated or unspecified drinking water. Peer-reviewed literature was systematically searched (January 2000 to May 2014) and identified at least one drinking water source as unregulated (21%) or unspecified (79%) in 100 studies. Only 7% of reviewed studies identified a rural community dependent on unregulated drinking water. Source water and hazards most frequently cited included groundwater (67%) and chemical water hazards (82%). Most HHRAs (86%) applied deterministic methods with 14% reporting probabilistic and stochastic methods. Publications increased over time with 57% set in Asia, and 47% of studies identified at least one literature gap in the areas of research, risk management, and community exposure. HHRAs applied to rural populations dependent on unregulated water are poorly represented in the literature even though almost half of the global population is rural.

  5. Human Health Risk Assessment Applied to Rural Populations Dependent on Unregulated Drinking Water Sources: A Scoping Review

    PubMed Central

    Ford, Lorelei; Bharadwaj, Lalita; McLeod, Lianne; Waldner, Cheryl

    2017-01-01

    Safe drinking water is a global challenge for rural populations dependent on unregulated water. A scoping review of research on human health risk assessments (HHRA) applied to this vulnerable population may be used to improve assessments applied by government and researchers. This review aims to summarize and describe the characteristics of HHRA methods, publications, and current literature gaps of HHRA studies on rural populations dependent on unregulated or unspecified drinking water. Peer-reviewed literature was systematically searched (January 2000 to May 2014) and identified at least one drinking water source as unregulated (21%) or unspecified (79%) in 100 studies. Only 7% of reviewed studies identified a rural community dependent on unregulated drinking water. Source water and hazards most frequently cited included groundwater (67%) and chemical water hazards (82%). Most HHRAs (86%) applied deterministic methods with 14% reporting probabilistic and stochastic methods. Publications increased over time with 57% set in Asia, and 47% of studies identified at least one literature gap in the areas of research, risk management, and community exposure. HHRAs applied to rural populations dependent on unregulated water are poorly represented in the literature even though almost half of the global population is rural. PMID:28788087

  6. Strength of surgical wire fixation. A laboratory study.

    PubMed

    Guadagni, J R; Drummond, D S

    1986-08-01

    Because stainless steel wire is frequently used in spinal surgery and to augment fracture fixation, several methods of securing wire fixation were tested in the laboratory to determine their relative strength. Any method of fixation stronger than the yield strength of the wire is sufficient. Square knots, knot twists, symmetric twists, and the AO loop-tuck technique afforded acceptable resistance against tension loads, but the wire wrap and AO loop techniques were unacceptable. The double symmetric twist, which is frequently used for tension banding, was barely acceptable. The symmetric twist technique was the most practical because it is strong enough, efficient in maintaining the tension applied during fixation, and least likely to damage the wire. To optimize the fixation strength of the symmetric twist, at least two twists at a reasonably tight pitch are required.

  7. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation, involving mixed qualitative and quantitative methods, is discussed in the conclusion.

  8. Bibliometrics for Social Validation

    PubMed Central

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation, involving mixed qualitative and quantitative methods, is discussed in the conclusion. PMID:28005974
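
    As a rough illustration of the citation-network idea, the connectedness of an adopting community can be proxied by the share of papers in the largest weakly connected component of a directed citation graph. The sketch below uses the networkx library on invented citation pairs; it illustrates the kind of indicator involved, not the paper's actual measures or data.

    ```python
    import networkx as nx

    # Hypothetical citation pairs (citing_paper, cited_paper); stand-ins
    # for records harvested from a bibliographic database.
    citations = [("p1", "p2"), ("p2", "p3"), ("p4", "p2"), ("p5", "p1")]

    G = nx.DiGraph()
    G.add_edges_from(citations)

    # Fraction of papers in the largest weakly connected component:
    # a crude indicator of how well-connected the community is.
    largest = max(nx.weakly_connected_components(G), key=len)
    connectedness = len(largest) / G.number_of_nodes()
    print(f"papers: {G.number_of_nodes()}, largest component: {connectedness:.0%}")
    ```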

  9. Peristalticity-driven banded chemical garden

    NASA Astrophysics Data System (ADS)

    Pópity-Tóth, É.; Schuszter, G.; Horváth, D.; Tóth, Á.

    2018-05-01

    Complex structures in nature are often formed by self-assembly. In order to mimic the formation, to enhance the production, or to modify the structures, easy-to-use methods are sought to couple engineering and self-assembly. Chemical-garden-like precipitation reactions are frequently used to study such couplings because of the intrinsic chemical and hydrodynamic interplays. In this work, we present a simple method of applying periodic pressure fluctuations given by a peristaltic pump which can be used to achieve regularly banded precipitate membranes in the copper-phosphate system.

  10. An improved silver staining procedure for schizodeme analysis in polyacrylamide gradient gels.

    PubMed

    Gonçalves, A M; Nehme, N S; Morel, C M

    1990-01-01

    A simple protocol is described for the silver staining of polyacrylamide gradient gels used for the separation of restriction fragments of kinetoplast DNA [schizodeme analysis of trypanosomatids (Morel et al., 1980)]. The method overcomes the problems of non-uniform staining and strong background color which are frequently encountered when conventional protocols for silver staining of linear gels are applied to gradient gels. The method described has proven to be of general applicability for DNA, RNA and protein separations in gradient gels.

  11. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study

    ERIC Educational Resources Information Center

    Rusticus, Shayna A.; Lovato, Chris Y.

    2014-01-01

    The question of equivalence between two or more groups is frequently of interest to many applied researchers. Equivalence testing is a statistical method designed to provide evidence that groups are comparable by demonstrating that the mean differences found between groups are small enough that they are considered practically unimportant. Few…
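
    The equivalence logic described here is commonly operationalized as two one-sided tests (TOST). Below is a minimal sketch, assuming independent groups, equal variances, and a pre-specified equivalence margin; the data are invented.

    ```python
    import numpy as np
    from scipy import stats

    def tost_equivalence(x, y, margin):
        """Two one-sided tests (TOST): the groups are declared equivalent
        when the mean difference is significantly above -margin AND
        significantly below +margin."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n1, n2 = len(x), len(y)
        diff = x.mean() - y.mean()
        # pooled standard error (equal-variance form)
        sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
        df = n1 + n2 - 2
        p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
        p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
        return max(p_lower, p_upper)                     # equivalence if < alpha

    a = [5.1, 4.8, 5.0, 5.3, 4.9, 5.2]
    b = [5.0, 5.2, 4.9, 5.1, 5.0, 5.3]
    print("TOST p-value:", tost_equivalence(a, b, margin=0.5))
    ```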

  12. [Diversity and frequency of scientific research design and statistical methods in the "Arquivos Brasileiros de Oftalmologia": a systematic review of the "Arquivos Brasileiros de Oftalmologia"--1993-2002].

    PubMed

    Crosta, Fernando; Nishiwaki-Dantas, Maria Cristina; Silvino, Wilmar; Dantas, Paulo Elias Correa

    2005-01-01

    To verify the frequency of study design, applied statistical analysis, and approval by institutional review offices (Ethics Committees) of articles published in the "Arquivos Brasileiros de Oftalmologia" during a 10-year interval, with subsequent comparative and critical analysis against some of the main international journals in the field of ophthalmology. A systematic review without meta-analysis was performed. Scientific papers published in the "Arquivos Brasileiros de Oftalmologia" between January 1993 and December 2002 were reviewed by two independent reviewers and classified according to the applied study design, statistical analysis, and approval by institutional review offices. To categorize those variables, descriptive statistical analysis was used. After applying inclusion and exclusion criteria, 584 articles were reviewed for evaluation of statistical analysis and 725 articles for evaluation of study design. Contingency tables (23.10%) were the most frequently applied statistical method, followed by non-parametric tests (18.19%), Student's t test (12.65%), central tendency measures (10.60%), and analysis of variance (9.81%). Of the 584 reviewed articles, 291 (49.82%) presented no statistical analysis. Observational case series (26.48%) was the most frequently used study design, followed by interventional case series (18.48%), observational case description (13.37%), non-randomized clinical study (8.96%), and experimental study (8.55%). We found a higher frequency of observational clinical studies and a lack of statistical analysis in almost half of the published papers. An increase in studies with Ethics Committee approval was noted after it became mandatory in 1996.

  13. Fungicide application practices and personal protective equipment use among orchard farmers in the agricultural health study.

    PubMed

    Hines, C J; Deddens, J A; Coble, J; Alavanja, M C R

    2007-04-01

    Fungicides are routinely applied to deciduous tree fruits for disease management. Seventy-four private orchard applicators enrolled in the Agricultural Health Study participated in the Orchard Fungicide Exposure Study in 2002-2003. During 144 days of observation, information was obtained on chemicals applied and applicator mixing, application, personal protective, and hygiene practices. At least half of the applicators had orchards with <100 trees. Air blast was the most frequent application method used (55%), followed by hand spray (44%). Rubber gloves were the most frequently worn protective equipment (68% mix; 59% apply), followed by respirators (45% mix; 49% apply), protective outerwear (36% mix; 37% apply), and rubber boots (35% mix; 36% apply). Eye protection was worn while mixing and applying on only 35% and 41% of the days, respectively. Bivariate analyses were performed using repeated logistic or repeated linear regression. Mean duration of mixing, pounds of captan applied, total acres sprayed, and number of tank mixes sprayed were greater for air blast than for hand spray (p < 0.05). Spraying from a tractor/vehicle without an enclosed cab was associated with wearing some type of coverall (p < 0.05). Applicators often did not wash their hands after mixing (77%), a finding not explained by glove use. Glove use during mixing was associated with younger age, while wearing long-sleeve shirts was associated with older age (p < 0.05 each). Self-reported unusually high fungicide exposures were more likely on days applicators performed repairs (p < 0.05). These data will be useful for evaluating fungicide exposure determinants among orchard applicators.

  14. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741
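
    The geometric core of such ray-based methods, walking from a feasible point along a sampled direction until the first bounding constraint is hit, can be sketched in a few lines. The toy below (with a hypothetical helper ray_step) illustrates only that ray-to-boundary step, not the full conic sampling algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ray_step(A, b, x, c):
        """One illustrative move: sample a random direction, walk to the
        first bounding constraint of {x : A x <= b}, and keep the move
        only if it improves the objective c @ x (minimization)."""
        d = rng.standard_normal(A.shape[1])
        Ad = A @ d
        slack = b - A @ x                      # nonnegative at a feasible x
        steps = np.where(Ad > 1e-12, slack / np.where(Ad > 1e-12, Ad, 1.0), np.inf)
        alpha = steps.min()                    # distance to the boundary
        if not np.isfinite(alpha):
            return x                           # the ray never hits a constraint
        x_new = x + alpha * d
        return x_new if c @ x_new < c @ x else x

    # Minimize x0 + x1 over the unit box [-1, 1]^2 (optimum at (-1, -1)).
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.ones(4)
    c = np.array([1., 1.])
    x = np.zeros(2)
    for _ in range(200):
        x = ray_step(A, b, x, c)
    print("approximate minimizer:", x)
    ```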

  15. Bottled water: analysis of mycotoxins by LC-MS/MS.

    PubMed

    Mata, A T; Ferreira, J P; Oliveira, B R; Batoréu, M C; Barreto Crespo, M T; Pereira, V J; Bronze, M R

    2015-06-01

    The presence of mycotoxins in food samples has been widely studied, as has their impact on human health; however, information about their distribution in the environment is scarce. An analytical method comprising a solid phase extraction procedure followed by liquid chromatography tandem mass spectrometry analysis was implemented and validated for the trace analysis of mycotoxins in bottled drinking waters. Limits of quantification achieved for the method were between 0.2 ng L(-1) for aflatoxins and ochratoxin, and 2.0 ng L(-1) for fumonisins and neosolaniol. The method was applied to real samples. Aflatoxin B2 was the most frequently detected mycotoxin in water samples, with a maximum concentration of 0.48±0.05 ng L(-1), followed by aflatoxin B1, aflatoxin G1, and ochratoxin A. The genera Cladosporium, Fusarium, and Penicillium were the fungi most frequently detected. These results show that the consumption of these waters does not represent a toxicological risk for an adult. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Estimating the robustness of contingent valuation estimates of WTP to survey mode and treatment of protest responses.

    Treesearch

    John Loomis; Armando Gonzalez-Caban; Joseph Champ

    2011-01-01

    Over the past four decades the contingent valuation method (CVM) has become a technique frequently used by economists to estimate willingness-to-pay (WTP) for improvements in environmental quality and protection of natural resources. The CVM was originally applied to estimate recreation use values (Davis, 1963; Hammack and Brown, 1974) and air quality (Brookshire et al....

  17. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently combined in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recovery for the vasoconstrictors was 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.

  18. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical significance test is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy; indeed, the technique shows an improvement of 50-80% over TR.
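
    For context, the TR baseline the study compares against reduces to ridge-style least squares. A minimal numpy sketch follows; the generic design matrix A stands in for the RFM polynomial terms (the actual RFM construction is not reproduced here).

    ```python
    import numpy as np

    def tikhonov_fit(A, y, lam):
        """Tikhonov-regularized least squares:
        argmin ||A x - y||^2 + lam * ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # Ill-conditioned toy system: near-collinear polynomial columns mimic
    # the ill-posedness that causes RFM overfitting.
    rng = np.random.default_rng(42)
    t = rng.uniform(-1, 1, 100)
    A = np.column_stack([t**k for k in range(8)])        # correlated powers
    y = 1.5 * t - 0.5 * t**3 + 0.01 * rng.standard_normal(100)
    print("unregularized norm:", np.linalg.norm(np.linalg.lstsq(A, y, rcond=None)[0]))
    print("Tikhonov norm:     ", np.linalg.norm(tikhonov_fit(A, y, lam=1e-2)))
    ```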

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, B.; Matthews, R.T.

    Since Drake's first oil well in 1859, well fires have been frequent and disastrous. Hardly a year has passed in over a century without a well fire somewhere in the world. In the 1920's the classic method of fire fighting using explosives to starve the fire of oxygen was developed, and it has been used extensively ever since. While explosives are still one of the most frequently used methods today, several other methods are used to supplement it where special conditions exist. Tunneling at an angle from a safe distance is used in some cases, especially where the fire is too hot for a close approach on the ground surface. Pumping drilling muds into a well to plug it is another method that has been used successfully for some time. Diverter wells are occasionally used, and sometimes simply pumping enough water on a well fire is sufficient to extinguish it. Of course, prevention is always the best solution. Many advances in blow-out prevention devices have been developed in the last 50 years, and the number of fires has been substantially reduced compared to the number of wells drilled. However, very little new technology was applied to oil well fire fighting in the 1960s, 1970s, or 1980s. Overall technological progress accelerated tremendously in this period, of course, but new materials and equipment were not applied to this field for some reason. Saddam Hussein's environmental holocaust in Kuwait changed that by causing many people throughout the world to focus their creative energy on more efficient oil well fire fighting methods.

  20. Analysis of beta-lactam antibiotics in incurred raw milk by rapid test methods and liquid chromatography coupled with electrospray ionization tandem mass spectrometry.

    PubMed

    Riediker, S; Diserens, J M; Stadler, R H

    2001-09-01

    A recently developed confirmatory LC-MS method has been applied to the quantification of five major beta-lactam antibiotics in suspect raw bovine milk samples that gave a positive response with receptor-based (BetaStar) and rapid microbial inhibitory screen tests (Delvotest SP). In total, 18 presumptive positive raw milk samples were reanalyzed; 16 samples showed traces of antibiotic residues that could be identified and quantified by the LC-MS method, ranging from the limits of confirmation up to 38 microg/kg. Of the positive samples, only five (approximately 30%) were found to violate EU maximum residue limits. The most frequently detected antibiotic residues were cloxacillin and penicillin G, the former often in combination with amoxicillin or ampicillin. This study compares the results obtained by the three methods on identical samples and addresses how these relate to criteria such as sensitivity and selectivity. Furthermore, the limitations of the LC-MS method are discussed, along with the potential impact of the frequent presence of more than one residue in the same milk sample on the response of the rapid test methods.

  1. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being the most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012

  2. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being the most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
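
    For context, the assumption checks that this survey found to be rarely reported take only a few lines with scipy; the dose-group data below are invented, with n = 6 per group to match the small sample sizes reported above.

    ```python
    from scipy import stats

    # Hypothetical endpoint measured in three dose groups (n = 6 each).
    control = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0]
    low     = [1.3, 1.1, 1.4, 1.2, 1.5, 1.3]
    high    = [1.9, 2.1, 1.8, 2.0, 2.2, 1.7]

    # Checks the reviewed papers rarely report:
    for name, g in [("control", control), ("low", low), ("high", high)]:
        print(name, "Shapiro-Wilk p =", stats.shapiro(g).pvalue)      # normality
    print("Levene p =", stats.levene(control, low, high).pvalue)      # equal variance

    # One-way ANOVA, the most frequently used inferential test above.
    print("ANOVA p =", stats.f_oneway(control, low, high).pvalue)
    ```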

  3. Where do Students Go Wrong in Applying the Scientific Method?

    NASA Astrophysics Data System (ADS)

    Rubbo, Louis; Moore, Christopher

    2015-04-01

    Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally, upon completing a required science course, liberal arts students should demonstrate an improved ability to apply the scientific method. In previous work we demonstrated that this is possible if explicit instruction is devoted to the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to apply the scientific process. Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead, students appear to struggle with every step in the process. This talk summarizes our work identifying where students struggle in applying the scientific method. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.

  4. Guidelines for reporting and using prediction tools for genetic variation analysis.

    PubMed

    Vihinen, Mauno

    2013-02-01

    Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.

  5. H-classic: a new method to identify classic articles in Implant Dentistry, Periodontics, and Oral Surgery.

    PubMed

    De la Flor-Martínez, Maria; Galindo-Moreno, Pablo; Sánchez-Fernández, Elena; Piattelli, Adriano; Cobo, Manuel Jesus; Herrera-Viedma, Enrique

    2016-10-01

    The study of classic papers permits analysis of the past, present, and future of a specific area of knowledge. This type of analysis is becoming more frequent and more sophisticated. Our objective was to use the H-classics method, based on the h-index, to analyze classic papers in Implant Dentistry, Periodontics, and Oral Surgery (ID, P, and OS). First, an electronic search of documents related to ID, P, and OS was conducted in journals indexed in Journal Citation Reports (JCR) 2014 within the category 'Dentistry, Oral Surgery & Medicine'. Second, Web of Knowledge databases were searched using MeSH terms related to ID, P, and OS. Finally, the H-classics method was applied to select the classic articles in these disciplines, collecting data on associated research areas, document type, country, institutions, and authors. Of 267,611 documents related to ID, P, and OS retrieved from JCR journals (2014), 248 were selected as H-classics. They were published in 35 journals between 1953 and 2009, most frequently in the Journal of Clinical Periodontology (18.95%), the Journal of Periodontology (18.54%), the International Journal of Oral and Maxillofacial Implants (9.27%), and Clinical Oral Implants Research (6.04%). These classic articles derived from the USA in 49.59% of cases and from Europe in 47.58%, while the most frequent host institution was the University of Gothenburg (17.74%) and the most frequent authors were J. Lindhe (10.48%) and S. Socransky (8.06%). The H-classics approach offers an objective method to identify core knowledge in clinical disciplines such as ID, P, and OS. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
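
    The selection rule behind the H-classics approach can be sketched compactly: compute the h-index of the discipline's citation distribution and keep the papers cited at least h times (the h-core). The citation counts below are invented for illustration.

    ```python
    def h_classics(citation_counts):
        """Return the h-core papers (cited at least h times) and the
        h-index h of the citation distribution."""
        counts = sorted(citation_counts, reverse=True)
        # h-index: the largest rank h such that h papers have >= h citations.
        h = sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)
        return [c for c in citation_counts if c >= h], h

    classics, h = h_classics([120, 95, 80, 40, 12, 9, 3])
    print(f"h = {h}; {len(classics)} classic papers")
    ```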

  6. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A comprehensive review of the details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Owing to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model for considering competing risks were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. Single imputation methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. Use of adaptive designs with interim analyses is increasing following the presentation of the FDA guidance for adaptive design. © 2017 John Wiley & Sons Ltd.
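
    As a concrete reference point for the standard survival methods mentioned above, here is a minimal sketch using the lifelines library, one commonly used Python implementation (an assumption for illustration; the reviewed trials used various software, and the data below are invented).

    ```python
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Hypothetical trial data: follow-up time in months, event indicator
    # (1 = event, 0 = censored), and a single covariate (treatment arm).
    df = pd.DataFrame({
        "time":  [6, 7, 9, 10, 13, 15, 16, 19, 20, 22],
        "event": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
        "arm":   [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    })

    kmf = KaplanMeierFitter()
    kmf.fit(df["time"], event_observed=df["event"])      # survival curve estimate

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")  # hazard ratio for arm
    cph.print_summary()
    ```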

  7. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.

  8. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementations of the various combinations of relaxation schemes, discretizations, and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
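
    A scalar toy version of the two relaxation schemes compared above, applied to a 1D Poisson matrix rather than a discontinuous Galerkin discretization, illustrates why Gauss-Seidel's use of freshly updated values speeds convergence.

    ```python
    import numpy as np

    def jacobi(A, b, x, sweeps):
        D = np.diag(A)
        for _ in range(sweeps):
            x = x + (b - A @ x) / D          # all unknowns updated at once
        return x

    def gauss_seidel(A, b, x, sweeps):
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):               # uses freshly updated values
                x[i] += (b[i] - A[i] @ x) / A[i, i]
        return x

    # 1D Poisson test matrix (tridiagonal), a stand-in smoothing problem.
    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    for solver in (jacobi, gauss_seidel):
        r = b - A @ solver(A, b, np.zeros(n), 100)
        print(solver.__name__, "residual norm:", np.linalg.norm(r))
    ```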

  9. Magneto acoustic emission apparatus for testing materials for embrittlement

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G. (Inventor); Min, Namkung (Inventor); Yost, William T. (Inventor); Cantrell, John H. (Inventor)

    1990-01-01

    A method and apparatus for testing steel components for temper embrittlement uses magneto-acoustic emission to nondestructively evaluate the component. Acoustic emission signals occur more frequently at higher levels in embrittled components. A pair of electromagnets are used to create magnetic induction in the test component. Magneto-acoustic emission signals may be generated by applying an ac current to the electromagnets. The acoustic emission signals are analyzed to provide a comparison between a component known to be unembrittled and a test component. Magnetic remanence is determined by applying a dc current to the electromagnets, then turning the magnets off and observing the residual magnetic induction.

  10. Polymer separations by liquid interaction chromatography: principles - prospects - limitations.

    PubMed

    Radke, Wolfgang

    2014-03-28

    Most heterogeneities of polymers with respect to different structural features cannot be resolved by size exclusion chromatography (SEC) alone, the most frequently applied mode of polymer chromatography. Instead, methods of interaction chromatography have become increasingly important. However, despite the increasing number of applications, the principles and potential of polymer interaction chromatography are still often unknown to a large number of polymer scientists. The present review explains the principles of the different modes of polymer chromatography. Selected examples show which separation techniques can be successfully applied for separations with respect to the different structural features of polymers. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness

    NASA Technical Reports Server (NTRS)

    Rossbacher, Lisa A.

    1987-01-01

    One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.

  12. Turkish Nurses' Use of Nonpharmacological Methods for Relieving Children's Postoperative Pain.

    PubMed

    Çelebioğlu, Ayda; Küçükoğlu, Sibel; Odabaşoğlu, Emel

    2015-01-01

    The experience of pain is frequently observed among children undergoing surgery. Hospitalization and surgery are stressful experiences for those children. The research was conducted to investigate and analyze Turkish nurses' use of nonpharmacological methods to relieve postoperative pain in children. The study was cross-sectional and descriptive. The study took place at 2 hospitals in eastern Turkey. Participants were 143 nurses whose patients had undergone surgical procedures at the 2 hospitals. The researchers used a questionnaire, a checklist of nonpharmacological methods, and a visual analogue scale (VAS) to collect the data. To assess the data, descriptive statistics and the χ² test were used. Of the 143 nurses, 73.4% initially had applied medication when the children had pain. Most of the nurses (58.7%) stated the children generally experienced a middle level of postoperative pain. The most frequent practices that the nurses applied after the children's surgery were (1) "providing verbal encouragement" (90.2%), a cognitive-behavioral method; (2) "a change in the child's position" (85.3%), a physical method; (3) "touch" (82.5%), a method of emotional support; and (4) "ventilation of the room" (79.7%), a regulation of the surroundings. Compared with participants with other educational levels, the cognitive-behavioral methods were the ones most commonly used by the more educated nurses (P < .05): (1) encouraging patients with rewards, (2) helping them think happy thoughts, (3) helping them use their imaginations, (4) providing music, and (5) reading books. Female nurses used the following methods more than the male nurses did (P < .05): (1) providing encouragement with rewards, (2) helping patients with deep breathing, (3) keeping a desired item beside them, (4) changing their positions, and (5) ventilating the room. Undergoing surgery is generally a painful experience for children. Nurses most commonly use cognitive-behavioral methods in the postoperative care of their pediatric patients after surgery.

  13. [Questionnaire-based study on the key to the guidance to the patients with atopic dermatitis by pharmacist].

    PubMed

    Kaneko, Sakae; Kakamu, Takeyasu; Matsuo, Hiroaki; Naora, Koji; Morita, Eishin

    2014-11-01

    Atopic dermatitis is a condition with a chronic or recurrent course that requires continued treatment, meaning that patients must be provided with instructions that fit their lifestyle. Surveys of doctors and patients have revealed the importance of instructions on how to apply topical medication. Here we conducted a survey of the instructions provided by pharmacists, who play an important role in educating patients on how to apply topical medication. Questionnaires were distributed to clinics and dispensing pharmacies in Shimane and Hiroshima prefectures. The questionnaire format comprised selecting each matter on which instructions are provided. A total of 548 questionnaires (response rate, 13.8%) were collected and analyzed. Concerning topical steroids, the most frequently instructed item was "Explanation of application site" (86%), followed by "Explanation of number and timing of applications" (68%). Only 45% chose "Instruction to apply a small amount to avoid side effects." For tacrolimus ointment, "Explanation of tingling sensation" (as a side effect) was the most frequently selected item (52%), and "Instruction by using a brochure" (27.3%) was more commonly selected for tacrolimus ointment than for steroids and emollients. "Demonstrate the application method by means of actual application" was selected by few respondents for any topical medication. Regarding what they wanted from doctors, many respondents wrote in the free-comment section that they would like a clear description of the method of use and dose, and an indication of the amount to be applied. Reported failures included cases in which patients applied medication incorrectly because of inadequate instructions or an insufficient explanation of side effects. Instructions vary among patients and professions, but good instructions lead to good results. Cross-tabulation showed that pharmacists who are aware of the atopic dermatitis guidelines offer significantly more instructions in a range of areas, suggesting that the first important task is to spread awareness of these guidelines among pharmacists.

  14. Reading recognition of pointer meter based on pattern recognition and dynamic three-points on a line

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqiang; Ding, Mingli; Fu, Wuyifang; Li, Yongqiang

    2017-03-01

    Pointer meters are frequently applied in industrial production because they are directly readable. They should be calibrated regularly to ensure the precision of the readings. Currently, manual calibration is most frequently adopted to verify pointer meters, and the required professional skills and subjective judgment may lead to large measurement errors, poor reliability, and low efficiency. In the past decades, with the development of computer technology, machine vision and digital image processing techniques have been applied to recognize the readings of dial instruments. Existing recognition methods assume that all dial instruments share the same parameters, which is not the case in practice. In this work, recognition of pointer meter readings is treated as a pattern recognition problem. We obtain the features of a small area around the detected point and treat those features as a pattern, segment the certified images based on a gradient pyramid algorithm, train a classifier with a support vector machine (SVM), and complete the pattern matching of the divided images. We then obtain the reading of the pointer meter precisely using the principle of dynamic three points on a line (DTPML), which eliminates the error caused by small differences between panels. Experimental results prove that the proposed method is superior to state-of-the-art works.
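
    A minimal sketch of the SVM classification step follows; the feature vectors and labels are simulated stand-ins for the paper's patch descriptors, not the actual training data.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Hypothetical training set: one 16-dimensional feature vector per
    # small image patch around a candidate pointer point.
    X = rng.standard_normal((200, 16))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = pointer patch

    # RBF-kernel SVM, a common default for this kind of patch matching.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```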

  15. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method

    PubMed Central

    Zhang, Tingting; Kou, S. C.

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615

  16. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    PubMed

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
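
    The kernel intensity estimator at the heart of such methods can be sketched as follows (Gaussian kernel, fixed bandwidth; the paper's asymptotic analysis, edge correction, and bandwidth selection procedure are not reproduced).

    ```python
    import numpy as np

    def kernel_intensity(event_times, grid, bandwidth):
        """Gaussian-kernel estimate of a Poisson intensity:
        lambda_hat(t) = sum_i K_h(t - t_i)."""
        d = (grid[:, None] - event_times[None, :]) / bandwidth
        K = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
        return K.sum(axis=1) / bandwidth

    # Simulated arrival times standing in for photon arrival data.
    events = np.sort(np.random.default_rng(2).uniform(0, 10, size=80))
    grid = np.linspace(0, 10, 200)
    lam = kernel_intensity(events, grid, bandwidth=0.5)
    ```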

  17. Review of surface steam sterilization for validation purposes.

    PubMed

    van Doornmalen, Joost; Kopinga, Klaas

    2008-03-01

    Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either by indicators (chemical or biological) or physical measurements. The steam sterilization conditions are described in the literature. Starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative for indicator validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilizing processes.
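
    One frequently used physical criterion in such validation is the F0 lethality integral computed from a logged temperature trace. Below is a minimal sketch, assuming one temperature reading per second and the conventional z = 10 K for steam; the cycle values are invented.

    ```python
    import numpy as np

    def f0_lethality(temps_c, dt_s, z=10.0, t_ref=121.1):
        """Physical lethality of a temperature trace:
        F0 = sum dt * 10**((T - 121.1) / z), expressed in equivalent
        minutes at 121.1 degrees C."""
        dt_min = dt_s / 60.0
        return float(np.sum(dt_min * 10.0 ** ((np.asarray(temps_c) - t_ref) / z)))

    # One reading per second during a hypothetical 15-minute plateau phase
    # of a 134 degrees C cycle:
    plateau = np.full(15 * 60, 134.0)
    print(f"F0 = {f0_lethality(plateau, dt_s=1.0):.0f} equivalent minutes")
    ```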

  18. Algorithm to Identify Frequent Coupled Modules from Two-Layered Network Series: Application to Study Transcription and Splicing Coupling

    PubMed Central

    Li, Wenyuan; Dai, Chao; Liu, Chun-Chi

    2012-01-01

    Current network analysis methods all focus on one or multiple networks of the same type. However, cells are organized by multi-layer networks (e.g., transcriptional regulatory networks, splicing regulatory networks, protein-protein interaction networks), which interact and influence each other. Elucidating the coupling mechanisms among those different types of networks is essential in understanding the functions and mechanisms of cellular activities. In this article, we developed the first computational method for pattern mining across many two-layered graphs, with the two layers representing different types yet coupled biological networks. We formulated the problem of identifying frequent coupled clusters between the two layers of networks into a tensor-based computation problem, and proposed an efficient solution to solve the problem. We applied the method to 38 two-layered co-transcription and co-splicing networks, derived from 38 RNA-seq datasets. With the identified atlas of coupled transcription-splicing modules, we explored to what extent, for which cellular functions, and by what mechanisms transcription-splicing coupling takes place. PMID:22697243

  19. Beverage and water intake of healthy adults in some European countries.

    PubMed

    Nissensohn, Mariela; Castro-Quezada, Itandehui; Serra-Majem, Lluis

    2013-11-01

    Nutritional surveys frequently collect data on beverage consumption; however, information from different sources and different methodologies raises issues of comparability. The main objective of this review was to examine the techniques used for assessing beverage intake in European epidemiological studies and to describe the methods most frequently applied to assess it. Information on beverage intake available from European surveys and nutritional epidemiological investigations was obtained from gray literature. Twelve articles were included and relevant data were extracted. The studies were carried out on healthy adults using different types of assessments. The most frequently used tool was a 7-day dietary record. Only Germany used a specific beverage assessment tool (Beverage Dietary History). From the limited data available and the diversity of the methodology used, the results show that consumption of beverages differs between countries. Current epidemiological studies in Europe focusing on beverage intake are scarce. Further research is needed to clarify the amount of beverage intake in European populations.

  20. Review of guidelines and literature for handling missing data in longitudinal clinical trials with a case study.

    PubMed

    Liu, M; Wei, L; Zhang, J

    2006-01-01

    Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning, and conducting studies to minimize their impact. We also go beyond the coverage of these two documents: we provide a more detailed review of the basic concepts of missing data and frequently used terminology, give examples of typical missing data mechanisms, and discuss technical details and literature for several frequently used statistical methods and associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to one clinical program at the protocol design, data analysis plan, and other stages of a clinical trial.
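
    To make the contrast concrete, the sketch below compares a single-imputation approach (last observation carried forward) with a model-based chained-equations imputer from scikit-learn on an invented longitudinal outcome; it illustrates the distinction discussed above, not the guidance documents' procedures.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    # Hypothetical longitudinal outcome with dropout (NaN = missed visit).
    visits = pd.DataFrame({
        "week0": [10.0, 11.0, 9.0, 12.0],
        "week4": [9.0, 10.5, np.nan, 11.0],
        "week8": [8.5, np.nan, np.nan, 10.0],
    })

    # Single imputation (LOCF), discouraged as a primary analysis:
    locf = visits.ffill(axis=1)

    # A model-based alternative: iterative (chained-equations) imputation.
    mice = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(visits),
                        columns=visits.columns)
    print(locf, mice, sep="\n\n")
    ```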

  1. Development of an oximeter for neurology

    NASA Astrophysics Data System (ADS)

    Aleinik, A.; Serikbekova, Z.; Zhukova, N.; Zhukova, I.; Nikitina, M.

    2016-06-01

    Cerebral desaturation can occur during surgical manipulation while other parameters vary insignificantly. Prolonged intervals of cerebral anoxia can cause serious damage to the nervous system. A commonly used method for measuring cerebral blood flow relies on invasive catheters. Other techniques include single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI). Tomographic methods frequently require isotope administration, which may result in anaphylactic reactions to contrast media and associated nerve diseases. Moreover, the high cost and the need for continuous monitoring make it difficult to apply these techniques in clinical practice. Cerebral oximetry is a method for measuring oxygen saturation using infrared spectrometry, and reflectance pulse oximetry can also detect sudden changes in sympathetic tone. For this purpose, a reflectance pulse oximeter for use in neurology was developed. A reflectance oximeter has a definite advantage in that it can measure oxygen saturation on any part of the body. Preliminary results indicate that the device has good resolution and high reliability, and its modern circuit design gives it improved characteristics compared with existing devices.

  2. [A new formula for the measurement of rigor mortis: the determination of the FRR-index (author's transl)].

    PubMed

    Forster, B; Ropohl, D; Raule, P

    1977-07-05

    The manual examination of rigor mortis as currently practiced, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies be replaced by objective measurement of rigor mortis at the extremities. For that purpose, a method is described which can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.

  3. Automated quantitative assessment of proteins' biological function in protein knowledge bases.

    PubMed

    Mayr, Gabriele; Lepperdinger, Günter; Lackner, Peter

    2008-01-01

    Primary protein sequence data are archived in databases together with information regarding corresponding biological functions. In this respect, UniProt/Swiss-Prot is currently the most comprehensive collection, and it is routinely consulted when trying to unravel the biological role of hypothetical proteins. Bioscientists frequently extract single entries and evaluate them on a subjective basis. In the absence of a standardized procedure for scoring the existing knowledge regarding individual proteins, we report here a computer-assisted method, which we applied to score the present knowledge about any given Swiss-Prot entry. Applying this quantitative score allows proteins to be compared not only with respect to their sequence but also with respect to the comprehensiveness of their functional annotation. pfs analysis may also be applied for quality control of individual entries or for database management in order to rank entry listings.

  4. Reporting Qualitative Research: Standards, Challenges, and Implications for Health Design.

    PubMed

    Peditto, Kathryn

    2018-04-01

    This Methods column describes the existing reporting standards for qualitative research, their application to health design research, and the challenges to implementation. Intended for both researchers and practitioners, this article provides multiple perspectives on both reporting and evaluating high-quality qualitative research. Two popular standards exist for reporting qualitative research: the Consolidated Criteria for Reporting Qualitative Research (COREQ) and the Standards for Reporting Qualitative Research (SRQR). Though compiled using similar procedures, they differ in their criteria and the methods to which they apply. Creating and applying reporting criteria is inherently difficult due to the undefined and fluctuating nature of qualitative research when compared to quantitative studies. Qualitative research is expansive and occasionally controversial, spanning many different methods of inquiry and epistemological approaches. A "one-size-fits-all" standard for reporting qualitative research can be restrictive, but COREQ and SRQR both serve as valuable tools for developing responsible qualitative research proposals, effectively communicating research decisions, and evaluating submissions. Ultimately, tailoring a set of standards specific to health design research and its frequently used methods would ensure quality research and aid reviewers in their evaluations.

  5. Quantitative PCR analysis reveals a high incidence of large intragenic deletions in the FANCA gene in Spanish Fanconi anemia patients.

    PubMed

    Callén, E; Tischkowitz, M D; Creus, A; Marcos, R; Bueren, J A; Casado, J A; Mathew, C G; Surrallés, J

    2004-01-01

    Fanconi anaemia is an autosomal recessive disease characterized by chromosome fragility, multiple congenital abnormalities, progressive bone marrow failure and a high predisposition to develop malignancies. Most of the Fanconi anaemia patients belong to complementation group FA-A due to mutations in the FANCA gene. This gene contains 43 exons along a 4.3-kb coding sequence with a very heterogeneous mutational spectrum that makes the mutation screening of FANCA a difficult task. In addition, as the FANCA gene is rich in Alu sequences, it was reported that Alu-mediated recombination led to large intragenic deletions that cannot be detected in heterozygous state by conventional PCR, SSCP analysis, or DNA sequencing. To overcome this problem, a method based on quantitative fluorescent multiplex PCR was proposed to detect intragenic deletions in FANCA involving the most frequently deleted exons (exons 5, 11, 17, 21 and 31). Here we apply the proposed method to detect intragenic deletions in 25 Spanish FA-A patients previously assigned to complementation group FA-A by FANCA cDNA retroviral transduction. A total of eight heterozygous deletions involving from one to more than 26 exons were detected. Thus, one third of the patients carried a large intragenic deletion that would have not been detected by conventional methods. These results are in agreement with previously published data and indicate that large intragenic deletions are one of the most frequent mutations leading to Fanconi anaemia. Consequently, this technology should be applied in future studies on FANCA to improve the mutation detection rate. Copyright 2003 S. Karger AG, Basel
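
    The dosage-quotient logic behind quantitative PCR screening for heterozygous deletions can be sketched as follows; the peak areas and the decision threshold below are invented for illustration and are not the study's values.

    ```python
    def dosage_quotient(target_patient, control_patient,
                        target_reference, control_reference):
        """Quantitative multiplex PCR readout: the target-exon peak area
        is normalized to a control amplicon, then divided by the same
        ratio in a normal reference sample. DQ near 1.0 suggests two
        copies; near 0.5, a heterozygous deletion."""
        return ((target_patient / control_patient)
                / (target_reference / control_reference))

    dq = dosage_quotient(480.0, 1000.0, 950.0, 1000.0)
    status = "possible heterozygous deletion" if dq < 0.75 else "two copies"
    print(f"DQ = {dq:.2f} -> {status}")
    ```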

  6. Theories Applied to m-Health Interventions for Behavior Change in Low- and Middle-Income Countries: A Systematic Review.

    PubMed

    Cho, Yoon-Min; Lee, Seohyun; Islam, Sheikh Mohammed Shariful; Kim, Sun-Young

    2018-02-13

    Recently there has been dramatic increase in the use of mobile technologies for health (m-Health) in both high and low- and middle-income countries (LMICs). However, little is known whether m-Health interventions in LMICs are based on relevant theories critical for effective implementation of such interventions. This review aimed to systematically identify m-Health studies on health behavioral changes in LMICs and to examine how each study applied behavior change theories. A systematic review was conducted using the standard method from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. By searching electronic databases (MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials [CENTRAL]), we identified eligible studies published in English from inception to June 30, 2017. For the identified m-Health studies in LMICs, we examined their theoretical bases, use of behavior change techniques (BCTs), and modes of delivery. A total of 14 m-Health studies on behavioral changes were identified and, among them, only 5 studies adopted behavior change theory. The most frequently cited theory was the health belief model, which was adopted in three studies. Likewise, studies have applied only a limited number of BCTs. Among the seven BCTs identified, the most frequently used one was the social support (practical) technique for medication reminder and medical appointment. m-Health studies in LMICs most commonly used short messaging services and phone calls as modes of delivery for behavior change interventions. m-Health studies in LMICs are suboptimally based on behavior change theory yet. To maximize effectiveness of m-Health, rigorous delivery methods as well as theory-based intervention designs will be needed.

  7. Massive processing of pyro-chromatogram mass spectra (py-GCMS) of soil samples using the PARAFAC2 algorithm

    NASA Astrophysics Data System (ADS)

    Cécillon, Lauric; Quénéa, Katell; Anquetil, Christelle; Barré, Pierre

    2015-04-01

    Due to its large heterogeneity at all scales (from soil core to the globe), several measurements are often mandatory to get a meaningful value of a measured soil property. A large number of measurements can therefore be needed to study a soil property whatever the scale of the study. Moreover, several soil investigation techniques produce large and complex datasets, such as pyrolysis-gas chromatography-mass spectrometry (py-GCMS), which produces complex three-way data. In this context, straightforward methods designed to speed up data treatment are needed to deal with large datasets. Py-GCMS is a powerful and frequently used tool to characterize soil organic matter (SOM). However, the treatment of the results of a py-GCMS analysis of a soil sample is time consuming (number of peaks, co-elution, etc.), and the treatment of a large set of py-GCMS results is rather laborious. Moreover, peak position shifts and baseline drifts between analyses make the automation of GCMS data treatment difficult. These problems can be addressed using Parallel Factor Analysis 2 (PARAFAC2; Kiers et al., 1999; Bro et al., 1999). This algorithm has been applied frequently to chromatography data but has never been applied to analyses of SOM. We developed a Matlab routine based on existing Matlab packages dedicated to the simultaneous treatment of dozens of pyro-chromatogram mass spectra. We applied this routine to 40 soil samples. The benefits and expected improvements of our method will be discussed in our poster. References Kiers et al. (1999) PARAFAC2 - Part I. A direct fitting algorithm for the PARAFAC2 model. Journal of Chemometrics, 13: 275-294. Bro et al. (1999) PARAFAC2 - Part II. Modeling chromatographic data with retention time shifts. Journal of Chemometrics, 13: 295-309.
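
    Outside Matlab, a comparable treatment could use TensorLy's PARAFAC2 implementation; the sketch below is an assumption-laden illustration (argument names may differ across TensorLy versions) on simulated chromatogram slices of unequal retention-time length, which is exactly the shift problem PARAFAC2 tolerates.

    ```python
    import numpy as np
    from tensorly.decomposition import parafac2

    # Hypothetical stack of 40 pyro-chromatograms: each slice is a
    # (retention time x m/z) matrix; retention-time lengths may differ.
    rng = np.random.default_rng(3)
    slices = [rng.random((200 + 5 * (i % 3), 120)) for i in range(40)]

    # Joint decomposition into a small number of components (the rank is
    # chosen arbitrarily here; real use requires model selection).
    decomposition = parafac2(slices, rank=5, n_iter_max=200)
    ```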

  8. Living with earthquakes - development and usage of earthquake-resistant construction methods in European and Asian Antiquity

    NASA Astrophysics Data System (ADS)

    Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes

    2010-05-01

    Earthquakes are among the most horrible events of nature due to their unexpected occurrence, against which no spiritual means are available for protection. The only way of preserving life and property is applying earthquake-resistant construction methods. Ancient Greek architects of public buildings applied steel clamps embedded in lead casing to hold together columns and masonry walls during frequent earthquakes in the Aegean region. Elastic steel provided strength, while plastic lead casing absorbed minor shifts of blocks without fracturing rigid stone. Romans invented concrete and built all sizes of buildings as a single, inflexible unit. Masonry surrounding and decorating the concrete core of the wall did not bear load. Concrete resisted minor shaking, yielding only to forces higher than fracture limits. Roman building traditions survived the Dark Ages, and 12th century Crusader castles erected in earthquake-prone Syria survive until today in reasonably good condition. Concrete and steel clamping persisted side by side in the Roman Empire. Concrete was used for cheap construction compared to masonry. Applying lead-encased steel increased costs and was avoided whenever possible. Columns of the various forums in Italian Pompeii mostly lack steel fittings despite being situated in a well-known earthquake-prone area. Whether the frequent recurrence of earthquakes in the Naples region was known to the inhabitants of Pompeii is a matter of debate. Seemingly the shock of the AD 62 earthquake was not enough to apply well-known protective engineering methods throughout the reconstruction of the city before the AD 79 volcanic catastrophe. An independent engineering tradition developed on the island of Java (Indonesia). The mortar-less construction technique of 8th-9th century Hindu masonry shrines around Yogyakarta would allow scattering of blocks during earthquakes. To prevent dilapidation, an intricate mortise-and-tenon system was carved into adjacent faces of blocks. Only the outermost layer was treated this way; the core of the shrines was made of simple rectangular blocks. The system resisted both in-plane and out-of-plane shaking quite well, as proven by the survival of many shrines for more than a millennium, and by the fracturing of blocks instead of displacement during the 2006 Yogyakarta earthquake. Systematic use or disuse of known earthquake-resistant techniques in any one society depends on the perception of earthquake risk and on available financial resources. Earthquake-resistant construction practice is significantly more expensive than regular construction. Perception is influenced mostly by short individual and longer social memory. If earthquake recurrence time is longer than the preservation of social memory, and damaging quakes fade into the past, societies commit the same construction mistakes again and again. The length of this memory is possibly about a generation's lifetime. Events recurring less frequently than every 25-30 years can be readily forgotten, and the risk of recurrence considered negligible, not worth the costs of safe construction practices (an example is recurring flash floods in Hungary). Frequent earthquakes maintain safe construction practices, like the Java masonry technique throughout at least two centuries, and the Fachwerk tradition on modern Aegean Samos throughout 500 years of political and technological development. (OTKA K67583)

  9. Translations on Eastern Europe Political, Sociological, and Military Affairs, Number 1368.

    DTIC Science & Technology

    1977-03-21

    isotopes. The radioisotopes penetrate the body with the air that is breathed and, most frequently, with the consumption of fresh milk from cows... quantities of radioactive iodine could cause damage, the more so since milk is the children's main daily food. Should it be impossible to forbid the... consumption of fresh milk, it would be expedient to apply the method of "iodine prophylaxis." The personnel of the reactor and the population in the

  10. Model methodology for estimating pesticide concentration extremes based on sparse monitoring data

    USGS Publications Warehouse

    Vecchia, Aldo V.

    2018-03-22

    This report describes a new methodology for using sparse (weekly or less frequent observations) and potentially highly censored pesticide monitoring data to simulate daily pesticide concentrations and associated quantities used for acute and chronic exposure assessments, such as the annual maximum daily concentration. The new methodology is based on a statistical model that expresses log-transformed daily pesticide concentration in terms of a seasonal wave, flow-related variability, a long-term trend, and serially correlated errors. Methods are described for estimating the model parameters, generating conditional simulations of daily pesticide concentration given sparse and potentially highly censored observations, and estimating concentration extremes based on the conditional simulations. The model can be applied to datasets with as few as 3 years of record, as few as 30 total observations, and as few as 10 uncensored observations. The model was applied to atrazine, carbaryl, chlorpyrifos, and fipronil data for U.S. Geological Survey pesticide sampling sites with sufficient data for applying the model: a total of 112 sites were analyzed for atrazine, 38 for carbaryl, 34 for chlorpyrifos, and 33 for fipronil. The results are summarized in this report, and R functions, described herein and provided in an accompanying model archive, can be used to fit the model parameters and generate conditional simulations of daily concentrations for use in investigations involving pesticide exposure risk and uncertainty.
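
    The structure of the concentration model lends itself to a compact numerical sketch. The following Python fragment is a minimal illustration of the model components named above (seasonal wave, flow-related term, long-term trend, serially correlated errors), not the report's R implementation; all parameter values and the synthetic flow series are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate_log_concentration(days, log_flow, beta_flow=0.4,
                                   amp=1.2, phase=0.3, trend=-0.02,
                                   phi=0.9, sigma=0.5):
        """Simulate log-concentration = seasonal wave + flow-related term
        + long-term trend + AR(1) errors (illustrative parameters only)."""
        t = np.arange(days)
        seasonal = amp * np.sin(2 * np.pi * t / 365.25 + phase)
        flow_term = beta_flow * (log_flow - log_flow.mean())
        trend_term = trend * t / 365.25            # per-year drift
        # Serially correlated (AR(1)) errors
        eps = np.empty(days)
        eps[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))
        for i in range(1, days):
            eps[i] = phi * eps[i - 1] + rng.normal(0, sigma)
        return seasonal + flow_term + trend_term + eps

    days = 3 * 365
    log_flow = np.log(50 + 30 * rng.random(days))   # synthetic daily streamflow
    conc = np.exp(simulate_log_concentration(days, log_flow))
    # Annual maximum daily concentration, one of the report's target quantities
    print([conc[y * 365:(y + 1) * 365].max().round(2) for y in range(3)])
    ```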

  11. Prioritizing land management efforts at a landscape scale: a case study using prescribed fire in Wisconsin.

    PubMed

    Hmielowski, Tracy L; Carter, Sarah K; Spaul, Hannah; Helmers, David; Radeloff, Volker C; Zedler, Paul

    2016-06-01

    One challenge in the effort to conserve biodiversity is identifying where to prioritize resources for active land management. Cost-benefit analyses have been used successfully as a conservation tool to identify sites that provide the greatest conservation benefit per unit cost. Our goal was to apply cost-benefit analysis to the question of how to prioritize land management efforts, in our case the application of prescribed fire to natural landscapes in Wisconsin, USA. We quantified and mapped frequently burned communities and prioritized management units based on a suite of indices that captured ecological benefits, management effort, and the feasibility of successful long-term management actions. Data for these indices came from LANDFIRE, Wisconsin's Wildlife Action Plan, and a nationwide wildland-urban interface assessment. We found that the majority of frequently burned vegetation types occurred in the southern portion of the state. However, the highest priority areas for applying prescribed fire occurred in the central, northwest, and northeast portions of the state, where frequently burned vegetation patches were larger and where identified areas of high biological importance occurred. Although our focus was on the use of prescribed fire in Wisconsin, our methods can be adapted to prioritize other land management activities. Such prioritization is necessary to achieve the greatest possible benefits from limited funding for land management actions, and our results show that it is feasible at scales that are relevant for land management decisions.
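
    As a schematic illustration of the cost-benefit logic only (not the authors' actual index formulas, which draw on LANDFIRE and wildland-urban interface data), a unit-priority ranking that weighs benefit against effort and feasibility can be expressed in a few lines; all unit names and numbers below are hypothetical.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ManagementUnit:
        name: str
        ecological_benefit: float   # e.g., area of frequently burned vegetation
        management_effort: float    # proxy for the cost of prescribed fire
        feasibility: float          # 0-1 likelihood of long-term success

    def priority(u: ManagementUnit) -> float:
        # Benefit per unit effort, discounted by feasibility (illustrative form)
        return u.ecological_benefit * u.feasibility / u.management_effort

    units = [
        ManagementUnit("Central sands", 95.0, 10.0, 0.9),
        ManagementUnit("Southern prairie", 80.0, 25.0, 0.5),
        ManagementUnit("Northwest barrens", 70.0, 8.0, 0.8),
    ]
    for u in sorted(units, key=priority, reverse=True):
        print(f"{u.name}: priority {priority(u):.2f}")
    ```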

  12. Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute.

    PubMed

    Bamberger, Michael; Tarsilla, Michele; Hesse-Biber, Sharlene

    2016-04-01

    Many widely-used impact evaluation designs, including randomized control trials (RCTs) and quasi-experimental designs (QEDs), frequently fail to detect what are often quite serious unintended consequences of development programs. This seems surprising, as experienced planners and evaluators are well aware that unintended consequences frequently occur. Most evaluation designs are intended to determine whether there is credible evidence (statistical, theory-based or narrative) that programs have achieved their intended objectives, and the logic of many evaluation designs, even those considered the most "rigorous," does not permit the identification of outcomes that were not specified in the program design. We take the example of RCTs, as they are considered by many to be the most rigorous evaluation designs. We present a number of cases to illustrate how infusing RCTs with a mixed-methods approach (sometimes called an "RCT+" design) can strengthen the credibility of these designs and can also capture important unintended consequences. We provide a Mixed Methods Evaluation Framework that identifies 9 ways in which unintended consequences can occur, and we apply this framework to two of the case studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Statistical approaches to lifetime measurements with restricted observation times

    NASA Astrophysics Data System (ADS)

    Chen, X. C.; Zeng, Q.; Litvinov, Yu. A.; Tu, X. L.; Walker, P. M.; Wang, M.; Wang, Q.; Yue, K.; Zhang, Y. H.

    2017-09-01

    Two generic methods based on frequentism and Bayesianism are presented in this work aiming to adequately estimate decay lifetimes from measured data, while accounting for restricted observation times in the measurements. All the experimental scenarios that can possibly arise from the observation constraints are treated systematically and formulas are derived. The methods are then tested against the decay data of bare isomeric 94mRu44+ ions, which were measured using isochronous mass spectrometry with a timing detector at the CSRe in Lanzhou, China. Applying both methods in three distinct scenarios yields six different but consistent lifetime estimates. The deduced values are all in good agreement with a prediction based on the neutral-atom value modified to take the absence of internal conversion into account. Potential applications of such methods are discussed.
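
    The frequentist side of such an analysis can be sketched as a maximum-likelihood fit of an exponential decay truncated to the observation window [t_min, t_max]. The snippet below is a generic illustration with synthetic data and invented window values, not the authors' derived formulas.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    # Synthetic decay times, observable only within a restricted window
    true_tau, t_min, t_max = 10.0, 1.0, 25.0
    t = rng.exponential(true_tau, 5000)
    t = t[(t >= t_min) & (t <= t_max)]        # restricted observation times

    def neg_log_likelihood(tau):
        # pdf of an exponential truncated to [t_min, t_max]:
        # f(t) = (1/tau) exp(-t/tau) / (exp(-t_min/tau) - exp(-t_max/tau))
        norm = np.exp(-t_min / tau) - np.exp(-t_max / tau)
        return -(np.sum(-t / tau - np.log(tau)) - t.size * np.log(norm))

    res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 100.0),
                          method="bounded")
    print(f"MLE lifetime estimate: {res.x:.2f} (true {true_tau})")
    ```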

  14. Comparison of gravimetric and gas chromatographic methods for assessing performance of textile materials against liquid pesticide penetration.

    PubMed

    Shaw, Anugrah; Abbi, Ruchika

    2004-01-01

    Penetration of liquid pesticides through textile materials is a criterion for determining the performance of protective clothing used by pesticide handlers. The pipette method is frequently used to apply liquid pesticides onto textile materials to measure penetration. Typically, analytical techniques such as gas chromatography (GC) are used to measure percentage penetration; these techniques are labor intensive and costly. A simpler gravimetric method was developed, and tests were conducted to compare the gravimetric and GC methods of analysis. Three types of pesticide formulations and four fabrics were used for the study. Diluted pesticide formulations were pipetted onto the test specimens and percentage penetration was measured using the two methods. For the homogeneous formulation, the results of the two methods were fairly comparable. However, due to the filtering action of the textile materials, there were differences in percentage penetration between the two methods for the formulations that were not homogeneous.

  15. Study on the application of ambient vibration tests to evaluate the effectiveness of seismic retrofitting

    NASA Astrophysics Data System (ADS)

    Liang, Li; Takaaki, Ohkubo; Guang-hui, Li

    2018-03-01

    In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement; however, there are few effective methods to evaluate the effect of that reinforcement. Ambient vibration measurements were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. The method can evaluate the effect of seismic retrofitting qualitatively; evaluating it quantitatively remains difficult at this stage.
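
    A basic version of the comparison described here, identifying the dominant natural frequency (and hence natural period) from ambient acceleration records before and after retrofitting, can be sketched with an FFT peak pick. The signals below are synthetic stand-ins, not the authors' measurements; a stiffened structure is assumed to shift to a higher natural frequency.

    ```python
    import numpy as np

    def dominant_frequency(accel, fs):
        """Return the frequency (Hz) of the largest spectral peak."""
        spec = np.abs(np.fft.rfft(accel - accel.mean()))
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        return freqs[np.argmax(spec)]

    fs, dur = 100.0, 60.0                      # 100 Hz sampling, 60 s record
    t = np.arange(0, dur, 1.0 / fs)
    rng = np.random.default_rng(1)
    noise = lambda: 0.5 * rng.standard_normal(t.size)
    before = np.sin(2 * np.pi * 2.0 * t) + noise()  # 2.0 Hz: pre-retrofit
    after = np.sin(2 * np.pi * 2.6 * t) + noise()   # stiffened by retrofit

    f0, f1 = dominant_frequency(before, fs), dominant_frequency(after, fs)
    print(f"Natural period before: {1/f0:.2f} s, after: {1/f1:.2f} s")
    ```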

  16. Nuclear magnetic resonance and high-performance liquid chromatography techniques for the characterization of bioactive compounds from Humulus lupulus L. (hop).

    PubMed

    Bertelli, Davide; Brighenti, Virginia; Marchetti, Lucia; Reik, Anna; Pellati, Federica

    2018-06-01

    Humulus lupulus L. (hop) is one of the most widely cultivated crops, being a key ingredient in the brewing process. Many health-related properties have been described for hop extracts, making this plant gain more interest in the field of pharmaceutical and nutraceutical research. Among the analytical tools available for the phytochemical characterization of plant extracts, quantitative nuclear magnetic resonance (qNMR) represents a new and powerful technique. In this context, the present study was aimed at the development of a new, simple, and efficient qNMR method for the metabolite fingerprinting of bioactive compounds in hop cones, taking advantage of the novel ERETIC 2 tool. To the best of our knowledge, this is the first attempt to apply this method to complex matrices of natural origin, such as hop extracts. The qNMR method set up in this study was applied to the quantification of both prenylflavonoids and bitter acids in eight hop cultivars. The performance of this analytical method was compared with that of HPLC-UV/DAD, which represents the most frequently used technique in the field of natural product analysis. The quantitative data obtained for hop samples by means of the two aforementioned techniques highlighted that the amounts of bioactive compounds were slightly higher when qNMR was applied, although the order of magnitude of the values was the same. The accuracy of qNMR was comparable to that of the chromatographic method, thus proving to be a reliable tool for the analysis of these secondary metabolites in hop extracts. Graphical abstract: extraction and analytical methods applied in this work for the analysis of bioactive compounds in Humulus lupulus L. (hop) cones.
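
    The quantitative core of qNMR is a simple proportionality between signal integral, the number of contributing nuclei, and molar concentration. The following is a hedged illustration of that general relationship with invented numbers, not the ERETIC 2 workflow itself (which uses an electronically generated reference signal).

    ```python
    def qnmr_concentration(i_analyte, n_analyte, i_ref, n_ref, c_ref):
        """Molar concentration of an analyte from signal integrals (I),
        numbers of nuclei per molecule contributing to each signal (N),
        and a reference of known concentration:
        C_a = C_ref * (I_a / I_ref) * (N_ref / N_a)."""
        return c_ref * (i_analyte / i_ref) * (n_ref / n_analyte)

    # Hypothetical numbers: a 3-proton analyte signal integrating to 2.4
    # against a 1-proton reference of known 2.0 mM concentration at 1.0
    print(f"{qnmr_concentration(2.4, 3, 1.0, 1, 2.0):.2f} mM")
    ```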

  17. Vytra Healthcare forms a frequent user plan focusing on wellness.

    PubMed

    Herreria, J

    1998-01-01

    Vytra Healthcare's "Constellation Club" marks the first time the concept of wellness has been applied to health plans. The Constellation Club is comparable to a frequent flier club in the airline industry.

  18. Likelihood-based inference for discretely observed birth-death-shift processes, with applications to evolution of mobile genetic elements.

    PubMed

    Xu, Jason; Guttorp, Peter; Kato-Maeda, Midori; Minin, Vladimir N

    2015-12-01

    Continuous-time birth-death-shift (BDS) processes are frequently used in stochastic modeling, with many applications in ecology and epidemiology. In particular, such processes can model the evolutionary dynamics of transposable elements, important genetic markers in molecular epidemiology. Estimation of the effects of individual covariates on the birth, death, and shift rates of the process can be accomplished by analyzing patient data, but inferring these rates in a discretely and unevenly observed setting presents computational challenges. We propose a multi-type branching process approximation to BDS processes and develop a corresponding expectation maximization algorithm, where we use spectral techniques to reduce the calculation of expected sufficient statistics to low-dimensional integration. These techniques yield an efficient and robust optimization routine for inferring the rates of the BDS process, and apply broadly to multi-type branching processes whose rates can depend on many covariates. After rigorously testing our methodology in simulation studies, we apply our method to study the intrapatient time evolution of the IS6110 transposable element, a genetic marker frequently used during the estimation of epidemiological clusters of Mycobacterium tuberculosis infections. © 2015, The International Biometric Society.
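
    For orientation, a continuous-time birth-death-shift process can be simulated directly with the Gillespie algorithm. The sketch below tracks only the copy number of an element (a shift relocates a copy without changing the count) and uses made-up rates; it is not the authors' branching-process approximation or EM algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_bds(n0, birth, death, shift, t_end):
        """Gillespie simulation of a birth-death-shift process.
        Returns (times, copy numbers); shifts change location, not count."""
        t, n = 0.0, n0
        times, counts = [0.0], [n0]
        while t < t_end and n > 0:
            rates = np.array([birth * n, death * n, shift * n])
            total = rates.sum()
            t += rng.exponential(1.0 / total)   # waiting time to next event
            event = rng.choice(3, p=rates / total)
            if event == 0:
                n += 1        # birth: a new copy of the element
            elif event == 1:
                n -= 1        # death: a copy is lost
            # event == 2: shift relocates a copy; the count is unchanged
            times.append(t)
            counts.append(n)
        return times, counts

    times, counts = simulate_bds(n0=5, birth=0.02, death=0.03,
                                 shift=0.05, t_end=50)
    print(f"{len(times) - 1} events, final copy number {counts[-1]}")
    ```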

  19. Fire spread estimation on forest wildfire using ensemble kalman filter

    NASA Astrophysics Data System (ADS)

    Syarifah, Wardatus; Apriliani, Erna

    2018-04-01

    Wildfire is one of the most frequent disasters in the world; forest wildfires in particular cause forest populations to decline. Forest wildfires, whether naturally occurring or prescribed, are potential risks for ecosystems and human settlements. These risks can be managed by monitoring the weather, prescribing fires to limit available fuel, and creating firebreaks. With computer simulations we can predict and explore how fires may spread. A model of fire spread on forest wildfire was established to determine the fire properties; the fire spread model is based on a diffusion-reaction equation. There are many methods to estimate the spread of fire. The Ensemble Kalman Filter is a modified estimation method derived from the Kalman Filter algorithm that can be used to estimate linear and non-linear system models. This research applies the Ensemble Kalman Filter (EnKF) method to estimate the spread of fire in a forest wildfire. Before applying the EnKF method, the fire spread model is discretized using the finite difference method. Finally, the analysis is illustrated by numerical simulation using software. The simulation results show that the EnKF estimate tracks the system model more closely when the ensemble size is larger and when the covariances of the system model and measurement noise are smaller.
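
    The analysis step of the Ensemble Kalman Filter, the core of the estimation scheme described here, can be written compactly. The following is a generic perturbed-observation EnKF update on a toy state vector with invented dimensions and noise levels, not the authors' fire-spread code.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def enkf_update(ensemble, H, y, obs_cov):
        """Perturbed-observation EnKF analysis step.
        ensemble: (n_state, n_members), H: observation operator,
        y: observation vector, obs_cov: observation error covariance."""
        n_state, n_members = ensemble.shape
        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = X @ X.T / (n_members - 1)              # sample forecast covariance
        S = H @ P @ H.T + obs_cov
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        # Each member assimilates a perturbed copy of the observation
        y_pert = y[:, None] + rng.multivariate_normal(
            np.zeros(len(y)), obs_cov, n_members).T
        return ensemble + K @ (y_pert - H @ ensemble)

    # Toy example: a 10-cell "temperature" field with 3 point observations
    n_state, n_members = 10, 50
    truth = np.linspace(300.0, 600.0, n_state)
    ensemble = truth[:, None] + 40.0 * rng.standard_normal((n_state, n_members))
    H = np.zeros((3, n_state)); H[0, 2] = H[1, 5] = H[2, 8] = 1.0
    R = 4.0 * np.eye(3)
    y = H @ truth + rng.multivariate_normal(np.zeros(3), R)
    analysis = enkf_update(ensemble, H, y, R)
    print(np.abs(analysis.mean(axis=1) - truth).mean())  # mean error shrinks
    ```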

  20. Phase-Transition-Induced Pattern Formation Applied to Basic Research on Homeopathy: A Systematic Review.

    PubMed

    Kokornaczyk, Maria Olga; Scherr, Claudia; Bodrova, Natalia Borisovna; Baumgartner, Stephan

    2018-05-16

    Methods based on phase-transition-induced pattern formation (PTPF) are increasingly used in medical research; frequent application fields are medical diagnosis and basic research in homeopathy. Here, we present a systematic review of experimental studies concerning PTPF-based methods applied to homeopathy research. We also aimed at categorizing the PTPF methods included in this review. Experimental studies were collected from scientific databases (PubMed, Web of Science, Russian eLibrary) and from experts in the research field in question, following the PRISMA guidelines, and were rated according to pre-defined scientific criteria. The review included 15 experimental studies. We identified seven different PTPF methods applied in 12 experimental models. Among these methods, the phase transition was triggered through evaporation, freezing, or dissolution, and in most cases led to the formation of crystals. The first experimental studies concerning the application of PTPF methods in homeopathic research were performed in the first half of the 20th century; however, they were not continued in the following years. Only in the last decade have different research groups re-launched the idea, introducing new experimental approaches and computerized pattern-evaluation techniques. The PTPF methods identified here are proposed, for the first time, to be classified as one group of methods based on the same underlying physical phenomenon. Although the number of experimental studies in the area is still rather limited, the long tradition in the application of PTPF methods and the dynamics of present developments point to the high potential of these methods and indicate that they might meet the demand for scientific methods to study potentized preparations. The Faculty of Homeopathy.

  1. Space and Atmospheric Environments: From Low Earth Orbits to Deep Space

    NASA Technical Reports Server (NTRS)

    Barth, Janet L.

    2003-01-01

    Natural space and atmospheric environments pose a difficult challenge for designers of technological systems in space. The deleterious effects of environment interactions with the systems include degradation of materials, thermal changes, contamination, excitation, spacecraft glow, charging, radiation damage, and induced background interference. Design accommodations must be realistic with minimum impact on performance while maintaining a balance between cost and risk. The goal of applied research in space environments and effects is to limit environmental impacts at low cost relative to spacecraft cost and to infuse enabling and commercial off-the-shelf technologies into space programs. The need to perform applied research to understand the space environment in a practical sense and to develop methods to mitigate these environment effects is frequently underestimated by space agencies and industry. Applied science research in this area is critical because the complexity of spacecraft systems is increasing, and they are exposed simultaneously to a multitude of space environments.

  2. Detection of classical enterotoxins and identification of enterotoxin genes in Staphylococcus aureus from milk and dairy products.

    PubMed

    Morandi, S; Brasca, M; Lodi, R; Cremonesi, P; Castiglioni, B

    2007-09-20

    Milk and dairy products are frequently contaminated with enterotoxigenic Staphylococcus aureus, which is often involved in staphylococcal food poisoning. The distribution of genes encoding staphylococcal enterotoxins (SE) in S. aureus isolated from bovine, goat, sheep and buffalo milk and dairy products was compared with the corresponding SE production. A total of 112 strains of S. aureus were tested for SE production by immuno-enzymatic (SEA-SEE) and reversed passive latex agglutination (SEA-SED) methods, while multiplex PCR was applied to detect SE genes (sea, sec, sed, seg, seh, sei, sej and sel). Of the strains studied, 67% carried at least one SE gene, but only 52% produced a detectable amount of the classical antigenic SE types. The bovine isolates frequently carried SEA, SED and sej, while SEC and sel predominated in the goat and sheep strains. The results demonstrated (i) marked variation among enterotoxigenic S. aureus strains according to strain origin, and (ii) that the two methods yielded different information but concurred on the risk of foodstuff contamination by S. aureus.

  3. Methodological flaws introduce strong bias into molecular analysis of microbial populations.

    PubMed

    Krakat, N; Anjum, R; Demirel, B; Schröder, P

    2017-02-01

    In this study, we report how different cell disruption methods, PCR primers and in silico analyses can seriously bias results from microbial population studies, with consequences for the credibility and reproducibility of the findings. Our results emphasize the pitfalls of commonly used experimental methods that can seriously weaken the interpretation of results. Four different cell lysis methods, three commonly used primer pairs and various computer-based analyses were applied to investigate the microbial diversity of a fermentation sample composed of chicken dung. The fault-prone, but still frequently used, amplified rRNA gene restriction analysis was chosen to identify common weaknesses. In contrast to other studies, we focused on the complete analytical process, from cell disruption to in silico analysis, and identified potential error rates. This identified a wide disagreement of results between applied experimental approaches leading to very different community structures depending on the chosen approach. The interpretation of microbial diversity data remains a challenge. In order to accurately investigate the taxonomic diversity and structure of prokaryotic communities, we suggest a multi-level approach combining DNA-based and DNA-independent techniques. The identified weaknesses of commonly used methods to study microbial diversity can be overcome by a multi-level approach, which produces more reliable data about the fate and behaviour of microbial communities of engineered habitats such as biogas plants, so that the best performance can be ensured. © 2016 The Society for Applied Microbiology.

  4. Fractal Analysis of Rock Joint Profiles

    NASA Astrophysics Data System (ADS)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
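
    For orientation, a standard box-counting estimate for a digitized profile is sketched below. Note that, as the paper argues, the plain method must be modified before it yields sound dimensions for self-affine profiles; the normalization of both axes to the unit square used here is one simple step in that direction, and the whole snippet (including the synthetic Brownian profile) is illustrative only, not the authors' modified procedures.

    ```python
    import numpy as np

    def box_count_dimension(x, y, sizes):
        """Estimate a fractal dimension of a curve by box counting.
        Coordinates are first normalized to the unit square, one simple
        way of handling the different scaling of the two axes."""
        x = (x - x.min()) / (x.max() - x.min())
        y = (y - y.min()) / (y.max() - y.min())
        counts = []
        for s in sizes:
            boxes = {(int(xi / s), int(yi / s)) for xi, yi in zip(x, y)}
            counts.append(len(boxes))
        # Slope of log N(s) against log(1/s) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                              np.log(counts), 1)
        return slope

    # Synthetic rough profile: a Brownian trace (self-affine, H = 0.5)
    rng = np.random.default_rng(5)
    n = 2**12
    y = np.cumsum(rng.standard_normal(n))
    x = np.arange(n, dtype=float)
    sizes = [2.0**-k for k in range(2, 8)]
    print(f"Estimated dimension: {box_count_dimension(x, y, sizes):.2f}")
    ```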

  5. LC-MS/MS method development for quantitative analysis of acetaminophen uptake by the aquatic fungus Mucor hiemalis.

    PubMed

    Esterhuizen-Londt, Maranda; Schwartz, Katrin; Balsano, Evelyn; Kühn, Sandra; Pflugmacher, Stephan

    2016-06-01

    Acetaminophen is a pharmaceutical frequently found in surface water as a contaminant. Bioremediation, in particular mycoremediation, of acetaminophen is a method to remove this compound from waters. Owing to the lack of a quantitative analytical method for acetaminophen in aquatic organisms, the present study aimed to develop an LC-MS/MS method for its determination in the aquatic fungus Mucor hiemalis. The method was then applied to evaluate the uptake of acetaminophen by M. hiemalis cultured in pellet morphology. The method was robust, sensitive and reproducible, with a lower limit of quantification of 5 pg acetaminophen on column. It was found that M. hiemalis internalizes the pharmaceutical and bioaccumulates it with time. Therefore, M. hiemalis was deemed a suitable candidate for further studies to elucidate its pharmaceutical tolerance and longevity in mycoremediation applications. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. [Basic Regularities and Characteristics of Compound Reinforcing-reducing Manipulation of Acupuncture Revealed by Data Mining].

    PubMed

    Yang, Qing-qing; Jia, Chun-sheng; Wang, Jian-ling; Li, Jun-lei; Feng, Xin-xin; Tan, Zhan-na; Li, Bo-ying; Zhu, Xue-liang; Shi, Jing; Sun, Yan-hui; Li, Xiao-feng; Xu, Jing; Zhang, Xuan-ping; Zhang, Xin; Du, Yu-zhu; Bao, Na; Wang, Qiong

    2016-04-01

    To explore the regularities and features of the compound reinforcing-reducing manipulation (CRRM) of acupuncture filiform needles in the treatment of clinical conditions or diseases by using data mining techniques, so as to guide clinical practice. First, a database on the CRRM of filiform needles for different clinical problems was established by collection, sorting, screening, recording, collation and data extraction of the related original papers published in journals and conference proceedings, and of related academic dissertations, from Jan. 1, 1950 to Jan. 31, 2015, using the key words "acupuncture", "moxibustion", "needling" and "filiform needle", and according to the inclusion and exclusion criteria. A total of 130,835 papers meeting the inclusion criteria were collected. Outcomes of the data mining in the present study showed that (1) CRRM is most frequently applied in internal medicine, followed by surgery, gynecology, ophthalmology and otorhinolaryngology, dermatology, and pediatrics, successively, mostly for lumbago and leg pain; (2) the heat-producing needling manipulation is the most frequently applied technique, followed by cool-producing needling, dragon-tiger warring, yang occluding in yin, and yin occluding in yang techniques; (3) the highest effective rate of CRRM is for problems of pediatrics, followed by those of internal medicine, surgery, ophthalmology and otorhinolaryngology, dermatology, and gynecology; (4) the most frequently used acupoints are Zusanli (ST 36), then Sanyinjiao (SP 6), stimulated by heat-producing needling; Zusanli (ST 36), then Quchi (LI 11), stimulated by cool-producing needling; and Huantiao (GB 30), stimulated by dragon-tiger warring needling. The compound reinforcing-reducing manipulation of acupuncture is most frequently applied to problems in internal medicine, predominately for lumbago and leg pain, and the best effectiveness is for pediatric conditions. The heat-producing and cool-producing needling are most frequently applied at Zusanli (ST 36), and the dragon-tiger warring manipulation is most frequently applied at Huantiao (GB 30).

  7. Variation of strain rate sensitivity index of a superplastic aluminum alloy in different testing methods

    NASA Astrophysics Data System (ADS)

    Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab

    2017-10-01

    The strain rate sensitivity index, or m-value, is commonly applied to evaluate the impact of strain rate on the viscoplastic behaviour of materials. The m-value has frequently been treated as a constant in modeling material behaviour in numerical simulations of superplastic forming processes. However, the impact of testing variables on measured m-values has not been investigated comprehensively. In this study, the m-value for a superplastic grade of aluminum alloy (AA5083) was investigated. The conditions and parameters that influence the strain rate sensitivity of the material are compared across three different testing methods: the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470 °C) and at strain rates up to 0.1 s-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
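
    The m-value itself is defined as the slope of log flow stress against log strain rate, m = d ln σ / d ln ε̇, at fixed strain and temperature. A minimal computation from (strain rate, stress) pairs, using invented numbers rather than the study's measurements, also shows how local slopes expose a non-constant m:

    ```python
    import numpy as np

    def strain_rate_sensitivity(strain_rates, stresses):
        """m = d(ln sigma) / d(ln strain_rate), fitted over the data range."""
        m, _ = np.polyfit(np.log(strain_rates), np.log(stresses), 1)
        return m

    # Hypothetical flow stresses (MPa) at increasing strain rates (1/s)
    rates = np.array([1e-4, 1e-3, 1e-2, 1e-1])
    stresses = np.array([4.0, 7.5, 14.0, 26.0])
    print(f"m over the full range: {strain_rate_sensitivity(rates, stresses):.2f}")

    # Local m-values between successive rates reveal any rate dependence
    local_m = np.diff(np.log(stresses)) / np.diff(np.log(rates))
    print("local m-values:", np.round(local_m, 2))
    ```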

  8. Careflow Mining Techniques to Explore Type 2 Diabetes Evolution.

    PubMed

    Dagliati, Arianna; Tibollo, Valentina; Cogni, Giulia; Chiovato, Luca; Bellazzi, Riccardo; Sacchi, Lucia

    2018-03-01

    In this work we describe the application of a careflow mining algorithm to detect the most frequent patterns of care in a cohort of type 2 diabetes patients. The applied method enriches the detected patterns with clinical data to define temporal phenotypes across the studied population. Novel phenotypes are discovered from the heterogeneous data of 424 Italian patients and compared in terms of metabolic control and complications. Results show that careflow mining can help summarize the complex evolution of the disease into meaningful patterns, which are also significant from a clinical point of view.

  9. Treatment of adolescent sexual offenders: theory-based practice.

    PubMed

    Sermabeikian, P; Martinez, D

    1994-11-01

    The treatment of adolescent sexual offenders (ASOs) has its theoretical underpinnings in social learning theory. Although social learning theory has been frequently cited in the literature, a comprehensive application of this theory to practice has not been mapped out. The social learning and social cognitive theories of Bandura appear to be particularly relevant to the group treatment of this population. The application of these theories to practice, as demonstrated in a program model, is discussed as a means of showing how theory-driven practice methods can be developed.

  10. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.

  11. Sea otter research methods and tools

    USGS Publications Warehouse

    Bodkin, James L.; Maldini, Daniela; Calkins, Donald; Atkinson, Shannon; Meehan, Rosa

    2004-01-01

    Sea otters possess physical characteristics and life history attributes that provide both opportunity and constraint to their study. Because of their relatively limited diving ability they occur in nearshore marine habitats that are usually viewable from shore, allowing direct observation of most behaviors. Because sea otters live nearshore and forage on benthic invertebrates, foraging success and diet are easily measured. Because they rely almost exclusively on their pelage for insulation, which requires frequent grooming, successful application of external tags or instruments has been limited to attachments in the interdigital webbing of the hind flippers. Techniques to surgically implant instruments into the intraperitoneal cavity are well developed and routinely applied. Because they have relatively small home ranges and rest in predictable areas, they can be recaptured with some predictability using closed-circuit scuba diving technology. The purpose of this summary is to identify some of the approaches, methods, and tools that are currently engaged for the study of sea otters, and to suggest potential avenues for applying advancing technologies.

  12. Novel Spectrophotometric Method for the Assay of Captopril in Dosage Forms using 2,6-Dichloroquinone-4-Chlorimide

    PubMed Central

    El-Enany, Nahed; Belal, Fathalla; Rizk, Mohamed

    2008-01-01

    A simple spectrophotometric method was developed for the determination of captopril (CPL) in pharmaceutical preparations. The method is based on coupling captopril with 2,6-dichloroquinone-4-chlorimide (DCQ) in dimethylsulphoxide. The yellow reaction product was measured at 443 nm. The absorbance-concentration plot was rectilinear over the range of 10-50 μg/mL, with a minimum detection limit (LOD) of 0.66 μg/mL and a quantification limit (LOQ) of 2.0 μg/mL. The different experimental parameters affecting the development and stability of the color were carefully studied and optimized. The proposed method was successfully applied to the analysis of commercial tablets, and the results were in good agreement with those obtained using official and reference spectrophotometric methods. Hydrochlorothiazide, which is frequently co-formulated with CPL, did not interfere with the assay. A proposal for the reaction pathway is presented. PMID:23675082
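
    The rectilinear absorbance-concentration relationship reported here is the basis of an ordinary least-squares calibration. The sketch below uses invented absorbance values and the common LOD = 3.3 σ/slope and LOQ = 10 σ/slope conventions, rather than the authors' exact computation, to show how such a curve is fitted and inverted.

    ```python
    import numpy as np

    # Hypothetical calibration standards (ug/mL) and absorbances at 443 nm
    conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
    absorb = np.array([0.145, 0.290, 0.442, 0.580, 0.731])

    slope, intercept = np.polyfit(conc, absorb, 1)
    residual_sd = np.std(absorb - (slope * conc + intercept), ddof=2)

    lod = 3.3 * residual_sd / slope    # common ICH-style detection limit
    loq = 10.0 * residual_sd / slope   # quantification limit

    def concentration(a):
        """Invert the calibration line to read a concentration."""
        return (a - intercept) / slope

    print(f"LOD {lod:.2f}, LOQ {loq:.2f} ug/mL; "
          f"A=0.36 -> {concentration(0.36):.1f} ug/mL")
    ```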

  13. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSIs of natural scenes; it uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples, in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI classification methods.

  14. A three phase optimization method for precopy based VM live migration.

    PubMed

    Sharma, Sangeeta; Chawla, Meenu

    2016-01-01

    Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators to manage datacenters efficiently, and it reduces service interruption by transferring the virtual machine without stopping it at the source. Transfer of a large number of virtual machine memory pages results in long migration time as well as downtime, which also affects overall system performance. This situation becomes unbearable when migration takes place over a slower network or across a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to trace out the issues responsible for its performance drops. In order to address these issues, this paper proposes the three phase optimization (TPO) method. It works in three phases, as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple run-length encoding (RLE) compression technique. As a result, each phase significantly reduces total pages transferred, total migration time and downtime, respectively. The proposed TPO method is evaluated using different representative workloads in a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71%, total migration time by 70% and downtime by 3% for higher workloads, and it does not impose significant overhead compared to the traditional precopy method. A comparison of the TPO method with other methods is also made to demonstrate its effectiveness. The TPO and precopy methods were also tested with different numbers of iterations; the TPO method gives better performance even with fewer iterations.
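
    Phase (iii) relies on run-length encoding, which pays off on memory pages containing long runs of identical bytes (zeroed pages being the typical case). A minimal byte-oriented RLE, illustrative rather than the paper's exact codec, is:

    ```python
    def rle_encode(data: bytes) -> list[tuple[int, int]]:
        """Encode bytes as (value, run_length) pairs."""
        runs = []
        i = 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            runs.append((data[i], j - i))
            i = j
        return runs

    def rle_decode(runs: list[tuple[int, int]]) -> bytes:
        return b"".join(bytes([v]) * n for v, n in runs)

    # A mostly-zero 4 KiB memory page compresses to a handful of runs
    page = b"\x00" * 3900 + b"\x17\x17\x2a" + b"\x00" * 193
    encoded = rle_encode(page)
    assert rle_decode(encoded) == page
    print(f"{len(page)} bytes -> {len(encoded)} runs")
    ```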

  15. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
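
    The core idea, applying a series of one-dimensional transformations to move data onto a common grid, can be illustrated with linear interpolation applied along each axis in turn. This sketch assumes a simple 2-D (time × height) field with invented coordinates; it is not the ARM framework's implementation.

    ```python
    import numpy as np

    def regrid_1d_per_axis(values, src_time, src_height, dst_time, dst_height):
        """Consolidate a 2-D field onto a common grid by two successive
        one-dimensional interpolations: first along time, then height."""
        # Pass 1: interpolate every height column onto the target time grid
        tmp = np.empty((dst_time.size, src_height.size))
        for j in range(src_height.size):
            tmp[:, j] = np.interp(dst_time, src_time, values[:, j])
        # Pass 2: interpolate every time row onto the target height grid
        out = np.empty((dst_time.size, dst_height.size))
        for i in range(dst_time.size):
            out[i, :] = np.interp(dst_height, src_height, tmp[i, :])
        return out

    src_time = np.array([0.0, 60.0, 120.0, 180.0])      # seconds
    src_height = np.array([10.0, 50.0, 100.0])          # meters
    field = np.outer(np.sin(src_time / 60.0), src_height)  # synthetic data
    common = regrid_1d_per_axis(field, src_time, src_height,
                                np.arange(0.0, 181.0, 30.0),
                                np.array([25.0, 75.0]))
    print(common.shape)   # (7, 2): the consolidated, common-grid field
    ```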

  16. Citrus transformation using juvenile tissue explants.

    PubMed

    Orbović, Vladimir; Grosser, Jude W

    2015-01-01

    The most frequently used method for the production of citrus transgenic plants is Agrobacterium-mediated transformation of tissues found on explants obtained from juvenile seedlings. Within the last decade, and especially within the last 5-6 years, this robust method has been employed to produce thousands of transgenic plants. With newly applied screening methods that allow easier and faster detection of transgenic shoots, estimates of the transformation rate for some cultivars have gone up, making this approach even more attractive. Although adjustments have to be made regarding the (varietal) source of the starting material and the Agrobacterium strain used in each experiment performed, the major steps of this procedure have not changed significantly, if at all. Transgenic citrus plants produced this way belong to cultivars of rootstocks, sweet oranges, grapefruits, mandarins, limes, and lemons.

  17. A Review of Trend of Nursing Theories related Caregivers in Korea

    PubMed Central

    Hae Kim, Sung; Choi, Yoona; Lee, Ji-Hye; Jang, Da-El; Kim, Sanghee

    2018-01-01

    Background: The prevalence of chronic diseases has increased rapidly due to population aging. As the duration of care needs increases, caregivers' socioeconomic burdens have also increased. Objective: This review examines the attributes of the caregiving experience and the quality of life (QOL) of caregivers in Korea, with a focus on the application of nursing theory. Method: We reviewed studies on caregivers caring for adult patients published up to 2016 in 4 biomedical research portal websites or databases. A total of 1,939 studies were identified through the keyword search; 145 studies were selected through a screening process, of which 17 studies applied a theory. The selected studies were analyzed in accordance with a structured analysis format. Results: Quantitative studies accounted for 76.6%, while 22.1% were qualitative studies and 1.3% were triangulation studies. Caregiver-related studies increased after 2000. Most frequently, the caregivers were spouses (28.4%), and care was most frequently provided to recipients affected by stroke (22.5%). The 17 theory-based studies described 20 theories (70% psychology theories, 30% nursing theories); the most frequent nursing theory was the theory of stress, appraisal and coping. Conclusion: This study sought to better understand caregiving through the analysis of Korean studies on the caregiving experience and caregivers' QOL, and it provides empirical data for nursing by identifying the nursing theories applied to the caregiving experience and caregivers' QOL. The results suggest the need for further expansion of nursing theories and their greater utilization in studies of caregiving. PMID:29515682

  18. FIXED DOSE COMBINATIONS WITH SELECTIVE BETA-BLOCKERS: QUANTITATIVE DETERMINATION IN BIOLOGICAL FLUIDS.

    PubMed

    Mahu, Ştefania Corina; Hăncianu, Monica; Agoroaei, Luminiţa; Grigoriu, Ioana Cezara; Strugaru, Anca Monica; Butnaru, Elena

    2015-01-01

    Hypertension is one of the most common causes of death, and remains a complex and incompletely controlled disease for millions of patients. Metoprolol, bisoprolol, nebivolol and atenolol are selective beta-blockers frequently used in the management of arterial hypertension, alone or in fixed combination with other substances. This study presents the analytical methods most used for the simultaneous determination in biological fluids of fixed combinations containing selective beta-blockers. Articles in the PubMed, Science Direct and Wiley Journals databases published between 2004 and 2014 were reviewed. Methods such as liquid chromatography-tandem mass spectrometry (LC-MS/MS), high performance liquid chromatography (HPLC) or high performance liquid chromatography-mass spectrometry (HPLC-MS) were used for the determination of fixed combinations with beta-blockers in human plasma, rat plasma and human breast milk. The LC-MS/MS method was used for the simultaneous determination of fixed combinations of metoprolol with simvastatin, hydrochlorothiazide or ramipril, and of combinations of nebivolol and valsartan, or atenolol and amlodipine. Biological samples were processed by protein precipitation techniques or by liquid-liquid extraction. For the determination of fixed dose combinations of felodipine and metoprolol in rat plasma, liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) was applied, using phenacetin as the internal standard. The HPLC-MS method was applied for the determination of bisoprolol and hydrochlorothiazide in human plasma, and the HPLC method for the determination of atenolol and chlorthalidone in human breast milk and human plasma. The analytical methods were validated according to the specialized guidelines and applied to biological samples, which confirms the ongoing interest of researchers in this field.

  19. Learning predictive models that use pattern discovery--a bootstrap evaluative approach applied in organ functioning sequences.

    PubMed

    Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen

    2010-08-01

    An important problem in Intensive Care is how to predict, on a given day of stay, the eventual hospital mortality of a specific patient. A recent approach to this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating its prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior to a traditional method that does not use temporal sequences, when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression using pre-specified covariates is known in the statistical literature; using inductive methods of prognostic models based on temporal sequence discovery within the bootstrap procedure is, however, novel, at least for predictive models in Intensive Care. The results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition, we illustrate the insights gained by the analyst into the discovered FTSs from the bootstrap samples. Copyright 2010 Elsevier Inc. All rights reserved.
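
    The .632 bootstrap estimator named here combines the optimistic resubstitution error with the pessimistic out-of-bag bootstrap error as err_.632 = 0.368 · err_train + 0.632 · err_oob. A generic sketch for any fit/predict pair follows; the toy threshold classifier and data are invented stand-ins, not the authors' FTS-based models.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def err_632(fit, predict, X, y, n_boot=100):
        """The .632 bootstrap error: 0.368*resubstitution + 0.632*out-of-bag."""
        n = len(y)
        model = fit(X, y)
        err_train = np.mean(predict(model, X) != y)
        oob_errs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)               # bootstrap sample
            oob = np.setdiff1d(np.arange(n), idx)     # left-out cases
            if oob.size == 0:
                continue
            m = fit(X[idx], y[idx])
            oob_errs.append(np.mean(predict(m, X[oob]) != y[oob]))
        return 0.368 * err_train + 0.632 * np.mean(oob_errs)

    # Toy one-dimensional threshold classifier as the "inductive method"
    fit = lambda X, y: X[y == 1].mean()               # learn a class-1 center
    predict = lambda center, X: (X > center - 0.5).astype(int)
    X = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
    y = np.array([0] * 100 + [1] * 100)
    print(f".632 error estimate: {err_632(fit, predict, X, y):.3f}")
    ```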

  20. Microbiological study of lactic acid bacteria in kefir grains by culture-dependent and culture-independent methods.

    PubMed

    Chen, Hsi-Chia; Wang, Sheng-Yao; Chen, Ming-Ju

    2008-05-01

    Lactic acid bacteria (LAB) in kefir grains of different origins were first assessed using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) in a culture-dependent way, and the results were further confirmed by DNA sequencing techniques. Results indicated that the combined method of cultivation with PCR-DGGE and subsequent DNA sequencing could successfully identify four LAB strains from three kefir grains from Taiwan (named Hsinchu, Mongolia and Ilan). Lactobacillus kefiri accounted, in the three kefir grains, for at least half of the isolated colonies, while Lb. kefiranofaciens was the second most frequently isolated species. Leuconostoc mesenteroides was less frequently found, but still in all three kefir grains, in contrast to Lactococcus lactis, which based on culture-dependent isolation was found in only two of the kefir grains. It was interesting to find that all three kefir grains contained similar LAB species. Furthermore, DGGE as a culture-independent method was also applied to detect the LAB strains. Results indicated that Lb. kefiranofaciens was found in all three kefir grains, whereas Lb. kefiri was only observed in the Hsinchu kefir grain and Lc. lactis was found in both the Mongolia and Ilan samples. Two additional strains, Pseudomonas spp. and E. coli, were also detected in the kefir grains.

  1. Multi-target determination of organic ultraviolet absorbents in organism tissues by ultrasonic assisted extraction and ultra-high performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Peng, Xianzhi; Jin, Jiabin; Wang, Chunwei; Ou, Weihui; Tang, Caiming

    2015-03-06

    A sensitive and reliable method was developed for multi-target determination of the 13 most widely used organic ultraviolet (UV) absorbents (including UV filters and UV stabilizers) in aquatic organism tissues. The organic UV absorbents were extracted using ultrasonic-assisted extraction, purified via gel permeation chromatography coupled with silica gel column chromatography, and determined by ultra-high performance liquid chromatography-tandem mass spectrometry. Recoveries of the UV absorbents from organism tissues (fish filet) mostly ranged from 70% to 120%, with satisfactory reproducibility. Method quantification limits were 0.003-1.0 ng g(-1) dry weight (dw), except for 2-ethylhexyl 4-methoxycinnamate. The method was applied to the analysis of the UV absorbents in wild and farmed aquatic organisms collected from the Pearl River Estuary, South China. 2-Hydroxy-4-methoxybenzophenone and UV-P were frequently detected in both wild and farmed marine organisms at low ng g(-1) dw levels. 3-(4-Methylbenzylidene)camphor and most of the benzotriazole UV stabilizers were also frequently detected in maricultured fish. Octocrylene and 2-ethylhexyl 4-methoxycinnamate were not detected in any sample. This work lays the basis for in-depth study of the bioaccumulation and biomagnification of UV absorbents in the marine environment. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Adaptation of High-Throughput Screening in Drug Discovery—Toxicological Screening Tests

    PubMed Central

    Szymański, Paweł; Markowicz, Magdalena; Mikiciuk-Olasik, Elżbieta

    2012-01-01

    High-throughput screening (HTS) is one of the newest techniques used in drug design and may be applied in the biological and chemical sciences. This method, thanks to the robots, detectors and software that regulate the whole process, enables a series of analyses of chemical compounds to be conducted in a short time, and the affinity of compounds for biological structures, which is often related to toxicity, to be defined. Since 2008 we have implemented the automation of this technique and, as a consequence, gained the possibility to examine 100,000 compounds per day. The HTS method is increasingly utilized in conjunction with analytical techniques such as NMR or coupled methods, e.g., LC-MS/MS. Series of studies enable the establishment of the rate of affinity for targets or the level of toxicity. Moreover, research is conducted concerning the conjugation of nanoparticles with drugs and the determination of the toxicity of such structures; for these purposes, cell lines are frequently used. Due to the miniaturization of all systems, it is possible to examine a compound's toxicity with only 1-3 mg of the compound. Determination of cytotoxicity in this way leads to a significant decrease in expenditure and a reduction in the length of the study. PMID:22312262

  3. Degree of anisotropy as an automated indicator of rip channels in high resolution bathymetric models

    NASA Astrophysics Data System (ADS)

    Trimble, S. M.; Houser, C.; Bishop, M. P.

    2017-12-01

    A rip current is a concentrated seaward flow of water that forms in the surf zone of a beach as a result of alongshore variations in wave breaking. Rips can carry swimmers swiftly into deep water, and they are responsible for hundreds of fatal drownings and thousands of rescues worldwide each year. These currents form regularly alongside hard structures like piers and jetties, and can also form along sandy coasts when there is a three-dimensional bar morphology. This latter rip type tends to be variable in strength and location, making such rips arguably the most dangerous to swimmers and the most difficult to identify. These currents form in characteristic rip channels in surf-zone bathymetry, in which the primary axis of self-similarity is oriented shore-normal. This paper demonstrates a new method for automating the identification of such rip channels in bathymetric digital surface models (DSMs) built from bathymetric data collected by various remote sensing methods. The degree of anisotropy is used to detect rip channels and distinguishes between sandbars, rip channels, and other beach features. This has implications for coastal geomorphology theory and safety practices. As technological advances increase access to, and the accuracy of, topobathy mapping methods in the surf zone, frequent nearshore bathymetric DSMs could be more easily captured and processed, then analyzed with this method, resulting in localized, automated, and frequent detection of rip channels. This could ultimately reduce rip-related fatalities worldwide (i) in present mitigation, by identifying the current location of rip channels, (ii) in forecasting, by tracking a channel's evolution through multiple DSMs, and (iii) in rip education, by improving local lifeguard knowledge of the rip hazard. Although this paper only applies the analysis of the degree of anisotropy to the identification of rip channels, the parameter can be applied to multiple facets of barrier island morphological analysis.

  4. A Novel Method to Detect Early Colorectal Cancer Based on Chromosome Copy Number Variation in Plasma.

    PubMed

    Xu, Jun-Feng; Kang, Qian; Ma, Xing-Yong; Pan, Yuan-Ming; Yang, Lang; Jin, Peng; Wang, Xin; Li, Chen-Guang; Chen, Xiao-Chen; Wu, Chao; Jiao, Shao-Zhuo; Sheng, Jian-Qiu

    2018-01-01

    Colonoscopy screening has been broadly accepted for evaluating the risk and incidence of colorectal cancer (CRC) during health examinations of outpatients. However, the intrusiveness, complexity and discomfort of colonoscopy may limit its application and patient compliance. Thus, more reliable and convenient diagnostic methods are necessary for CRC screening. Genome instability, especially copy-number variation (CNV), is a hallmark of cancer and has proven potential for clinical application. We determined the diagnostic potential of chromosomal CNV at the arm level by whole-genome sequencing of CRC plasma samples (n = 32) and healthy controls (n = 38). Arm-level CNVs were determined, and the consistency of arm-level CNVs between plasma and tissue was further analyzed. Two methods, a regular z score and a trained support vector machine (SVM) classifier, were applied for the detection of colorectal cancer. In plasma samples of CRC patients, the most frequent deletions were detected on chromosomes 6, 8p, 14q and 1p, and the most frequent amplifications occurred on chromosomes 19, 5, 2, 9p and 20p. These arm-level alterations detected in plasma were also observed in tumor tissues. We showed that the specificity of the regular z score analysis for the detection of colorectal cancer was 86.8% (33/38), whereas its sensitivity was only 56.3% (18/32). Applying a trained SVM classifier (n = 40 in the training group) to detect colorectal cancer in the test samples (n = 30), a sensitivity of 91.7% (11/12) and a specificity of 88.9% (16/18) were finally reached. Furthermore, all five early CRC patients in stages I and II were successfully detected. A trained SVM classifier based on arm-level CNVs can thus be used as a promising method to screen early-stage CRC. © 2018 The Author(s). Published by S. Karger AG, Basel.
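
    The "regular z score" approach described here amounts to standardizing each arm-level coverage ratio against the distribution of the same arm in healthy controls and flagging extreme arms. A schematic version with synthetic numbers (not the study's sequencing pipeline) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def arm_z_scores(sample_ratios, control_ratios):
        """z score of each chromosome arm's coverage ratio in a plasma
        sample against the mean and SD of that arm in healthy controls."""
        mu = control_ratios.mean(axis=0)
        sd = control_ratios.std(axis=0, ddof=1)
        return (sample_ratios - mu) / sd

    n_arms = 39                                    # autosomal arms (approx.)
    controls = rng.normal(1.0, 0.01, size=(38, n_arms))  # 38 healthy plasmas
    patient = rng.normal(1.0, 0.01, size=n_arms)
    patient[7] -= 0.05                             # simulated 8p-like loss
    patient[18] += 0.06                            # simulated chr19-like gain

    z = arm_z_scores(patient, controls)
    flagged = np.where(np.abs(z) > 3)[0]
    print("arms flagged:", flagged, "z scores:", np.round(z[flagged], 1))
    ```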

  5. [The styles of coping in stressful situations and the strain of psychological complaints in relation to tobacco smoking in senior secondary school adolescents].

    PubMed

    Małkowska-Szkutnik, Agnieszka; Mazur, Joanna

    2012-01-01

    A stress-coping style is a relatively constant tendency to apply specific methods of coping with a situation perceived as stressful. The study examined the links between stress-coping styles and tobacco smoking among young people; the strain of psychological complaints was also considered. The aim of this paper is to examine the relationship between the frequency of tobacco smoking and both stress-coping styles and the strain of psychological complaints. The presented data come from studies conducted at the Institute of Mother and Child in 2011 among 997 students of senior secondary schools of various types; the average age of those surveyed was 16.6 years. Selected scales and questions from the CHIP-AE questionnaire were used, and the surveyed were divided into three groups: non-smokers, occasional smokers and frequent smokers. The scale of psychological complaints was used, and the CISS questionnaire was used to examine stress-coping styles. Summary indexes of stress-coping styles were devised, and the following styles were identified: task-oriented, emotion-oriented, distraction-oriented and social-diversion-oriented. A single-factor analysis of variance was used. Of the surveyed adolescents, 55.6% had never smoked tobacco, 16.1% smoked occasionally and 28.4% smoked frequently; the percentage of frequent smokers was higher among boys (32%) than girls (25%). A link was established between tobacco smoking and selected stress-coping styles, as well as the strain of psychological complaints. Parallel to the rising frequency of smoking, the tendency to apply the emotion-oriented, distraction-oriented and social-diversion-oriented styles increases; individuals who had never smoked had lower indexes of these stress-coping styles than frequent smokers, and no correlation was found between tobacco smoking and the task-oriented style. It was also determined that frequent smokers suffered more often from psychological complaints than non-smokers (8.70 (SD = 6.99) vs. 6.57 (SD = 6.05); p < 0.001). Frequent tobacco use is thus related to a greater strain of psychological complaints, and is connected with the choice of stress-coping strategies that focus on one's own emotions and on distractions rather than on efforts aimed at solving the problem. It may therefore be assumed that tobacco smoking intensifies behaviours that make it impossible to resolve a stressful situation.

  6. Exploratory Factor Analysis as a Construct Validation Tool: (Mis)applications in Applied Linguistics Research

    ERIC Educational Resources Information Center

    Karami, Hossein

    2015-01-01

    Factor analysis has been frequently exploited in applied research to provide evidence about the underlying factors in various measurement instruments. A close inspection of a large number of studies published in leading applied linguistic journals shows that there is a misconception among applied linguists as to the relative merits of exploratory…

  7. Target identification for small bioactive molecules: finding the needle in the haystack.

    PubMed

    Ziegler, Slava; Pries, Verena; Hedberg, Christian; Waldmann, Herbert

    2013-03-04

    Identification and confirmation of the targets of bioactive small molecules is a crucial, often decisive step in both academic and pharmaceutical research. Through the development and availability of several new experimental techniques, target identification is, in principle, feasible, and the number of successful examples steadily grows. However, a generic methodology that can be applied successfully in the majority of cases has not yet been established. Herein we summarize current methods for target identification of small molecules, primarily for a chemistry audience but also for the biological community, for example, the chemist or biologist attempting to identify the target of a given bioactive compound. We describe the most frequently employed experimental approaches for target identification and provide several representative examples illustrating the state of the art. Among the techniques currently available, protein affinity isolation using suitable small-molecule probes (pulldown) and subsequent mass spectrometric analysis of the isolated proteins appears to be the most powerful and most frequently applied. To provide guidance for rapid entry into the field, and based on our own experience, we propose a typical workflow for target identification, which centers on the application of chemical proteomics as the key step to generate hypotheses for potential target proteins. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Age as a Determinant to Select an Anesthesia Method for Tympanostomy Tube Insertion in a Pediatric Population

    PubMed Central

    Jung, Kihwan; Kim, Hojong

    2015-01-01

    Background and Objectives To evaluate the relationship between age and anesthesia method used for tympanostomy tube insertion (TTI) and to provide evidence to guide the selection of an appropriate anesthesia method in children. Subjects and Methods We performed a retrospective review of children under 15 years of age who underwent tympanostomy tube insertion (n=159) or myringotomy alone (n=175) under local or general anesthesia by a single surgeon at a university-based, secondary care referral hospital. Epidemiologic data between local and general anesthesia groups as well as between TTI and myringotomy were analyzed. Medical costs were compared between local and general anesthesia groups. Results Children who received local anesthesia were significantly older than those who received general anesthesia. Unilateral tympanostomy tube insertion was performed more frequently under local anesthesia than bilateral insertion. Logistic regression modeling showed that local anesthesia was more frequently applied in older children (odds ratio=1.041) and for unilateral tympanostomy tube insertion (odds ratio=8.990). The cut-off value of age for local anesthesia was roughly 5 years. Conclusions In a pediatric population at a single medical center, age and whether unilateral or bilateral procedures were required were important factors in selecting an anesthesia method for tympanostomy tube insertion. Our findings suggest that local anesthesia can be preferentially considered for children 5 years of age or older, especially in those with unilateral otitis media with effusion. PMID:26185791
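
    A minimal sketch of the logistic regression reported above (local vs. general anesthesia as the outcome, with age and laterality as predictors). The data are simulated around the reported odds ratios; the variable names and intercept are assumptions, not the study's data.

```python
# A minimal sketch of the logistic regression described above; data are
# simulated around the reported odds ratios (1.041 for age, 8.990 for
# unilateral insertion), so the fit is schematic, not the study's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 334
age = rng.uniform(0, 15, n)               # age in years
unilateral = rng.integers(0, 2, n)        # 1 = unilateral insertion
logit = -2.0 + np.log(1.041) * age + np.log(8.990) * unilateral
local = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([age, unilateral]))
fit = sm.Logit(local, X).fit(disp=0)
print(np.exp(fit.params[1:]))             # odds ratios for age, laterality
```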

  9. Which early works are cited most frequently in climate change research literature? A bibliometric approach based on Reference Publication Year Spectroscopy.

    PubMed

    Marx, Werner; Haunschild, Robin; Thor, Andreas; Bornmann, Lutz

    2017-01-01

    This bibliometric analysis focuses on the general history of climate change research and, more specifically, on the discovery of the greenhouse effect. First, Reference Publication Year Spectroscopy (RPYS) is applied to a large publication set on climate change of 222,060 papers published between 1980 and 2014. The references cited therein were extracted and analyzed to identify the most frequently cited publications. Second, a new method for establishing a more subject-specific publication set for applying RPYS (based on the co-citations of a marker reference) is proposed (RPYS-CO). The RPYS of the climate change literature covers the history of climate change research as a whole. We identified 35 highly cited publications across all disciplines, which include fundamental early scientific works of the nineteenth century (with a weak connection to climate change) and some cornerstones of science with a stronger connection to climate change. By using the Arrhenius (Philos Mag J Sci Ser 5(41):237-276, 1896) paper as an RPYS-CO marker paper, we selected only publications specifically discussing the discovery of the greenhouse effect and the role of carbon dioxide. Using different RPYS approaches in this study, we were able to identify the complete range of works of the celebrated icons as well as many less known works relevant to the history of climate change research. The analyses confirmed the potential of the RPYS method for historical studies: seminal papers are detected on the basis of the references cited by the overall community, without any further assumptions.
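
    The core of RPYS is a simple counting procedure: tally cited references by their publication year and look for peaks above a local baseline. The sketch below shows that idea with hypothetical data and a 5-year rolling median as the baseline; the input format and baseline choice are assumptions, not the authors' exact pipeline.

```python
# A minimal sketch of the RPYS idea: count cited references by their
# publication year and look for peaks above a local 5-year median.
# The data are hypothetical, not the authors' 222,060-paper set.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
ref_years = pd.Series(rng.integers(1850, 2015, 10_000))  # one entry per cited reference
counts = ref_years.value_counts().sort_index()

# Deviation of each year's count from the median of its 5-year window;
# pronounced positive peaks point to historically influential works.
baseline = counts.rolling(5, center=True, min_periods=1).median()
spectrum = counts - baseline
print(spectrum.nlargest(5))
```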

  10. Occurrence of early adverse events after vaccination against influenza at a Brazilian reference center.

    PubMed

    Lopes, Marta Heloísa; Mascheretti, Melissa; Franco, Marilia Miranda; Vasconcelos, Ricardo; Gutierrez, Eliana Battaggia

    2008-02-01

    Since 1999, the Ministry of Health in Brazil has conducted campaigns of vaccination against influenza targeted towards the elderly, chronically-diseased people and health care workers. The vaccine against influenza is associated with adverse events of minor importance. To investigate the early adverse events related to the vaccine against influenza. CASUISTICS AND METHODS: One hundred and ninety seven elderly individuals and health care workers vaccinated against influenza were included. An inquiry regarding adverse events related to the vaccine was applied seven days after the vaccination. Local adverse events were reported by 32.5% and systemic effects by 26.4% of the vaccinated subjects. Pain in the region of the injection, headache, myalgia, malaise, and coryza were more frequent in the workers than in the elderly (p<0.05). There was no statistically significant difference in the occurrence of fever. The belief of part of the population that credits frequent and uncomfortable adverse events to the vaccine was not confirmed. The subjective adverse events were more frequent in the health care workers, which can influence, in a negative way, the disclosure of the benefits of this vaccine due to their role as opinion makers.

  11. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Reliable noise prediction capabilities are essential to enable novel fuel-efficient open rotor designs that can meet the community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity such that they are frequently employed for specific real-world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.
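
    One of the analysis methods named above, proper orthogonal decomposition (POD), reduces a set of flow-field snapshots to energy-ranked modes. The sketch below shows the standard SVD formulation on a synthetic snapshot matrix; the matrix dimensions and random data are placeholders, not the paper's simulation output.

```python
# A minimal sketch of proper orthogonal decomposition (POD) via the SVD,
# one of the analysis methods named above; the snapshot matrix here is
# synthetic noise rather than simulated flow-field data.
import numpy as np

rng = np.random.default_rng(3)
snapshots = rng.standard_normal((500, 64))    # grid points x time snapshots

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove mean field
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)

energy = sing_vals**2 / np.sum(sing_vals**2)  # relative modal energy
print("energy in first 5 modes:", energy[:5].sum())
```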

  12. Frequent Questions About the Regulation of Used Cathode Ray Tubes (CRTs) and CRT Glass

    EPA Pesticide Factsheets

    Frequent questions include: "Which materials are covered by the CRT exclusion?"; "How does U.S. EPA regulate recycling of used CRTs and CRT glass under the RCRA hazardous waste regulations?"; and "What export requirements apply to CRTs and CRT glass?"

  13. 40 CFR Appendix I to Part 310 - Frequently Asked Questions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Appendix I to 40 CFR Part 310 (Protection of Environment; Environmental Protection Agency (Continued); Superfund; 2010-07-01 edition) sets out frequently asked questions, for example: "[If we] have a release in an elementary school, can the school district apply for reimbursement? No, for…"

  14. Hazard property classification of waste according to the recent propositions of the EC using different methods.

    PubMed

    Hennebert, Pierre; van der Sloot, Hans A; Rebischung, Flore; Weltens, Reinhilde; Geerts, Lieve; Hjelmar, Ole

    2014-10-01

    Hazard classification of waste is a necessity, but the hazard properties (named "H" and soon "HP") are still not all defined in a practical and operational manner at EU level. Following discussion of subsequent draft proposals from the Commission there is still no final decision. Methods to implement the proposals have recently been proposed: tests methods for physical risks, test batteries for aquatic and terrestrial ecotoxicity, an analytical package for exhaustive determination of organic substances and mineral elements, surrogate methods for the speciation of mineral elements in mineral substances in waste, and calculation methods for human toxicity and ecotoxicity with M factors. In this paper the different proposed methods have been applied to a large assortment of solid and liquid wastes (>100). Data for 45 wastes - documented with extensive chemical analysis and flammability test - were assessed in terms of the different HP criteria and results were compared to LoW for lack of an independent classification. For most waste streams the classification matches with the designation provided in the LoW. This indicates that the criteria used by LoW are similar to the HP limit values. This data set showed HP 14 'Ecotoxic chronic' is the most discriminating HP. All wastes classified as acute ecotoxic are also chronic ecotoxic and the assessment of acute ecotoxicity separately is therefore not needed. The high number of HP 14 classified wastes is due to the very low limit values when stringent M factors are applied to total concentrations (worst case method). With M factor set to 1 the classification method is not sufficiently discriminating between hazardous and non-hazardous materials. The second most frequent hazard is HP 7 'Carcinogenic'. The third most frequent hazard is HP 10 'Toxic for reproduction' and the fourth most frequent hazard is HP 4 "Irritant - skin irritation and eye damage". In a stepwise approach, it seems relevant to assess HP 14 first, then, if the waste is not classified as hazardous, to assess subsequently HP 7, HP 10 and HP 4, and then if still not classified as hazardous, to assess the remaining properties. The elements triggering the HP 14 classification in order of importance are Zn, Cu, Pb, Cr, Cd and Hg. Progress in the speciation of Zn and Cu is essential for HP 14. Organics were quantified by the proposed method (AFNOR XP X30-489) and need no speciation. Organics can contribute significantly to intrinsic toxicity in many waste materials, but they are only of minor importance for the assessment of HP 14 as the metal concentrations are the main HP 14 classifiers. Organic compounds are however responsible for other toxicological characteristics (hormone disturbance, genotoxicity, reprotoxicity…) and shall be taken into account when the waste is not HP 14 classified. Copyright © 2014 Elsevier Ltd. All rights reserved.
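
    The stepwise assessment order recommended above (HP 14 first, then HP 7, HP 10 and HP 4, then the remaining properties) can be written as a simple short-circuiting procedure. The predicate functions and the Zn threshold in the sketch below are hypothetical placeholders, not the EC limit values.

```python
# A minimal sketch of the stepwise assessment order suggested above:
# test HP 14 first, then HP 7, HP 10 and HP 4, then the remaining
# properties. The predicate functions and threshold are hypothetical.
from typing import Callable, Dict

PRIORITY = ["HP 14", "HP 7", "HP 10", "HP 4"]

def classify_waste(waste: Dict[str, float],
                   checks: Dict[str, Callable[[Dict[str, float]], bool]]) -> str:
    ordered = PRIORITY + [hp for hp in checks if hp not in PRIORITY]
    for hp in ordered:
        if checks[hp](waste):
            return f"hazardous ({hp})"
    return "non-hazardous"

# Hypothetical single-criterion checks (concentrations in mg/kg)
checks = {"HP 14": lambda w: w.get("Zn", 0) > 100,
          "HP 7": lambda w: False, "HP 10": lambda w: False,
          "HP 4": lambda w: False}
print(classify_waste({"Zn": 250}, checks))   # -> hazardous (HP 14)
```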

  15. Scope of partial least-squares regression applied to the enantiomeric composition determination of ketoprofen from strongly overlapped chromatographic profiles.

    PubMed

    Padró, Juan M; Osorio-Grisales, Jaiver; Arancibia, Juan A; Olivieri, Alejandro C; Castells, Cecilia B

    2015-07-01

    Valuable quantitative information can be obtained from strongly overlapped chromatographic profiles of two enantiomers by using proper chemometric methods. Complete separation profiles in which the peaks are fully resolved are difficult to achieve in chiral separation methods, and this becomes a particularly severe problem when the analyst needs to measure chiral purity, i.e., when one of the enantiomers is present in the sample at very low concentrations. In this report, we explore the scope of a multivariate chemometric technique based on unfolded partial least-squares regression as a mathematical tool to solve this frequent difficulty. The technique was applied to obtain quantitative results from partially overlapped chromatographic profiles of R- and S-ketoprofen with different enantioresolution factors (from 0.81 down to less than 0.2 resolution units), and also at several different S:R enantiomeric ratios. Enantiomeric purity below 1% was determined with excellent precision even from almost completely overlapped signals. All these assays were tested under the most demanding condition, i.e., when the minor peak elutes immediately after the main peak. The results were validated using univariate calibration of completely resolved profiles, and the method was applied to the determination of enantiomeric purity of commercial pharmaceuticals. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
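
    The calibration idea is to regress the enantiomer fraction directly on the full (overlapped) signal. The sketch below uses scikit-learn's PLS regression on simulated Gaussian peaks; the retention times, peak widths and noise level are assumptions standing in for the published ketoprofen chromatograms.

```python
# A minimal sketch of calibrating enantiomeric composition from strongly
# overlapped profiles with PLS regression; the chromatograms are simulated
# Gaussian peaks, not the published ketoprofen data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 300)

def profile(frac_s):
    # Two strongly overlapped peaks: R elutes at 5.0 min, S at 5.4 min
    r = (1 - frac_s) * np.exp(-((t - 5.0) / 0.5) ** 2)
    s = frac_s * np.exp(-((t - 5.4) / 0.5) ** 2)
    return r + s + rng.normal(0, 0.01, t.size)

y_train = rng.uniform(0, 0.1, 40)                 # S fraction (purity range)
X_train = np.array([profile(y) for y in y_train])

pls = PLSRegression(n_components=3).fit(X_train, y_train)
print(pls.predict(profile(0.01)[None, :]))        # estimate a 1% S sample
```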

  16. Assessing HTS Performance Using BioAssay Ontology: Screening and Analysis of a Bacterial Phospho-N-Acetylmuramoyl-Pentapeptide Translocase Campaign

    PubMed Central

    Moberg, Andreas; Hansson, Eva; Boyd, Helen

    2014-01-01

    With the public availability of biochemical assays and screening data constantly increasing, new applications for data mining and method analysis are evolving in parallel. One example is BioAssay Ontology (BAO) for systematic classification of assays based on screening setup and metadata annotations. In this article we report a high-throughput screening (HTS) against phospho-N-acetylmuramoyl-pentapeptide translocase (MraY), an attractive antibacterial drug target involved in peptidoglycan synthesis. The screen resulted in novel chemistry identification using a fluorescence resonance energy transfer assay. To address a subset of the false positive hits, a frequent hitter analysis was performed using an approach in which MraY hits were compared with hits from similar assays previously used for HTS. The MraY assay was annotated according to BAO, and three internal reference assays using a similar assay design and detection technology were identified. Analyzing the assays retrospectively, it was clear that MraY and the three reference assays all showed a high false positive rate in the primary HTS assays. In the case of MraY, false positives were efficiently identified by applying a method to correct for compound interference at the hit-confirmation stage. Frequent hitter analysis based on the three reference assays with similar assay methods identified additional false actives in the primary MraY assay as frequent hitters. This article demonstrates how assays annotated using BAO terms can be used to identify closely related reference assays, and that analysis based on these assays clearly can provide useful data to influence assay design, technology, and screening strategy. PMID:25415593

  17. The Effect of Interface Cracks on the Electrical Performance of Solar Cells

    NASA Astrophysics Data System (ADS)

    Kim, Hansung; Tofail, Md. Towfiq; John, Ciby

    2018-04-01

    Among the variety of solar cell types, thin-film solar cells have been rigorously investigated as cost-effective and efficient solar cells. In many cases, flexible solar cells are also fabricated as thin films and undergo frequent stresses from the rolling and bending modes of their applications. These frequent motions result in crack initiation and propagation (including delamination) in the thin-film solar cells, which degrade efficiency. Reliability evaluation is therefore essential when developing a new type of solar cell. In this paper, we investigated the effect of layer delamination and grain boundary cracks on 3D thin-film solar cells. We used finite element method simulation to model both the electrical performance and the cracked structure of 3D solar cells. Through simulations, we quantitatively calculated the effect of delamination length on 3D copper indium gallium diselenide (CIGS) solar cell performance. Moreover, it was confirmed that the grain boundaries of CIGS can improve solar cell performance and that grain boundary cracks can decrease cell performance by altering the open circuit voltage. The investigated material is a CIGS solar cell, but our method can be applied to general polycrystalline solar cells.

  18. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover laws of customer purchasing within a time window by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms have not considered these characteristics together. In view of these traits, and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task constructs frequent item sets by applying the frequency concept and search-space partition theory; the second task builds frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has excellent speedup and efficiency.
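
    The depth-first, prefix-projected search at the heart of this family of algorithms can be shown in a few lines. The sketch below is a generic single-processor illustration of frequent-sequence mining by prefix projection; it is not the parallel SPP algorithm itself, and the toy database is hypothetical.

```python
# A minimal sketch of frequent-sequence mining by prefix projection with
# depth-first search, the core step described above; a generic
# single-processor illustration, not the parallel SPP algorithm.
from typing import Iterator, List, Tuple

def prefixspan(db: List[List[str]], min_support: int,
               prefix=None) -> Iterator[Tuple[List[str], int]]:
    prefix = prefix or []
    counts = {}
    for seq in db:                        # count each item once per sequence
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item in sorted(counts):
        if counts[item] < min_support:
            continue
        pattern = prefix + [item]
        yield pattern, counts[item]
        # Project the database past the first occurrence of the item
        projected = [s[s.index(item) + 1:] for s in db if item in s]
        yield from prefixspan(projected, min_support, pattern)

db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
for pattern, support in prefixspan(db, min_support=2):
    print(pattern, support)
```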

  19. Balancing research and funding using value of information and portfolio tools for nanomaterial risk classification

    NASA Astrophysics Data System (ADS)

    Bates, Matthew E.; Keisler, Jeffrey M.; Zussblatt, Niels P.; Plourde, Kenton J.; Wender, Ben A.; Linkov, Igor

    2016-02-01

    Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis—methods commonly applied in financial and operations management—to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios—combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.

  20. Removing inorganics: Common methods have limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorg, T.J.

    1991-06-01

    When EPA sets a regulation (a maximum contaminant level) for a contaminant, it must also specify the best available technology (BAT) that can be used to remove the contaminant. Because the regulations apply to community water systems, the technologies selected are ones commonly used to treat community-size water systems. Thus, EPA's R&D program has focused its efforts on evaluating primarily community-applied technologies such as conventional coagulation-filtration, lime softening, ion exchange, adsorption, and membrane processes. When BAT is identified for a specific contaminant, the BAT will frequently be listed with its limitations because the process is often not effective under all water quality conditions. The same limitations would also apply to POU/POE treatment. The paper discusses EPA's regulations on inorganic contaminants, the best available technologies cited by EPA, and the limitations of the processes. Using arsenic as an example, the impact of the contaminant chemistry and water quality on removals is presented.

  1. Balancing research and funding using value of information and portfolio tools for nanomaterial risk classification.

    PubMed

    Bates, Matthew E; Keisler, Jeffrey M; Zussblatt, Niels P; Plourde, Kenton J; Wender, Ben A; Linkov, Igor

    2016-02-01

    Risk research for nanomaterials is currently prioritized by means of expert workshops and other deliberative processes. However, analytical techniques that quantify and compare alternative research investments are increasingly recommended. Here, we apply value of information and portfolio decision analysis-methods commonly applied in financial and operations management-to prioritize risk research for multiwalled carbon nanotubes and nanoparticulate silver and titanium dioxide. We modify the widely accepted CB Nanotool hazard evaluation framework, which combines nano- and bulk-material properties into a hazard score, to operate probabilistically with uncertain inputs. Literature is reviewed to develop uncertain estimates for each input parameter, and a Monte Carlo simulation is applied to assess how different research strategies can improve hazard classification. The relative cost of each research experiment is elicited from experts, which enables identification of efficient research portfolios-combinations of experiments that lead to the greatest improvement in hazard classification at the lowest cost. Nanoparticle shape, diameter, solubility and surface reactivity were most frequently identified within efficient portfolios in our results.
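
    The probabilistic hazard scoring described above can be illustrated with a small Monte Carlo sketch: sample the uncertain material properties, push each sample through an additive score, and report the probability of each hazard band. The input ranges, weights and band cut-offs below are hypothetical, not the CB Nanotool's actual values.

```python
# A minimal sketch of propagating uncertain inputs through a hazard score
# by Monte Carlo, in the spirit of the probabilistic CB Nanotool described
# above; the input ranges, weights and band cut-offs are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
diameter = rng.uniform(10, 100, n)      # nm, hypothetical literature range
solubility = rng.uniform(0.0, 1.0, n)   # dissolved fraction
reactivity = rng.uniform(0.0, 10.0, n)  # arbitrary units

# Hypothetical additive score: smaller, insoluble, reactive particles score higher
score = 0.3 * (100 - diameter) + 20 * (1 - solubility) + 2 * reactivity

bands = np.digitize(score, [20, 40, 60])        # low/medium/high/very high
print(np.bincount(bands, minlength=4) / n)      # probability of each band
```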

  2. Effect of topical applications of neutral sodium fluoride on dental caries in the rat.

    PubMed

    Poulsen, S; Larson, R H

    1975-01-01

    A study of various regimens by which the same total amount of neutral sodium fluoride was applied to the teeth of rats showed that greater effects were observed after frequent application of 0.2% solutions than after less frequent application of more concentrated solutions.

  3. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  4. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shippert, Tim; Gaustad, Krista

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.

  5. The use of comparative duplex PCR in monitoring of patients with non-Hodgkin's lymphoma and chronic lymphocytic leukaemia.

    PubMed

    Slavícková, A; Forsterová, K; Ivánek, R; Cerný, J; Klener, P

    2005-01-01

    Various quantitative PCR approaches have been utilized in recent years to provide information about treatment efficacy and the risk of recurrent disease in haematological malignancies. Apart from the frequently used real-time PCR, cost-saving modifications of standard PCR methods may be applied as well. This report evaluates the utility of end-point comparative duplex PCR. We used this method to monitor 35 patients with either NHL or CLL and observed a good correlation between quantitative molecular results and clinical outcome. There was also agreement between comparative duplex PCR and real-time PCR in patients who were monitored by both methods. We therefore believe that this technique should be strongly considered instead of simple qualitative detection in monitoring therapeutic outcome in NHL or CLL patients.

  6. An architecture for consolidating multidimensional time-series data onto a common coordinate grid

    DOE PAGES

    Shippert, Tim; Gaustad, Krista

    2016-12-16

    Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogeneous dimensionality, and are hard to implement in a consistent manner for different datastreams. In addition, these challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
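
    The series-of-one-dimensional-transformations idea can be sketched with plain linear interpolation applied along one axis at a time. The coordinate grids and random field below are hypothetical, and np.interp stands in for whatever per-dimension transform a real datastream would need.

```python
# A minimal sketch of merging a field onto a common grid by a series of
# one-dimensional transformations, as described above; np.interp does the
# per-dimension work and the coordinate grids are hypothetical.
import numpy as np

def regrid_1d(values, src_coords, dst_coords, axis):
    # Interpolate along one axis, leaving the other dimensions untouched
    return np.apply_along_axis(
        lambda v: np.interp(dst_coords, src_coords, v), axis, values)

src_time, src_height = np.linspace(0, 24, 49), np.linspace(0, 2000, 21)
field = np.random.default_rng(6).random((49, 21))   # time x height field

common_time, common_height = np.linspace(0, 24, 25), np.linspace(0, 2000, 41)
step1 = regrid_1d(field, src_time, common_time, axis=0)
step2 = regrid_1d(step1, src_height, common_height, axis=1)
print(step2.shape)   # (25, 41)
```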

  7. Personalized Privacy-Preserving Frequent Itemset Mining Using Randomized Response

    PubMed Central

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the essential first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method achieves higher accuracy in frequent itemset mining than the traditional method that provides the same privacy level for all attributes. Finally, our experimental results show that our method obtains better results on frequent itemset mining while preserving personalized privacy. PMID:25143989

  8. Personalized privacy-preserving frequent itemset mining using randomized response.

    PubMed

    Sun, Chongjing; Fu, Yan; Zhou, Junlin; Gao, Hui

    2014-01-01

    Frequent itemset mining is the essential first step of association rule mining, which discovers interesting patterns from massive data. There are increasing concerns about the privacy problem in frequent itemset mining, and some works have been proposed to handle this kind of problem. In this paper, we introduce a personalized privacy problem, in which different attributes may need different levels of privacy protection. To solve this problem, we give a personalized privacy-preserving method using the randomized response technique. By providing different privacy levels for different attributes, this method achieves higher accuracy in frequent itemset mining than the traditional method that provides the same privacy level for all attributes. Finally, our experimental results show that our method obtains better results on frequent itemset mining while preserving personalized privacy.
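
    The randomized response primitive behind this method is simple: each attribute value is reported truthfully with probability p and flipped otherwise, and supports are later corrected for the known flipping rate. The sketch below shows one attribute with the standard unbiased correction; the support value and retention probability are hypothetical.

```python
# A minimal sketch of the randomized response idea behind the method
# above: each attribute is reported truthfully with probability p and
# flipped otherwise, with p chosen per attribute for personalized
# privacy; the support estimator is the standard unbiased correction.
import numpy as np

rng = np.random.default_rng(7)
n, true_support = 100_000, 0.30
item = rng.random(n) < true_support     # true presence of one item
p = 0.8                                 # retention probability for this attribute

keep = rng.random(n) < p
reported = np.where(keep, item, ~item)  # flip with probability 1 - p

observed = reported.mean()
estimate = (observed - (1 - p)) / (2 * p - 1)   # unbiased support estimate
print(f"observed {observed:.3f}, corrected {estimate:.3f}")
```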

  9. Usability evaluation techniques in mobile commerce applications: A systematic review

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.

    2016-08-01

    There is a sizeable literature on the usability of mobile commerce (m-commerce) applications and related areas, but it does not adequately describe the usability techniques used in most empirical usability evaluations of m-commerce applications. This paper therefore aims to identify the usability techniques most frequently used in usability evaluation of m-commerce applications. To achieve this objective, a systematic literature review was employed. Sixty-seven papers on usability evaluation for m-commerce and related areas were downloaded, and the twenty-one most relevant studies were selected for review in order to extract the appropriate information. The results of the review show that heuristic evaluation, formal tests and think-aloud methods are the most commonly used methods for m-commerce applications, compared with cognitive walkthrough and informal test methods. Moreover, most of the studies applied controlled experiments (33.3% of the studies), while 14.28% applied case studies for usability evaluation. The results of this paper provide additional knowledge to usability practitioners and the research community on the current state and use of usability techniques in m-commerce applications.

  10. What methods are used to apply positive deviance within healthcare organisations? A systematic review

    PubMed Central

    Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca

    2016-01-01

    Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency of positive deviance must be assessed and compared with other quality improvement approaches. PROSPERO registration number CRD42014009365. PMID:26590198

  11. Extremality of Gaussian quantum states.

    PubMed

    Wolf, Michael M; Giedke, Geza; Cirac, J Ignacio

    2006-03-03

    We investigate Gaussian quantum states in view of their exceptional role within the space of all continuous variables states. A general method for deriving extremality results is provided and applied to entanglement measures, secret key distillation and the classical capacity of bosonic quantum channels. We prove that for every given covariance matrix the distillable secret key rate and the entanglement, if measured appropriately, are minimized by Gaussian states. This result leads to a clearer picture of the validity of frequently made Gaussian approximations. Moreover, it implies that Gaussian encodings are optimal for the transmission of classical information through bosonic channels, if the capacity is additive.
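
    The extremality statement can be summarized schematically as follows; the notation (entanglement measure E, distillable key rate K_D, Gaussian reference state) is our paraphrase of the abstract, not the paper's formal theorem.

```latex
% For a continuous-variable state \rho with covariance matrix \gamma,
% the Gaussian state \rho_G(\gamma) with the same first and second
% moments minimizes appropriately behaved entanglement measures E and
% the distillable secret key rate K_D:
\[
  E(\rho) \;\ge\; E\bigl(\rho_G(\gamma)\bigr),
  \qquad
  K_D(\rho) \;\ge\; K_D\bigl(\rho_G(\gamma)\bigr).
\]
```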

  12. Comparison of static and dynamic computer-assisted guidance methods in implantology.

    PubMed

    Mischkowski, R A; Zinser, M J; Neugebauer, J; Kübler, A C; Zöller, J E

    2006-01-01

    The planning of dental implant position and its transfer to the operation site can be considered one of the most important factors for the long-term success of implant-supported prosthetic and epithetic restorations. This study compares computer-assisted fabricated surgical templates (the static method) with intra-operative image-guided navigation (the dynamic method) for the transfer of three-dimensional pre-operative planning. For the static method, the systems Med3D, coDiagnostix/gonyX, and SimPlant were used; for the dynamic method, the systems RoboDent and VectorVision2 were applied. A total of 746 implants were inserted between August 1999 and December 2005 in 206 patients. The static approach was used most frequently, accounting for 611 fixtures in 168 patients. The failure rates within the first 6 months were 1.31% in the statically controlled insertion group compared to 2.96% in the dynamically controlled insertion group. Complications related to an incorrect position of the implants have not been observed so far in either group. All computer-assisted methods included in this study were successfully applied in a clinical setting after a certain start-up period. The indications for computer-assisted methods in implantology currently arise in difficult anatomical situations. Due to uncomplicated handling and low resource demands, the static template technique can be recommended as the method of choice for the majority of such cases.

  13. Apps to promote physical activity among adults: a review and content analysis

    PubMed Central

    2014-01-01

    Background In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. Methods The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Means and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. Results On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores, were found. Conclusions The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number did not differ between paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions. PMID:25059981

  14. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

    It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which three-center nuclear attraction and Coulomb integrals are the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular system calculations. Improvement of the computational methods for molecular integrals is indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations that improve the convergence of highly oscillatory integrals. These transformations form the basis of methods for solving various problems that were otherwise intractable and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations whose coefficients have asymptotic power series in the sense of Poincaré, which in turn should satisfy certain limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied: the differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
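
    As a generic illustration of nonlinear convergence acceleration, the sketch below applies the classical Shanks transformation to the partial sums of a slowly convergent alternating series. The paper relies on more specialized transformations tailored to oscillatory molecular integrals, so this only shows the underlying idea, not the authors' method.

```python
# A generic illustration of nonlinear convergence acceleration (the Shanks
# transformation); the paper uses more specialized transformations for
# oscillatory molecular integrals, so this only shows the underlying idea.
import numpy as np

def shanks(s):
    # S_n = (s_{n+1} s_{n-1} - s_n^2) / (s_{n+1} - 2 s_n + s_{n-1})
    s = np.asarray(s, dtype=float)
    return (s[2:] * s[:-2] - s[1:-1] ** 2) / (s[2:] - 2 * s[1:-1] + s[:-2])

# Partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial = np.cumsum([(-1) ** k / (k + 1) for k in range(12)])
print(partial[-1], shanks(shanks(partial))[-1], np.log(2))
```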

  15. Adapting and applying common methods used in pharmacovigilance to the environment: A possible starting point for the implementation  of eco-pharmacovigilance.

    PubMed

    Wang, Jun; Zhang, Mengya; Li, Shulan; He, Bingshu

    2018-07-01

    The occurrence of pharmaceuticals in the natural environment has now been reported frequently around the world. As biologically active compounds designed to be effective even at very low concentrations, pharmaceuticals in the environment can adversely affect the health of humans and other non-target organisms through long-term exposure. To minimize pharmaceutical pollution from the perspective of drug administration, a new concept called eco-pharmacovigilance (EPV) has been proposed as a kind of pharmacovigilance (PV) for the environment. However, as a new and broad discipline, EPV does not yet have established methods in practice or a formalized implementation model. Since EPV is a special kind of PV, drawing on the experience of PV is a feasible and reasonable starting point for EPV. In this paper, we discuss the common methods and activities used in PV, including spontaneous reporting, intensive monitoring and database studies, and their potential applicability to the environment. We conclude that these common PV methods can be adapted and applied to EPV, but that organizational, technical and financial support for the EPV system is still needed. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Using conversation analytic methods to assess fidelity to a talk-based healthcare intervention for frequently attending patients.

    PubMed

    Barnes, Rebecca K; Jepson, Marcus; Thomas, Clare; Jackson, Sue; Metcalfe, Chris; Kessler, David; Cramer, Helen

    2018-06-01

    The study aim was to assess implementation fidelity (i.e., adherence) to a talk-based primary care intervention using Conversation Analytic (CA) methods. The context was a UK feasibility trial in which General Practitioners (GPs) were trained to use "BATHE" (Background, Affect, Trouble, Handling, Empathy), a technique to screen for psychosocial issues during consultations, with frequently attending patients. 35 GPs received BATHE training between July-October 2015. 15 GPs across six practices self-selected to record a sample of their consultations with study patients at three and six months. 31 consultations were recorded, and 21/26 patients in four intervention practices gave permission for analysis. The recordings were transcribed and initially coded for the presence or absence of the five BATHE components. CA methods were applied to assess delivery, focusing on the position and composition of each component and on patients' responses. Initial coding showed most of the BATHE components to be present in most contacts. However, the CA analysis revealed unplanned deviations in position and adaptations in composition. Frequently the intervention was initiated too early in the consultation, and the BATHE questions were misunderstood by patients as pertaining to their presenting problems rather than the psychosocial context for their problems. Often these deviations reduced the theoretical fidelity of the intervention as a whole. A CA approach enabled a dynamic assessment of the delivery and receipt of BATHE in situ, revealing common pitfalls in delivery, and provided valuable examples of more and less efficacious implementations. During the trial this evidence was used in top-up trainings to address problems in delivery and to improve GP engagement. Using CA methods enabled a more accurate assessment of implementation fidelity, a fuller description of the intervention itself, and enhanced resources for future training. When positioned appropriately, BATHE can be a useful tool for eliciting information about the wider context of the medical visit. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Beyond Astro 101: A First Report on Applying Interactive Education Techniques to an Astrophysics Class for Majors

    NASA Astrophysics Data System (ADS)

    Perrin, Marshall D.; Ghez, A. M.

    2009-05-01

    Learner-centered interactive instruction methods now have a proven track record in improving learning in "Astro 101" courses for non-majors, but have rarely been applied to higher-level astronomy courses. Can we hope for similar gains in classes aimed at astrophysics majors, or is the subject matter too fundamentally different for those techniques to apply? We present here an initial report on an updated calculus-based Introduction to Astrophysics class at UCLA that suggests such techniques can indeed result in increased learning for major students. We augmented the traditional blackboard-derivation lectures and challenging weekly problem sets by adding online questions on pre-reading assignments ("just-in-time teaching") and frequent multiple-choice questions in class ("Think-Pair-Share"). We describe our approach, and present examples of the new Think-Pair-Share questions developed for this more sophisticated material. Our informal observations after one term are that with this approach, students are more engaged and alert, and score higher on exams than was typical in previous years. This is anecdotal evidence, not hard data yet, and there is clearly a vast amount of work to be done in this area. But our first impressions strongly encourage us that interactive methods should be able to improve the astrophysics major just as they have improved Astro 101.

  18. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
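
    One of the simpler constructions in this family pairs the Mann-Whitney estimate of the AUC with a percentile bootstrap interval. The sketch below simulates small binormal samples; the means, variances and sample sizes are illustrative choices, not the paper's exact simulation settings.

```python
# A minimal sketch of one simple CI construction among those compared
# above: the Mann-Whitney estimate of the AUC with a percentile bootstrap
# interval; the data are simulated from a binormal model with small n.
import numpy as np

rng = np.random.default_rng(8)
healthy = rng.normal(0.0, 1.0, 15)
diseased = rng.normal(1.5, 1.0, 15)

def auc(x, y):
    # Fraction of (diseased, healthy) pairs that are correctly ordered
    return np.mean(y[:, None] > x[None, :])

boots = [auc(rng.choice(healthy, healthy.size), rng.choice(diseased, diseased.size))
         for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"AUC = {auc(healthy, diseased):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```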

  19. The use of propensity score methods with survival or time-to-event outcomes: reporting measures of effect similar to those used in randomized experiments.

    PubMed

    Austin, Peter C

    2014-03-30

    Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
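
    The inverse probability of treatment weighting (IPTW) workflow described above can be sketched in a few steps: fit a propensity model, weight subjects by the inverse of their treatment probability, and estimate a marginal weighted Kaplan-Meier curve per arm. The data, covariates and event rates below are hypothetical.

```python
# A minimal sketch of the IPTW workflow described above: fit a propensity
# model, weight subjects by inverse treatment probability, and estimate a
# marginal (weighted) Kaplan-Meier curve per arm. Data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
x = rng.standard_normal((n, 2))                    # baseline covariates
treated = rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))
time = rng.exponential(np.where(treated, 12, 8))   # follow-up times
event = rng.random(n) < 0.8                        # True = event observed

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))        # IPT weights

def weighted_km(t, e, wt):
    s, curve = 1.0, []
    for g in np.sort(np.unique(t[e])):             # loop over event times
        at_risk = wt[t >= g].sum()
        events = wt[(t == g) & e].sum()
        s *= 1 - events / at_risk
        curve.append((g, s))
    return curve

print(weighted_km(time[treated], event[treated], w[treated])[-1])
print(weighted_km(time[~treated], event[~treated], w[~treated])[-1])
```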

  20. Prognostics Applied to Electric Propulsion UAV

    NASA Technical Reports Server (NTRS)

    Goebel, Kai; Saha, Bhaskar

    2013-01-01

    Health management plays an important role in UAV operations. If critical components malfunction, safe operation of the UAV may be compromised. A technology with particular promise in this arena is equipment prognostics: it provides a state assessment of the health of components of interest and, if a degraded state is found, estimates how long it will take before the equipment reaches a failure threshold, conditional on assumptions about future operating and environmental conditions. This chapter explores the technical underpinnings of how to perform prognostics and shows an implementation on the propulsion system of an electric UAV. A particle filter is shown as the method of choice for performing state assessment and predicting future degradation. The method is then applied to the batteries that power the propeller motors. An accurate run-time battery life prediction algorithm is of critical importance for safe operation of the vehicle if one wants to maximize in-air time, and current reliability-based techniques are insufficient for managing such batteries where loads vary frequently in uncertain environments.
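
    A particle filter for battery prognostics alternates prediction (propagate particles through a degradation model), update (weight particles by the likelihood of a measurement) and resampling, then extrapolates the particles to a failure threshold. The sketch below uses a deliberately crude discharge and voltage model with hypothetical noise levels and cutoff, not the chapter's actual battery model.

```python
# A minimal sketch of particle-filter prognostics for battery end of
# discharge: propagate charge particles through a simple discharge model,
# weight them against a voltage measurement, resample, then predict the
# remaining useful life (RUL). Model, noise and cutoff are hypothetical.
import numpy as np

rng = np.random.default_rng(10)
n = 1000
charge = rng.normal(1.0, 0.05, n)          # normalized remaining charge

def voltage(q):
    return 3.0 + 1.2 * q                   # crude open-circuit voltage model

for step in range(10):                     # assimilate 10 measurements
    charge -= rng.normal(0.02, 0.005, n)   # discharge step + process noise
    v_obs = voltage(1.0 - 0.02 * (step + 1)) + rng.normal(0, 0.01)
    weights = np.exp(-0.5 * ((voltage(charge) - v_obs) / 0.05) ** 2)
    charge = charge[rng.choice(n, n, p=weights / weights.sum())]  # resample

cutoff, rate = 0.2, 0.02                   # failure threshold, drain per step
rul = (charge - cutoff) / rate             # steps until threshold, per particle
print(f"median RUL = {np.median(rul):.1f} steps, "
      f"90% interval ({np.percentile(rul, 5):.1f}, {np.percentile(rul, 95):.1f})")
```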

  1. Behaviorism: part of the problem or part of the solution.

    PubMed Central

    Holland, J G

    1978-01-01

    The form frequently taken by behavior-modification programs is analyzed in terms of the parent science, Behaviorism. Whereas Behaviorism assumes that behavior is the result of contingencies, and that lasting behavior change involves changing the contingencies that give rise to and support the behavior, most behavior-modification programs merely arrange special contingencies in a special environment to eliminate the "problem" behavior. Even when the problem behavior is as widespread as alcoholism and crime, behavior modifiers focus on "fixing" the alcoholic and the criminal, not on changing the societal contingencies that prevail outside the therapeutic environment and continue to produce alcoholics and criminals. The contingencies that shape this method of dealing with behavioral problems are also analyzed, and this analysis leads to a criticism of the current social structure as a behavior control system. Although applied behaviorists have frequently focused on fixing individuals, the science of Behaviorism provides the means to analyze the structures, the system, and the forms of societal control that produce the "problems". PMID:649524

  2. Mapping the literature of addictions treatment

    PubMed Central

    Blobaum, Paul M.

    2013-01-01

    Objectives: This study analyzes and describes the literature of addictions treatment and indexing coverage for core journals in the field. Methods: Citations from three source journals for the years 2008 through 2010 were analyzed using the 2010 Mapping the Literature of Nursing and Allied Health Professions Project Protocol. The distribution of cited journals was analyzed by applying Bradford's Law of Scattering. Results: More than 40,000 citations were analyzed. Journals (2,655 unique titles) were the most frequently cited form of literature, with 10 journals providing one-third of the cited journal references. Drug and Alcohol Dependence was the most frequently cited journal. The frequency of cited addictions journals, formats cited, age of citations, and indexing coverage is identified. Conclusions: Addictions treatment literature is widely dispersed among multidisciplinary publications with relatively few publications providing most of the citations. Results of this study will help researchers, students, clinicians, and librarians identify the most important journals and bibliographic indexes in this field, as well as publishing opportunities. PMID:23646025
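
    Bradford's Law of Scattering, as applied above, ranks journals by citation count and divides the cumulative citations into three roughly equal zones; the first zone's few journals are the "core". The citation counts in the sketch below are hypothetical, not the study's data.

```python
# A minimal sketch of applying Bradford's Law of Scattering: rank journals
# by citation count and split the cumulative citations into three roughly
# equal zones; the citation counts below are hypothetical.
import numpy as np

citations = np.array([900, 700, 500, 300, 200, 150, 100, 80, 60, 40, 30, 20]
                     + [10] * 50 + [1] * 200)      # per-journal counts
citations = np.sort(citations)[::-1]

cum = np.cumsum(citations)
third = cum[-1] / 3
z1, z2 = np.searchsorted(cum, [third, 2 * third]) + 1
print(f"zone 1: {z1} journals, zone 2: {z2 - z1}, zone 3: {citations.size - z2}")
```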

  3. Integration of Gas Chromatography Mass Spectrometry Methods for Differentiating Ricin Preparation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wunschel, David S.; Melville, Angela M.; Ehrhardt, Christopher J.

    2012-05-17

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of the castor plant Ricinus communis. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatographic-mass spectrometric (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method and independent of the seed source. In particular the abundance of mannose, arabinose, fucose, ricinoleic acid and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation.

  4. Social network analysis identified central outcomes for core outcome sets using systematic reviews of HIV/AIDS.

    PubMed

    Saldanha, Ian J; Li, Tianjing; Yang, Cui; Ugarte-Gil, Cesar; Rutherford, George W; Dickersin, Kay

    2016-02-01

    Methods to develop core outcome sets, the minimum outcomes that should be measured in research in a topic area, vary. We applied social network analysis methods to understand outcome co-occurrence patterns in human immunodeficiency virus (HIV)/AIDS systematic reviews and identify outcomes central to the network of outcomes in HIV/AIDS. We examined all Cochrane reviews of HIV/AIDS as of June 2013. We defined a tie as two outcomes (nodes) co-occurring in ≥2 reviews. To identify central outcomes, we used normalized node betweenness centrality (nNBC) (the extent to which connections between other outcomes in a network rely on that outcome as an intermediary). We conducted a subgroup analysis by HIV/AIDS intervention type (i.e., clinical management, biomedical prevention, behavioral prevention, and health services). The 140 included reviews examined 1,140 outcomes, 294 of which were unique. The most central outcome overall was all-cause mortality (nNBC = 23.9). The most central and most frequent outcomes differed overall and within subgroups. For example, "adverse events (specified)" was among the most central but not among the most frequent outcomes, overall. Social network analysis methods are a novel application to identify central outcomes, which provides additional information potentially useful for developing core outcome sets. Copyright © 2016 Elsevier Inc. All rights reserved.
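
    The network construction described above maps directly onto standard graph tooling: outcomes are nodes, a tie links two outcomes co-occurring in at least two reviews, and normalized betweenness centrality (nNBC) ranks the outcomes. The review-outcome lists in the sketch below are hypothetical.

```python
# A minimal sketch of the outcome co-occurrence network described above,
# with ties defined as co-occurrence in >= 2 reviews and outcomes ranked
# by normalized betweenness centrality; the review lists are hypothetical.
from collections import Counter
from itertools import combinations
import networkx as nx

reviews = [
    {"all-cause mortality", "viral load", "adverse events"},
    {"all-cause mortality", "adverse events", "CD4 count"},
    {"viral load", "CD4 count", "all-cause mortality"},
    {"adherence", "viral load", "all-cause mortality"},
]

pairs = Counter(frozenset(p) for r in reviews for p in combinations(sorted(r), 2))
G = nx.Graph([tuple(p) for p, c in pairs.items() if c >= 2])  # ties: >= 2 reviews

nnbc = nx.betweenness_centrality(G, normalized=True)
print(sorted(nnbc.items(), key=lambda kv: -kv[1])[:3])
```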

  5. Hormonal profiling: Development of a simple method to extract and quantify phytohormones in complex matrices by UHPLC-MS/MS.

    PubMed

    Delatorre, Carolina; Rodríguez, Ana; Rodríguez, Lucía; Majada, Juan P; Ordás, Ricardo J; Feito, Isabel

    2017-01-01

    Plant growth regulators (PGRs) are chemically diverse compounds that play essential roles in plant development and the regulation of physiological processes. They exert their functions through a mechanism called cross-talk (involving either synergistic or antagonistic actions); it is therefore of great interest to study as many PGRs as possible to obtain accurate information about plant status. Much effort has gone into developing methods capable of analyzing large numbers of these compounds, but they frequently exclude some chemical families or important PGRs within each family. In addition, most of the methods are designed for matrices that are easy to work with. We therefore sought to develop a method that meets the requirements lacking in the literature while also being fast and reliable. Here we present a simple, fast and robust method for the extraction and quantification of 20 different PGRs by UHPLC-MS/MS, optimized in complex matrices. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. The order and priority of research and design method application within an assistive technology new product development process: a summative content analysis of 20 case studies.

    PubMed

    Torrens, George Edward

    2018-01-01

    Summative content analysis was used to define methods and heuristics from each case study. The review process was in two parts: (1) a literature review to identify conventional research methods and (2) a summative content analysis of published case studies, based on the identified methods and heuristics, to suggest an order and priority of where and when they were used. Over 200 research and design methods and design heuristics were identified. From the review of the 20 case studies, 42 were identified as having been applied. The majority of methods and heuristics were applied in phase two, market choice. There appeared to be a disparity between the limited number of methods frequently used (under 10 within the 20 case studies) and the hundreds available. Implications for Rehabilitation The communication highlights a number of issues that have implications for those involved in assistive technology new product development: •The study defined over 200 well-established research and design methods and design heuristics that are available for use by those who specify and design assistive technology products, which provide a comprehensive reference list for practitioners in the field; •The review within the study suggests only a limited number of research and design methods are regularly used by industrial design focused assistive technology new product developers; and, •Debate is required among practitioners working in this field to reflect on how a wider range of potentially more effective methods and heuristics may be incorporated into daily working practice.

  7. Unbiased estimation of chloroplast number in mesophyll cells: advantage of a genuine three-dimensional approach

    PubMed Central

    Kubínová, Zuzana

    2014-01-01

    Chloroplast number per cell is a frequently examined quantitative anatomical parameter, often estimated by counting chloroplast profiles in two-dimensional (2D) sections of mesophyll cells. However, a mesophyll cell is a three-dimensional (3D) structure and this has to be taken into account when quantifying its internal structure. We compared 2D and 3D approaches to chloroplast counting from different points of view: (i) in practical measurements of mesophyll cells of Norway spruce needles, (ii) in a 3D model of a mesophyll cell with chloroplasts, and (iii) using a theoretical analysis. We applied, for the first time, the stereological method of an optical disector based on counting chloroplasts in stacks of spruce needle optical cross-sections acquired by confocal laser-scanning microscopy. This estimate was compared with counting chloroplast profiles in 2D sections from the same stacks of sections. Practical measurements of mesophyll cells, calculations performed in a 3D model of a cell with chloroplasts, and a theoretical analysis all showed that the 2D approach yielded biased results, with underestimation of up to 10-fold. We showed that the frequently used method of counting chloroplasts in a mesophyll cell by counting their profiles in 2D sections does not give correct results. We concluded that the present disector method can be efficiently used for unbiased estimation of chloroplast number per mesophyll cell. This should be the method of choice, especially in coniferous needles and leaves with mesophyll cells with lignified cell walls where maceration methods are difficult or impossible to use. PMID:24336344

  8. The occurrence of Campylobacter in river water and waterfowl within a watershed in southern Ontario, Canada.

    PubMed

    Van Dyke, M I; Morton, V K; McLellan, N L; Huck, P M

    2010-09-01

    Quantitative PCR and a culture method were used to investigate Campylobacter occurrence over 3 years in a watershed located in southern Ontario, Canada that is used as a source of drinking water. Direct DNA extraction from river water followed by quantitative PCR analysis detected thermophilic campylobacters at low concentrations (<130 cells 100 ml(-1)) in 57-79% of samples taken from five locations. By comparison, a culture-based method detected Campylobacter in 0-23% of samples. Water quality parameters such as total Escherichia coli were not highly correlated with Campylobacter levels, although higher pathogen concentrations were observed at colder water temperatures (<10°C). Strains isolated from river water were primarily nalidixic acid-susceptible Campylobacter lari, and selected isolates were identified as Campylobacter lari ssp. concheus. Campylobacter from wild birds (seagulls, ducks and geese) were detected at similar rates using PCR (32%) and culture-based (29%) methods, and although Campylobacter jejuni was isolated most frequently, C. lari ssp. concheus was also detected. Campylobacter were frequently detected at low concentrations in the watershed. The higher prevalence rates obtained using quantitative PCR were likely due to the formation of viable but nonculturable cells and the low recovery of the culture method. In addition to animal and human waste, waterfowl can be an important contributor of Campylobacter in the environment. Results of this study show that surface water can be an important vector for human transmission of Campylobacter and that method selection is important in determining pathogen occurrence in a water environment. © 2010 The Authors. Journal compilation © 2010 The Society for Applied Microbiology.

  9. Introducing an operational method to forecast long-term regional drought based on the application of artificial intelligence capabilities

    NASA Astrophysics Data System (ADS)

    Kousari, Mohammad Reza; Hosseini, Mitra Esmaeilzadeh; Ahani, Hossein; Hakimelahi, Hemila

    2017-01-01

    An effective drought forecast offers major advantages for the management of water resources used in agriculture, industry, and household consumption. To introduce such a model with simple data inputs, this study presents a regional drought forecast method for Fars Province, Iran, based on artificial intelligence capabilities (artificial neural networks) and the Standardized Precipitation Index (SPI in 3-, 6-, 9-, 12-, 18-, and 24-monthly series). The precipitation data of 41 rain gauge stations were used to compute SPI values. In addition, weather signals including the Multivariate ENSO Index (MEI), North Atlantic Oscillation (NAO), Southern Oscillation Index (SOI), NINO1+2, anomaly NINO1+2, NINO3, anomaly NINO3, NINO4, anomaly NINO4, NINO3.4, and anomaly NINO3.4 were used as predictor variables to forecast the SPI time series over the next 12 months. Frequent testing and validating steps were used to obtain the best artificial neural network (ANN) models. The forecasted values were mapped in the verification sector and compared with the observed maps for the same dates. Results showed considerable spatial and temporal relationships, even among the maps of different SPI time series. The first 6 months of forecasted maps showed an average of 73% agreement with the observed ones. The most important finding and the strong point of this study was that, although the drought forecast for each station and time series was completely independent, the relationships between spatial and temporal predictions were preserved. This strength mainly stems from the frequent testing and validating steps used to select the best drought forecast models from the many ANN models produced. Finally, wherever precipitation data are available, practical application of the presented method is possible.
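
    A minimal sketch of one station-level forecast model is given below. The paper does not specify the network architecture, so the use of scikit-learn's MLPRegressor, the feature layout (lagged SPI values plus climate signals), and all data values are assumptions for illustration only:

    ```python
    # Sketch: ANN regression from lagged SPI + weather signals to future SPI.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_months = 400
    X = rng.normal(size=(n_months, 14))   # e.g., 3 SPI lags + 11 weather signals
    y = rng.normal(size=n_months)         # SPI value 12 months ahead (placeholder)

    # Repeated train/validate splits mirror the "frequent testing and
    # validating steps" used to select the best network.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                         random_state=1).fit(X_tr, y_tr)
    print("validation R^2:", model.score(X_val, y_val))
    ```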

  10. Attitudes of fertile and infertile women towards new reproductive technologies: a case study of Lithuania

    PubMed Central

    2014-01-01

    Background This article analyzes several key issues in the debate: the acceptability of in vitro fertilization; regulation of assisted reproduction and the possibilities of reimbursement for assisted reproduction treatment in Lithuania. Method Two groups of respondents participated in the survey: fertile women and women with fertility disorders. 93 completed questionnaires from women with fertility problems and 146 from women with no fertility problems were analysed. Results Fertile respondents more frequently perceived the embryo as a human being (Fertile Individuals – 68.5%; Infertile Individuals – 35.5%; p < 0.05) and more frequently maintained that assisted reproduction treatment should be only partly reimbursed (Fertile Individuals – 71.3%; Infertile Individuals – 39.8%; p < 0.05). Respondents with fertility disorders more frequently thought that artificial insemination procedure could also be applied to unmarried couples (Fertile Individuals – 51.4%; Infertile Individuals – 76.3%; p < 0.05), and more frequently agreed that there should be no age limit for artificial insemination procedures (Fertile Individuals – 36.3%; Infertile Individuals – 67.7%; p < 0.05). The majority of respondents in both groups (Fertile Individuals – 77.4%; Infertile Individuals – 82.8%; p < 0.05) believed that donation of reproductive cells should be regulated by law. Fertile respondents more frequently considered that strict legal regulation was necessary in case of the number of transferred embryos (Fertile Individuals – 69.2%; Infertile Individuals – 39.8%; p < 0.05) and freezing of embryos (Fertile Individuals – 69.9%; Infertile Individuals – 57.0%; p < 0.05). Conclusion Fertile respondents were statistically more likely to believe that the IVF procedure should be applied only to married couples or women who had a regular partner, the age limit should be defined and the psychological assessment of the couple’s relationship and their readiness for the IVF procedure was necessary. In contrast, infertile couples were statistically more likely than fertile respondents to maintain that the IVF procedure should be fully reimbursed by the state. Fertile respondents were statistically more likely to be categorical with respect to the number of embryos and the freezing of embryos. Meanwhile there is a statistically significant difference in opinions of infertile respondents who were in favour of stricter regulation on donation of reproductive cells. PMID:24684746

  11. [Applications of meta-analysis in multi-omics].

    PubMed

    Han, Mingfei; Zhu, Yunping

    2014-07-01

    As a statistical method for integrating multiple features and data sets, meta-analysis was introduced to the field of life science in the 1990s. With the rapid advances in high-throughput technologies, life omics, the core of which is genomics, transcriptomics and proteomics, is becoming the new hot spot of life science. Although the fast output of massive data has promoted the development of omics studies, it produces excessive data that are difficult to integrate systematically. In this case, meta-analysis is frequently applied to analyze different types of data and is being improved continuously. Here, we first summarize the representative meta-analysis methods systematically, then survey the current applications of meta-analysis in various omics fields, and finally discuss the remaining problems and future development of meta-analysis.

  12. Conducting qualitative research in audiology: a tutorial.

    PubMed

    Knudsen, Line V; Laplante-Lévesque, Ariane; Jones, Lesley; Preminger, Jill E; Nielsen, Claus; Lunner, Thomas; Hickson, Louise; Naylor, Graham; Kramer, Sophia E

    2012-02-01

    Qualitative research methodologies are being used more frequently in audiology as they allow for a better understanding of the perspectives of people with hearing impairment. This article describes why and how international interdisciplinary qualitative research can be conducted. The paper is based on a literature review and our recent experience conducting an international interdisciplinary qualitative study in audiology. We describe some available qualitative methods for sampling, data collection, and analysis, and we discuss the rationale for choosing particular methods. The focus is on four approaches which have all previously been applied to audiologic research: grounded theory, interpretative phenomenological analysis, conversational analysis, and qualitative content analysis. This article provides a review of methodological issues useful for those designing qualitative research projects in audiology or needing assistance in the interpretation of qualitative literature.

  13. A nonparametric smoothing method for assessing GEE models with longitudinal binary data.

    PubMed

    Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu

    2008-09-30

    Studies involving longitudinal binary responses are widespread in health and biomedical research and are frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on a nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are examined in simulation studies. The testing procedure is demonstrated on two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
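
    For context, the kind of model being assessed can be fitted as below. This is a hedged sketch, assuming statsmodels' GEE implementation with simulated placeholder data; the paper's smoothing-based goodness-of-fit test itself is not reproduced here:

    ```python
    # Logistic GEE for longitudinal binary data with an exchangeable working
    # correlation within subjects (simulated data, illustrative only).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_subj, n_visits = 50, 4
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n_subj), n_visits),
        "time": np.tile(np.arange(n_visits), n_subj),
        "x": rng.normal(size=n_subj * n_visits),
    })
    # Binary response generated from a logistic model in x.
    df["y"] = (rng.random(len(df)) < 1 / (1 + np.exp(-0.5 * df["x"]))).astype(int)

    model = smf.gee("y ~ x + time", groups="id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```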

  14. HPLC-FLD determination of 4-nonylphenol and 4-tert-octylphenol in surface water samples.

    PubMed

    Cruceru, Ioana; Iancu, Vasile; Petre, Jana; Badea, Irinel Adriana; Vladescu, Luminita

    2012-05-01

    A simple, sensitive and reliable HPLC-FLD method for the routine determination of 4-nonylphenol (4-NP) and 4-tert-octylphenol (4-t-OP) in water samples was developed. The method consists of a liquid-liquid extraction of the target analytes with dichloromethane at pH 3.0-3.5, followed by HPLC-FLD analysis of the organic extract using a Zorbax Eclipse XDB C8 column with isocratic elution (acetonitrile/water 65:35) at a flow rate of 1.0 mL/min and a column temperature of 40°C. The method was validated and then applied with good results to the determination of 4-NP and 4-t-OP in Ialomiţa River water samples collected each month during 2006. Concentrations of 4-NP varied between 0.08-0.17 μg/L, with higher values of 0.24-0.37 μg/L in the summer months; concentrations of 4-t-OP were frequently <0.05 μg/L but also ranged between 0.06-0.09 μg/L, with higher values of 0.12-0.16 μg/L in July and August. Both were strongly influenced by seasonal and anthropogenic factors. The method was also applied to samples collected over the years 2007 and 2008 from urban wastewaters discharged into sewage systems or directly into rivers by economic agents located in 30 Romanian towns. Good results were obtained when the method was used to analyze effluents discharged into surface waters by 16 municipal wastewater treatment plants during 2008.

  15. Social network analysis using k-Path centrality method

    NASA Astrophysics Data System (ADS)

    Taniarza, Natya; Adiwijaya; Maharani, Warih

    2018-03-01

    k-Path centrality is regarded as one of the effective methods for centrality measurement, in which the influential node is estimated as the node most frequently traversed by information paths. Accordingly, k-Path centrality was employed in this paper's analysis, adapting a random-algorithm approach, in order to: (1) determine the ranking of influential users in the social medium Twitter; and (2) ascertain the influence of the parameter α in the computation of k-Path centrality. The findings showed that the k-Path centrality method with a random-algorithm approach can be used to determine the ranking of users who influence the dissemination of information on Twitter. Furthermore, the findings also showed that the parameter α influenced both the runtime and the ranking results: the smaller the α value, the longer the runtime, yet the more stable the ranking results.
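
    The randomized estimation idea can be illustrated as follows: simulate many random message paths of bounded length and count how often each node relays one. This is a simplified sketch of the general technique, not the paper's exact algorithm; the parameter names (k, n_trials), the karate-club stand-in graph, and the omission of the α-dependent path-length sampling are assumptions:

    ```python
    # Randomized k-path centrality sketch: nodes traversed by more simulated
    # information paths receive higher centrality estimates.
    import random
    import networkx as nx

    def k_path_centrality(G, k=3, n_trials=10000, seed=0):
        rng = random.Random(seed)
        counts = {v: 0 for v in G}
        nodes = list(G)
        for _ in range(n_trials):
            v = rng.choice(nodes)          # random message source
            visited = {v}
            for _ in range(k):             # random simple path of length <= k
                nbrs = [u for u in G[v] if u not in visited]
                if not nbrs:
                    break
                v = rng.choice(nbrs)
                visited.add(v)
                counts[v] += 1             # v relayed the simulated message
        return {v: c / n_trials for v, c in counts.items()}

    G = nx.karate_club_graph()             # stand-in for a Twitter graph
    ranking = sorted(k_path_centrality(G).items(), key=lambda kv: -kv[1])
    print(ranking[:5])
    ```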

  16. Extinction Map of Baade's Window

    NASA Astrophysics Data System (ADS)

    Stanek, K. Z.

    1996-03-01

    Recently Wozniak & Stanek proposed a new method to investigate interstellar extinction, based on two-band photometry, which uses red clump stars as a means to construct the reddening curve. I apply this method to the color-magnitude diagrams obtained by the Optical Gravitational Lensing Experiment to construct an extinction map of a 40' x 40' region of Baade's window, with resolution of ~30". Such a map should be useful for studies of this frequently observed region of the Galactic bulge. The map and software useful for its applications are available via anonymous ftp. The total extinction AV varies from 1.26 to 2.79 mag within the 40' x 40' field of view centered on ( alpha 2000, delta 2000) = (18:03:20.9, -30:02:06), i.e., (l, b) = (1.001, -3.885). The ratio AV/E(V - I) = 2.49 +/- 0.02 is determined with this new method.
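
    The final conversion step reported above is a one-line computation. A tiny numerical sketch, using the ratio from the abstract; the sample reddening value for a single map cell is an illustrative assumption:

    ```python
    # Convert a red-clump reddening measurement E(V-I) into total extinction
    # A_V using the ratio A_V / E(V-I) = 2.49 +/- 0.02 reported above.
    R_VI = 2.49                       # A_V / E(V-I) from the abstract
    e_vi = 0.85                       # example reddening for one ~30" map cell
    A_V = R_VI * e_vi
    print(f"A_V = {A_V:.2f} mag")     # falls within the reported 1.26-2.79 range
    ```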

  17. Feeling lucky? Using search engines to assess perceptions of urban sustainability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keirstead, James

    2009-02-15

    The sustainability of urban environments is an important issue at both local and international scales. Indicators are frequently used by decision-makers seeking to improve urban performance, but these metrics can be dependent on sparse quantitative data. This paper explores the potential of an alternative approach, using an internet search engine to quickly gather qualitative data on the key attributes of cities. The method is applied to 21 world cities and the results indicate that, while the technique does shed light on direct and indirect aspects of sustainability, the validity of derived metrics as objective indicators of long-term sustainability is questionable. However, the method's ability to provide subjective short-term assessments is more promising, and it could therefore play an important role in participatory policy exercises such as public consultations. A number of promising technical improvements to the method's performance are also highlighted.

  18. Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.

    PubMed

    Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L

    2013-04-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, for example, 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
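
    The core idea, treating observed zeros as censored values of a latent consumption variable rather than replacing them with arbitrary small numbers, can be sketched with a censored-likelihood fit. For brevity this sketch is a plain Tobit model; the paper's model additionally includes subject-level random effects, and all data here are simulated placeholders:

    ```python
    # Left-censored (Tobit-style) maximum likelihood: zeros contribute
    # P(latent <= 0), positive values contribute the normal density.
    import numpy as np
    from scipy import stats, optimize

    def neg_loglik(params, X, y, censor_point=0.0):
        beta, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)
        mu = X @ beta
        cens = y <= censor_point
        ll = np.where(
            cens,
            stats.norm.logcdf((censor_point - mu) / sigma),  # censored term
            stats.norm.logpdf(y, loc=mu, scale=sigma),       # observed term
        )
        return -ll.sum()

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    latent = X @ np.array([0.5, 1.0]) + rng.normal(size=200)
    y = np.maximum(latent, 0.0)                              # left-censor at 0

    fit = optimize.minimize(neg_loglik, x0=np.zeros(3), args=(X, y))
    print("beta:", fit.x[:-1], "sigma:", np.exp(fit.x[-1]))
    ```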

  19. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
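
    At the core of the temperature replica-exchange schemes compared above is the Metropolis swap criterion between neighboring replicas. A generic sketch follows (not Rosetta code; the energies and temperatures are placeholders):

    ```python
    # Parallel-tempering swap acceptance between replicas i and j.
    import math
    import random

    def swap_accepted(E_i, E_j, T_i, T_j, rng=random):
        """Metropolis criterion for exchanging configurations between two
        replicas at temperatures T_i and T_j (energies in units of k_B)."""
        delta = (1.0 / T_i - 1.0 / T_j) * (E_j - E_i)
        return delta <= 0 or rng.random() < math.exp(-delta)

    # Example: attempt a swap between neighboring replicas.
    print(swap_accepted(E_i=-120.0, E_j=-118.5, T_i=1.0, T_j=1.5))
    ```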

  20. Analysis of Cigarette Purchase Task Instrument Data with a Left-Censored Mixed Effects Model

    PubMed Central

    Liao, Wenjie; Luo, Xianghua; Le, Chap; Chu, Haitao; Epstein, Leonard H.; Yu, Jihnhee; Ahluwalia, Jasjit S.; Thomas, Janet L.

    2015-01-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. While a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug’s RRE, the application of conventional statistical methods to data generated from such an instrument may not be adequate by simply ignoring or replacing the extra zeros or missing values in the data with arbitrary small consumption values, e.g. 0.001. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method and future directions of research are also discussed. PMID:23356731

  1. [Diagnosis of primary hyperlipoproteinemia in umbilical cord blood (author's transl)].

    PubMed

    Parwaresch, M R; Radzun, H J; Mäder, C

    1977-10-01

    The aim of the present investigation was to assay the frequency of primary dyslipoproteinemia in a random sample of one hundred newborns and to describe the minimal methodical requirements for sound diagnosis. After comparison of different methods, total lipids were determined by gravimetry, cholesterol and triglycerides by enzymatic methods, and nonesterified fatty acids by direct colorimetry; phospholipids were estimated indirectly. All measurements were applied to umbilical cord sera and to lipoprotein fractions separated by selective precipitation. The diagnosis of hyperlipoproteinemia type IV, which is the most frequent one in adults, is fraught with pitfalls in the postnatal period. A primary hyper-alpha-lipoproteinemia occurred in one case and type II hyperlipoproteinemia in two cases, one of the parents being involved in each case. For mass screening, triglycerides should be assayed in serum and cholesterol in the precipitated and resolubilized LDL fraction, for which the minimal requirements are described.

  2. Evaluation of Hydrologic and Meteorological Impacts on Dengue Fever Incidences in Southern Taiwan using Time- Frequency Method

    NASA Astrophysics Data System (ADS)

    Tsai, Christina; Yeh, Ting-Gu

    2017-04-01

    Extreme weather events are occurring more frequently as a result of climate change. Recently, dengue fever has become a serious issue in southern Taiwan, and it may have characteristic temporal scales that can be identified. Some researchers have hypothesized that dengue fever incidences are related to climate change. This study applies time-frequency analysis to time series data concerning dengue fever and hydrologic and meteorological variables. Results of three time-frequency analytical methods, the Hilbert-Huang transform (HHT), the wavelet transform (WT), and the short-time Fourier transform (STFT), are compared and discussed, and the most effective of the three is identified for analyzing the relevant time series data. The most influential time scales of hydrologic and meteorological variables associated with dengue fever are determined. Finally, the linkage between hydrologic/meteorological factors and dengue fever incidences can be established.

  3. Artificial Intelligence in Cardiology.

    PubMed

    Johnson, Kipp W; Torres Soto, Jessica; Glicksberg, Benjamin S; Shameer, Khader; Miotto, Riccardo; Ali, Mohsin; Ashley, Euan; Dudley, Joel T

    2018-06-12

    Artificial intelligence and machine learning are poised to influence nearly every aspect of the human condition, and cardiology is not an exception to this trend. This paper provides a guide for clinicians on relevant aspects of artificial intelligence and machine learning, reviews selected applications of these methods in cardiology to date, and identifies how cardiovascular medicine could incorporate artificial intelligence in the future. In particular, the paper first reviews predictive modeling concepts relevant to cardiology such as feature selection and frequent pitfalls such as improper dichotomization. Second, it discusses common algorithms used in supervised learning and reviews selected applications in cardiology and related disciplines. Third, it describes the advent of deep learning and related methods collectively called unsupervised learning, provides contextual examples both in general medicine and in cardiovascular medicine, and then explains how these methods could be applied to enable precision cardiology and improve patient outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Attitudes and beliefs of the French public about schizophrenia and major depression: results from a vignette-based population survey

    PubMed Central

    2013-01-01

    Background In their study ‘Mental Health in the General Population: Images and Realities’ Jean-Luc Roelandt et al. found a huge divide between the French public’s conceptualizations of insanity and depression. The study aims to examine whether such differences can be replicated using modern operationalized diagnostic criteria for schizophrenia and major depressive disorder. Methods In 2012, an online survey was conducted using a representative sample drawn from the adult French population (N = 1600). After presentation of a case-vignette depicting a person with either schizophrenia or major depressive disorder a fully structured interview was carried out. Results Despite some similarities, marked differences between the two disorders emerged regarding beliefs and attitudes. Respondents presented with the schizophrenia vignette more frequently defined the symptoms as the expression of an illness with a stronger biological component and a less favorable prognosis, demanding psychiatric treatment, whereas respondents presented with the depression vignette more frequently considered the occurrence of symptoms a consequence of current psychosocial stress, benefitting not only from established but also from alternative treatments. People with schizophrenia were more frequently perceived as unpredictable and dangerous; there was a stronger need to separate oneself from them; they were more frequently met with fear and less frequently with pro-social feelings; and they also faced more rejection. Conclusions The French public draws a clear line between schizophrenia and major depressive disorder. This applies equally to beliefs about both disorders and to attitudes towards the persons afflicted. There is a need for interventions trying to reduce existing misconceptions in order to improve the care of patients. PMID:24252540

  5. Systematic Review of Data Mining Applications in Patient-Centered Mobile-Based Information Systems.

    PubMed

    Fallah, Mina; Niakan Kalhori, Sharareh R

    2017-10-01

    Smartphones represent a promising technology for patient-centered healthcare. It is claimed that data mining techniques have improved mobile apps to address patients' needs at subgroup and individual levels. This study reviewed the current literature regarding data mining applications in patient-centered mobile-based information systems. We systematically searched PubMed, Scopus, and Web of Science for original studies reported from 2014 to 2016. After screening 226 records at the title/abstract level, the full texts of 92 relevant papers were retrieved and checked against inclusion criteria. Finally, 30 papers were included in this study and reviewed. Data mining techniques have been reported in the development of mobile health apps for three main purposes: data analysis for follow-up and monitoring, early diagnosis and detection for screening purposes, classification/prediction of outcomes, and risk calculation (n = 27); data collection (n = 3); and provision of recommendations (n = 2). The most accurate and most frequently applied data mining method was the support vector machine; however, the decision tree has shown superior performance in enhancing mobile apps applied for patients' self-management. Embedded data-mining-based features in mobile apps, such as case detection, prediction/classification, risk estimation, or collection of patient data, particularly during self-management, would save, apply, and analyze patient data during and after care. More intelligent methods, such as artificial neural networks, fuzzy logic, and genetic algorithms, and even hybrid methods, may result in more patient-centered recommendations, providing education, guidance, alerts, and awareness of personalized output.
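
    The two methods highlighted above can be contrasted on a generic classification task. A hedged sketch using scikit-learn; the simulated features and labels are stand-ins for mobile-app health data, not data from any reviewed study:

    ```python
    # Compare an SVM and a decision tree on a toy patient-outcome task.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("decision tree", DecisionTreeClassifier(max_depth=4))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")
    ```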

  6. Neural net applied to anthropological material: a methodical study on the human nasal skeleton.

    PubMed

    Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich

    2005-07-01

    A new information processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained on 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach to the more traditional ones. In general the classification by the neural net corresponds with cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The resulting classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.
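
    A 15*15 net that delivers its classification as a two-dimensional map is characteristic of a Kohonen self-organizing map; the sketch below assumes that interpretation and uses the MiniSom package as a stand-in. The 17 features mirror the 17 anthropological parameters, but the data are random placeholders, not the Aachen skull measurements:

    ```python
    # Self-organizing map sketch: each skull maps to its best-matching node;
    # nearby nodes hold similar types, unoccupied nodes mark gaps in the map.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(0)
    data = rng.normal(size=(184, 17))                      # 184 skulls x 17 params
    data = (data - data.mean(axis=0)) / data.std(axis=0)   # standardize features

    som = MiniSom(15, 15, input_len=17, sigma=2.0, learning_rate=0.5,
                  random_seed=0)
    som.train_random(data, num_iteration=5000)

    positions = np.array([som.winner(x) for x in data])
    print("distinct occupied nodes:", len({tuple(p) for p in positions}))
    ```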

  7. Increasing productivity through Total Reuse Management (TRM)

    NASA Technical Reports Server (NTRS)

    Schuler, M. P.

    1991-01-01

    Total Reuse Management (TRM) is a new concept currently being promoted by the NASA Langley Software Engineering and Ada Lab (SEAL). It uses concepts similar to those promoted in Total Quality Management (TQM). Both technical and management personnel are continually encouraged to think in terms of reuse. Reuse is not something that is aimed for after a product is completed, but rather it is built into the product from inception through development. Lowering software development costs, reducing risk, and increasing code reliability are the more prominent goals of TRM. Procedures and methods used to adopt and apply TRM are described. Reuse is frequently thought of as only being applicable to code. However, reuse can apply to all products and all phases of the software life cycle. These products include management and quality assurance plans, designs, and testing procedures. Specific examples of successfully reused products are given and future goals are discussed.

  8. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372

  9. Marae o te Rangi, Temples of the Heavens: Explorations in Polynesian Archaeoastronomy

    NASA Astrophysics Data System (ADS)

    Kirch, Patrick V.

    2015-08-01

    It is well established that the ancient Polynesians possessed sophisticated knowledge of astronomy, applying their understanding of the movements of heavenly bodies among other things to long-distance navigation and to their calendrical systems. Nonetheless, Polynesian archaeologists have been reticent to apply the methods of archaeoastronomy to the interpretation of prehistoric monumental sites, especially temples (marae and heiau). This presentation draws upon examples from the Mangareva and Hawaiian archipelagoes to demonstrate that Polynesian ritual architecture frequently exhibits regular patterns of orientation, suggesting that these temples were aligned with particular astronomical phenomena, such as solstice, equinox, and Pleiades rising positions. The argument is advanced that Polynesian temples were not only places of offering and sacrifice to the gods, but also locations for formal astronomical observation. In part, such observation was presumably crucial to keeping the Polynesian lunar calendar synchronized with the solar year.

  10. Membrane processing technology in the food industry: food processing, wastewater treatment, and effects on physical, microbiological, organoleptic, and nutritional properties of foods.

    PubMed

    Kotsanopoulos, Konstantinos V; Arvanitoyannis, Ioannis S

    2015-01-01

    Membrane processing technology (MPT) is increasingly used nowadays in a wide range of applications (demineralization, desalination, stabilization, separation, deacidification, reduction of microbial load, purification, etc.) in food industries. The most frequently applied techniques are electrodialysis (ED), reverse osmosis (RO), nanofiltration (NF), ultrafiltration (UF), and microfiltration (MF). Several membrane characteristics, such as pore size, flow properties, and the applied hydraulic pressure mainly determine membranes' potential uses. In this review paper the basic membrane techniques, their potential applications in a large number of fields and products towards the food industry, the main advantages and disadvantages of these methods, fouling phenomena as well as their effects on the organoleptic, qualitative, and nutritional value of foods are synoptically described. Some representative examples of traditional and modern membrane applications both in tabular and figural form are also provided.

  11. SKOLAR MD: A Model for Self-Directed, In-Context Continuing Medical Education

    PubMed Central

    Strasberg, Howard R.; Rindfleisch, Thomas C.; Hardy, Steven

    2003-01-01

    INTRODUCTION SKOLAR has implemented a web-based CME program with which physicians can earn AMA Category 1 credit for self-directed learning. METHODS Physicians researched their questions in SKOLAR and applied for CME. Physician auditors reviewed all requests across two phases of the project. A selection rule set was derived from phase one and used in phase two to flag a subset of requests for detailed review. The selection rule set is described. RESULTS In phase one, SKOLAR received 1039 CME applications. Applicants frequently found their answer (94%) and would apply it clinically (93%). A linear regression analysis comparing time awarded to time requested (capped at actual time spent) had R2 = 0.79. DISCUSSION We believe that this self-directed approach to CME is effective and an important complement to traditional CME programs. However, selective audit of self-directed CME requests is necessary to ensure validity of credits awarded. PMID:14728250

  12. Characterizing the vulnerability of frequent emergency department users by applying a conceptual framework: a controlled, cross-sectional study.

    PubMed

    Bodenmann, Patrick; Baggio, Stéphanie; Iglesias, Katia; Althaus, Fabrice; Velonaki, Venetia-Sofia; Stucki, Stephanie; Ansermet, Corine; Paroz, Sophie; Trueb, Lionel; Hugli, Olivier; Griffin, Judith L; Daeppen, Jean-Bernard

    2015-12-09

    Frequent emergency department (ED) users meet several of the criteria of vulnerability, but this needs to be further examined taking into consideration all of vulnerability's different dimensions. This study aimed to characterize frequent ED users and to define risk factors of frequent ED use within a universal health care coverage system, applying a conceptual framework of vulnerability. A controlled, cross-sectional study comparing frequent ED users to a control group of non-frequent users was conducted at the Lausanne University Hospital, Switzerland. Frequent users were defined as patients with five or more visits to the ED in the previous 12 months. The two groups were compared using validated scales for each one of the five dimensions of an innovative conceptual framework: socio-demographic characteristics; somatic, mental, and risk-behavior indicators; and use of health care services. Independent t-tests, Wilcoxon rank-sum tests, Pearson's Chi-squared test and Fisher's exact test were used for the comparison. To examine the vulnerability-related risk factors for being a frequent ED user, univariate and multivariate logistic regression models were used. We compared 226 frequent users and 173 controls. Frequent users had more vulnerabilities in all five dimensions of the conceptual framework. They were younger, and more often immigrants from low/middle-income countries or unemployed, had more somatic and psychiatric comorbidities, were more often tobacco users, and had more primary care physician (PCP) visits. The most significant frequent ED use risk factors were a history of more than three hospital admissions in the previous 12 months (adj OR:23.2, 95%CI = 9.1-59.2), the absence of a PCP (adj OR:8.4, 95%CI = 2.1-32.7), living less than 5 km from an ED (adj OR:4.4, 95%CI = 2.1-9.0), and household income lower than USD 2,800/month (adj OR:4.3, 95%CI = 2.0-9.2). Frequent ED users within a universal health coverage system form a highly vulnerable population, when taking into account all five dimensions of a conceptual framework of vulnerability. The predictive factors identified could be useful in the early detection of future frequent users, in order to address their specific needs and decrease vulnerability, a key priority for health care policy makers. Application of the conceptual framework in future research is warranted.

  13. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

    To improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. First, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Second, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Third, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
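
    A simplified sketch of the pipeline follows: both images are decomposed with the stationary wavelet transform, approximation bands are averaged, and detail bands are selected by larger local spatial frequency. This is a hedged reduction of the method; the paper's DCT refinement stage is omitted for brevity, the window size and wavelet are assumptions, and the inputs are random placeholders:

    ```python
    # SWT decomposition + LSF-guided detail selection (simplified fusion).
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def local_spatial_frequency(band, size=7):
        # Windowed row/column frequency from squared first differences.
        rf = uniform_filter(np.diff(band, axis=0, prepend=0) ** 2, size)
        cf = uniform_filter(np.diff(band, axis=1, prepend=0) ** 2, size)
        return np.sqrt(rf + cf)

    def fuse(ir, vis, wavelet="db2", level=2):
        coeffs_ir = pywt.swt2(ir, wavelet, level=level)
        coeffs_vis = pywt.swt2(vis, wavelet, level=level)
        fused = []
        for (a1, d1), (a2, d2) in zip(coeffs_ir, coeffs_vis):
            approx = (a1 + a2) / 2.0                    # average low-pass band
            details = tuple(
                np.where(local_spatial_frequency(x) >= local_spatial_frequency(y),
                         x, y)                          # keep the sharper detail
                for x, y in zip(d1, d2))
            fused.append((approx, details))
        return pywt.iswt2(fused, wavelet)

    ir = np.random.rand(128, 128)     # placeholder infrared image
    vis = np.random.rand(128, 128)    # placeholder visible image
    print(fuse(ir, vis).shape)
    ```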

  14. Novel multiplex bead-based assay for detection of IDH1 and IDH2 mutations in myeloid malignancies.

    PubMed

    Shivarov, Velizar; Ivanova, Milena; Hadjiev, Evgueniy; Naumova, Elissaveta

    2013-01-01

    Isocitrate dehydrogenase 1 and 2 (IDH) mutations are frequently found in various cancer types such as gliomas, chondrosarcomas and myeloid malignancies. Their molecular detection has recently gained wide recognition in the diagnosis and prognosis of these neoplasms. For that purpose various molecular approaches have been used, but a universally accepted method is still lacking. In this study we aimed to develop a novel bead-based liquid assay using locked nucleic acid (LNA)-modified oligonucleotide probes for multiplexed detection of the most frequent IDH1 (p.R132C, p.R132G, p.R132H, p.R132L, p.R132S) and IDH2 (p.R140Q, p.R172K) mutations. The method includes four steps: 1) PCR amplification of the targeted fragments with biotinylated primers; 2) direct hybridization to barcoded microbeads with specific LNA-modified oligonucleotide probes; 3) incubation with phycoerythrin-coupled streptavidin; 4) acquisition of fluorescent intensities of each set of beads on a flow platform (Luminex Corp., USA). We tested the performance of the assay on both artificial plasmid constructs and on clinical samples from 114 patients with known or suspected myeloid malignancies. The method appeared to be superior to direct sequencing, with a much higher sensitivity of 2.5% mutant alleles. Applying this method to patients' samples we identified a total of 9 mutations (one IDH1 p.R132C, seven IDH2 p.R140Q and one IDH2 p.R172K). In conclusion, this method could be successfully implemented in the diagnostic work-up for various tumors known to harbor IDH1/2 mutations (e.g. myeloid malignancies, gliomas, etc.). International initiatives are needed to validate the different existing methods for detection of IDH1/2 mutations in clinical settings.

  15. [Automated parturition control in primi- and multiparous cows of a Simmental and Holstein crossbred herd].

    PubMed

    Dippon, Matthias; Petzl, Wolfram; Lange, Dorothee; Zerbe, Holm

    2017-02-09

    Perinatal calf mortality is a current problem in dairy farming with regard to ethics and economic losses. Optimizing calving management by frequent monitoring helps increase the survival rate. The objective of this study was to evaluate the breed- and parity-dependent applicability of a recently introduced automated parturition control system with regard to its reliability in the field. Seven days prior to the calculated calving date the automated parturition control system was applied intravaginally in 23 primiparous and 31 multiparous cows in a Holstein-Friesian (HF) and Simmental (FV) crossbred herd. In the case of three consecutive false alarms the animal was removed from the study and was rated as false positive (FP). The statistical significance of the interdependence between FP alarms and the genetic proportion of HF was calculated using the Mann-Whitney U test. The automated parturition control system could successfully be applied in all animals with a genetic HF proportion > 66%. Animals with a predominant FV proportion (> 66%) frequently showed FP alarms (31.6%). Furthermore, multiparous cows lost the intravaginal sender more frequently than primiparous cows (29.0% vs. 8.7%). Purulent vaginal discharge was observed in 72.2% of heavily pregnant cows. The automated parturition control system can successfully be applied in HF cows. Due to frequent losses of the intravaginal sender we cannot recommend its use in cows with a genetic FV proportion > 66%. Future developments of intravaginal automated parturition control systems should incorporate the influence of different breeds on their applicability.

  16. Integration of gas chromatography mass spectrometry methods for differentiating ricin preparation methods.

    PubMed

    Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L

    2012-05-07

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
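
    The data-integration step can be sketched as follows, using scikit-learn's FactorAnalysis as an assumed stand-in for the multivariate factor analysis described; all values are simulated placeholders, not the published measurements:

    ```python
    # Factor analysis over concatenated GC-MS feature sets: inspect loadings
    # to see which measurements drive each factor (the paper found mannose,
    # arabinose, fucose, ricinoleic acid, and acetone most important).
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Rows: toxin preparations (4 methods x 4 seed varieties); columns:
    # carbohydrate abundances, ricinoleic acid, and residual acetone.
    features = ["mannose", "arabinose", "fucose", "glucose",
                "ricinoleic_acid", "acetone"]
    X = rng.normal(size=(16, len(features)))

    fa = FactorAnalysis(n_components=2, random_state=0)
    scores = fa.fit_transform(StandardScaler().fit_transform(X))

    for name, row in zip(features, fa.components_.T):
        print(f"{name:16s} {row[0]:+.2f} {row[1]:+.2f}")
    ```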

  17. Customizing FP-growth algorithm to parallel mining with Charm++ library

    NASA Astrophysics Data System (ADS)

    Puścian, Marek

    2017-08-01

    This paper presents a frequent item mining algorithm customized to handle growing data repositories. The proposed solution applies a Master/Slave scheme to the frequent pattern growth technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks: conditional frequent pattern trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. The paper discusses the performance of the parallelized FP-growth algorithm on different datasets, illustrating the approach with many experiments and measurements performed on a multiprocessor, multithreaded computer.
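
    For reference, the sequential technique being parallelized looks like this. A minimal sketch assuming the mlxtend package (the paper's Charm++ Master/Slave scheme distributes the conditional-tree mining step across workers; the transactions below are toy data):

    ```python
    # Sequential FP-growth baseline: one-hot encode transactions, then mine
    # frequent itemsets above a minimum support threshold.
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import fpgrowth

    transactions = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a"]]
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                          columns=te.columns_)
    print(fpgrowth(onehot, min_support=0.4, use_colnames=True))
    ```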

  18. Integrated field and laboratory tests to evaluate effects of metals-impacted wetlands on amphibians: A case study from Montana

    USGS Publications Warehouse

    Linder, G.; ,

    2003-01-01

    Mining activities frequently impact wildlife habitats, and a wide range of habitats may require evaluations of the linkages between wildlife and environmental stressors common to mining activities (e.g., physical alteration of habitat, releases of chemicals such as metals and other inorganic constituents as part of the mining operation). Wetlands, for example, are frequently impacted by mining activities. Within an ecological assessment for a wetland, toxicity evaluations for representative species may be advantageous to the site evaluation, since these species could be exposed to complex chemical mixtures potentially released from the site. Amphibian species common to these transition zones between terrestrial and aquatic habitats are one key biological indicator of exposure, and integrated approaches which involve both field and laboratory methods focused on amphibians are critical to the assessment process. The laboratory and field evaluations of a wetland in western Montana illustrates the integrated approach to risk assessment and causal analysis. Here, amphibians were used to evaluate the potential toxicity associated with heavy metal-laden sediments deposited in a reservoir. Field and laboratory methods were applied to a toxicity assessment for metals characteristic of mine tailings to reduce potential "lab to field" extrapolation errors and provide adaptive management programs with critical site-specific information targeted on remediation.

  19. [Algorithms of artificial neural networks--practical application in medical science].

    PubMed

    Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna

    2005-12-01

    Artificial neural networks (ANN) may be an alternative and complementary tool to typical statistical analysis. However, in spite of the many ready-to-use computer implementations of various ANN algorithms, artificial intelligence is relatively rarely applied to data processing. This paper presents practical aspects of the scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN are discussed, from material selection and its division into groups to the quality assessment of the obtained results. The most frequent, typical sources of error, as well as a comparison of the ANN method with modeling by regression analysis, are also described.

  20. The human chromosomal fragile sites more often involved in constitutional deletions and duplications - A genetic and statistical assessment

    NASA Astrophysics Data System (ADS)

    Gomes, Dora Prata; Sequeira, Inês J.; Figueiredo, Carlos; Rueff, José; Brás, Aldina

    2016-12-01

    Human chromosomal fragile sites (CFSs) are heritable loci or regions of the human chromosomes prone to exhibit gaps, breaks and rearrangements. Determining the frequency of deletions and duplications in CFSs may help explain the occurrence of human disease due to those rearrangements. In this study we analyzed the frequency of deletions and duplications in each human CFS. Statistical methods, namely data display, descriptive statistics and linear regression analysis, were applied to analyze this dataset. We found that FRA15C, FRA16A and FRAXB are the CFSs most frequently involved in deletions and duplications occurring in the human genome.

  1. [Scheimpflug photography for the examination of phakic intraocular lenses].

    PubMed

    Baumeister, M

    2014-10-01

    Phakic intraocular lenses (IOLs) have become an established means of surgical correction for high ametropia, such as high myopia. Scheimpflug photography is one of the methods frequently applied for postoperative examination of the implants. Results from published studies employing Scheimpflug photography for the examination of anterior chamber angle-fixated, iris-fixated and sulcus-fixated phakic IOLs were evaluated. In several published studies Scheimpflug photography was used to examine the position of the implant and opacification of the crystalline lens. The results provided valuable evidence for the improvement of phakic IOL design. Scheimpflug photography offers an easy to use, rapid, non-contact examination of phakic IOLs.

  2. Feature Screening for Ultrahigh Dimensional Categorical Data with Applications.

    PubMed

    Huang, Danyang; Li, Runze; Wang, Hansheng

    2014-01-01

    Ultrahigh dimensional data with both categorical responses and categorical covariates are frequently encountered in the analysis of big data, for which feature screening has become an indispensable statistical tool. We propose a Pearson chi-square based feature screening procedure for categorical response with ultrahigh dimensional categorical covariates. The proposed procedure can be directly applied for detection of important interaction effects. We further show that the proposed procedure possesses screening consistency property in the terminology of Fan and Lv (2008). We investigate the finite sample performance of the proposed procedure by Monte Carlo simulation studies, and illustrate the proposed method by two empirical datasets.
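
    The marginal screening idea can be sketched directly: compute a Pearson chi-square statistic between the categorical response and each covariate, then keep the top-ranked features. This is a hedged illustration with toy data sizes; the paper's exact statistic, normalization, and screening threshold are not reproduced:

    ```python
    # Chi-square marginal feature screening for categorical response and
    # categorical covariates (one informative feature planted for illustration).
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n, p = 200, 1000
    y = rng.integers(0, 2, size=n)                 # binary response
    X = rng.integers(0, 3, size=(n, p))            # 3-level categorical covariates
    X[:, 7] = y * 2 + rng.integers(0, 2, size=n)   # plant one informative feature

    def chi2_stat(x, y):
        table = np.zeros((x.max() + 1, y.max() + 1))
        for xi, yi in zip(x, y):
            table[xi, yi] += 1                     # build contingency table
        return chi2_contingency(table)[0]

    stats_ = np.array([chi2_stat(X[:, j], y) for j in range(p)])
    top = np.argsort(stats_)[::-1][:10]            # screened feature set
    print("top-ranked covariates:", top)
    ```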

  3. Finite-element approach to Brownian dynamics of polymers.

    PubMed

    Cyron, Christian J; Wall, Wolfgang A

    2009-12-01

    In the last decades simulation tools for Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and accelerated the scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost on the basis of a far-developed mathematical background.

  4. The Chameleon Effect: characterization challenges due to the variability of nanoparticles and their surfaces

    NASA Astrophysics Data System (ADS)

    Baer, Donald R.

    2018-05-01

    Nanoparticles in a variety of forms are increasingly important in fundamental research, technological and medical applications, and environmental or toxicology studies. Physical and chemical drivers that lead to multiple types of particle instabilities complicate the ability to produce, appropriately characterize, and consistently deliver well-defined particles, frequently leading to inconsistencies and conflicts in the published literature. This perspective suggests that provenance information, beyond that often recorded or reported, and the application of a set of core characterization methods, including a surface-sensitive technique, consistently applied at critical times, can serve as tools in the effort to minimize reproducibility issues.

  5. A review on methods of regeneration of spent pickling solutions from steel processing.

    PubMed

    Regel-Rosocka, Magdalena

    2010-05-15

    The review presents various techniques for the regeneration of spent pickling solutions, including methods with acid recovery, such as diffusion dialysis, electrodialysis, membrane electrolysis, membrane distillation, evaporation, precipitation and spray roasting, as well as those with acid and metal recovery: ion exchange, retardation, crystallization, and solvent and membrane extraction. Advantages and disadvantages of the techniques are presented, discussed and confronted with the best available techniques (BAT) requirements. Most of the methods presented meet the BAT requirements. The best available techniques are electrodialysis, diffusion dialysis and crystallization; however, in practice spray roasting and retardation/ion-exchange are applied most frequently for spent pickling solution regeneration. Solvent extraction, non-dispersive solvent extraction and membrane distillation should be regarded as "waiting for their chance" because they are well investigated and developed. Environmental and economic benefits of the methods presented in the review depend on the cost of chemicals and wastewater treatment, legislative regulations and the cost of modernizing existing technologies or implementing new ones. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  6. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
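
    A minimal sketch of the log-transform idea (an illustration of the general principle only, not the validated assay; names are ours):

      # Sketch: centering skewed, strictly positive ratio-spectra data on the
      # geometric mean, obtained as the exponential of the mean of the logs.
      import numpy as np

      def geometric_mean_center(ratio_spectrum):
          log_data = np.log(ratio_spectrum)   # log transform tames the skew
          geo_mean = np.exp(log_data.mean())  # geometric mean
          return ratio_spectrum / geo_mean    # centering by division

      print(geometric_mean_center(np.array([0.8, 1.1, 1.4, 2.9, 0.9])))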

  7. BIOPHYSICAL PARAMETERS DURING RADIOFREQUENCY CATHETER ABLATION OF SCAR-MEDIATED VENTRICULAR TACHYCARDIA: EPICARDIAL AND ENDOCARDIAL APPLICATIONS VIA MANUAL AND MAGNETIC NAVIGATION

    PubMed Central

    Bourke, Tara; Buch, Eric; Mathuria, Nilesh; Michowitz, Yoav; Yu, Ricky; Mandapati, Ravi; Shivkumar, Kalyanam; Tung, Roderick

    2014-01-01

    Background There is a paucity of data on biophysical parameters during radiofrequency ablation of scar-mediated ventricular tachycardia (VT). Methods and Results Data were collected from consecutive patients undergoing VT ablation with open irrigation. Complete data were available for 372 lesions in 21 patients. The frequencies of biophysical parameter changes were: >10Ω impedance reduction (80%) and bipolar EGM reduction (69%), while loss of capture was uncommon (32%). Unipolar injury current was seen in 72% of radiofrequency applications. Both EGM reduction and impedance drop were seen in 57%, and a change in all 3 parameters was seen in only 20% of lesions. Late potentials were eliminated in 33%, reduced/modified in 56%, and remained after ablation in 11%. Epicardial lesions exhibited an impedance drop (90% vs 76%, p=0.002) and loss of capture (46% vs 27%, p<0.001) more frequently than endocardial lesions. Lesions delivered manually exhibited a >10Ω impedance drop (83% vs 71%, p=0.02) and an EGM reduction (71% vs 40%, p<0.001) more frequently than lesions applied using magnetic navigation, although loss of capture, elimination of LPs, and a change in all 3 parameters were similarly observed. Conclusions VT ablation is inefficient, as the majority of radiofrequency lesions do not achieve more than one targeted biophysical parameter. Only one-third of RF applications targeted at LPs resulted in complete elimination. Epicardial ablation within scar may be more effective than endocardial ablation, and lesions applied manually may be more effective than lesions applied using magnetic navigation. New technologies directed at identifying and optimizing ablation effectiveness in scar are clinically warranted. PMID:24946895

  8. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    NASA Astrophysics Data System (ADS)

    Brandt, C.; Thakur, S. C.; Tynan, G. R.

    2016-04-01

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
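
    A rough sketch of how one velocity vector can be obtained from three probe signals, assuming a locally plane wave (illustrative only; all names are ours, and frequency selection and sign conventions are simplified):

      # Sketch: velocity from three-point cross-phase time delays.
      import numpy as np

      def delay_at_peak(a, b, dt):
          """Delay of b relative to a from the cross-spectral phase at the
          dominant frequency of a; b(t) = a(t - tau) gives phase -2*pi*f*tau."""
          A, B = np.fft.rfft(a), np.fft.rfft(b)
          freqs = np.fft.rfftfreq(len(a), dt)
          k = np.argmax(np.abs(A[1:])) + 1         # dominant nonzero bin
          dphi = np.angle(B[k] * np.conj(A[k]))
          return -dphi / (2 * np.pi * freqs[k])

      def velocity_from_three_points(sig, pos, dt):
          """sig: three time series; pos: (3, 2) probe positions. Solves
          tau_0j = s . (r_j - r_0) for the slowness vector s, then converts
          to the phase velocity v = s / |s|^2."""
          tau = np.array([delay_at_peak(sig[0], sig[1], dt),
                          delay_at_peak(sig[0], sig[2], dt)])
          D = np.array([pos[1] - pos[0], pos[2] - pos[0]])
          s = np.linalg.solve(D, tau)
          return s / np.dot(s, s)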

  9. Guided wave localization of damage via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Levine, Ross M.; Michaels, Jennifer E.; Lee, Sang Jun

    2012-05-01

    Ultrasonic guided waves are frequently applied for structural health monitoring and nondestructive evaluation of plate-like metallic and composite structures. Spatially distributed arrays of fixed piezoelectric transducers can be used to detect damage by recording and analyzing all pairwise signal combinations. By subtracting pre-recorded baseline signals, the effects due to scatterer interactions can be isolated. Given these residual signals, techniques such as delay-and-sum imaging are capable of detecting flaws, but do not exploit the expected sparse nature of damage. It is desired to determine the location of a possible flaw by leveraging the anticipated sparsity of damage; i.e., most of the structure is assumed to be damage-free. Unlike least-squares methods, L1-norm minimization techniques favor sparse solutions to inverse problems such as the one considered here of locating damage. Using this type of method, it is possible to exploit sparsity of damage by formulating the imaging process as an optimization problem. A model-based damage localization method is presented that simultaneously decomposes all scattered signals into location-based signal components. The method is first applied to simulated data to investigate sensitivity to both model mismatch and additive noise, and then to experimental data recorded from an aluminum plate with artificial damage. Compared to delay-and-sum imaging, results exhibit a significant reduction in both spot size and imaging artifacts when the model is reasonably well-matched to the data.
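
    A minimal sketch of the sparse formulation, solved here with plain iterative soft thresholding (ISTA); the dictionary A of modeled location-based signals is assumed given, and this is not the authors' exact decomposition:

      # Sketch: damage localization as min_x 0.5*||A x - b||^2 + lam*||x||_1,
      # where b stacks the residual signals and column j of A holds the
      # modeled scattered signals for candidate location j.
      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, b, lam, n_iter=500):
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b)         # gradient of the quadratic term
              x = soft_threshold(x - grad / L, lam / L)
          return x                             # nonzero entries ~ damage sites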

  10. Enhanced conformational sampling of nucleic acids by a new Hamiltonian replica exchange molecular dynamics approach.

    PubMed

    Curuksu, Jeremy; Zacharias, Martin

    2009-03-14

    Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
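
    A sketch of the configuration-swap step on which such replica-exchange schemes rely (the generic Metropolis criterion; the construction of the biasing potentials themselves is not reproduced here):

      # Sketch: Metropolis exchange between two replicas with different
      # Hamiltonians (e.g., different dihedral biasing potentials).
      import numpy as np

      def try_swap(x_i, x_j, U_i, U_j, beta, rng=None):
          """U_i, U_j: callables giving each replica's potential energy."""
          rng = rng or np.random.default_rng()
          delta = beta * ((U_i(x_j) + U_j(x_i)) - (U_i(x_i) + U_j(x_j)))
          if rng.random() < np.exp(-delta):    # min(1, e^-delta) acceptance
              return x_j, x_i                  # accept: exchange configurations
          return x_i, x_j                      # reject: keep configurations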

  11. Applying Suffix Rules to Organization Name Recognition

    NASA Astrophysics Data System (ADS)

    Inui, Takashi; Murakami, Koji; Hashimoto, Taiichi; Utsumi, Kazuo; Ishikawa, Masamichi

    This paper presents a method for boosting the performance of organization name recognition, which is a part of named entity recognition (NER). Although gazetteers (lists of NEs) are known to be effective features for supervised machine learning approaches to the NER task, previous methods applied the gazetteers to NER very simply: the gazetteers were used just for searching for exact matches between input text and the NEs included in them. The proposed method generates regular expression rules from gazetteers and, with these rules, can realize high-coverage searches based on looser matches between input text and NEs. To generate these rules, we focus on two well-known characteristics of NE expressions: 1) most NE expressions can be divided into two parts, a class-reference part and an instance-reference part, and 2) for most NE expressions the class-reference part is located at the suffix position. A pattern mining algorithm runs on the set of NEs in the gazetteers and finds the frequent word sequences from which NEs are constructed. Then, we employ as suffix rules only those word sequences which have the class-reference part at the suffix position. Experimental results showed that the proposed method improved the performance of organization name recognition, achieving an F-value of 84.58 on the evaluation data.
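
    A toy sketch of mining suffix rules from a gazetteer (shown for space-separated names, whereas the paper targets Japanese text; all names and thresholds are ours):

      # Sketch: count word sequences ending gazetteer entries (candidate
      # class-reference suffixes) and turn frequent ones into loose regexes.
      import re
      from collections import Counter

      def suffix_rules(gazetteer, max_len=2, min_count=3):
          counts = Counter()
          for name in gazetteer:
              words = name.split()
              for n in range(1, max_len + 1):
                  if len(words) >= n:
                      counts[tuple(words[-n:])] += 1
          rules = []
          for suffix, c in counts.items():
              if c >= min_count:
                  # loose match: any capitalized tokens before the suffix
                  pattern = r"([A-Z]\w+\s+)+" + re.escape(" ".join(suffix))
                  rules.append(re.compile(pattern))
          return rules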

  12. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    PubMed Central

    Mendes-Santos, Liana Chaves; Mograbi, Daniel; Spenciere, Bárbara; Charchat-Fichman, Helenice

    2015-01-01

    The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. Objective The aims of this study were to analyze the performance of elderly on the CDT and evaluate inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). Methods We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. Results and Conclusion A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging. PMID:29213954

  13. High-resolution melting (HRM) re-analysis of a polyposis patients cohort reveals previously undetected heterozygous and mosaic APC gene mutations.

    PubMed

    Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J

    2015-06-01

    Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.

  14. Evolution of strategic risks under future scenarios for improved utility master plans.

    PubMed

    Luís, Ana; Lickorish, Fiona; Pollard, Simon

    2016-01-01

    Integrated, long-term risk management in the water sector is poorly developed. Whilst scenario planning has been applied to singular issues (e.g. climate change), it often misses a link to risk management because the likelihood of long-term impacts is frequently unaccounted for in these analyses. Here we apply the morphological approach to scenario development for a case study utility, Empresa Portuguesa das Águas Livres (EPAL). A baseline portfolio of strategic risks threatening the achievement of EPAL's corporate objectives was evolved through the lens of three future scenarios, 'water scarcity', 'financial resource scarcity' and 'strong economic growth', built on drivers such as climate, demographic, economic, regulatory and technological changes, and validated through a set of expert workshops. The results represent how the baseline set of risks might develop over a 30 year period, allowing threats and opportunities to be identified and enabling strategies for master plans to be devised. We believe this to be the first combined use of risk and futures methods applied to a portfolio of strategic risks in the water utility sector. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Estimated splash and training wall height requirements for stepped chutes applied to embankment dams

    USDA-ARS?s Scientific Manuscript database

    Aging embankment dams are commonly plagued with insufficient spillway capacity. To provide increased spillway capacity, stepped chutes are frequently applied as an overtopping protection system for embankment dams. Stepped chutes with sufficient length develop aerated flow. The aeration and flow...

  16. Early warning by near-real time disturbance monitoring (Invited)

    NASA Astrophysics Data System (ADS)

    Verbesselt, J.; Zeileis, A.; Herold, M.

    2013-12-01

    Near real-time monitoring of ecosystem disturbances is critical for rapidly assessing and addressing impacts on carbon dynamics, biodiversity, and socio-ecological processes. Satellite remote sensing enables cost-effective and accurate monitoring at frequent time steps over large areas. Yet, generic methods to detect disturbances within newly captured satellite images are lacking. We propose a multi-purpose time-series-based disturbance detection approach that identifies and models stable historical variation to enable change detection within newly acquired data. Satellite image time series of vegetation greenness provide a global record of terrestrial vegetation productivity over the past decades. Here, we assess and demonstrate the method by applying it to (1) real-world satellite greenness image time series between February 2000 and July 2011 covering Somalia to detect drought-related vegetation disturbances, and (2) Landsat image time series to detect forest disturbances. First, results illustrate that disturbances are successfully detected in near real-time while being robust to seasonality and noise. Second, major drought-related disturbances corresponding to the most drought-stressed regions in Somalia are detected from mid-2010 onwards. Third, the method can be applied to Landsat image time series having a lower temporal data density. Furthermore, the method can analyze in-situ or satellite data time series of biophysical indicators from local to global scale, since it is fast, does not depend on thresholds and does not require time series gap filling. While the data and methods used are appropriate for proof-of-concept development of global scale disturbance monitoring, specific applications (e.g., drought or deforestation monitoring) mandate integration within an operational monitoring framework. Furthermore, the real-time monitoring method is implemented in an open-source environment and is freely available in the BFAST package for R software. Information illustrating how to apply the method on satellite image time series is available at http://bfast.R-Forge.R-project.org/ and in the example section of the bfastmonitor() function within the BFAST package.
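
    A minimal sketch of the monitoring idea (not the BFAST implementation itself, which is available for R as noted above): fit a harmonic season-plus-trend model to a stable history period and flag new observations that fall outside its prediction bounds. All names and the 3-sigma threshold are illustrative.

      # Sketch: stable-history model fitting and near-real-time flagging.
      import numpy as np

      def design(t, period=365.25):
          return np.column_stack([np.ones_like(t), t,
                                  np.sin(2 * np.pi * t / period),
                                  np.cos(2 * np.pi * t / period)])

      def fit_history(t, y):
          coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
          sigma = np.std(y - design(t) @ coef)   # stable-period residual scale
          return coef, sigma

      def monitor(t_new, y_new, coef, sigma, k=3.0):
          resid = y_new - design(t_new) @ coef
          return np.abs(resid) > k * sigma       # True where disturbance flagged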

  17. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
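
    A rough sketch of one such regularized solution for the classical geometry F(y) = 2 * integral_y^R f(r) r dr / sqrt(r^2 - y^2), using a crude rectangle-rule discretization and Tikhonov regularization with a first-difference smoothness operator (illustrative only; this is not the authors' compact-set algorithm):

      # Sketch: Tikhonov-regularized discrete Abel inversion.
      import numpy as np

      def abel_matrix(r):
          """Rectangle-rule forward Abel operator on a uniform grid r; the
          singular diagonal term is simply skipped in this crude version."""
          n, dr = len(r), r[1] - r[0]
          A = np.zeros((n, n))
          for i in range(n):                  # row: projection coordinate y=r[i]
              for j in range(i + 1, n):
                  A[i, j] = 2 * r[j] * dr / np.sqrt(r[j]**2 - r[i]**2)
          return A

      def tikhonov_solve(A, F, lam):
          """Minimize ||A f - F||^2 + lam * ||D f||^2, D = first differences;
          the target degree of smoothness is problem dependent."""
          D = np.diff(np.eye(A.shape[1]), axis=0)
          return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ F)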

  18. Geoelectric Characterization of Thermal Water Aquifers Using 2.5D Inversion of VES Measurements

    NASA Astrophysics Data System (ADS)

    Gyulai, Á.; Szűcs, P.; Turai, E.; Baracza, M. K.; Fejes, Z.

    2017-03-01

    This paper presents a short theoretical summary of the series expansion-based 2.5D combined geoelectric weighted inversion (CGWI) method and highlights how the simultaneous character of this inversion can advantageously decrease the number of unknowns. 2.5D CGWI is an approximate inversion method for the determination of 3D structures, which uses the joint 2D forward modeling of dip and strike direction data. In the inversion procedure, Steiner's most frequent value method is applied to the automatic separation of dip and strike direction data and outliers. The workflow of the inversion and its practical application are presented in the study. For conventional vertical electrical sounding (VES) measurements, this method can determine the parameters of complex structures more accurately than a single inversion method. Field data show that the developed 2.5D CGWI can determine the optimal location for drilling an exploratory thermal water prospecting well. The novelty of this research is that the measured VES data in the dip and strike directions are jointly inverted by the 2.5D CGWI method.

  19. Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.

    2017-02-01

    Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in areas with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well in dealing with such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of the Godunov method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, and this brings up the question of ensuring the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method has been applied while creating the computational module. This approach makes possible compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.

  1. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare

    PubMed Central

    Taylor, Michael J; McNicholas, Chris; Nicolay, Chris; Darzi, Ara; Bell, Derek; Reed, Julie E

    2014-01-01

    Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time. Results 73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Less than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change is apparent and only 15% (7/47) reported the use of quantitative data at monthly or more frequent data intervals to inform progression of cycles. Discussion To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs. PMID:24025320

  2. Discovering significant evolution patterns from satellite image time series.

    PubMed

    Petitjean, François; Masseglia, Florent; Gançarski, Pierre; Forestier, Germain

    2011-12-01

    Satellite Image Time Series (SITS) provide us with precious information on land cover evolution. By studying these series of images we can both understand the changes of specific areas and discover global phenomena that spread over larger areas. Changes that can occur throughout the sensing time can spread over very long periods and may have different start and end times depending on the location, which complicates the mining and the analysis of series of images. This work focuses on frequent sequential pattern mining (FSPM) methods, since this family of methods fits the above-mentioned issues. This family of methods consists of finding the most frequent evolution behaviors, and is able to extract long-term changes as well as short-term ones, whenever the change may start and end. However, applying FSPM methods to SITS implies confronting two main challenges, related to the characteristics of SITS and the domain's constraints. First, satellite images associate multiple measures with a single pixel (the radiometric levels of different wavelengths corresponding to infra-red, red, etc.), which makes the search space multi-dimensional and thus requires specific mining algorithms. Furthermore, the non-evolving regions, which are the vast majority and overwhelm the evolving ones, challenge the discovery of these patterns. We propose a SITS mining framework that enables discovery of these patterns despite these constraints and characteristics. Our proposal is inspired by FSPM and provides a relevant visualization principle. Experiments carried out on 35 images sensed over 20 years show the proposed approach makes it possible to extract relevant evolution behaviors.
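
    A toy counter in the spirit of FSPM (contiguous patterns only, whereas true sequential pattern mining also allows gaps between items; data and names are invented):

      # Sketch: count contiguous length-k evolution patterns across pixels,
      # with support = fraction of pixels containing the pattern.
      from collections import Counter

      def frequent_evolutions(pixel_sequences, k=3, min_support=0.2):
          counts = Counter()
          for seq in pixel_sequences:
              seen = {tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)}
              counts.update(seen)              # each pixel counted at most once
          n = len(pixel_sequences)
          return {p: c / n for p, c in counts.items() if c / n >= min_support}

      series = [["forest", "forest", "bare", "urban"],
                ["forest", "bare", "urban", "urban"]]
      print(frequent_evolutions(series, k=2, min_support=0.5))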

  3. Application of nonparametric regression methods to study the relationship between NO2 concentrations and local wind direction and speed at background sites.

    PubMed

    Donnelly, Aoife; Misstear, Bruce; Broderick, Brian

    2011-02-15

    Background concentrations of nitrogen dioxide (NO(2)) are not constant but vary temporally and spatially. The current paper presents a powerful tool for the quantification of the effects of wind direction and wind speed on background NO(2) concentrations, particularly in cases where monitoring data are limited. In contrast to previous studies which applied similar methods to sites directly affected by local pollution sources, the current study focuses on background sites with the aim of improving methods for predicting background concentrations adopted in air quality modelling studies. The relationship between measured NO(2) concentration in air at three such sites in Ireland and locally measured wind direction has been quantified using nonparametric regression methods. The major aim was to analyse a method for quantifying the effects of local wind direction on background levels of NO(2) in Ireland. The method was expanded to include wind speed as an added predictor variable. A Gaussian kernel function is used in the analysis and circular statistics are employed for the wind direction variable. Wind direction and wind speed were both found to have a statistically significant effect on background levels of NO(2) at all three sites. Frequently, environmental impact assessments are based on short-term baseline monitoring, producing a limited dataset. The presented nonparametric regression methods, in contrast to frequently used methods such as binning of the data, allow concentrations for missing data pairs to be estimated and a distinction between spurious and true peaks in concentrations to be made. The methods were found to provide a realistic estimation of long term concentration variation with wind direction and speed, even for cases where the data set is limited. Accurate identification of the actual variation at each location and causative factors could be made, thus supporting the improved definition of background concentrations for use in air quality modelling studies. Copyright © 2010 Elsevier B.V. All rights reserved.
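
    A minimal sketch of such a kernel regression, with a circular (von Mises type) kernel for wind direction and a Gaussian kernel for wind speed (the bandwidths kappa and h are illustrative, not those of the study):

      # Sketch: Nadaraya-Watson estimate of NO2 at a given direction/speed.
      import numpy as np

      def predict_no2(theta0, u0, theta, u, no2, kappa=4.0, h=1.5):
          """theta: observed wind directions (radians); u: wind speeds."""
          w_dir = np.exp(kappa * np.cos(theta - theta0))   # circular kernel
          w_spd = np.exp(-0.5 * ((u - u0) / h) ** 2)       # Gaussian kernel
          w = w_dir * w_spd
          return np.sum(w * no2) / np.sum(w)               # weighted mean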

  4. On Internal Validity in Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Pustejovsky, James E.

    2014-01-01

    Single-case designs are a class of research designs for evaluating intervention effects on individual cases. The designs are widely applied in certain fields, including special education, school psychology, clinical psychology, social work, and applied behavior analysis. The multiple baseline design (MBD) is the most frequently used single-case…

  5. Transformative Learning Approaches for Public Relations Pedagogy

    ERIC Educational Resources Information Center

    Motion, Judy; Burgess, Lois

    2014-01-01

    Public relations educators are frequently challenged by students' flawed perceptions of public relations. Two contrasting case studies are presented in this paper to illustrate how socially-oriented paradigms may be applied to a real-client project to deliver a transformative learning experience. A discourse-analytic approach is applied within the…

  6. Applied Research in Child and Adolescent Development: A Practical Guide

    ERIC Educational Resources Information Center

    Maholmes, Valerie, Ed.; Lomonaco, Carmela Gina, Ed.

    2010-01-01

    Developed for an NIH training institute, this volume is organized around the most frequently asked questions by researchers starting their careers in applied research in child and adolescent development. With contributions from the leading scholars in the field, actual research experiences highlight the challenges one faces in conducting such…

  7. Response to Intervention Blueprints: District Level Edition

    ERIC Educational Resources Information Center

    Elliott, Judy; Morrison, Diane

    2008-01-01

    Response to Intervention (RtI) is the practice of providing high quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying student response data to important educational decisions. RtI should be applied to decisions in general, remedial and…

  8. Response to Intervention Blueprints: School Building Level Edition

    ERIC Educational Resources Information Center

    Kurns, Sharon; Tilly, W. David

    2008-01-01

    Response to Intervention (RtI) is the practice of providing high quality instruction and interventions matched to student need, monitoring progress frequently to make decisions about changes in instruction or goals and applying student response data to important educational decisions. RtI should be applied to decisions in general, remedial and…

  9. Brief Experimental Analyses of Academic Performance: Introduction to the Special Series

    ERIC Educational Resources Information Center

    McComas, Jennifer J.; Burns, Matthew K.

    2009-01-01

    Academic skills are frequent concerns in K-12 schools that could benefit from the application of applied behavior analysis (ABA). Brief experimental analysis (BEA) of academic performance is perhaps the most promising approach to apply ABA to student learning. Although research has consistently demonstrated the effectiveness of academic…

  10. How to eliminate the formation of chlorogenic acids artefacts during plants analysis? Sea sand disruption method (SSDM) in the HPLC analysis of chlorogenic acids and their native derivatives in plants.

    PubMed

    Wianowska, Dorota; Typek, Rafał; Dawidowicz, Andrzej L

    2015-09-01

    The analytical procedures for determining plant constituents involve the application of sample preparation methods to fully isolate and/or pre-concentrate the analyzed substances. High-temperature liquid extraction is still applied most frequently for this purpose. The present paper shows that high-temperature extraction cannot be applied for the analysis of chlorogenic acids (CQAs) and their derivatives in plants as it causes the CQAs transformation leading to erroneous quantitative estimations of these compounds. Experiments performed on different plants (black elder, hawthorn, nettle, yerba maté, St John's wort and green coffee) demonstrate that the most appropriate method for the estimation of CQAs/CQAs derivatives is sea sand disruption method (SSDM) because it does not induce any transformation and/or degradation processes in the analyzed substances. Owing to the SSDM method application we found that the investigated plants, besides four main CQAs, contain sixteen CQAs derivatives, among them three quinic acids. The application of SSDM in plant analysis not only allows to establish a true concentration of individual CQAs in the examined plants but also to determine which chlorogenic acids derivatives are native plant components and what is their concentration level. What is even more important, the application of SSDM in plant analysis allows to eliminate errors that may arise or might have arisen in the study of chlorogenic acids and their derivatives in plant metabolism. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Reply to Oreska et al ‘Comment on Geoengineering with seagrasses: is credit due where credit is given?’

    NASA Astrophysics Data System (ADS)

    Johannessen, Sophia C.; Macdonald, Robie W.

    2018-03-01

    In their comment on the review paper, ‘Geoengineering with seagrasses: is credit due where credit is given?,’ Oreska et al 2018 state that some of the concerns raised in the review ‘warrant serious consideration by the seagrass research community,’ but they argue that these concerns are either not relevant to the Voluntary Carbon Standard protocol, VM0033, or are already addressed by specific provisions in the protocol. The VM0033 protocol is a strong and detailed document that includes much of merit, but the methodology for determining carbon sequestration in sediment is flawed, both in the carbon stock change method and in the carbon burial method. The main problem with the carbon stock change method is that the labile carbon in the surface layer of sediments is vulnerable to remineralization and resuspension; it is not sequestered on the 100 year timescale required for carbon credits. The problem with the carbon burial method is chiefly in its application. The protocol does not explain how to apply 210Pb-dating to a core, leaving project proponents to apply the inappropriate methods frequently reported in the blue carbon literature, which result in overestimated sediment accumulation rates. Finally, the default emission factors permitted by the protocol are based on literature values that are themselves too high. All of these problems can be addressed, which should result in clearer, more rigorous guidelines for awarding carbon credits for the protection or restoration of seagrass meadows.

  12. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions happening frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. To automatically track the skaters and precisely output their trajectories becomes a challenging task in object tracking. We employ the global rink information to compensate camera motion and obtain the global spatial information of skaters, utilize random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to labelling pixels to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  13. Accounting for Dependence Induced by Weighted KNN Imputation in Paired Samples, Motivated by a Colorectal Cancer Study

    PubMed Central

    Suyundikov, Anvar; Stevens, John R.; Corcoran, Christopher; Herrick, Jennifer; Wolff, Roger K.; Slattery, Martha L.

    2015-01-01

    Missing data can arise in bioinformatics applications for a variety of reasons, and imputation methods are frequently applied to such data. We are motivated by a colorectal cancer study where miRNA expression was measured in paired tumor-normal samples of hundreds of patients, but data for many normal samples were missing due to lack of tissue availability. We compare the precision and power performance of several imputation methods, and draw attention to the statistical dependence induced by K-Nearest Neighbors (KNN) imputation. This imputation-induced dependence has not previously been addressed in the literature. We demonstrate how to account for this dependence, and show through simulation how the choice to ignore or account for this dependence affects both power and type I error rate control. PMID:25849489
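
    A minimal sketch of weighted KNN imputation in a paired design (the choice of neighbor space and the inverse-distance weighting are assumptions for illustration, not the exact scheme evaluated in the paper):

      # Sketch: impute a missing normal-sample profile from the K nearest
      # patients with observed normals; note that imputed values reuse the
      # same observed donors, which induces the dependence discussed above.
      import numpy as np

      def knn_impute_normal(target_tumor, tumors_obs, normals_obs, k=5):
          """tumors_obs: (n, p) tumor profiles of patients with observed
          normals; normals_obs: their (n, p) normal profiles."""
          d = np.linalg.norm(tumors_obs - target_tumor, axis=1)
          idx = np.argsort(d)[:k]              # K nearest in tumor space
          w = 1.0 / (d[idx] + 1e-12)           # inverse-distance weights
          w /= w.sum()
          return w @ normals_obs[idx]          # imputed normal profile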

  14. Ultra-sensitive, stable isotope assisted quantification of multiple urinary mycotoxin exposure biomarkers.

    PubMed

    Šarkanj, Bojan; Ezekiel, Chibundu N; Turner, Paul C; Abia, Wilfred A; Rychlik, Michael; Krska, Rudolf; Sulyok, Michael; Warth, Benedikt

    2018-08-17

    There is a critical need to better understand the patterns, levels and combinatory effects of the exposures we are facing through our diet and environment. Mycotoxin mixtures are of particular concern due to chronic low dose exposures caused by naturally contaminated food. To facilitate new insights into their role in chronic disease, mycotoxins and their metabolites are quantified in bio-fluids as biomarkers of exposure. Here, we describe a highly sensitive urinary assay based on ultra-high performance liquid chromatography - tandem mass spectrometry (UHPLC-MS/MS) and 13C-labelled or deuterated internal standards covering the most relevant regulated and emerging mycotoxins. Utilizing enzymatic pre-treatment, solid phase extraction and UHPLC separation, the sensitivity of the method was significantly higher (10-160x lower LODs) than in a previously described method used for comparison purposes, and stable isotopes provided compensation for challenging matrix effects. This method was validated in-house and applied to re-assess mycotoxin exposure in urine samples obtained from Nigerian children, adolescents and adults, naturally exposed through their regular diet. Owing to the method's high sensitivity, biomarkers were detected in all samples. The mycoestrogen zearalenone was the most frequently detected contaminant (82%), but ochratoxin A (76%), aflatoxin M1 (73%) and fumonisin B1 (71%) were also quantified in a large share of urines. Overall, 57% of 120 urines were contaminated with both aflatoxin M1 and fumonisin B1, and other co-exposures were frequent. These results clearly demonstrate the advanced performance of the method to assess the lowest background exposures (pg mL-1 range) using a single, highly robust assay that will allow for the systematic investigation of low dose effects on human health. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  15. A rapid and simple method for the determination of psychoactive alkaloids by CE-UV: application to Peganum Harmala seed infusions.

    PubMed

    Tascón, Marcos; Benavente, Fernando; Vizioli, Nora M; Gagliardi, Leonardo G

    2017-04-01

    The β-carboline alkaloids of the harmala group (HAlks) are compounds widely spread in many natural sources, but found at relatively high levels in some specific plants like Peganum harmala (Syrian rue) or Banisteriopsis caapi. HAlks are reversible monoamine oxidase type A inhibitors (MAOIs) and, as a consequence, these plants or their extracts can be used to produce psychotropic effects when combined with psychotropic drugs based on amino groups. Since the occurrence and levels of HAlks in natural sources are subject to significant variability, the more widespread use is not clinical but recreational or ritual; for example, B. caapi is a known part of the Ayahuasca ritual mixture. The lack of simple methods to control the variable levels of these compounds in natural sources restricts the possibility of dosing in strict quantities and, as a consequence, limits their use for pharmacological or clinical purposes. In this work, we present a fast, simple, and robust method for simultaneously quantifying the six HAlks most frequently found in plants, i.e., harmine, harmaline, harmol, harmalol, harmane, and norharmane, by capillary electrophoresis instruments equipped with the most common detector (UV). The method is applied to analyze these HAlks in P. harmala seed infusion, which is a frequent intake form for these HAlks. The method was validated on three different instruments in order to evaluate its transferability and to compare the performances between them. In this case, harmaline, harmine, and harmol were found in the infusion samples. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Sampling maternal care behaviour in domestic dogs: What's the best approach?

    PubMed

    Czerwinski, Veronika H; Smith, Bradley P; Hynd, Philip I; Hazel, Susan J

    2017-07-01

    Our understanding of the frequency and duration of maternal care behaviours in the domestic dog during the first two postnatal weeks is limited, largely due to the inconsistencies in the sampling methodologies that have been employed. In order to develop a more concise picture of maternal care behaviour during this period, and to help establish the sampling method that represents these behaviours best, we compared a variety of time sampling methods. Six litters were continuously observed for a total of 96h over postnatal days 3, 6, 9 and 12 (24h per day). Frequent (dam presence, nursing duration, contact duration) and infrequent maternal behaviours (anogenital licking duration and frequency) were coded using five different time sampling methods that included: 12-h night (1800-0600h), 12-h day (0600-1800h), a one-hour period during the night (1800-0600h), a one-hour period during the day (0600-1800h) and a one-hour period anytime. Each of the one-hour time sampling methods consisted of four randomly chosen 15-min periods. Two random sets of four 15-min periods were also analysed to ensure reliability. We then determined which of the time sampling methods, averaged over the three 24-h periods, best represented the frequency and duration of behaviours. As might be expected, frequently occurring behaviours were adequately represented by short (one-hour) sampling periods; however, this was not the case with the infrequent behaviours. Thus, we argue that the time sampling methodology employed must match the behaviour of interest. This caution applies to maternal behaviour in altricial species, such as canids, as well as to all systematic behavioural observations utilising time sampling methodology. Copyright © 2017. Published by Elsevier B.V.

  17. Trends analysis on research articles in the korean journal of medical education.

    PubMed

    Lee, Young Hee; Lee, Young-Mee; Kwon, Hyojin

    2012-12-01

    The purpose of this study was to examine the chronological changes and progress in medical education research in Korea and to identify the less investigated topics that need further study and improvement with regard to methodological quality. Of the 590 articles that were published from 1989 to 2010 in the Korean Journal of Medical Education, 386 original research papers were extracted for the analysis. The extracted papers were systematically reviewed using 2 analysis schemes that we developed: one scheme was designed to classify research topics, and the other determined the methodology that was used. The main results were as follows: the most popular research areas were curriculum, educational method, and evaluation in basic medical education; in contrast, studies that addressed postgraduate education, continuous professional development, and educational administration were less frequent. The most frequently studied topics were clinical performance/skills evaluation, clerkship, curriculum development, and problem-based learning. Quantitative studies predominated over qualitative studies and mixed methods (265 vs. 95 vs. 26). Two hundred forty papers were descriptive, cross-sectional studies, and 17 were experimental studies. Most qualitative studies were non-participation observational studies. In conclusion, there has been dramatic growth in the extent of medical education research in Korea in the past two decades. However, more studies investigating graduate medical education and continuous professional development should be performed. Moreover, robust experimental designs and methods should be applied to provide stronger evidence that can support best-evidence medical education.

  18. A Method for Identifying Prevalent Chemical Combinations in the U.S. Population

    PubMed Central

    Wambaugh, John F.; Ring, Caroline L.; Tornero-Velez, Rogelio; Setzer, R. Woodrow

    2017-01-01

    Background: Through the food and water they ingest, the air they breathe, and the consumer products with which they interact at home and at work, humans are exposed to tens of thousands of chemicals, many of which have not been evaluated to determine their potential toxicities. Furthermore, while current chemical testing tends to focus on individual chemicals, the exposures that people actually experience involve mixtures of chemicals. Unfortunately, the number of mixtures that can be formed from the thousands of environmental chemicals is enormous, and testing all of them would be impossible. Objectives: We seek to develop and demonstrate a method for identifying those mixtures that are most prevalent in humans. Methods: We applied frequent itemset mining, a technique traditionally used for market basket analysis, to biomonitoring data from the 2009–2010 cycle of the continuous National Health and Nutrition Examination Survey (NHANES) to identify combinations of chemicals that frequently co-occur in people. Results: We identified 90 chemical combinations consisting of relatively few chemicals that occur in at least 30% of the U.S. population, as well as three supercombinations consisting of relatively many chemicals that occur in a small but nonnegligible proportion of the U.S. population. Conclusions: We demonstrated how FIM can be used in conjunction with biomonitoring data to narrow a large number of possible chemical combinations down to a smaller set of prevalent chemical combinations. https://doi.org/10.1289/EHP1265 PMID:28858827
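
    A toy frequent itemset miner over per-person sets of detected chemicals (brute-force counting rather than a pruned Apriori, and the chemicals and thresholds are invented for illustration):

      # Sketch: support counting of chemical combinations, market-basket style.
      from itertools import combinations
      from collections import Counter

      def frequent_itemsets(baskets, min_support=0.3, max_size=3):
          n = len(baskets)
          frequent = {}
          for size in range(1, max_size + 1):
              counts = Counter()
              for basket in baskets:
                  counts.update(combinations(sorted(basket), size))
              level = {c: cnt / n for c, cnt in counts.items()
                       if cnt / n >= min_support}
              if not level:
                  break     # by anti-monotonicity, no larger frequent sets exist
              frequent.update(level)
          return frequent

      people = [{"lead", "BPA", "PFOA"}, {"lead", "BPA"}, {"BPA", "PFOA"}]
      print(frequent_itemsets(people, min_support=0.5))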

  19. The Legacy of Florence Nightingale's Environmental Theory: Nursing Research Focusing on the Impact of Healthcare Environments.

    PubMed

    Zborowsky, Terri

    2014-01-01

    The purpose of this paper is to explore nursing research that is focused on the impact of healthcare environments and that has resonance with the aspects of Florence Nightingale's environmental theory. Nurses have a unique ability to apply their observational skills to understand the role of the designed environment to enable healing in their patients. This affords nurses the opportunity to engage in research studies that have immediate impact on the act of nursing. Descriptive statistics were performed on 67 healthcare design-related research articles from 25 nursing journals to discover the topical areas of interest of nursing research today. Data were also analyzed to reveal the research designs, research methods, and research settings. These data are part of an ongoing study. Descriptive statistics reveal that topics and settings most frequently cited are in keeping with the current healthcare foci of patient care quality and safety in acute and intensive care environments. Research designs and methods most frequently cited are in keeping with the early progression of a knowledge area. A few assertions can be made as a result of this study. First, education is important to continue the knowledge development in this area. Second, multiple method research studies should continue to be considered as important to healthcare research. Finally, bedside nurses are in the best position possible to begin to help us all, through research, understand how the design environment impacts patients during the act of nursing. Evidence-based design, literature review, nursing.

  20. Interactional justice at work is related to sickness absence: a study using repeated measures in the Swedish working population.

    PubMed

    Leineweber, Constanze; Bernhard-Oettel, Claudia; Peristera, Paraskevi; Eib, Constanze; Nyberg, Anna; Westerlund, Hugo

    2017-12-08

    Research has shown that perceived unfairness contributes to higher rates of sickness absence. While shorter, but more frequent periods of sickness absence might be a possibility for the individual to get relief from high strain, long-term sickness absence might be a sign of more serious health problems. The Uncertainty Management Model suggests that justice is particularly important in times of uncertainty, e.g. perceived job insecurity. The present study investigated the association between interpersonal and informational justice at work with long and frequent sickness absence respectively, under conditions of job insecurity. Data were derived from the 2010, 2012, and 2014 biennial waves of the Swedish Longitudinal Occupational Survey of Health (SLOSH). The final analytic sample consisted of 19,493 individuals. We applied repeated measures regression analyses through generalized estimating equations (GEE), a method for longitudinal data that simultaneously analyses variables at different time points. We calculated risk of long and frequent sickness absence, respectively in relation to interpersonal and informational justice taking perceptions of job insecurity into account. We found informational and interpersonal justice to be associated with risk of long and frequent sickness absence independently of job insecurity and demographic variables. Results from autoregressive GEE provided some support for a causal relationship between justice perceptions and sickness absence. Contrary to expectations, we found no interaction between justice and job insecurity. Our results underline the need for fair and just treatment of employees irrespective of perceived job insecurity in order to keep the workforce healthy and to minimize lost work days due to sickness absence.
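
    A sketch of such a repeated-measures GEE in statsmodels, with an exchangeable working correlation across waves within persons (the data and variable names here are invented, not the SLOSH items):

      # Sketch: logistic GEE of long-term sickness absence on informational
      # justice and job insecurity, clustered by person across waves.
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "person":         [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
          "long_absence":   [0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1],
          "info_justice":   [3.2, 2.8, 3.0, 4.1, 4.0, 3.5,
                             2.0, 1.8, 2.2, 3.6, 3.1, 2.9],
          "job_insecurity": [1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1],
      })

      model = smf.gee("long_absence ~ info_justice + job_insecurity",
                      groups="person", data=df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable())
      print(model.fit().summary())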

  1. Sampling naturally contaminated broiler carcasses for Salmonella by three different methods

    USDA-ARS?s Scientific Manuscript database

    Postchill neck skin (NS) maceration and whole carcass rinsing (WCR) are frequently used methods to detect salmonellae from commercially processed broilers. These are practical, nondestructive methods, but they are insensitive and may result in frequent false negatives (20 to 40%). NS samples only ...

  2. Histology assessment of bipolar coagulation and argon plasma coagulation on digestive tract

    PubMed Central

    Garrido, Teresa; Baba, Elisa R; Wodak, Stephanie; Sakai, Paulo; Cecconello, Ivan; Maluf-Filho, Fauze

    2014-01-01

    AIM: To analyze the effect of bipolar electrocoagulation and argon plasma coagulation on fresh specimens of the gastrointestinal tract. METHODS: An experimental evaluation was performed at Hospital das Clinicas of the University of São Paulo on 31 fresh surgical specimens, using argon plasma coagulation and bipolar electrocoagulation at different time intervals. The depth of tissue damage was histopathologically analyzed by a single senior pathologist unaware of the coagulation method and power setting applied. To analyze the results, the mucosa was divided into superficial mucosa (epithelial layer of the esophagus and superficial portion of the glandular layer of the stomach and colon), intermediate mucosa (until the lamina propria of the esophagus and until the bottom of the glandular layer of the stomach and colon) and muscularis mucosa. Necrosis involvement of the layers was compared across several combinations of power and time interval. RESULTS: Involvement of the intermediate mucosa of the stomach and of the muscularis mucosa of the three organs was more frequent when higher amounts of energy were used with argon plasma. In the esophagus and in the colon, injury of the intermediate mucosa was frequent, even when small amounts of energy were used. The use of bipolar electrocoagulation resulted in more frequent involvement of the intermediate mucosa and of the muscularis mucosa of the esophagus and of the colon when higher amounts of energy were used. In the stomach, these involvements were rare. The risk of injury of the muscularis propria was significant only in the colon when argon plasma coagulation was employed. CONCLUSION: Tissue damage after argon plasma coagulation is deeper than after bipolar electrocoagulation. Both depend on the amount of energy used. PMID:25031789

  3. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  4. Application of EOF/PCA-based methods in the post-processing of GRACE derived water variations

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2010-05-01

    Two problems that users of monthly GRACE gravity field solutions face are 1) the presence of correlated noise in the Stokes coefficients, which increases with harmonic degree and causes ‘striping', and 2) the fact that different physical signals are overlaid and difficult to separate from each other in the data. These problems are termed the signal-noise separation problem and the signal-signal separation problem. Methods based on principal component analysis and empirical orthogonal functions (PCA/EOF) have frequently been proposed to deal with these problems for GRACE. However, different strategies have been applied to different (spatial: global/regional, spectral: global/order-wise, geoid/equivalent water height) representations of the GRACE level 2 data products, leading to differing results and a general feeling that PCA/EOF-based methods are to be applied ‘with care'. In addition, it is known that conventional EOF/PCA methods force separated modes to be orthogonal, and that, on the other hand, an arbitrary orthogonal rotation can be applied to either EOFs or PCs. The aim of this paper is to provide a common theoretical framework and to study the application of PCA/EOF-based methods as a signal separation tool to post-process GRACE data products. To investigate and illustrate the applicability of PCA/EOF-based methods, we have applied them to GRACE level 2 monthly solutions based on the Center for Space Research, University of Texas (CSR/UT) RL04 products and on the ITG-GRACE03 solutions from the University of Bonn, and to various representations of them. Our results show that EOF modes do reveal the dominating annual, semiannual and also long-periodic signals in the global water storage variations, but they also show how choosing different strategies changes the outcome and may lead to unexpected results.
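
    As context for the decomposition discussed above, the following is a generic EOF/PCA factorization of a gridded space-time field via the SVD. It is an illustrative sketch, not any of the paper's specific strategies, and the orthogonality of the resulting modes is exactly the constraint the abstract cautions about.

```python
import numpy as np

def eof_pca(field):
    """EOF/PCA decomposition of a space-time field (rows = time epochs,
    columns = grid points) via the SVD of the temporal-anomaly matrix."""
    anomalies = field - field.mean(axis=0)   # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt                                # spatial patterns (modes)
    pcs = U * s                              # principal components (time series)
    variance_frac = s**2 / np.sum(s**2)      # variance explained per mode
    return eofs, pcs, variance_frac          # anomalies == pcs @ eofs
```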

  5. Product Quality Improvement Using FMEA for Electric Parking Brake (EPB)

    NASA Astrophysics Data System (ADS)

    Dumitrescu, C. D.; Gruber, G. C.; Tişcă, I. A.

    2016-08-01

    One of the most frequently used methods to improve product quality is the complex FMEA (Failure Modes and Effects Analysis). The literature knows various forms of FMEA, depending on the mode of application and on the targets; we mention here some of these: the process FMEA (Failure Modes and Effects Analysis of a process) and the Failure Mode, Effects and Criticality Analysis (FMECA). Whichever option the work team supports, the goal of the method is the same: to optimize product design activities in research, design processes and the implementation of manufacturing processes, and to optimize the product's operation by its beneficiaries. According to a market survey conducted among parts suppliers to vehicle manufacturers, the FMEA method is used in 75% of cases. One purpose of applying the method is to detect any errors that remain after research and product development are considered complete; another is to initiate appropriate measures to avoid mistakes. Achieving these two goals leads to wide adoption of the method, because errors are avoided already in the design phase of the product, thereby preventing the emergence of additional costs in later stages of product manufacturing. The FMEA method is applied using standardized forms; with their help, the initial assemblies of the product structure are established, in which all components are initially regarded as error-free. This work is an application of the FMEA method to optimize the quality of the components of the electric parking brake (EPB). This is a component attached to the wheel braking system that replaces the conventional mechanical parking brake in automobiles while ensuring comfort, functionality and durability, and saving space in the passenger compartment. The paper describes the levels addressed in applying FMEA, the working arrangements at the 4 distinct levels of analysis, and how the Risk Priority Number (RPN) is determined; it also presents the analysis of risk factors and the measures the authors imposed to reduce or completely eliminate risk in this complex product.
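
    The Risk Priority Number mentioned above is the core FMEA calculation: RPN = Severity × Occurrence × Detection, each factor commonly rated on a 1-10 scale. A minimal worked illustration; the failure modes and scores below are hypothetical, not taken from the EPB study:

```python
# Hypothetical EPB failure modes with (severity, occurrence, detection) scores,
# each on the common 1-10 FMEA scale.
failure_modes = {
    "actuator does not engage":   (9, 3, 4),
    "cable tension out of spec":  (6, 5, 3),
    "switch signal intermittent": (7, 2, 6),
}

for mode, (severity, occurrence, detection) in failure_modes.items():
    rpn = severity * occurrence * detection   # RPN = S x O x D
    flag = "  <- prioritize corrective action" if rpn >= 100 else ""
    print(f"{mode}: RPN = {rpn}{flag}")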

  6. Concerns regarding 24-h sampling for formaldehyde, acetaldehyde, and acrolein using 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents

    NASA Astrophysics Data System (ADS)

    Herrington, Jason S.; Hays, Michael D.

    2012-08-01

    There is high demand for accurate and reliable airborne carbonyl measurement methods due to the human and environmental health impacts of carbonyls and their effects on atmospheric chemistry. Standardized 2,4-dinitrophenylhydrazine (DNPH)-based sampling methods are frequently applied for measuring gaseous carbonyls in the atmospheric environment. However, there are multiple shortcomings associated with these methods that detract from an accurate understanding of carbonyl-related exposure, health effects, and atmospheric chemistry. The purpose of this brief technical communication is to highlight these method challenges and their influence on national ambient monitoring networks, and to provide a logical path forward for accurate carbonyl measurement. This manuscript focuses on three specific carbonyl compounds of high toxicological interest: formaldehyde, acetaldehyde, and acrolein. Further method testing and development, the revision of standardized methods, and the plausibility of introducing novel technology for these carbonyls are considered elements of the path forward. The consolidation of this information is important because it seems clear that carbonyl data produced using DNPH-based methods are being reported without acknowledgment of the methods' shortcomings or of how best to address them.

  7. Effects of common seagrass restoration methods on ecosystem structure in subtropical seagrass meadows.

    PubMed

    Bourque, Amanda S; Fourqurean, James W

    2014-06-01

    Seagrass meadows near population centers are subject to frequent disturbance from vessel groundings. Common seagrass restoration methods include filling excavations and applying fertilizer to encourage seagrass recruitment. We sampled macrophytes, soil structure, and macroinvertebrate infauna at unrestored and recently restored vessel grounding disturbances to evaluate the effects of these restoration methods on seagrass ecosystem structure. After a year of observations comparing filled sites to both undisturbed reference and unrestored disturbed sites, filled sites had low organic matter content, nutrient pools, and primary producer abundance. Adding a nutrient source increased porewater nutrient pools at disturbed sites and in undisturbed meadows, but not at filled sites. Environmental predictors of infaunal community structure across treatments included soil texture and nutrient pools. At the one year time scale, the restoration methods studied did not result in convergence between restored and unrestored sites. Particularly in filled sites, soil conditions may combine to constrain rapid development of the seagrass community and associated infauna. Our study is important for understanding early recovery trajectories following restoration using these methods. Published by Elsevier Ltd.

  8. [Limiting a Medline/PubMed query to the "best" articles using the JCR relative impact factor].

    PubMed

    Avillach, P; Kerdelhué, G; Devos, P; Maisonneuve, H; Darmoni, S J

    2014-12-01

    Medline/PubMed is the most frequently used medical bibliographic research database. The aim of this study was to propose a new generic method to limit any Medline/PubMed query based on the relative impact factor and the A & B categories of the SIGAPS score. The entire PubMed corpus was used for the feasibility study, followed by ten diseases frequent in terms of PubMed indexing and the citations of four Nobel prize winners. The relative impact factor (RIF) was calculated by medical specialty as defined in the Journal Citation Reports. As the central point of the feasibility study, two queries, one including all the journals in category A and one including those in categories A OR B, were added to any Medline/PubMed query. Limitation using the SIGAPS category A was larger than when using the Core Clinical Journals (CCJ): 15.65% of the PubMed corpus vs 8.64% for CCJ. The response time of this limit applied to the entire PubMed corpus was less than two seconds. For five diseases out of ten, limiting the citations with the RIF was more effective than with the CCJ. For the four Nobel prize winners, limiting the citations with the RIF was more effective than with the CCJ. The feasibility study of applying a new filter based on the relative impact factor to any Medline/PubMed query was positive. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  9. Forecasting daily streamflow using online sequential extreme learning machines

    NASA Astrophysics Data System (ADS)

    Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.

    2016-06-01

    While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single hidden layer feedforward neural networks, the online sequential extreme learning machine (OSELM), updates the model inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With online sequential multiple linear regression (OSMLR) as a benchmark, we concluded that OSELM is an attractive approach, as it easily outperformed OSMLR in forecast accuracy.
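
    The recursive update that makes OSELM inexpensive fits in a few lines. Below is a minimal sketch with hypothetical class and variable names, assuming a random sigmoid hidden layer and an initial batch at least as large as the number of hidden nodes; it is not the authors' implementation.

```python
import numpy as np

class OSELM:
    """Minimal online sequential extreme learning machine sketch."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)              # fixed random biases

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid layer

    def fit_initial(self, X, y):
        """Batch least-squares on the initial data (needs n_samples >= n_hidden)."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H)
        self.beta = self.P @ H.T @ y

    def update(self, X, y):
        """Recursive least-squares step; X, y may be discarded afterwards."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= K @ H @ self.P
        self.beta += self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

    The appeal for streamflow forecasting is visible in `update`: its cost depends only on the size of the new data chunk and the hidden layer, not on all data seen so far.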

  10. The use of FDEM in hydrogeophysics: A review

    NASA Astrophysics Data System (ADS)

    Boaga, Jacopo

    2017-04-01

    Hydrogeophysics is a rapidly evolving discipline that has emerged from geophysical methods. Geophysical methods are nowadays able to illustrate not only the fabric and structure of the underground, but also the subsurface processes that occur within it, such as fluid dynamics and biogeochemical reactions. This is a growing, wide inter-disciplinary field, specifically dedicated to revealing soil properties and monitoring processes of change due to soil/bio/atmosphere interactions. The discipline involves environmental, hydrological and agricultural research and has applications for several engineering purposes. The most frequently used techniques in the hydrogeophysical framework are the electric and electromagnetic methods, because they are highly sensitive to soil physical properties such as texture, salinity, mineralogy, porosity and water content. Non-invasive techniques are applied to a number of problems related to the characterization of subsurface hydrology and groundwater dynamic processes. Ground-based methods, such as electrical tomography, achieve considerable resolution, but they are difficult to extend to wider exploration purposes due to their logistical limitations. Methods that do not need electrical contact with the soil can, on the contrary, be easily applied to broad areas. Among these methods, a rapidly growing role is played by the frequency domain electromagnetic (FDEM) survey. This is thanks to improvements in multi-frequency and multi-coil instrumentation, simple time-lapse repeatability, cheap and accurate topographical referencing, and the emerging development of inversion codes. From a raw meter of terrain apparent conductivity, the FDEM survey is becoming a key tool for 3D soil characterization and the observation of dynamics in near-surface hydrological studies. Dozens of papers are summarized and presented here in order to describe the promising potential of the technique.

  11. Improved systematic tRNA gene annotation allows new insights into the evolution of mitochondrial tRNA structures and into the mechanisms of mitochondrial genome rearrangements

    PubMed Central

    Jühling, Frank; Pütz, Joern; Bernt, Matthias; Donath, Alexander; Middendorf, Martin; Florentz, Catherine; Stadler, Peter F.

    2012-01-01

    Transfer RNAs (tRNAs) are present in all types of cells as well as in organelles. tRNAs of animal mitochondria show a low level of primary sequence conservation and exhibit ‘bizarre' secondary structures, lacking complete domains of the common cloverleaf. Such sequences are hard to detect and hence frequently missed in computational analyses and mitochondrial genome annotation. Here, we introduce an automatic annotation procedure for mitochondrial tRNA genes in Metazoa based on sequence and structural information in manually curated covariance models. The method, applied to re-annotate 1876 available metazoan mitochondrial RefSeq genomes, makes it possible to distinguish between remaining functional genes and degrading ‘pseudogenes', even at early stages of divergence. The subsequent analysis of a comprehensive set of mitochondrial tRNA genes gives new insights into the evolution of structures of mitochondrial tRNA sequences as well as into the mechanisms of genome rearrangements. We find frequent losses of tRNA genes concentrated in basal Metazoa, frequent independent losses of individual parts of tRNA genes, particularly in Arthropoda, and widespread conserved overlaps of tRNAs in opposite reading direction. Direct evidence for several recent Tandem Duplication-Random Loss events is gained, demonstrating that this mechanism has an impact on the appearance of new mitochondrial gene orders. PMID:22139921

  12. Frequently Asked Questions (FAQs) | Cancer Prevention Fellowship Program

    Cancer.gov

    Am I eligible? To be considered for the Cancer Prevention Fellowship Program (CPFP), you must meet eligibility criteria related to educational attainment, US citizenship/permanent residency status, and the duration of prior postdoctoral research experience. Refer to the Eligibility Requirements for details. How do I apply? You must apply through our online application process.

  13. The Devil's in the Delta

    ERIC Educational Resources Information Center

    Luyben, William L.

    2007-01-01

    Students frequently confuse and incorrectly apply the several "deltas" that are used in chemical engineering. The deltas come in three different flavors: "out minus in", "big minus little" and "now versus then." The first applies to a change in a stream property as the stream flows through a process. For example, the "[delta]H" in an energy…
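
    For concreteness, the three flavors can be written out (standard chemical engineering usage; these formulas are illustrative, not quoted from the truncated article):

```latex
% "out minus in": change across a process unit
\Delta H = H_{\mathrm{out}} - H_{\mathrm{in}}
% "big minus little": a driving-force difference at one instant
\Delta T = T_{\mathrm{hot}} - T_{\mathrm{cold}}
% "now versus then": change of a state variable over time
\Delta U = U(t_2) - U(t_1)
```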

  14. Constituents of Music and Visual-Art Related Pleasure - A Critical Integrative Literature Review.

    PubMed

    Tiihonen, Marianne; Brattico, Elvira; Maksimainen, Johanna; Wikgren, Jan; Saarikallio, Suvi

    2017-01-01

    The present literature review investigated how pleasure induced by music and visual art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver's internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: in the music studies, pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded into the context of an aesthetic experience, or otherwise used in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity, or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging, which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research more frequently investigated pleasure in relation to conscious, cognitive processing, where variations of stimulus features and changes of viewing modes were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude that in empirical music research pleasure seems to be part of core affect and hedonic tone modulated by stable personality variables, whereas in visual-art research pleasure is a result of the so-called conceptual act, depending on a chosen strategy to approach art. We encourage the integration of music and visual art into a multi-modal framework to promote a more versatile understanding of pleasure in response to aesthetic artifacts.

  15. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  16. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
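
    To make the scatter search idea in the two records above concrete: a small reference set of good, diverse solutions is repeatedly recombined and locally improved. The sketch below is a minimal scatter-search-style optimizer with illustrative settings and names; it is not the authors' method, which builds on this skeleton in more sophisticated ways.

```python
import numpy as np

def scatter_search(objective, bounds, n_ref=10, n_iter=50, seed=0):
    """Minimal scatter-search-style loop for black-box minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    # Diverse initial population; keep the best members as the reference set.
    pop = rng.uniform(lo, hi, size=(10 * n_ref, len(lo)))
    ref = sorted(pop, key=objective)[:n_ref]
    for _ in range(n_iter):
        # Combine random pairs of reference solutions (linear combinations).
        trials = []
        for _ in range(n_ref):
            a, b = rng.choice(n_ref, size=2, replace=False)
            w = rng.uniform(-0.5, 1.5)
            trials.append(np.clip(ref[a] + w * (ref[b] - ref[a]), lo, hi))
        # Crude local improvement of the best trial by random perturbation.
        best = min(trials, key=objective)
        for _ in range(20):
            cand = np.clip(best + rng.normal(scale=0.05, size=len(lo)), lo, hi)
            if objective(cand) < objective(best):
                best = cand
        ref = sorted(list(ref) + [best], key=objective)[:n_ref]
    return ref[0]

# Example: minimize a 3-parameter least-squares misfit (stand-in for a
# dynamic-model calibration objective).
print(scatter_search(lambda p: np.sum((p - [1.0, 2.0, 3.0])**2),
                     bounds=[(0, 5)] * 3))
```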

  17. Design and Development of the Learning Activities Questionnaire

    DTIC Science & Technology

    1980-08-01

    attention has been given to cognitive components of the study process. For example, Laycock and Russell (1941) found that among the 35 most frequently...suitable place to work. Very little attention centered upon the cognitive activities of the learner himself, with the exception of advice concerning the...learners in general most frequently apply, the 1,658 responses were divided into the five strategy categories. Rote strategies were reported most

  18. HCMM satellite to take earth's temperature

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The heat capacity mapping mission (HCMM), a low cost modular spacecraft built for the Applications Explorer Missions (AEM), was designed to allow scientists to determine the feasibility of using day/night thermal infrared remote sensor-derived data to: (1) discriminate various rock types and locate mineral resources; (2) measure and monitor surface soil moisture changes; (3) measure plant canopy temperatures at frequent intervals to determine transpiration of water and plant stress; and (4) measure urban heat islands. The design of the spacecraft (AEM-A), its payload, launch vehicle, orbit, and data collection and processing methods are described. Projects in which the HCMM data will be applied by 12 American and 12 foreign investigators are summarized.

  19. Classification of Scaffold Hopping Approaches

    PubMed Central

    Sun, Hongmao; Tawa, Gregory; Wallqvist, Anders

    2012-01-01

    The general goal of drug discovery is to identify novel compounds that are active against a preselected biological target with acceptable pharmacological properties defined by marketed drugs. Scaffold hopping has been widely applied by medicinal chemists to discover equipotent compounds with novel backbones that have improved properties. In this review, scaffold hopping is classified into four major categories, namely heterocycle replacements, ring opening or closure, peptidomimetics, and topology-based hopping. The structural diversity of original and final scaffolds with respect to each category will be reviewed. The advantages and limitations of small, medium, and large-step scaffold hopping will also be discussed. Software that is frequently used to facilitate different kinds of scaffold hopping methods will be summarized. PMID:22056715

  20. A generalized method for multiple robotic manipulator programming applied to vertical-up welding

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth R.; Cook, George E.; Andersen, Kristinn; Barnett, Robert Joel; Zein-Sabattou, Saleh

    1991-01-01

    The application of a weld programming algorithm to vertical-up welding, which is frequently desired for variable polarity plasma arc welding (VPPAW), is described. The basic algorithm performs three tasks simultaneously: control of the robotic mechanism so that proper torch motion is achieved while minimizing the sum-of-squares of joint displacements; control of the torch while the part is maintained in a desirable orientation; and control of the wire feed mechanism location with respect to the moving welding torch. Also presented is a modification of this algorithm which permits it to be used for vertical-up welding. The details of this modification are discussed, and simulation examples are provided for illustration and verification.

  1. The Chameleon Effect: Characterization Challenges Due to the Variability of Nanoparticles and Their Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.

    Nanoparticles in a variety of forms are of increasing importance in fundamental research, technological and medical applications, and environmental or toxicology studies. Physical and chemical drivers that lead to multiple types of particle instabilities complicate both the ability to produce and consistently deliver well defined particles and their appropriate characterization, frequently leading to inconsistencies and conflicts in the published literature. This perspective suggests that provenance information, beyond that often recorded or reported, and the application of a set of core characterization methods, including a surface sensitive technique, consistently applied at critical times, can serve as tools in the effort to minimize reproducibility issues.

  2. Sunscreen Use on the Dorsal Hands at the Beach

    PubMed Central

    Warren, Donald B.; Riahi, Ryan R.; Hobbs, Jason B.; Wagner, Richard F.

    2013-01-01

    Background. Since skin of the dorsal hands is a known site for the development of cutaneous squamous cell carcinoma, an epidemiologic investigation was needed to determine if beachgoers apply sunscreen to the dorsal aspect of their hands as frequently as they apply it to other skin sites. Aim. The aim of the current study was to compare the use of sunscreen on the dorsal hands to other areas of the body during subtropical late spring and summer sunlight exposure at the beach. Materials and Methods. A cross-sectional survey from a convenience sample of beachgoers was designed to evaluate respondent understanding and protective measures concerning skin cancer on the dorsal hands in an environment with high natural UVR exposure. Results. A total of 214 surveys were completed and analyzed. Less than half of subjects (105, 49%) applied sunscreen to their dorsal hands. Women applied sunscreen to the dorsal hands more than men (55% women versus 40% men, P = 0.04). Higher Fitzpatrick Skin Type respondents were less likely to protect their dorsal hands from ultraviolet radiation (P = 0.001). Conclusions. More public education focused on dorsal hand protection from ultraviolet radiation damage is necessary to reduce the risk for squamous cell carcinomas of the hands. PMID:23840956

  3. The practical and pedagogical advantages of an ambigraphic nucleic acid notation.

    PubMed

    Rozak, David A

    2006-01-01

    The universally applied IUPAC notation for nucleic acids was adopted primarily to facilitate the mental association of G, A, T, C, and the related ambiguity characters with the bases they represent. However it is possible to create a notation that offers greater support for the basic manipulations and analyses to which genetic sequences frequently are subjected. By designing a nucleic acid notation around ambigrams, it is possible to simplify the frequently applied process of reverse complementation and aid the visualization of palindromes. The ambigraphic notation presented here also uses common orthographic features such as stems and loops to highlight guanine and cytosine rich regions, support the derivation of ambiguity characters, and aid educators in teaching the fundamentals of molecular genetics.
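
    The operation the notation is designed to simplify, reverse complementation, is conventionally implemented as a lookup-and-reverse over the IUPAC alphabet. A standard implementation for context (the palindrome in the example is exactly the kind of structure an ambigraphic notation makes visible at a glance):

```python
# Complement table over IUPAC nucleotide codes, including the common
# ambiguity characters (R<->Y, K<->M, B<->V, D<->H; S, W, N self-complement).
COMPLEMENT = str.maketrans("GATCRYSWKMBDHVN", "CTAGYRSWMKVHDBN")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a nucleotide sequence."""
    return seq.upper().translate(COMPLEMENT)[::-1]

print(reverse_complement("GGATCC"))  # -> GGATCC, a palindrome (BamHI site)
```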

  4. A method for evaluating discoverability and navigability of recommendation algorithms.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method first evaluates the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow-tie structure and path lengths. Second, the method evaluates navigability by simulating three different models of information-seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.
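
    The navigability simulation described, walkers seeking target items over recommendation links, can be sketched as follows. The data structure, the random-surfer policy, and the success criterion are illustrative assumptions, not the paper's three specific information-seeking models.

```python
import random

def navigation_success_rate(recs, targets, start_items, max_clicks=10, rng=None):
    """Fraction of simulated walks that reach a target item.
    `recs` maps each item to its list of recommended items; from each start
    item a surfer follows unvisited recommendation links, stopping on a
    target or after `max_clicks` clicks."""
    rng = rng or random.Random(0)
    successes = 0
    for start in start_items:
        item, seen = start, {start}
        for _ in range(max_clicks):
            choices = [r for r in recs.get(item, []) if r not in seen]
            if not choices:
                break                       # dead end: walk fails
            item = rng.choice(choices)      # random surfer; greedier policies differ
            seen.add(item)
            if item in targets:
                successes += 1
                break
    return successes / len(start_items)
```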

  5. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is that it translates into a level of confidence in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used over extremely wide areas, mainly in the field of materials science and for impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house validation of a method for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested to obtain accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
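
    An uncertainty model combining systematic and random contributions, of the kind described, typically takes the following textbook form (illustrative only, not the authors' exact budget):

```latex
u_c(w) = \sqrt{u_{\mathrm{sys}}^{2}(w) + u_{\mathrm{rand}}^{2}(w)},
\qquad U(w) = k\,u_c(w), \quad k = 2\ (\approx 95\%\ \text{confidence}),
```

    where w is the measured mass fraction and U(w) the expanded uncertainty reported with the result.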

  6. Integrated flood hazard assessment based on spatial ordered weighted averaging method considering spatial heterogeneity of risk preference.

    PubMed

    Xiao, Yangfan; Yi, Shanzhen; Tang, Zhongqian

    2017-12-01

    Flood is the most common natural hazard in the world and has caused serious loss of life and property. Assessment of flood-prone areas is of great importance for watershed management and for the reduction of potential loss of life and property. In this study, a framework of multi-criteria analysis (MCA) incorporating a geographic information system (GIS), the fuzzy analytic hierarchy process (AHP) and the spatial ordered weighted averaging (OWA) method was developed for flood hazard assessment. Factors associated with the geographical, hydrological and flood-resistant characteristics of the basin were selected as evaluation criteria. The relative importance of the criteria was estimated through the fuzzy AHP method. The OWA method was utilized to analyze the effects of different risk attitudes of the decision maker on the assessment result. The spatial ordered weighted averaging method with spatially variable risk preference was implemented in the GIS environment to integrate the criteria. The advantage of the proposed method is that it considers spatial heterogeneity when assigning risk preference in the decision-making process. The presented methodology has been applied to the area including Hanyang, Caidian and Hannan of Wuhan, China, where flood events occur frequently. The resulting flood hazard distribution shows a tendency toward high risk in populated and developed areas, especially the northeast part of Hanyang city, which has suffered frequent floods in history. The result indicates where enhancement projects should be carried out first under the condition of limited resources. Finally, the sensitivity of the criteria weights was analyzed to measure the stability of the results with respect to variation of the criteria weights. The flood hazard assessment method presented in this paper is adaptable for hazard assessment of similar basins, which is of great significance for establishing countermeasures to mitigate life and property losses. Copyright © 2017 Elsevier B.V. All rights reserved.
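
    The OWA operator at the core of the framework weights criteria by their rank rather than by their identity, which is how it encodes the decision maker's risk attitude. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: the weights attach to the *rank-ordered*
    criterion values (largest first), not to particular criteria."""
    return np.sort(np.asarray(values, float))[::-1] @ np.asarray(weights, float)

# Hypothetical grid cell with three standardized hazard criteria in [0, 1]:
cell = [0.9, 0.4, 0.2]
print(owa(cell, [0.6, 0.3, 0.1]))  # weight on high values -> cautious, hazard-emphasizing score
print(owa(cell, [0.1, 0.3, 0.6]))  # weight on low values  -> lenient score
```

    In the paper's spatially variable version, the weight vector itself changes from location to location rather than being fixed basin-wide.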

  7. Evaluation of qPCR curve analysis methods for reliable biomarker discovery: bias, resolution, precision, and implications.

    PubMed

    Ruijter, Jan M; Pfaffl, Michael W; Zhao, Sheng; Spiess, Andrej N; Boggy, Gregory; Blom, Jochen; Rutledge, Robert G; Sisti, Davide; Lievens, Antoon; De Preter, Katleen; Derveaux, Stefaan; Hellemans, Jan; Vandesompele, Jo

    2013-01-01

    RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR) has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research or biomarker discovery, we witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the cycle-axis is a measure of the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the PCR efficiency is constant or continuously decreasing. Together this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods, available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed at the development of a framework for robust and unbiased assessment of curve analysis performance, whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The original developers of these methods applied their algorithms and are co-authors on this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of expression level, statistical significance, and patient-classification accuracy. The concentration series per gene, together with data sets from unpublished technical performance experiments, were analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform relatively well on the biomarker data. The data and the analysis results per method are made available to serve as a benchmark for further development and evaluation of qPCR curve analysis methods (http://qPCRDataMethods.hfrc.nl). Copyright © 2012 Elsevier Inc. All rights reserved.
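
    The shared principle, curve position plus per-cycle efficiency yields the starting quantity, reduces to a log-linear fit over the exponential phase. The sketch below illustrates only that principle; the crude window selection and names are assumptions, and it is not any of the published algorithms compared in the paper.

```python
import numpy as np

def efficiency_and_quantity(cycles, fluorescence, threshold):
    """Fit the log-linear exponential phase, F_c = F_0 * E**c, so that
    log(F_c) = log(F_0) + c*log(E). Returns the efficiency E (fold increase
    per cycle), the quantification cycle Cq where F crosses the threshold,
    and the implied starting quantity F_0 = threshold / E**Cq."""
    c = np.asarray(cycles, float)
    f = np.asarray(fluorescence, float)
    exp_phase = (f > 0.1 * f.max()) & (f < 0.8 * f.max())  # crude window
    slope, intercept = np.polyfit(c[exp_phase], np.log(f[exp_phase]), 1)
    E = np.exp(slope)
    Cq = (np.log(threshold) - intercept) / slope
    return E, Cq, threshold / E**Cq
```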

  8. Navigation in the electronic health record: A review of the safety and usability literature.

    PubMed

    Roman, Lisette C; Ancker, Jessica S; Johnson, Stephen B; Senathirajah, Yalini

    2017-03-01

    Inefficient navigation in electronic health records has been shown to increase users' cognitive load, which may increase potential for errors, reduce efficiency, and increase fatigue. However, navigation has received insufficient recognition and attention in the electronic health record (EHR) literature as an independent construct and contributor to overall usability. Our aims in this literature review were to (1) assess the prevalence of navigation-related topics within the EHR usability and safety research literature, (2) categorize types of navigation actions within the EHR, (3) capture relationships between these navigation actions and usability principles, and (4) collect terms and concepts related to EHR navigation. Our goal was to improve access to navigation-related research in usability. We applied scoping literature review search methods with the assistance of a reference librarian to identify articles published since 1996 that reported evaluation of the usability or safety of an EHR user interface via user test, analytic methods, or inspection methods. The 4336 references collected from MEDLINE, EMBASE, Engineering Village, and expert referrals were de-duplicated and screened for relevance, and navigation-related concepts were abstracted from the 21 articles eligible for review using a standard abstraction form. Of the 21 eligible articles, 20 (95%) mentioned navigation in results and discussion of usability evaluations. Navigation between pages of the EHR was the more frequently documented type of navigation (86%) compared to navigation within a single page (14%). Navigation actions (e.g., scrolling through a medication list) were frequently linked to specific usability heuristic violations, among which flexibility and efficiency of use, recognition rather than recall, and error prevention were most common. Discussion of navigation was prevalent in results across all types of evaluation methods among the articles reviewed. Navigating between multiple screens was frequently identified as a usability barrier. The lack of standard terminology created some challenges to identifying and comparing articles. We observed that usability researchers are frequently capturing navigation-related issues even in articles that did not explicitly state navigation as a focus. Capturing and synthesizing the literature on navigation is challenging because of the lack of uniform vocabulary. Navigation is a potential target for normative recommendations for improved interaction design for safer systems. Future research in this domain, including development of normative recommendations for usability design and evaluation, will be facilitated by development of a standard terminology for describing EHR navigation. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A Rapid Colorimetric Method Reveals Fraudulent Substitutions in Sea Urchin Roe Marketed in Sardinia (Italy).

    PubMed

    Meloni, Domenico; Spina, Antonio; Satta, Gianluca; Chessa, Vittorio

    2016-06-25

    In recent years, besides the consumption of fresh sea urchin specimens, the demand for minimally-processed roe has grown considerably. This product has made frequent consumption in restaurants possible, and frauds are becoming widespread, with the partial replacement of sea urchin roe by surrogates that are similar in colour. One of the main factors that determines the quality of the roe is its colour, and small differences in the colour scale cannot be easily discerned by consumers. In this study we applied a rapid colorimetric method to reveal the fraudulent partial substitution of semi-solid sea urchin roe with liquid egg yolk. Objective assessment of lightness (L*), redness (a*), yellowness (b*), hue (h*), and chroma (C*) was carried out with a digital spectrophotometer using the CIE L*a*b* colour measurement system. The colorimetric method highlighted statistically significant differences between sea urchin roe and liquid egg yolk that could be easily discerned quantitatively.
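
    The hue and chroma values reported above derive directly from the a* and b* coordinates. A short sketch of the standard formulas plus the CIE76 colour difference, with hypothetical readings rather than the paper's data:

```python
import math

def chroma_hue(a, b):
    """Derived CIE L*a*b* quantities used in colour comparisons."""
    C = math.hypot(a, b)                      # chroma C* = sqrt(a*^2 + b*^2)
    h = math.degrees(math.atan2(b, a)) % 360  # hue angle h* in degrees
    return C, h

def delta_E(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) measurements."""
    return math.dist(lab1, lab2)

# Hypothetical readings (not the paper's data): roe vs. liquid egg yolk
print(chroma_hue(18.0, 40.0))
print(delta_E((55.0, 18.0, 40.0), (70.0, 5.0, 60.0)))
```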

  10. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
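
    In standard notation, the model and data structure just described are (my transcription, not quoted from the paper):

```latex
% Linear model for the conditional quantiles of the failure time T:
Q_{T}(\tau \mid X) = X^{\top}\beta(\tau), \qquad 0 < \tau < 1.
% Current status data: T itself is never observed; one sees only
(C,\ \delta,\ X), \qquad \delta = \mathbf{1}\{T \le C\},
% with T and C assumed conditionally independent.
```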

  11. Exploring the evolution of node neighborhoods in Dynamic Networks

    NASA Astrophysics Data System (ADS)

    Orman, Günce Keziban; Labatut, Vincent; Naskali, Ahmet Teoman

    2017-09-01

    Dynamic networks are a popular way of modeling and studying the behavior of evolving systems. However, their analysis constitutes a relatively recent subfield of Network Science, and the number of available tools is consequently much smaller than for static networks. In this work, we propose a method specifically designed to take advantage of the longitudinal nature of dynamic networks. It characterizes each individual node by studying the evolution of its direct neighborhood, based on the assumption that the way this neighborhood changes reflects the role and position of the node in the whole network. For this purpose, we define the concept of a neighborhood event, which corresponds to the various transformations such groups of nodes can undergo, and describe an algorithm for detecting such events. We demonstrate the interest of our method on three real-world networks: DBLP, LastFM and Enron. We apply frequent pattern mining to extract meaningful information from temporal sequences of neighborhood events. This results in the identification of behavioral trends emerging in the whole network, as well as the individual characterization of specific nodes. We also perform a cluster analysis, which reveals that, in all three networks, one can distinguish two types of nodes exhibiting different behaviors: a very small group of active nodes, whose neighborhoods undergo diverse and frequent events, and a very large group of stable nodes.

  12. Development of positive control materials for DNA-based detection of cystic fibrosis: Cloning and sequencing of 31 mutations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iovannisci, D.; Brown, C.; Winn-Deen, E.

    1994-09-01

    The cloning and sequencing of the gene associated with cystic fibrosis (CF) now provides the opportunity for earlier detection and carrier screening through DNA-based detection schemes. To date, over 300 mutations have been reported to the CF Consortium; however, only 30 mutations have been observed frequently enough world-wide to warrant routine screening. Many of these mutations are not available as cloned material or as established tissue culture cell lines to aid in the development of DNA-based detection assays. We have therefore cloned the 30 most frequently reported mutations, plus the mutation R347H due to its association with male infertility (31 mutations, total). Two approaches were employed: direct PCR amplification, where mutations were available from patient sources, and site-directed PCR mutagenesis of normal genomic DNA to generate the remaining mutations. After amplification, products were cloned into a sequencing vector, bacterial transformants were screened by a novel method (PCR/oligonucleotide ligation assay/sequence-coded separation), and plasmid DNA sequences were determined by automated fluorescent methods on the Applied Biosystems 373A. Mixing of the clones allows the construction of artificial genotypes useful as positive control material for assay validation. A second round of mutagenesis, resulting in the construction of plasmids bearing multiple mutations, will be evaluated for their utility as reagent control materials in kit development.

  13. Interrelationships between Marijuana Demand and Discounting of Delayed Rewards: Convergence in Behavioral Economic Methods

    PubMed Central

    Aston, Elizabeth R.; Metrik, Jane; Amlung, Michael; Kahler, Christopher W.; MacKillop, James

    2016-01-01

    Background Distinct behavioral economic domains, including high perceived drug value (demand) and delay discounting (DD), have been implicated in the initiation of drug use and the progression to dependence. However, it is unclear whether frequent marijuana users conform to a “reinforcer pathology” addiction model wherein marijuana demand and DD jointly increase risk for problematic marijuana use and cannabis dependence (CD). Methods Participants (n=88, 34% female, 14% cannabis dependent) completed a marijuana purchase task at baseline. A delay discounting task was completed following placebo marijuana cigarette (0% THC) administration during a separate experimental session. Results Marijuana demand and DD were quantified using area under the curve (AUC). In multiple regression models, demand uniquely predicted frequency of marijuana use while DD did not. In contrast, DD uniquely predicted CD symptom count while demand did not. There were no significant interactions between demand and DD in either model. Conclusions These findings suggest that frequent marijuana users exhibit key constituents of the reinforcer pathology model: high marijuana demand and steep discounting of delayed rewards. However, demand and DD appear to be independent rather than synergistic risk factors for elevated marijuana use and risk for progression to CD. Findings also provide support for using AUC as a singular marijuana demand metric, particularly when also examining other behavioral economic constructs that apply similar statistical approaches, such as DD, to support analytic methodological convergence. PMID:27810657
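
    The AUC metric the authors rely on for both constructs is easy to make concrete. Below is a minimal trapezoidal AUC with both axes normalized so values are comparable across measures, in the spirit of the standard discounting AUC; the data points are hypothetical, not the study's:

```python
import numpy as np

def auc_normalized(x, y):
    """Trapezoidal area under a demand or discounting curve, with both axes
    scaled to [0, 1] so the AUC lies in [0, 1] and is comparable across
    measures and participants."""
    x = np.asarray(x, float) / max(x)
    y = np.asarray(y, float) / max(y)
    return np.trapz(y, x)

# Hypothetical delay-discounting data: subjective value of a delayed reward
delays = [0, 1, 7, 30, 90, 180, 365]    # days
values = [100, 95, 80, 60, 45, 35, 25]  # indifference points
print(auc_normalized(delays, values))   # smaller AUC = steeper discounting
```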

  14. Establishing traceability of photometric absorbance values for accurate measurements of the haemoglobin concentration in blood

    NASA Astrophysics Data System (ADS)

    Witt, K.; Wolf, H. U.; Heuck, C.; Kammel, M.; Kummrow, A.; Neukammer, J.

    2013-10-01

    Haemoglobin concentration in blood is one of the most frequently measured analytes in laboratory medicine. Reference and routine methods for the determination of the haemoglobin concentration in blood are based on the conversion of haeme, haemoglobin and haemiglobin species into uniform end products. The total haemoglobin concentration in blood is measured using the absorbance of the reaction products. Traceable absorbance measurement values on the highest metrological level are a prerequisite for the calibration and evaluation of procedures with respect to their suitability for routine measurements and their potential as reference measurement procedures. For this purpose, we describe a procedure to establish traceability of spectral absorbance measurements for the haemiglobincyanide (HiCN) method and for the alkaline haematin detergent (AHD) method. The latter is characterized by a higher stability of the reaction product. In addition, the toxic hazard of cyanide, which binds to the iron ion of the haem group and thus inhibits the oxygen transport, is avoided. Traceability is established at different wavelengths by applying total least-squares analysis to derive the conventional quantity values for the absorbance from the measured values. Extrapolation and interpolation are applied to get access to the spectral regions required to characterize the Q-absorption bands of the HiCN and AHD methods, respectively. For absorbance values between 0.3 and 1.8, the contributions of absorbance measurements to the total expanded uncertainties (95% level of confidence) of absorbance measurements range from 1% to 0.4%.
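
    Total least-squares is the natural tool here because both the measured and the reference absorbance values carry uncertainty. A generic orthogonal-distance line fit via the SVD, as an illustrative sketch rather than the authors' procedure:

```python
import numpy as np

def tls_line(x, y):
    """Fit y ~ a + b*x minimizing *orthogonal* distances (errors in both
    variables), via the SVD of the centred data matrix."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    D = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    nx, ny = Vt[-1]              # normal vector of the best-fit line
    b = -nx / ny                 # slope
    a = y.mean() - b * x.mean()  # intercept
    return a, b
```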

  15. Are mitochondria a permanent source of reactive oxygen species?

    PubMed

    Staniek, K; Nohl, H

    2000-11-20

    The observation that in isolated mitochondria electrons may leak out of the respiratory chain to form superoxide radicals (O(2)(radical-)) has prompted the assumption that O(2)(radical-) formation is a compulsory by-product of respiration. Since mitochondrial O(2)(radical-) formation under homeostatic conditions could not be demonstrated in situ so far, conclusions drawn from isolated mitochondria must be considered with caution. The present study reveals a link between electron deviation from the respiratory chain to oxygen and the coupling state in the presence of antimycin A. Another important factor is the analytical system applied for the detection of activated oxygen species. Due to the presence of superoxide dismutase in mitochondria, O(2)(radical-) release cannot be realistically determined in intact mitochondria. We therefore followed the release of the stable dismutation product H(2)O(2), comparing the most frequently used H(2)O(2) detection methods. Possible interaction of the detection systems with the respiratory chain was avoided by a recently developed method, which was compared with conventional methods. Irrespective of the methods applied, the substrates used for respiration and the state of respiration established, intact mitochondria could not be made to release H(2)O(2) from dismutating O(2)(radical-). Although regular mitochondrial respiration is unlikely to supply single electrons for O(2)(radical-) formation, our study does not exclude the possibility of the respiratory chain becoming a radical source under certain conditions.

  16. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, important sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
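
    The core idea, writing a rare event probability as a product of more frequent conditional probabilities estimated by MCMC, fits in a short sketch. Everything below (the adaptive threshold schedule, the proposal, the example event) is an illustrative assumption for a standard normal toy problem, not the paper's biochemical benchmarks:

```python
import numpy as np

def subset_simulation(g, dim, t_fail, n=1000, p0=0.1, seed=0):
    """Estimate P[g(X) > t_fail] for X ~ standard normal in `dim` dimensions
    as P[level 1] * prod_i P[level i+1 | level i]. Intermediate thresholds
    are (1 - p0) sample quantiles; conditional samples come from a
    one-component-at-a-time Metropolis-Hastings chain restricted to the
    current event."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    G = np.array([g(x) for x in X])
    p = 1.0
    for _ in range(50):                          # cap on the number of levels
        t = np.quantile(G, 1 - p0)
        if t >= t_fail:                          # final level reached
            return p * np.mean(G > t_fail)
        p *= p0                                  # accumulate conditional probability
        seeds = X[G > t]
        samples = []
        while len(samples) < n:                  # grow chains from random seeds
            x = seeds[rng.integers(len(seeds))].copy()
            for _ in range(n // len(seeds) + 1):
                cand = x.copy()
                j = rng.integers(dim)
                cand[j] += rng.normal()          # symmetric proposal on one component
                # Metropolis ratio for the standard normal target density
                if np.log(rng.random()) < 0.5 * (x[j] ** 2 - cand[j] ** 2):
                    if g(cand) > t:              # reject moves leaving the event
                        x = cand
                samples.append(x.copy())
        X = np.array(samples[:n])
        G = np.array([g(x) for x in X])
    return p * np.mean(G > t_fail)               # fallback if the cap is hit

# Example rare event: the sum of 10 standard normals exceeding 12
# (probability about 7e-5, expensive to estimate by brute-force Monte Carlo).
print(subset_simulation(lambda x: x.sum(), dim=10, t_fail=12.0))
```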

  17. Nearshore Measurements From a Small UAV.

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Brodie, K. L.; Spore, N.

    2016-02-01

    Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily-available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle `parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.

  18. Iterative Tensor Voting for Perceptual Grouping of Ill-Defined Curvilinear Structures: Application to Adherens Junctions

    PubMed Central

    Loss, Leandro A.; Bebis, George; Parvin, Bahram

    2012-01-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is applied to the delineation of adherens junctions imaged through fluorescence microscopy. This class of membrane-bound macromolecules maintains tissue structural integrity and cell-cell interactions. Visually, it exhibits fibrous patterns that may be diffuse, punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates. PMID:21421432

  19. A nuclear method to authenticate Buddha images

    NASA Astrophysics Data System (ADS)

    Khaweerat, S.; Ratanatongchai, W.; Channuie, J.; Wonglee, S.; Picha, R.; Promping, J.; Silva, K.; Liamsuwan, T.

    2015-05-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people rely on their individual skills to make this judgment, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal the structural and elemental profiles, respectively, of small Buddha images. For NR, a thermal neutron flux of 10⁵ n cm⁻² s⁻¹ was applied. NAAR needed a higher neutron flux of 10¹² n cm⁻² s⁻¹ to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples. Similarity of the profiles played a key role in the classification of the samples. The results provided visual evidence to enhance the reliability of authenticity approval. The method can be further developed into a routine practice that would impact thousands of customers in Thailand.

  20. An arbitrary-order staggered time integrator for the linear acoustic wave equation

    NASA Astrophysics Data System (ADS)

    Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo

    2018-02-01

    We suggest a staggered time integrator whose order of accuracy can be arbitrarily extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed, based on an error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersion errors during long-term simulations regardless of the heterogeneity of the media and the time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in practical use in imaging algorithms and inverse problems.
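
    The familiar second-order member of this family is the staggered leapfrog scheme, with pressure at integer time levels and particle velocity at half-integer levels; the paper's contribution is extending the accuracy of such updates to arbitrary order. A minimal 1-D Python sketch of the second-order baseline follows (illustrative parameters, not those of the paper; density is absorbed into the units).

      import numpy as np

      nx, nt = 400, 800
      dx, dt, c = 5.0, 5e-4, 1500.0   # grid interval, time step, P-wave speed
      p = np.zeros(nx)                # pressure at integer time levels
      v = np.zeros(nx - 1)            # velocity, staggered in space and time

      for it in range(nt):
          v += dt / dx * np.diff(p)               # v^{n+1/2} from p^n
          p[1:-1] += c**2 * dt / dx * np.diff(v)  # p^{n+1} from v^{n+1/2}
          p[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # source wavelet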

  1. A Synthesis of Students' Theses in the Accredited HHSI Master's Programme.

    PubMed

    Kinnunen, Ulla-Mari; Saranto, Kaija

    2018-01-01

    Education in Health Informatics (HI) has been a key priority to guarantee knowledge and skills for professionals working in healthcare settings. One of the early academic models for teaching HI is the set of recommendations provided by the International Medical Informatics Association. The paper describes the curriculum developed for master's degrees and the status of the paradigm used in informatics education, as well as research in the health and human services fields; this paradigm guides informatics research. The aim is to synthesise the methodological focuses in students' theses and discuss future development needs. The research focuses, questions and applied research methods were coded for 152 master's degree theses. Based on the results, the most frequently used methods were qualitative. The most frequent research area was steering and organising of information management in work processes. The results guide teachers in supervising the theses of the Health and Human Services Informatics (HHSI) programme and in tutoring new students.

  2. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between-subject and within-subject variation. Most existing research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
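
    In skeleton form, a mixed-effects location-scale model with a lower detection limit can be written as below; this is a minimal sketch of the model class, not the paper's full specification (which additionally handles skewness and covariate measurement error).

      \begin{align*}
      y_{ij} &= \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i + \varepsilon_{ij},
        \qquad \varepsilon_{ij} \sim N(0, \sigma_{ij}^2), \\
      \log \sigma_{ij}^2 &= \mathbf{z}_{ij}^{\top}\boldsymbol{\tau} + v_i
        \qquad \text{(scale sub-model: between/within-subject variance heterogeneity)}, \\
      y_{ij}^{\mathrm{obs}} &= \max\!\left(y_{ij},\, d\right)
        \qquad \text{(Tobit-type left-censoring at the detection limit } d\text{)}.
      \end{align*}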

  3. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
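
    As a hedged illustration of the workflow (not the authors' model of H. pylori eradication), the Python sketch below bootstraps hypothetical patient-level cost and effectiveness data and summarizes the resulting distribution of incremental cost-effectiveness ratios (ICERs).

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical patient-level data for two strategies: cost and a binary
      # effectiveness outcome (e.g., eradication success).
      cost_a, eff_a = rng.normal(900, 200, 150), rng.binomial(1, 0.85, 150)
      cost_b, eff_b = rng.normal(600, 150, 150), rng.binomial(1, 0.70, 150)

      icers = []
      for _ in range(2000):                               # bootstrap replicates
          ia = rng.integers(0, len(cost_a), len(cost_a))  # resample with replacement
          ib = rng.integers(0, len(cost_b), len(cost_b))
          d_cost = cost_a[ia].mean() - cost_b[ib].mean()
          d_eff = eff_a[ia].mean() - eff_b[ib].mean()
          if d_eff != 0:
              icers.append(d_cost / d_eff)

      print("median ICER:", round(np.median(icers), 1))
      print("95% interval:", np.percentile(icers, [2.5, 97.5]))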

  4. Non-invasive collection and analysis of semen in wild macaques.

    PubMed

    Thomsen, Ruth

    2014-04-01

    Assessments of primate male fertility via semen analyses are so far restricted to captivity. This study describes a non-invasive method to collect and analyse semen in wild primates, based on fieldwork with Yakushima macaques (Macaca fuscata yakui). Over nine mating seasons between 1993 and 2010, 128 masturbatory ejaculations were recorded in 21 males of 5 study troops, and in 11 non-troop males. In 55%, ejaculate volume was directly estimated, and in 37%, pH-value, sperm vitality, numbers, morphology and swimming velocity could also be determined. This approach of assessing semen production rates and individual male fertility can be applied to other primate taxa, in particular to largely terrestrial populations where males masturbate frequently, such as macaques and baboons. Furthermore, since explanations of male reproductive skew in non-human primate populations have until now ignored the potential role of semen quality, the method presented here will also help to answer this question.

  5. Applications of operant learning theory to the management of challenging behavior after traumatic brain injury.

    PubMed

    Wood, Rodger Ll; Alderman, Nick

    2011-01-01

    For more than 3 decades, interventions derived from learning theory have been delivered within a neurobehavioral framework to manage challenging behavior after traumatic brain injury with the aim of promoting engagement in the rehabilitation process and ameliorating social handicap. Learning theory provides a conceptual structure that facilitates our ability to understand the relationship between challenging behavior and environmental contingencies, while accommodating the constraints upon learning imposed by impaired cognition. Interventions derived from operant learning theory have most frequently been described in the literature because this method of associational learning provides good evidence for the effectiveness of differential reinforcement methods. This article therefore examines the efficacy of applying operant learning theory to manage challenging behavior after TBI as well as some of the limitations of this approach. Future developments in the application of learning theory are also considered.

  6. DynaMIT: the dynamic motif integration toolkit

    PubMed Central

    Dassi, Erik; Quattrone, Alessandro

    2016-01-01

    De-novo motif search is a frequently applied bioinformatics procedure to identify and prioritize recurrent elements in sequence sets for biological investigation, such as the ones derived from high-throughput differential expression experiments. Several algorithms have been developed to perform motif search, employing widely different approaches and often giving divergent results. In order to maximize the power of these investigations and ultimately be able to draft solid biological hypotheses, there is the need to apply multiple tools to the same sequences and merge the obtained results. However, motif reporting formats and statistical evaluation methods currently make such an integration task difficult to perform and mostly restricted to specific scenarios. We thus introduce here the Dynamic Motif Integration Toolkit (DynaMIT), an extremely flexible platform allowing users to identify motifs with multiple algorithms, integrate them by means of a user-selected strategy and visualize the results in several ways; furthermore, the platform is user-extendible in all its aspects. DynaMIT is freely available at http://cibioltg.bitbucket.org. PMID:26253738

  7. Density dependence governs when population responses to multiple stressors are magnified or mitigated.

    PubMed

    Hodgson, Emma E; Essington, Timothy E; Halpern, Benjamin S

    2017-10-01

    Population endangerment typically arises from multiple, potentially interacting anthropogenic stressors. Extensive research has investigated the consequences of multiple stressors on organisms, frequently focusing on individual life stages. Less is known about population-level consequences of exposure to multiple stressors, especially when exposure varies through life. We provide the first theoretical basis for identifying species at risk of magnified effects from multiple stressors across life history. By applying a population modeling framework, we reveal conditions under which population responses from stressors applied to distinct life stages are either magnified (synergistic) or mitigated. We find that magnification or mitigation critically depends on the shape of density dependence, but not the life stage in which it occurs. Stressors are always magnified when density dependence is linear or concave, and magnified or mitigated when it is convex. Using Bayesian numerical methods, we estimated the shape of density dependence for eight species across diverse taxa, finding support for all three shapes. © 2017 by the Ecological Society of America.
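
    The following Python sketch illustrates the kind of comparison involved (a hypothetical two-stage model, not the authors' framework): stressors scale survival in distinct life stages, recruitment is density dependent with a convex (Beverton-Holt) form, and the joint population response is compared against the product of the single-stressor responses.

      import numpy as np

      def equilibrium(s_juv=1.0, s_adult=1.0, t=500):
          """Two-stage model; s_juv and s_adult scale survival under stress."""
          juv, adult = 100.0, 100.0
          for _ in range(t):
              recruits = 8.0 * adult / (1.0 + 0.01 * adult)  # Beverton-Holt (convex)
              juv_next = recruits * 0.3 * s_juv
              adult = (0.5 * juv + 0.8 * adult) * s_adult
              juv = juv_next
          return adult

      base = equilibrium()
      r_juv = equilibrium(s_juv=0.7) / base      # stressor on juveniles alone
      r_adult = equilibrium(s_adult=0.7) / base  # stressor on adults alone
      joint = equilibrium(s_juv=0.7, s_adult=0.7) / base
      # joint < r_juv * r_adult signals magnification (synergy); > signals mitigation
      print(round(joint, 3), "vs multiplicative expectation", round(r_juv * r_adult, 3))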

  8. Surface inspection system for industrial components based on shape from shading minimization approach

    NASA Astrophysics Data System (ADS)

    Kotan, Muhammed; Öz, Cemil

    2017-12-01

    An inspection system is proposed that uses estimated three-dimensional (3-D) surface information to detect and classify faults, increasing quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, which is based on the minimization of a selected basis function. First, a specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. Then, the estimated gradients were used to obtain depth and surface profiles. The depth information was used to determine and classify the surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicated that using SFS approaches is useful and that various types of defects can easily be detected in a short period of time.
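
    The integration step of the Frankot-Chellappa approach admits a compact implementation: the estimated gradient field is projected onto integrable Fourier basis functions and inverted in closed form. The Python sketch below shows that reconstruction step only (a hedged illustration, with periodic boundaries implied by the FFT; the full inspection pipeline is not reproduced).

      import numpy as np

      def frankot_chellappa(p, q):
          """Least-squares surface z from gradient fields p = dz/dx, q = dz/dy."""
          rows, cols = p.shape
          u, v = np.meshgrid(np.fft.fftfreq(cols) * 2 * np.pi,
                             np.fft.fftfreq(rows) * 2 * np.pi)
          P, Q = np.fft.fft2(p), np.fft.fft2(q)
          denom = u**2 + v**2
          denom[0, 0] = 1.0                      # avoid division by zero at DC
          Z = (-1j * u * P - 1j * v * Q) / denom
          Z[0, 0] = 0.0                          # height offset is unrecoverable
          return np.real(np.fft.ifft2(Z))

      # Sanity check on a synthetic smooth (periodic) surface
      y, x = np.mgrid[0:64, 0:64] / 64.0
      z = frankot_chellappa(p=np.cos(2 * np.pi * x), q=np.zeros_like(x))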

  9. A virtual computer lab for distance biomedical technology education.

    PubMed

    Locatis, Craig; Vega, Anibal; Bhagwat, Medha; Liu, Wei-Li; Conde, Jose

    2008-03-13

    The National Library of Medicine's National Center for Biotechnology Information offers mini-courses which entail applying concepts in biochemistry and genetics to search genomics databases and other information sources. They are highly interactive and involve the use of 3D molecular visualization software that can be computationally taxing. Methods were devised to offer the courses at a distance while providing as much of the functionality of a computer lab as possible, the venue where they are normally taught. The methods, which can be employed with varied videoconferencing technology and desktop sharing software, were used to deliver mini-courses at a distance in pilot applications where students could see demonstrations by the instructor and the instructor could observe and interact with students working at their remote desktops. Student ratings of the learning experience and comments to open-ended questions were similar to those when the courses are offered face to face. The real-time interaction and the instructor's ability to access student desktops from a distance in order to provide individual assistance and feedback were considered invaluable. The technologies and methods mimic much of the functionality of computer labs and may be usefully applied in any context where content changes frequently, training needs to be offered on complex computer applications at a distance in real time, and where it is necessary for the instructor to monitor students as they work.

  10. Analysis of the vortices in the inner flow of reversible pump turbine with the new omega vortex identification method

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-ning; Liu, Kai-hua; Li, Jin-wei; Xian, Hai-zhen; Du, Xiao-ze

    2018-05-01

    Reversible pump turbines are widely employed in pumped hydro energy storage power plants. The frequent shifts among various operational modes of reversible pump turbines pose various instability problems, e.g., strong pressure fluctuations, shaft swing, and impeller damage. The instability is related to the vortices generated in the channels of the reversible pump turbine in the generating mode. In the present paper, a new omega vortex identification method is applied to the vortex analysis of reversible pump turbines. The main advantage of the adopted algorithm is that it does not depend on case-specific threshold values for vortex identification in different working modes. Both weak and strong vortices can be identified by setting the same omega value in the whole passage of the reversible pump turbine. Five typical modes (turbine mode, runaway mode, turbine brake mode, zero-flow-rate mode and reverse pump mode) at several typical guide vane openings are selected for analysis and comparison. The differences between the modes and guide vane openings are compared both qualitatively, in terms of the vortex distributions, and quantitatively, in terms of the areas of the vortices in the reversible pump turbines. Our findings indicate that the new omega method can be successfully applied to vortex identification in reversible pump turbines.
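
    In the omega method, the velocity gradient tensor is split into its symmetric (strain-rate) and antisymmetric (vorticity) parts, and Omega = b/(a + b + epsilon) compares their squared Frobenius norms; regions where rotation dominates strain (Omega > 0.5, with 0.52 a commonly used value) are flagged as vortices. A minimal Python sketch follows, with a hypothetical gradient tensor as input.

      import numpy as np

      def omega_criterion(grad_v, eps=1e-6):
          """Omega from velocity gradient tensor(s) of shape (..., 3, 3)."""
          A = 0.5 * (grad_v + np.swapaxes(grad_v, -1, -2))  # strain-rate part
          B = 0.5 * (grad_v - np.swapaxes(grad_v, -1, -2))  # vorticity part
          a = np.sum(A**2, axis=(-2, -1))                   # squared Frobenius norms
          b = np.sum(B**2, axis=(-2, -1))
          return b / (a + b + eps)

      # Example: nearly rigid rotation about the z axis -> Omega close to 1
      gv = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
      print(omega_criterion(gv))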

  11. Passive forensics for copy-move image forgery using a method based on DCT and SVD.

    PubMed

    Zhao, Jie; Guo, Jichang

    2013-12-10

    As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need to improve their robustness to common post-processing operations, and fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted, reducing the dimension of each block representation to the largest singular value of each sub-block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a predefined shift-frequency threshold. Experimental results demonstrate that the proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image was distorted by Gaussian blurring, AWGN, JPEG compression or their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
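
    A condensed Python sketch of the feature-extraction and sorting stages follows (block and quantization parameters are illustrative, not those of the paper; the final shift-vector matching stage is indicated only in a comment).

      import numpy as np
      from scipy.fftpack import dct

      def block_features(img, bsize=16, q=16.0):
          """Overlapping blocks -> quantized 2D-DCT -> largest singular value
          of each non-overlapping sub-block -> lexicographic sort."""
          feats = []
          for r in range(img.shape[0] - bsize + 1):
              for c in range(img.shape[1] - bsize + 1):
                  block = img[r:r + bsize, c:c + bsize].astype(float)
                  d = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
                  dq = np.round(d / q)                   # quantized coefficients
                  f = [np.linalg.svd(dq[i:i + 8, j:j + 8], compute_uv=False)[0]
                       for i in (0, 8) for j in (0, 8)]  # four 8x8 sub-blocks
                  feats.append((tuple(f), (r, c)))
          feats.sort()   # lexicographic sort places similar blocks adjacently
          return feats
      # Matching then compares neighbouring feature vectors and accepts block
      # pairs whose spatial offset occurs more often than a preset threshold.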

  12. Treatment of unstable distal radius fractures with Ilizarov circular, nonbridging external fixator.

    PubMed

    Tyllianakis, Minos; Mylonas, Spyros; Saridis, Alkis; Kallivokas, Alkiviadis; Kouzelis, Antonis; Megas, Panagiotis

    2010-03-01

    Unstable distal radius fractures remain a challenge for the treating orthopaedic surgeon. We present a retrospective follow-up study (mean follow-up 12.5 months) of 20 patients with 21 unstable distal radius fractures that were reduced in a closed manner and stabilized with a nonbridging Ilizarov external fixator. Subsequent insertion of olive wires for interfragmentary compression was performed in cases with intra-articular fractures. According to the overall evaluation proposed by the Gartland and Werley scoring system, 12 wrists were classified as excellent, 6 as good, 2 as fair and 1 as poor. Grade II pin-tract infection in the distal fracture fragment was detected in 3 wires out of a total of 78 (3.8%) and in 4 half pins out of a total of 9 (44.4%). Pronation was the most frequently impaired movement; it was restricted in 4 patients (19%) in whom a radioulnar transfixing wire was applied. Symptoms of irritation of the superficial sensory branch of the radial nerve occurred in 3 patients with an olive wire applied in a closed manner in the distal fragment. The Ilizarov method yields functional results comparable to those of other methods while avoiding wrist immobilization, open reduction and reoperation for implant removal. The method is associated with a low rate of major complications and a satisfactory functional outcome. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Software Dependability and Safety Evaluations ESA's Initiative

    NASA Astrophysics Data System (ADS)

    Hernek, M.

    ESA has allocated funds for an initiative to evaluate dependability and safety methods for software. The objectives of this initiative are: (1) more extensive validation of safety and dependability techniques for software; and (2) provision of valuable results to improve the quality of software, thus promoting the application of dependability and safety methods and techniques. ESA space systems are developed according to defined PA requirement specifications. These requirements may be implemented through various design concepts, e.g. redundancy, diversity, etc., varying from project to project. Analysis methods (FMECA, FTA, HA, etc.) are frequently used during requirements analysis and design activities to assure the correct implementation of system PA requirements. The criticality level of failures, functions and systems is determined, thereby identifying the critical sub-systems on which dependability and safety techniques are to be applied during development. Proper performance of the software development requires a technical specification for the products at the beginning of the life cycle. Such a technical specification comprises both functional and non-functional requirements; the non-functional requirements address characteristics of the product such as quality, dependability, safety and maintainability. Software in space systems is used more and more in critical functions, and the trend towards more frequent use of COTS and reusable components poses new difficulties in terms of assuring reliable and safe systems. Because of this, software dependability and safety must be carefully analysed. ESA identified and documented techniques, methods and procedures to ensure that software dependability and safety requirements are specified and taken into account during the design and development of a software system, and to verify/validate that the implemented software systems comply with these requirements [R1].

  14. Landslide Susceptibility Analysis by the comparison and integration of Random Forest and Logistic Regression methods; application to the disaster of Nova Friburgo - Rio de Janeiro, Brasil (January 2011)

    NASA Astrophysics Data System (ADS)

    Esposito, Carlo; Barra, Anna; Evans, Stephen G.; Scarascia Mugnozza, Gabriele; Delaney, Keith

    2014-05-01

    The study of landslide susceptibility by multivariate statistical methods is based on finding a quantitative relationship between controlling factors and landslide occurrence. Such studies have become popular in the last few decades thanks to the development of geographic information systems (GIS) software and the related improved data management. In this work we applied a statistical approach to an area of high landslide susceptibility, mainly due to its tropical climate and geological-geomorphological setting. The study area is located in the south-east region of Brazil, which has frequently been affected by flood and landslide hazards, especially because of heavy rainfall events during the summer season. We studied a disastrous event that occurred on January 11th and 12th of 2011, which involved Região Serrana (the mountainous region of Rio de Janeiro State) and caused more than 5000 landslides and at least 904 deaths. In order to produce susceptibility maps, we focused our attention on an area of 93.6 km2 that includes Nova Friburgo city. We utilized two different multivariate statistical methods: Logistic Regression (LR), already widely used in applied geosciences, and Random Forest (RF), which has only recently been applied to landslide susceptibility analysis. With reference to each mapping unit, the first method (LR) yields a probability of landslide occurrence, while the second one (RF) gives a prediction in terms of the percentage of area susceptible to slope failure. To this end, a landslide inventory map (related to the studied event) was drawn up through analyses of high-resolution GeoEye satellite images in a GIS environment. Data layers of 11 causative factors were created and processed in order to be used as continuous numerical or discrete categorical variables in the statistical analysis. In particular, the logistic regression method has frequent difficulties in managing numerical continuous and discrete categorical variables together; we therefore tried different methods to process the categorical variables until we obtained a statistically significant model. The outcomes of the two statistical methods (RF and LR) were tested with a spatial validation, yielding two susceptibility maps. The significance of the models is quantified in terms of the Area Under the ROC Curve (AUC = 0.81 for the RF model and 0.72 for the LR model). A graphical comparison of the two methods shows a good correspondence between them. Further, we integrated the results into a single susceptibility map which maintains both the probability of occurrence and the percentage of area of landslide detachment, resulting from LR and RF respectively. In view of a landslide susceptibility classification of the study area, the former is less accurate but gives easily classifiable results, while the latter is more accurate but its results can only be subjectively classified. The obtained "integrated" susceptibility map preserves information about the probability that a given percentage of area could fail within each mapping unit.
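
    In outline, the model-comparison step can be reproduced with standard tools; the Python sketch below uses scikit-learn on a synthetic stand-in for the mapping-unit dataset (the real inputs would be the 11 GIS-derived causative factors and the event inventory labels).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in: 11 causative factors per mapping unit, binary label
      X, y = make_classification(n_samples=5000, n_features=11, n_informative=6,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, model in [("LR", LogisticRegression(max_iter=1000)),
                          ("RF", RandomForestClassifier(n_estimators=300))]:
          prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
          print(name, "AUC =", round(roc_auc_score(y_te, prob), 2))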

  15. Department of Defense Suicide Event Report (DoDSER) Calendar Year 2011 Annual Report

    DTIC Science & Technology

    2012-11-15

    methods were most frequently drug overdose and use of military-issue firearm. Drug and alcohol use during suicide events was less frequent among deployed... use of non-military-issue firearms and hanging. Suicide attempt methods were most frequently drug overdose, harming oneself with a sharp or blunt... communicating intent to suicide (e.g., text).

  16. [Diet supplements in nutrition of sport mastery school students].

    PubMed

    Seidler, Teresa; Sobczak, Anna

    2012-01-01

    In Polish society, a growing interest in supplementation of the diet has been observed for some time now. This applies particularly to sportsmen and physically active persons and is often due to the belief that the customary diet does not supply the organism with the necessary food ingredients. There are also some threats connected with supplementation of the diet. These problems are particularly important for young sportsmen, including students of sport mastery schools. The aim of the study was the evaluation of the diet supplementation used by the students of a sport mastery school in the West Pomeranian district. The study was carried out in a group of 76 students, aged 15 to 19, practicing volleyball (girls, n = 39) and football (boys, n = 37) at the sport mastery school in Police (West Pomeranian district). The interview method was applied. The significance of differences by sport discipline practiced was calculated with the chi-squared test (Statistica 9). The results of the study confirmed that the students of the sport mastery school supplement their diets. Diet supplementation was more frequent among boys (67.6%), with magnesium (57-64%) noted as the most frequently used supplement, followed by vitamin-mineral agents and L-carnitine. Essential differences between the sport disciplines were noted in the reasons for diet supplementation and the sources of information on supplements. Based on the obtained results, it can be stated that supplementation of the diet among students of the sport mastery school in Police is popular, even without previous recognition of its necessity. The most frequent supplement users were football players, with magnesium the most frequently chosen supplement. Regular training of sportsmen, and also of the coaches training young people, on rational feeding habits would therefore be advisable.

  17. Phenotypic and Genotypic Eligible Methods for Salmonella Typhimurium Source Tracking

    PubMed Central

    Ferrari, Rafaela G.; Panzenhagen, Pedro H. N.; Conte-Junior, Carlos A.

    2017-01-01

    Salmonellosis is one of the most common causes of foodborne infection and a leading cause of human gastroenteritis. Throughout the last decade, Salmonella enterica serotype Typhimurium (ST) has shown an increasing number of reports, with the simultaneous emergence of multidrug-resistant isolates such as phage type DT104. Therefore, to successfully control this microorganism, it is important to attribute salmonellosis to the exact source. Studies of Salmonella source attribution have been performed to determine the main foods and food-production animals involved, toward which control efforts should be correctly directed. Hence, the choice of an ST subtyping method depends on the particular problem to which efforts must be directed, the resources, and the data available. Generally, before choosing a molecular subtyping method, phenotyping approaches such as serotyping, phage typing, and antimicrobial resistance profiling are implemented as a screening step of an investigation, and the results are computed using frequency-matching models (i.e., the Dutch, Hald and Asymmetric Island models). Currently, due to the advancement of molecular tools such as PFGE, MLVA, MLST, CRISPR, and WGS, more precise results have been obtained, but even with these technologies there are still gaps to be elucidated. To address this issue, an important question needs to be answered: what are the currently suitable subtyping methods to source-attribute ST? This review presents the most frequently applied subtyping methods used to characterize ST, analyses the major available microbial subtyping attribution models, and considers the use of conventional phenotyping methods, as well as the most applied genotypic tools, in the context of their potential applicability to investigating ST source tracking. PMID:29312260

  18. Fluorescence Spectroscopy and Chemometric Modeling for Bioprocess Monitoring

    PubMed Central

    Faassen, Saskia M.; Hitzmann, Bernd

    2015-01-01

    On-line sensors for the detection of crucial process parameters are desirable for the monitoring, control and automation of processes in the biotechnology, food and pharma industries. Fluorescence spectroscopy is a highly developed and non-invasive technique that enables on-line measurement of substrate and product concentrations and the identification of characteristic process states. During a cultivation process, significant changes occur in the fluorescence spectra. By means of chemometric modeling, prediction models can be calculated and applied for process supervision and control, increasing the quality and productivity of bioprocesses. A range of applications for different microorganisms and analytes has been proposed in recent years. This contribution provides an overview of different analysis methods for the measured fluorescence spectra and the model-building chemometric methods used for various microbial cultivations. Most of these processes are observed using the BioView® sensor, thanks to its robustness and insensitivity to adverse process conditions. Beyond that, the PLS method is the most frequently used chemometric method for the calculation of process models and the prediction of process variables. PMID:25942644
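
    A minimal Python sketch of the PLS step follows, mapping (synthetic) fluorescence spectra to an analyte concentration; it stands in for the chemometric modeling described above and makes no claim about the BioView® data formats.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      # Synthetic stand-in: 200 samples x 150 spectral channels, with a
      # concentration-driven signal plus noise.
      conc = rng.uniform(0, 10, 200)
      spectra = np.outer(conc, rng.normal(1.0, 0.1, 150)) \
                + rng.normal(0, 0.5, (200, 150))

      X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
      print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))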

  19. REMOVING BIASES IN RESOLVED STELLAR MASS MAPS OF GALAXY DISKS THROUGH SUCCESSIVE BAYESIAN MARGINALIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-García, Eric E.; González-Lópezlira, Rosa A.; Bruzual A, Gustavo

    2017-01-20

    Stellar masses of galaxies are frequently obtained by fitting stellar population synthesis models to galaxy photometry or spectra. The state-of-the-art method resolves spatial structures within a galaxy to assess the total stellar mass content. In comparison to unresolved studies, resolved methods yield, on average, higher fractions of stellar mass for galaxies. In this work we improve the current method in order to mitigate a bias related to the resolved spatial distribution derived for the mass. The bias consists of an apparent filamentary mass distribution and a spatial coincidence between mass structures and dust lanes near spiral arms. The improved method is based on iterative Bayesian marginalization, through a new algorithm we have named Bayesian Successive Priors (BSP). We have applied BSP to M51 and to a pilot sample of 90 spiral galaxies from the Ohio State University Bright Spiral Galaxy Survey. By quantitatively comparing both methods, we find that the average fraction of stellar mass missed by unresolved studies is only half of what was previously thought. In contrast with the previous method, the output BSP mass maps bear a better resemblance to near-infrared images.

  1. Morphea

    MedlinePlus

    ... hyperpigmentation/discoloration of the affected skin frequently remains. Disability from damage of underlying structure such as muscle ... the vitamin D analog calcipotriene which is also applied topically. Other treatments could be considered but are ...

  2. Dynamics of Genome Rearrangement in Bacterial Populations

    PubMed Central

    Darling, Aaron E.; Miklós, István; Ragan, Mark A.

    2008-01-01

    Genome structure variation has profound impacts on phenotype in organisms ranging from microbes to humans, yet little is known about how natural selection acts on genome arrangement. Pathogenic bacteria such as Yersinia pestis, which causes bubonic and pneumonic plague, often exhibit a high degree of genomic rearrangement. The recent availability of several Yersinia genomes offers an unprecedented opportunity to study the evolution of genome structure and arrangement. We introduce a set of statistical methods to study patterns of rearrangement in circular chromosomes and apply them to the Yersinia. We constructed a multiple alignment of eight Yersinia genomes using Mauve software to identify 78 conserved segments that are internally free from genome rearrangement. Based on the alignment, we applied Bayesian statistical methods to infer the phylogenetic inversion history of Yersinia. The sampling of genome arrangement reconstructions contains seven parsimonious tree topologies, each having different histories of 79 inversions. Topologies with a greater number of inversions also exist, but were sampled less frequently. The inversion phylogenies agree with results suggested by SNP patterns. We then analyzed reconstructed inversion histories to identify patterns of rearrangement. We confirm an over-representation of “symmetric inversions”—inversions with endpoints that are equally distant from the origin of chromosomal replication. Ancestral genome arrangements demonstrate moderate preference for replichore balance in Yersinia. We found that all inversions are shorter than expected under a neutral model, whereas inversions acting within a single replichore are much shorter than expected. We also found evidence for a canonical configuration of the origin and terminus of replication. Finally, breakpoint reuse analysis reveals that inversions with endpoints proximal to the origin of DNA replication are nearly three times more frequent. Our findings represent the first characterization of genome arrangement evolution in a bacterial population evolving outside laboratory conditions. Insight into the process of genomic rearrangement may further the understanding of pathogen population dynamics and selection on the architecture of circular bacterial chromosomes. PMID:18650965

  3. Blastocystosis in patients with gastrointestinal symptoms: a case–control study

    PubMed Central

    2012-01-01

    Background: Blastocystosis is a frequent bowel disease. We planned to evaluate the prevalence of Blastocystis spp. in patients who presented to the same internal medicine-gastroenterology clinic with or without gastrointestinal complaints, to reveal the association of this parasite with diagnosed IBS and IBD. Methods: A total of 2334 patients with gastrointestinal symptoms composed the study group, which included 335 patients with diagnosed inflammatory bowel disease and 877 with irritable bowel syndrome. Patients without any gastrointestinal symptoms or disease (n = 192) composed the control group. Parasite presence was investigated by applying native-Lugol and formol ethyl acetate concentration to stool specimens, and the trichrome staining method in suspicious cases. Results: Blastocystis spp. was detected in 134 patients (5.74%) in the study group and 6 (3.12%) in the control group (p = 0.128). In the study group, Blastocystis spp. was detected at frequencies of 8.7% in ulcerative colitis (24/276), 6.78% in Crohn’s disease (4/59), 5.82% in irritable bowel syndrome (51/877), and 4.9% in the remaining patients with gastrointestinal symptoms (55/1122). Blastocystis spp. was detected at a statistically significant ratio in the inflammatory bowel disease (odds ratio [OR] = 2.824; 95% confidence interval [CI]: 1.149-6.944; p = 0.019) and ulcerative colitis (OR = 2.952; 95% CI: 1.183-7.367; p = 0.016) patients within this group compared to controls. There were no statistically significant differences between the control group and Crohn’s disease or irritable bowel syndrome patients in terms of Blastocystis spp. frequency (p = 0.251, p = 0.133). Conclusions: Blastocystosis was more frequent in patients with inflammatory bowel disease, especially those with ulcerative colitis. Although symptomatic irritable bowel syndrome and Crohn’s disease patients had higher rates of Blastocystis spp. infection, the differences were not significant when compared to controls. PMID:22963003
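
    The reported effect sizes can be recomputed directly from the 2x2 counts given above; the Python sketch below does so for the ulcerative colitis comparison (24/276 positive vs 6/192 in controls) using a Wald confidence interval.

      import numpy as np

      def odds_ratio(a, b, c, d, z=1.96):
          """OR and Wald 95% CI from a 2x2 table (a,b: cases pos/neg; c,d: controls)."""
          orr = (a * d) / (b * c)
          se = np.sqrt(1/a + 1/b + 1/c + 1/d)
          lo, hi = np.exp(np.log(orr) + np.array([-z, z]) * se)
          return orr, lo, hi

      print(odds_ratio(a=24, b=276 - 24, c=6, d=192 - 6))
      # -> OR ~ 2.95 with 95% CI ~ (1.18, 7.37), matching the reported values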

  4. The Analysis of a Diet for the Human Being and the Companion Animal using Big Data in 2016.

    PubMed

    Jung, Eun-Jin; Kim, Young-Suk; Choi, Jung-Wa; Kang, Hye Won; Chang, Un-Jae

    2017-10-01

    The purpose of this study was to investigate the diet tendencies of humans and companion animals using big data analysis. Keyword data on human and companion-animal diets were collected from the portal site Naver from January 1, 2016 until December 31, 2016, and the collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis and seasonality analysis. For humans, the word exercise had the highest frequency in the simple frequency analysis, whereas diet menu appeared most frequently in the N-gram analysis. For companion animals, the term dog had the highest frequency in the simple frequency analysis, whereas diet method was most frequent in the N-gram analysis. The keyword network analysis for humans indicated 4 groups: a diet group, an exercise group, a commercial diet food group, and a commercial diet program group. The keyword network analysis for companion animals indicated 3 groups: a diet group, an exercise group, and a professional medical help group. The seasonality analysis showed that interest in diet for both humans and companion animals increased steadily from February 2016 and reached its peak in July. In conclusion, the diets of humans and companion animals showed similar tendencies, particularly a higher preference for dietary control over other methods. The diets of companion animals are determined by the choices of their owners, as diet methods the owners consider effective are usually applied to the companion animals as well. It is therefore necessary to demonstrate empirically whether a correlation in obesity between humans and their companion animals exists.

  5. Effect of a school-based oral health education programme in Wuhan City, People's Republic of China.

    PubMed

    Petersen, Poul Erik; Peng, Bin; Tai, Baojun; Bian, Zhuan; Fan, Mingwen

    2004-02-01

    To assess oral health outcomes of a school-based oral health education (OHE) programme on children, mothers and schoolteachers in China, and to evaluate the methods applied and materials used. The WHO Health Promoting Schools Project was applied to primary schoolchildren in 3 experimental and 3 control schools in Hongshan District, Wuhan City, Central China, with a 3-year follow-up. Data on dental caries, gingival bleeding and behaviour were collected. 803 children and their mothers, and 369 teachers were included at baseline in 1998. After three years, 666 children and their mothers (response rate 83%), and 347 teachers (response rate 94%) remained. DMFT/DMFS increments were comparable, but the f/F components were higher among children in experimental schools than in control schools, and the gingival bleeding score was significantly lower. More children in experimental schools adopted regular oral health behaviours such as toothbrushing, recent dental visits, and use of fluoride toothpaste, with less frequent consumption of cakes/biscuits compared to controls. In experimental schools, mothers showed significant beneficial oral health developments, while teachers showed higher oral health knowledge and more positive attitudes, and were satisfied with the training workshops, methods applied, materials used and involvement with children in OHE. The programme had positive effects on the gingival bleeding score and oral health behaviour of children, and on the oral health knowledge and attitudes of mothers and teachers. No positive effect on the dental caries incidence rate was demonstrated by the OHE programme.

  6. Parallel steady state studies on a milliliter scale accelerate fed-batch bioprocess design for recombinant protein production with Escherichia coli.

    PubMed

    Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk

    2016-11-01

    In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state-of-the-art methods for identifying suitable reaction conditions suffer from severe drawbacks: direct transfer of process information from parallel batch studies is often defective, and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. The best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter-scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in the product concentration of 21% (8.4 g L^-1) compared to an established high-performance production process with the most frequently applied induction strategy, a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method to accelerate bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1426-1435, 2016.

  7. Implementation of a model for identifying Essentially Derived Varieties in vegetatively propagated Calluna vulgaris varieties.

    PubMed

    Borchert, Thomas; Krueger, Joerg; Hohe, Annette

    2008-08-20

    Variety protection is of high relevance for the horticultural community and juridical cases have become more frequent in a globalized economy due to essential derivation of varieties. This applies equally to Calluna vulgaris, a vegetatively propagated species from the Ericaceae family that belongs to the top-selling pot plants in Europe. We therefore analyzed the genetic diversity of 74 selected varieties and genotypes of C. vulgaris and 3 of Erica spp. by means of RAPD and iSSR fingerprinting using 168 mono- and polymorphisms. The same data set was utilized to generate a system to reliably identify Essentially Derived Varieties (EDVs) in C. vulgaris, which was adapted from a method suggested for lettuce and barley. This system was developed, validated and used for selected tests of interest in C. vulgaris. As expected following personal communications with breeders, a very small genetic diversity became evident within C. vulgaris when investigated using our molecular methods. Thus, a dendrogram-based assay to detect Essentially Derived Varieties in this species is not suitable, although varieties are propagated vegetatively. In contrast, the system applied in lettuce, which itself applies pairwise comparisons using appropriate reference sets, proved functional with this species. The narrow gene pool detected in C. vulgaris may be the genetic basis for juridical conflicts between breeders. We successfully tested a methodology for identification of Essentially Derived Varieties in highly identical C. vulgaris genotypes and recommend this for future proof of essential derivation in C. vulgaris and other vegetatively propagated crops.

  8. Muscle Cramps

    MedlinePlus

    ... severe Happen frequently Don't get better with stretching and drinking enough fluids Last a long time ... able to find some relief from cramps by Stretching or gently massaging the muscle Applying heat when ...

  9. Comparison of five segmentation tools for 18F-fluoro-deoxy-glucose-positron emission tomography-based target volume definition in head and neck cancer.

    PubMed

    Schinagl, Dominic A X; Vogel, Wouter V; Hoffmann, Aswin L; van Dalen, Jorn A; Oyen, Wim J; Kaanders, Johannes H A M

    2007-11-15

    Target-volume delineation for radiation treatment to the head and neck area traditionally is based on physical examination, computed tomography (CT), and magnetic resonance imaging. Additional molecular imaging with (18)F-fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) may improve definition of the gross tumor volume (GTV). In this study, five methods for tumor delineation on FDG-PET are compared with CT-based delineation. Seventy-eight patients with Stages II-IV squamous cell carcinoma of the head and neck area underwent coregistered CT and FDG-PET. The primary tumor was delineated on CT, and five PET-based GTVs were obtained: visual interpretation, applying an isocontour of a standardized uptake value of 2.5, using a fixed threshold of 40% and 50% of the maximum signal intensity, and applying an adaptive threshold based on the signal-to-background ratio. Absolute GTV volumes were compared, and overlap analyses were performed. The GTV method of applying an isocontour of a standardized uptake value of 2.5 failed to provide successful delineation in 45% of cases. For the other PET delineation methods, volume and shape of the GTV were influenced heavily by the choice of segmentation tool. On average, all threshold-based PET-GTVs were smaller than on CT. Nevertheless, PET frequently detected significant tumor extension outside the GTV delineated on CT (15-34% of PET volume). The choice of segmentation tool for target-volume definition of head and neck cancer based on FDG-PET images is not trivial because it influences both volume and shape of the resulting GTV. With adequate delineation, PET may add significantly to CT- and physical examination-based GTV definition.
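
    Two of the compared delineation rules are simple enough to state in a few lines; the Python sketch below applies the fixed SUV 2.5 isocontour and the fixed-percentage thresholds to a synthetic uptake volume (the adaptive signal-to-background method and visual delineation are omitted).

      import numpy as np

      def threshold_gtv(suv, mode="pct40"):
          """Binary GTV mask from a PET SUV volume using threshold rules."""
          if mode == "suv2.5":
              return suv >= 2.5                     # fixed SUV isocontour
          pct = {"pct40": 0.40, "pct50": 0.50}[mode]
          return suv >= pct * suv.max()             # fraction of max intensity

      # Synthetic volume: ~1 SUV background with a hot spherical lesion
      z, y, x = np.mgrid[:40, :40, :40]
      suv = 1.0 + 9.0 * np.exp(-((x - 20)**2 + (y - 20)**2 + (z - 20)**2) / 30.0)
      for mode in ("suv2.5", "pct40", "pct50"):
          print(mode, "GTV voxels:", int(threshold_gtv(suv, mode).sum()))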

  10. Clustering XML Documents Using Frequent Subtrees

    NASA Astrophysics Data System (ADS)

    Kutty, Sangeetha; Tran, Tien; Nayak, Richi; Li, Yuefeng

    This paper presents an experimental study conducted over the INEX 2008 Document Mining Challenge corpus using both the structure and the content of XML documents for clustering them. The concise common substructures known as the closed frequent subtrees are generated using the structural information of the XML documents. The closed frequent subtrees are then used to extract the constrained content from the documents. A matrix containing the term distribution of the documents in the dataset is developed using the extracted constrained content. The k-way clustering algorithm is applied to the matrix to obtain the required clusters. In spite of the large number of documents in the INEX 2008 Wikipedia dataset, the proposed frequent subtree-based clustering approach was successful in clustering the documents. This approach significantly reduces the dimensionality of the terms used for clustering without much loss in accuracy.
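
    The final stage of the pipeline, clustering documents on a term-distribution matrix built from the constrained content, can be sketched in Python as below (toy documents and k-means stand in for the INEX corpus and the k-way algorithm used in the paper).

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      # Constrained content per document: only the text found inside the
      # closed frequent subtrees (toy strings here).
      docs = ["football match league", "league cup football",
              "protein gene cell", "gene expression cell biology"]
      X = TfidfVectorizer().fit_transform(docs)     # term-distribution matrix
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(labels)                                 # two content clusters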

  11. A relational metric, its application to domain analysis, and an example analysis and model of a remote sensing domain

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1995-01-01

    An objective and quantitative method has been developed for deriving models of complex and specialized spheres of activity (domains) from domain-generated verbal data. The method was developed for analysis of interview transcripts, incident reports, and other text documents whose original source is people who are knowledgeable about, and participate in, the domain in question. To test the method, it is applied here to a report describing a remote sensing project within the scope of the Earth Observing System (EOS). The method has the potential to improve the designs of domain-related computer systems and software by quickly providing developers with explicit and objective models of the domain in a form which is useful for design. Results of the analysis include a network model of the domain, and an object-oriented relational analysis report which describes the nodes and relationships in the network model. Other products include a database of relationships in the domain, and an interactive concordance. The analysis method utilizes a newly developed relational metric, a proximity-weighted frequency of co-occurrence. The metric is applied to relations between the most frequently occurring terms (words or multiword entities) in the domain text, and the terms found within the contexts of these terms. Contextual scope is selectable. Because of the discriminating power of the metric, data reduction from the association matrix to the network is simple. In addition to their value for design, the models produced by the method are also useful for understanding the domains themselves. They can, for example, be interpreted as models of presence in the domain.
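
    A minimal Python sketch of a proximity-weighted co-occurrence metric follows: every pair of terms occurring within a selectable context window contributes a weight that decreases with their distance (the 1/distance weighting here is one plausible choice, not necessarily the paper's exact scheme).

      from collections import Counter

      def relational_metric(tokens, window=5):
          """Proximity-weighted frequency of co-occurrence over a token list."""
          weights = Counter()
          for i, term in enumerate(tokens):
              for j in range(i + 1, min(i + 1 + window, len(tokens))):
                  weights[(term, tokens[j])] += 1.0 / (j - i)
          return weights

      text = "the sensor scans the scene and the sensor stores the scene".split()
      for pair, w in relational_metric(text).most_common(3):
          print(pair, round(w, 2))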

  12. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    PubMed

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If an ultrasound scanning robot control algorithm were a combination of the two, it could benefit from the advantages of each. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed, and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot; the hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that images acquired using the proposed control method can yield a rating score equivalent to that of images acquired directly by clinicians. However, more work must be completed for the proposed control method to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.
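
    Conventional hybrid position-force control partitions the task axes with a selection matrix: selected axes track a position reference while the complementary axes regulate contact force (here the probe-pressing direction). The Python sketch below illustrates the control law with hypothetical gains; it is not the NCCUSR controller.

      import numpy as np

      S = np.diag([1.0, 1.0, 0.0])   # x, y position-controlled; z force-controlled
      Kp, Kf = 50.0, 0.02            # illustrative gains

      def control(x, x_des, f, f_des):
          """Commanded Cartesian velocity from position and force errors."""
          return S @ (Kp * (x_des - x)) + (np.eye(3) - S) @ (Kf * (f_des - f))

      v_cmd = control(x=np.zeros(3), x_des=np.array([0.01, 0.0, 0.0]),
                      f=np.array([0.0, 0.0, 2.0]), f_des=np.array([0.0, 0.0, 5.0]))
      print(v_cmd)   # translate in x while pressing along z toward 5 N contact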

  13. Estimating small amplitude tremor sources

    NASA Astrophysics Data System (ADS)

    Katakami, S.; Ito, Y.; Ohta, K.

    2017-12-01

    Various types of slow earthquakes have recently been observed at both the updip and downdip edges of coseismic slip areas [Obara and Kato, 2016]. The frequent occurrence of slow earthquakes may help us to reveal the physics underlying megathrust events, as useful analogs. Maeda and Obara [2009] estimated the spatiotemporal distribution of seismic energy radiation from low-frequency tremors, but applied their method only to tremors whose hypocenters had been determined with a multiple-station method. Recently, however, Katakami et al. (2016) identified many continuous tremors with small amplitudes that were not recorded at multiple stations. These small events should be important for revealing the whole slow earthquake activity and for understanding the strain condition around the plate boundary in subduction zones. First, we apply the modified frequency scanning method (mFSM) at a single station to NIED Hi-net data in southwestern Japan to capture the whole tremor activity, including weak-signal tremors. Second, we developed a method to identify the tremor source area using the differences in apparent tremor energy between stations determined by mFSM. We estimated the apparent tremor source energy after correcting for both the site amplification factor and geometrical spreading. Finally, we locate the tremor source area where the differences in apparent tremor energy between each pair of sites are smallest. We checked the validity of this analysis by using only tremors that had already been detected by the envelope correlation method [Idehara et al., 2014]. We calculated the average amplitude as the apparent tremor energy in a 5-minute window after tremor onset at each station. Our results are largely consistent with the hypocenters determined by the envelope correlation method. We successfully determined the apparent tremor source areas of weak continuous tremors after estimating possible tremor occurrence time windows using mFSM.
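
    The source-area estimate can be illustrated with a small grid search: after correcting station amplitudes for geometrical spreading, the corrected apparent energies should agree at the true source location. The Python sketch below assumes simple 1/r spreading and a synthetic station geometry; site corrections and the mFSM detection step are omitted.

      import numpy as np

      stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0]])   # km
      true_src = np.array([12.0, 9.0])
      amp = 1.0 / np.linalg.norm(stations - true_src, axis=1)        # 1/r decay

      gx, gy = np.meshgrid(np.linspace(0, 30, 121), np.linspace(0, 30, 121))
      best, best_pt = np.inf, None
      for x, y in zip(gx.ravel(), gy.ravel()):
          r = np.linalg.norm(stations - [x, y], axis=1) + 1e-9
          misfit = np.std(np.log(amp * r))   # spread of corrected energies
          if misfit < best:
              best, best_pt = misfit, (x, y)
      print("estimated source:", best_pt)    # recovers (12.0, 9.0)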

  14. Methods applied in cost-effectiveness models for treatment strategies in type 2 diabetes mellitus and their use in Health Technology Assessments: a systematic review of the literature from 2008 to 2013.

    PubMed

    Charokopou, M; Sabater, F J; Townsend, R; Roudaut, M; McEwan, P; Verheggen, B G

    2016-01-01

    To identify and compare health-economic models that were developed to evaluate the cost-effectiveness of treatments for type 2 diabetes mellitus (T2DM), and their use within Health Technology Assessments (HTAs). In total, six commonly used databases were searched for articles published between October 2008 and January 2013, using a protocolized search strategy and inclusion criteria. The websites of HTA organizations in nine countries, and proceedings from five relevant conferences, were also reviewed. The identified new health-economic models were qualitatively assessed using six criteria that were developed based on technical components, and characteristics related to the disease or the treatments being assessed. Finally, the number of times the models were applied within HTA reports, published literature, and/or major conferences was determined. Thirteen new models were identified and reviewed in depth. Most of these were based on identical key data sources, and applied a similar model structure, either using Markov modeling or microsimulation techniques. The UKPDS equations and panel regressions were frequently used to estimate the occurrence of diabetes-related complications and the probability of developing risk factors in the long term. The qualitative assessment demonstrated that the CARDIFF, Sheffield T2DM and ECHO T2DM models seem technically equipped to appropriately assess the long-term health-economic consequences of chronic treatments for patients with T2DM. It was observed that the CORE model is the most widely described in literature and conferences, and the most often applied model within HTA submissions, followed by the CARDIFF and UKPDS models. This research provides an overview of T2DM models that were developed between 2008 and January 2013. The outcomes of the qualitative assessments, combined with frequent use in local reimbursement decisions, prove the applicability of the CORE, CARDIFF and UKPDS models to address decision problems related to the long-term clinical and economic consequences of new and existing T2DM treatments.

  16. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We demonstrate this through two separate simulation analyses and propose a collective feature selection approach that selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, then took the union of the variables selected by the top-performing methods, based on a user-defined percentage of variants selected from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. A collective approach thus proves more beneficial for selecting variables with epistatic effects, also in low effect size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Through simulation studies, we showed that selecting variables using a collective feature selection approach helps select true positive epistatic variables more frequently than applying any single feature selection method. We demonstrated the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis, and applied our method to identify non-linear networks associated with obesity.
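    As a toy sketch of the "union" idea (not the authors' pipeline; the synthetic dataset, the three selectors and the 5% threshold are arbitrary choices for illustration), in Python with scikit-learn:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import f_classif, mutual_info_classif

    X, y = make_classification(n_samples=300, n_features=100,
                               n_informative=5, random_state=0)
    k = int(0.05 * X.shape[1])  # user-defined percentage per method

    def top_k(scores, k):
        """Indices of the k highest-scoring features."""
        return set(np.argsort(scores)[::-1][:k])

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    collective = (top_k(rf.feature_importances_, k)
                  | top_k(mutual_info_classif(X, y, random_state=0), k)
                  | top_k(f_classif(X, y)[0], k))  # union across methods
    print(sorted(collective))  # features taken to downstream epistasis search
    ```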

  17. Examining Stress in Graduate Assistants: Combining Qualitative and Quantitative Survey Methods

    ERIC Educational Resources Information Center

    Mazzola, Joseph J.; Walker, Erin J.; Shockley, Kristen M.; Spector, Paul E.

    2011-01-01

    The aim of this study was to employ qualitative and quantitative survey methods in a concurrent mixed model design to assess stressors and strains in graduate assistants. The stressors most frequently reported qualitatively were work overload, interpersonal conflict, and organizational constraints; the most frequently reported psychological…

  18. Methods for environmental change; an exploratory study.

    PubMed

    Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris

    2012-11-28

    While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. This survey focuses on methods for environmental change, especially on how these are composed of methods for individual change ('bundling') and how, within one environmental level, organizations, methods differ when directed at the management ('at') or applied by the management ('from'). The first part of this online survey examined the 'bundling' of individual level methods into methods at the environmental level; the question was to what extent the use of an environmental level method would involve the use of certain individual level methods. The second part asked whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level: the health promoter will use combinations of motivation and facilitation, while the manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled, and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.

  19. Spatio-temporal variational characteristics analysis of heavy metals pollution in water of the typical northern rivers, China

    NASA Astrophysics Data System (ADS)

    Lu, Hongwei; Yu, Sen

    2018-04-01

    The rapid urbanization and industrialization in developing countries have increased pollution by heavy metals, which is a concern for human health and the environment. In this study, based on data obtained from the monitoring stations in the Songhua River basin, multivariate statistical analysis methods are applied to the hydrological data of the basin in order to examine the relation between human activities and the spatio-temporal change of heavy metals (Pb and Cu) in water. By comparing the concentrations at different flow periods, the minimum Pb concentrations are found to have occurred most frequently in low flow periods, while the maximum values mostly appeared in average flow periods. Moreover, the minimum Cu concentration in the water frequently occurred in high flow periods. The results show low Pb and Cu concentrations in upstream and downstream sections and high concentrations in mid-stream sections, with the highest concentrations most frequently measured in the downstream and estuary sections of the Ashihe River. Moreover, we predicted the future trend (2018-2025) of heavy metal pollution in the rivers. The results demonstrate that intense human activities are the most important factor causing the abrupt changes in typical heavy metal pollution across periods and sections of the study area. The research provides support for decision-making and planning in the Songhua River basin during the period of China's 13th Five-Year Plan.

  20. Dry skin - self-care

    MedlinePlus

    ... or showers frequently Washing your hands often Some soaps and detergents Skin conditions, such as eczema and ... apply your moisturizer. Avoid skin care products and soaps that contain alcohol, fragrances, dyes, or other chemicals. ...

  1. Inference on periodicity of circadian time series.

    PubMed

    Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A

    2013-09-01

    Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
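    A simplified sketch of the spectrum resampling idea (not the authors' implementation; it uses the standard result that periodogram ordinates are approximately the true spectrum times independent Exp(1) noise, and all signal parameters below are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0.0, 120.0, 0.5)  # hours, hypothetical sampling
    x = np.sin(2 * np.pi * t / 24.4) + rng.normal(0.0, 0.5, t.size)

    # Periodogram and point estimate of the period from its peak.
    f = np.fft.rfftfreq(x.size, 0.5)[1:]
    p = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    period = 1.0 / f[np.argmax(p)]

    # Resample spectra: smooth the periodogram, then multiply by Exp(1)
    # draws to generate surrogate spectra and re-estimate the period.
    smooth = np.convolve(p, np.ones(5) / 5.0, mode="same")
    boot = [1.0 / f[np.argmax(smooth * rng.exponential(1.0, smooth.size))]
            for _ in range(500)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"period ~ {period:.2f} h, 95% interval ({lo:.2f}, {hi:.2f})")
    ```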

  2. Identifying X-consumers using causal recipes: "whales" and "jumbo shrimps" casino gamblers.

    PubMed

    Woodside, Arch G; Zhang, Mann

    2012-03-01

    X-consumers are the extremely frequent (top 2-3%) users who typically consume 25% of a product category. This article shows how to use fuzzy-set qualitative comparative analysis (QCA) to provide "causal recipes" sufficient for profiling X-consumers accurately. The study extends Dik Twedt's "heavy-half" concept of product users for building theory and strategies to nurture or control X-behavior. The study here applies QCA to offer configurations that are sufficient in identifying "whales" and "jumbo shrimps" among X-casino gamblers. The findings support the principle that not all X-consumers are alike. The theory and method are applicable for identifying the degree of consistency and coverage of alternative X-consumer configurations among users of all product-service categories and brands.
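    For reference, the two headline fsQCA measures follow Ragin's standard definitions: for fuzzy membership scores $x_i$ in a causal recipe $X$ and $y_i$ in the outcome $Y$,

    $$\mathrm{consistency}(X \le Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i x_i}, \qquad \mathrm{coverage}(X \le Y) = \frac{\sum_i \min(x_i, y_i)}{\sum_i y_i},$$

    so a recipe is (near-)sufficient when consistency is high, and coverage measures how much of the outcome it accounts for.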

  3. Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

    NASA Technical Reports Server (NTRS)

    Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

    2005-01-01

    Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.

  4. Electron tomography reveals the fibril structure and lipid interactions in amyloid deposits

    PubMed Central

    Kollmer, Marius; Meinhardt, Katrin; Haupt, Christian; Liberta, Falk; Wulff, Melanie; Linder, Julia; Handl, Lisa; Heinrich, Liesa; Loos, Cornelia; Schmidt, Matthias; Syrovets, Tatiana; Simmet, Thomas; Westermark, Per; Westermark, Gunilla T.; Horn, Uwe; Schmidt, Volker; Walther, Paul; Fändrich, Marcus

    2016-01-01

    Electron tomography is an increasingly powerful method to study the detailed architecture of macromolecular complexes or cellular structures. Applying it to amyloid deposits formed in a cell culture model of systemic amyloid A amyloidosis, we could determine the structural morphology of the fibrils directly in the deposit. The deposited fibrils are arranged in different networks, and depending on the relative fibril orientation, we can distinguish between fibril meshworks, fibril bundles, and amyloid stars. These networks are frequently infiltrated by vesicular lipid inclusions that may originate from the death of the amyloid-forming cells. Our data support the role of nonfibril components for constructing fibril deposits and provide structural views of different types of lipid–fibril interactions. PMID:27140609

  5. Balinese women in a changing society.

    PubMed

    Suryani, Luh Ketut

    2004-01-01

    Balinese women face the dilemma of maintaining their vital role amid a rapidly changing society. In Bali, the primary female role is one of fostering balance and harmony within families. The Balinese people view women not from the vantage of career success but rather from the vantage of whether they can produce good quality children, and can work as part of a family team. Balinese men and women work together as partners. Indeed, men are not enemies; the genders help and need each other. Values underlying emancipation for women clash with traditional values, leading to frequent misunderstandings. Emancipation advocates neglect those elements necessary for complementing Balinese values. Applying educational and preventative methods, as well as therapeutic innovations to such problems, is helpful at all levels of society.

  6. An ansatz for solving nonlinear partial differential equations in mathematical physics.

    PubMed

    Akbar, M Ali; Ali, Norhashidah Hj Mohd

    2016-01-01

    In this article, we introduce an ansatz involving exact traveling wave solutions to nonlinear partial differential equations. To obtain wave solutions using a direct method, the choice of an appropriate ansatz is of great importance. We apply this ansatz to examine new and further general traveling wave solutions to the (1+1)-dimensional modified Benjamin-Bona-Mahony equation. Abundant traveling wave solutions are derived, including solitons, singular solitons, periodic solutions and general solitary wave solutions. The solutions emphasize the utility of this ansatz in providing distinct solutions to various tangible phenomena in nonlinear science and engineering. The ansatz could be a more efficient tool for dealing with higher-dimensional nonlinear evolution equations, which frequently arise in many real-world physical problems.
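    For orientation, one commonly studied form of the modified Benjamin-Bona-Mahony equation (the nonlinearity coefficient $a$ varies across papers) is

    $$u_t + u_x + a\,u^2 u_x - u_{xxt} = 0,$$

    and a traveling wave ansatz $u(x,t) = U(\xi)$ with $\xi = x - ct$ reduces it, after one integration with vanishing constant, to the ODE

    $$(1-c)\,U + \frac{a}{3}\,U^3 + c\,U'' = 0,$$

    whose bounded solutions yield the solitary and periodic waves mentioned above.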

  7. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    PubMed Central

    Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results. PMID:28467468

  8. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    PubMed

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
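    As a minimal sketch of model-based SBT clustering (not the authors' workflow; the two-population synthetic CPT features and the BIC-based model choice are illustrative assumptions), in Python with scikit-learn:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical CPT features: log cone resistance and friction ratio.
    rng = np.random.default_rng(0)
    qc = np.concatenate([rng.normal(1.2, 0.10, 500), rng.normal(0.4, 0.15, 300)])
    fr = np.concatenate([rng.normal(0.5, 0.20, 500), rng.normal(2.0, 0.40, 300)])
    X = np.column_stack([qc, fr])

    # Model-based clustering: fit candidate numbers of soil classes and
    # keep the model minimising BIC, then label every CPT record.
    models = [GaussianMixture(k, covariance_type="full", random_state=0).fit(X)
              for k in range(1, 7)]
    best = min(models, key=lambda m: m.bic(X))
    print(best.n_components, np.bincount(best.predict(X)))
    ```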

  9. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function, a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
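    A compact sketch of IPTW estimation of an MSM coefficient (not the study's analysis; the single simulated confounder and all effect sizes are invented for illustration), in Python with statsmodels:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 2000
    severity = rng.normal(0.0, 1.0, n)                        # confounder
    a = rng.binomial(1, 1 / (1 + np.exp(-0.8 * severity)))    # treatment
    y = 0.07 * a - 0.30 * severity + rng.normal(0.0, 0.2, n)  # outcome

    # Stabilized inverse-probability-of-treatment weights from a
    # logistic propensity model.
    ps = sm.Logit(a, sm.add_constant(severity)).fit(disp=0).predict()
    sw = np.where(a == 1, a.mean() / ps, (1 - a.mean()) / (1 - ps))

    # Weighted regression of outcome on treatment alone fits the MSM;
    # the naive unweighted fit would be confounded by severity.
    msm = sm.WLS(y, sm.add_constant(a), weights=sw).fit()
    print(msm.params)  # slope should recover ~0.07
    ```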

  10. Classification of buildings mold threat using electronic nose

    NASA Astrophysics Data System (ADS)

    Łagód, Grzegorz; Suchorab, Zbigniew; Guz, Łukasz; Sobczuk, Henryk

    2017-07-01

    Mold is considered one of the most important features of Sick Building Syndrome and is an important problem in the current building industry. In many cases it is caused by rising moisture on the surfaces of building envelopes and by excessive humidity of indoor air. In historical buildings it is mostly caused by outdated construction techniques, among them the absence of horizontal insulation against moisture and the hygroscopic materials used for construction. Recent buildings also face a mold risk, in many cases caused by hermetization that leads to improper performance of gravitational ventilation systems and so creates suitable conditions for mold development. Based on our research, we propose a method for classifying the mold threat in buildings using an electronic nose built around an array of MOS (metal oxide semiconductor) gas sensors. Such devices are frequently applied for air quality assessment in environmental engineering. The presented results show the interpretation of e-nose readouts of indoor air sampled in rooms threatened by mold development, compared with clean reference rooms and synthetic air. The multivariate data obtained were processed, visualized and classified using PCA (Principal Component Analysis) and ANN (Artificial Neural Network) methods. The investigation confirmed that an electronic nose, a gas sensor array supported by data processing, makes it possible to classify air samples taken from different rooms affected by mold.
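    As a toy sketch of the processing chain described (hypothetical 8-sensor responses; PCA followed by a small neural network stands in for the PCA/ANN combination), in Python with scikit-learn:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # Rows = air samples: 40 reference rooms (0), 40 mould-affected (1).
    X = np.vstack([rng.normal(0.2, 0.05, (40, 8)),
                   rng.normal(0.5, 0.08, (40, 8))])
    y = np.array([0] * 40 + [1] * 40)

    clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                        MLPClassifier(hidden_layer_sizes=(10,),
                                      max_iter=2000, random_state=0))
    clf.fit(X, y)
    print(clf.score(X, y))  # classification of mould threat classes
    ```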

  11. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    PubMed Central

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China’s steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities’ abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266

  12. Towards European urinalysis guidelines. Introduction of a project under European Confederation of Laboratory Medicine.

    PubMed

    Kouri, T T; Gant, V A; Fogazzi, G B; Hofmann, W; Hallander, H O; Guder, W G

    2000-07-01

    Improved standardized performance is needed because urinalysis continues to be one of the most frequently requested laboratory tests. Since 1997, the European Confederation of Laboratory Medicine (ECLM) has been supporting an interdisciplinary project aiming to produce European urinalysis guidelines. More than seventy clinical chemists, microbiologists and ward-based clinicians, as well as representatives of manufacturers are taking part. These guidelines aim to improve the quality and consistency of chemical urinalysis, particle counting and bacterial culture by suggesting optimal investigative processes that could be applied in Europe. The approach is based on medical needs for urinalysis. The importance of the pre-analytical stage for total quality is stressed by detailed illustrative advice for specimen collection. Attention is also given to emerging automated technology. For cost containment reasons, both optimum (ideal) procedures and minimum analytical approaches are suggested. Since urinalysis mostly lacks genuine reference methods (primary reference measurement procedures; Level 4), a novel classification of the methods is proposed: comparison measurement procedures (Level 3), quantitative routine procedures (Level 2), and ordinal scale examinations (Level 1). Stepwise strategies are suggested to save costs, applying different rules for general and specific patient populations. New analytical quality specifications have been created. After a consultation period, the final written text will be published in full as a separate document.

  13. Evaluating Pillar Industry's Transformation Capability: A Case Study of Two Chinese Steel-Based Cities.

    PubMed

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns.
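    For the AHP step, the standard weight derivation takes the principal eigenvector of a pairwise comparison matrix and checks Saaty's consistency ratio; a minimal sketch with a hypothetical 3-indicator matrix:

    ```python
    import numpy as np

    # Hypothetical Saaty-scale pairwise comparisons of three indicators;
    # a real study would elicit these from the expert panel.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    i = np.argmax(np.real(vals))
    w = np.real(vecs[:, i])
    w /= w.sum()                      # priority weights, sum to 1

    lam = np.real(vals[i])            # principal eigenvalue
    ci = (lam - len(A)) / (len(A) - 1)
    cr = ci / 0.58                    # RI = 0.58 for n = 3 (Saaty)
    print(w, cr)                      # acceptable consistency if cr < 0.1
    ```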

  14. Dependence of paracentric inversion rate on tract length.

    PubMed

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-04-03

    We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted.
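    The uniform-breakpoint baseline invoked here is easy to make explicit: if the breakpoints $U_1, U_2$ are independent and uniform on an arm of length $A$, the tract length $L = |U_1 - U_2|$ has the triangular density

    $$f(\ell) = \frac{2(A-\ell)}{A^2}, \qquad 0 \le \ell \le A,$$

    with mean $E[L] = A/3$; the paper's finding is an excess of short tracts even relative to this baseline.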

  15. BMAA extraction of cyanobacteria samples: which method to choose?

    PubMed

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition and according to the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g⁻¹ DW).

  16. Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication

    PubMed Central

    Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon

    2013-01-01

    Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires a large pre-computation table, so we combined the previous method with ours for practical purposes. This novel structure establishes a new feature that adjusts speed performance and table size finely, so we can customize the pre-computation table for our own purposes. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
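    For concreteness, the width-w NAF recoding on which such pre-computation schemes build can be sketched in a few lines (the standard algorithm; the example scalar is arbitrary):

    ```python
    def wnaf(k, w=4):
        """Width-w non-adjacent form of k, least significant digit first.
        Each digit is 0 or odd with |d| < 2**(w-1), and any w consecutive
        positions contain at most one nonzero digit."""
        digits = []
        while k > 0:
            if k & 1:
                d = k % (1 << w)
                if d >= 1 << (w - 1):
                    d -= 1 << w      # centre the residue around zero
                k -= d
            else:
                d = 0
            digits.append(d)
            k >>= 1
        return digits

    print(wnaf(2017))  # 2017 = 2**11 - 2**5 + 1 -> sparse signed digits
    ```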

  17. Dependence of paracentric inversion rate on tract length

    PubMed Central

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-01-01

    Background We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. Results We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. Conclusion The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted. PMID:17407601

  18. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we found little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because of (1) the better parallelization opportunities offered by CIM; (2) the lower memory demands compared with the genuine DLPNO-CCSD(T) method, which allow large calculations on more modest hardware; and (3) its applicability and efficiency in the frequently encountered cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  19. Air curtain development: an energy harvesting solution for hinged doors

    NASA Astrophysics Data System (ADS)

    Dayal, Vineed; Lee, Soobum

    2017-04-01

    The paper proposes a fully mechanical air curtain system that will be powered solely by harvested energy from common hinged doors. The average person uses this type of door several times a day with an almost unconscious amount of applied force and effort. This leads to a high potential of energy to be harvested in doorways that see high traffic and frequent operation [7]. Frequently opened door entry ways have always been regarded as a major element that causes significant energy loss and contaminated air conditions in buildings [6]. Private companies, particularly those with warehouses, have introduced commercial electrical air curtains to block the open entrances from invading cold air [11]. This project intends to introduce an original design of air curtain which operates fans only when the door opens and closes, by directly converting door motion to fan rotation without any electronic motor or power cable. The air stream created by this device will prevent the transfer of outside air and contaminants. Research will be conducted to determine the most efficient method of harvesting energy from door use, and the prototyping process will be conducted to meet the required performance of current air curtain models.

  20. Health-related quality of life questionnaires in lung cancer trials: a systematic literature review

    PubMed Central

    2013-01-01

    Background Lung cancer is one of the leading causes of cancer deaths. Treatment goals are the relief of symptoms and the increase of overall survival. With the rising number of treatment alternatives, the need for comparable assessments of health-related quality of life (HRQoL) parameters grows. The aim of this paper was to identify and describe measurement instruments applied in lung cancer patients under drug therapy. Methods We conducted a systematic literature review at the beginning of 2011 using the electronic database Pubmed. Results A total of 43 studies were included in the review. A total of 17 different measurement instruments were identified, including 5 generic, 5 cancer-specific, 4 lung cancer-specific and 3 symptom-specific questionnaires. In 29 studies at least 2 instruments were used. In most cases these were cancer and lung cancer-specific ones. The most frequently used instruments are the EORTC QLQ-C30 and its lung cancer modules LC13 or LC17. Only 5 studies combined (lung) cancer-specific questionnaires with generic instruments. Conclusions The EORTC-C30 and EORTC-LC13 are the most frequently used health-related quality of life measurement instruments in pharmacological lung cancer trials. PMID:23680096

  1. Using existing case-mix methods to fund trauma cases.

    PubMed

    Monakova, Julia; Blais, Irene; Botz, Charles; Chechulin, Yuriy; Picciano, Gino; Basinski, Antoni

    2010-01-01

    Policymakers frequently face the need to increase funding in isolated and frequently heterogeneous (clinically and in terms of resource consumption) patient subpopulations. This article presents a methodologic solution for testing the appropriateness of using existing grouping and weighting methodologies for funding subsets of patients in the scenario where a case-mix approach is preferable to a flat-rate based payment system. Using as an example the subpopulation of trauma cases of Ontario lead trauma hospitals, the statistical techniques of linear and nonlinear regression models, regression trees, and spline models were applied to examine the fit of the existing case-mix groups and reference weights for the trauma cases. The analyses demonstrated that for funding Ontario trauma cases, the existing case-mix systems can form the basis for rational and equitable hospital funding, decreasing the need to develop a different grouper for this subset of patients. This study confirmed that Injury Severity Score is a poor predictor of costs for trauma patients. Although our analysis used the Canadian case-mix classification system and cost weights, the demonstrated concept of using existing case-mix systems to develop funding rates for specific subsets of patient populations may be applicable internationally.

  2. Numerical chromosomal aberrations in Hodgkin's disease detected by in situ hybridisation on routine paraffin sections.

    PubMed Central

    Pringle, J H; Shaw, J A; Gillies, A; Lauder, I

    1997-01-01

    AIMS: To visualise directly numerical chromosomal aberrations and polyploidy in both Hodgkin and Reed Sternberg (HRS) cells and background cells from cases of Hodgkin's disease using in situ hybridisation. METHODS: Non-isotopic DNA in situ hybridisation was applied to interphase cell nuclei of Hodgkin's disease within routine paraffin embedded tissue sections. Two α-satellite DNA probes, specific for chromosomes 3 and 12, were used to evaluate the feasibility of this approach. Double labelling with immunocytochemical detection of the CD30 antigen was used to identify HRS cells. Cytogenetically normal diploid and triploid placental tissue served as controls. RESULTS: The eight cases of Hodgkin's disease investigated displayed frequent polysomy, while the majority of background cells showed disomy signals. CONCLUSIONS: Numerical chromosomal aberrations were detected in HRS cells from eight cases of Hodgkin's disease by in situ hybridisation. These data show that in Hodgkin's disease HRS cells frequently display polyploidy compared with background cells and are, therefore, probably the only neoplastic component in this disease. Correlations between polysomy and tumour type or grade could not be made from these data owing to the limited number of cases examined and to problems with interpreting data from truncated nuclei. PMID:9306933

  3. Mapping the literature of radiation therapy

    PubMed Central

    Delwiche, Frances A.

    2013-01-01

    Objective: This study characterizes the literature of the radiation therapy profession, identifies the journals most frequently cited by authors writing in this discipline, and determines the level of coverage of these journals by major bibliographic indexes. Method: Cited references from three discipline-specific source journals were analyzed according to the Mapping the Literature of Allied Health Project Protocol of the Nursing and Allied Health Resources Section of the Medical Library Association. Bradford's Law of Scattering was applied to all journal references to identify the most frequently cited journal titles. Results: Journal references constituted 77.8% of the total, with books, government documents, Internet sites, and miscellaneous sources making up the remainder. Although a total of 908 journal titles were cited overall, approximately one-third of the journal citations came from just 11 journals. MEDLINE and Scopus provided the most comprehensive indexing of the journal titles in Zones 1 and 2. The source journals were indexed only by CINAHL and Scopus. Conclusion: The knowledge base of radiation therapy draws heavily from the fields of oncology, radiology, medical physics, and nursing. Discipline-specific publications are not currently well covered by major indexing services, and those wishing to conduct comprehensive literature searches should search multiple resources. PMID:23646027
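    The Bradford partition used in such mappings divides the ranked journal list into zones with equal citation counts; a small sketch with invented counts:

    ```python
    import numpy as np

    # Hypothetical citation counts per journal, ranked in descending order.
    counts = np.array(sorted([310, 150, 90, 60, 40] + [20] * 10
                             + [5] * 60 + [1] * 200, reverse=True))
    cum = np.cumsum(counts)
    third = cum[-1] / 3.0

    z1 = int(np.searchsorted(cum, third)) + 1         # core journals
    z2 = int(np.searchsorted(cum, 2 * third)) + 1 - z1
    z3 = counts.size - z1 - z2
    print(z1, z2, z3)  # Bradford's law predicts roughly 1 : n : n**2 titles
    ```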

  4. In silico pharmacology for drug discovery: applications to targets and beyond

    PubMed Central

    Ekins, S; Mestres, J; Testa, B

    2007-01-01

    Computational (in silico) methods have been developed and widely applied to pharmacology hypothesis development and testing. These in silico methods include databases, quantitative structure-activity relationships, similarity searching, pharmacophores, homology models and other molecular modeling, machine learning, data mining, network analysis tools and data analysis tools that use a computer. Such methods have seen frequent use in the discovery and optimization of novel molecules with affinity to a target, the clarification of absorption, distribution, metabolism, excretion and toxicity properties as well as physicochemical characterization. The first part of this review discussed the methods that have been used for virtual ligand and target-based screening and profiling to predict biological activity. The aim of this second part of the review is to illustrate some of the varied applications of in silico methods for pharmacology in terms of the targets addressed. We will also discuss some of the advantages and disadvantages of in silico methods with respect to in vitro and in vivo methods for pharmacology research. Our conclusion is that the in silico pharmacology paradigm is ongoing and presents a rich array of opportunities that will assist in expediting the discovery of new targets, and ultimately lead to compounds with predicted biological activity for these novel targets. PMID:17549046
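    One of the simplest of these in silico tools, similarity searching with the Tanimoto coefficient over binary fingerprints, fits in a few lines (the bit positions below are purely illustrative):

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto coefficient of two fingerprints given as sets of
        'on' bit positions: |A & B| / |A | B|."""
        inter = len(fp_a & fp_b)
        return inter / (len(fp_a) + len(fp_b) - inter)

    # Hypothetical hashed substructure fingerprints.
    query = {3, 17, 42, 77, 90, 120}
    db = {"mol1": {3, 17, 42, 90, 200}, "mol2": {5, 8, 99, 120}}
    ranked = sorted(db, key=lambda m: tanimoto(query, db[m]), reverse=True)
    print([(m, round(tanimoto(query, db[m]), 2)) for m in ranked])
    ```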

  5. Minimum stiffness criteria for ring frame stiffeners of space launch vehicles

    NASA Astrophysics Data System (ADS)

    Friedrich, Linus; Schröder, Kai-Uwe

    2016-12-01

    Frame stringer-stiffened shell structures show high load carrying capacity in conjunction with low structural mass and are for this reason frequently used as primary structures of aerospace applications. Due to the great number of design variables, deriving suitable stiffening configurations is a demanding task and needs to be realized using efficient analysis methods. The structural design of ring frame stringer-stiffened shells can be subdivided into two steps: one, the design of a shell section between two ring frames; two, the structural design of the ring frames such that a general instability mode is avoided. For sizing stringer-stiffened shell sections, several methods were recently developed, but existing ring frame sizing methods are mainly based on empirical relations or on smeared models. These methods do not necessarily lead to reliable designs, and in some cases the lightweight design potential of stiffened shell structures can thus not be exploited. In this paper, the explicit physical behaviour of ring frame stiffeners of space launch vehicles at the onset of panel instability is described using mechanical substitute models. Ring frame stiffeners of a stiffened shell structure are sized applying existing methods and the method suggested in this paper. To verify the suggested method and to demonstrate its potential, geometrically non-linear finite element analyses are performed using detailed finite element models.

  6. Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method

    NASA Astrophysics Data System (ADS)

    Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.

    2017-11-01

    Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve the colour calibration upon the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, which is frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + z_s) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of the Sloan Digital Sky Survey standard stars. We achieve our goal by finding a correspondence matrix between the two point-sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point-sets, despite the existence of noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
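    A much reduced sketch of the point-set alignment idea (soft Gaussian correspondences driving a translation update; this is a stand-in for, not a reimplementation of, the full Softassign Procrustes algorithm, and all locus points are simulated):

    ```python
    import numpy as np

    def soft_translation(src, ref, sigma=0.1, iters=50):
        """Estimate the colour-space shift aligning src onto ref using
        soft (Gaussian-weighted) correspondences between the point sets;
        src may be incomplete relative to ref."""
        t = np.zeros(src.shape[1])
        for _ in range(iters):
            d2 = np.sum(((src + t)[:, None] - ref[None]) ** 2, axis=-1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            w /= w.sum(axis=1, keepdims=True) + 1e-12   # soft matches
            t = (w @ ref - src).mean(axis=0)            # translation update
        return t

    rng = np.random.default_rng(4)
    ref = rng.normal(0.0, 1.0, (200, 2))   # calibrated locus (colour pairs)
    src = ref[:150] + np.array([0.08, -0.05]) + rng.normal(0, 0.02, (150, 2))
    print(soft_translation(src, ref))      # ~ (-0.08, +0.05)
    ```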

  7. Machine cost analysis using the traditional machine-rate method and ChargeOut!

    Treesearch

    E. M. (Ted) Bilek

    2009-01-01

    Forestry operations require ever more use of expensive capital equipment. Mechanization is frequently necessary to perform cost-effective and safe operations. Increased capital should mean more sophisticated capital costing methodologies. However the machine rate method, which is the costing methodology most frequently used, dates back to 1942. ChargeOut!, a recently...
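    For readers unfamiliar with the machine rate method, it reduces to summing ownership and operating costs per scheduled hour; a deliberately simplified sketch with invented numbers (straight-line depreciation, average-investment interest):

    ```python
    purchase, salvage, life_h = 250_000.0, 50_000.0, 10_000.0
    interest_rate, hours_per_year = 0.08, 1_600.0

    depreciation = (purchase - salvage) / life_h          # $/h ownership
    interest = interest_rate * (purchase + salvage) / 2 / hours_per_year
    fuel_lube, repairs, labour = 35.0, 18.0, 40.0         # $/h operating

    machine_rate = depreciation + interest + fuel_lube + repairs + labour
    print(f"machine rate ~ ${machine_rate:.2f}/h")
    ```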

  8. Applying Regression Analysis to Problems in Institutional Research.

    ERIC Educational Resources Information Center

    Bohannon, Tom R.

    1988-01-01

    Regression analysis is one of the most frequently used statistical techniques in institutional research. Principles of least squares, model building, residual analysis, influence statistics, and multi-collinearity are described and illustrated. (Author/MSE)
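    The diagnostics named here are one-liners in statsmodels; a small sketch on simulated collinear data (all values invented):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(5)
    n = 200
    x1 = rng.normal(size=n)
    x2 = 0.9 * x1 + rng.normal(scale=0.3, size=n)   # collinear with x1
    X = sm.add_constant(np.column_stack([x1, x2]))
    y = 1 + 2 * x1 - x2 + rng.normal(size=n)

    fit = sm.OLS(y, X).fit()
    vif = [variance_inflation_factor(X, i) for i in (1, 2)]  # multi-collinearity
    cooks = fit.get_influence().cooks_distance[0]            # influence statistics
    resid = fit.resid                                        # residual analysis
    print(fit.params, vif, cooks.max())
    ```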

  9. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
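    The core of the mapping admits a compact sketch (a 1-D simplification: the convection increment uses the instantaneous speed |u| rather than a single mean velocity, which is what removes the need for Taylor's hypothesis; the velocity record is simulated):

    ```python
    import numpy as np

    def time_to_space(u, dt):
        """Convert a time record u(t) into a uniformly resampled spatial
        record using the instantaneous convection speed |u|."""
        s = np.concatenate([[0.0], np.cumsum(np.abs(u)) * dt])[:-1]
        s_uni = np.arange(0.0, s[-1], s[-1] / u.size)
        return s_uni, np.interp(s_uni, s, u)

    rng = np.random.default_rng(6)
    dt = 1e-3
    u = 5.0 + rng.normal(0.0, 1.5, 20000)            # high-intensity record
    s, us = time_to_space(u, dt)

    ds = s[1] - s[0]
    k = 2 * np.pi * np.fft.rfftfreq(us.size, ds)     # wavenumber axis
    E = np.abs(np.fft.rfft(us - us.mean())) ** 2     # energy spectrum (unnormalised)
    print(k[1], E.sum())
    ```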

  10. Mobile Applications for Patient-centered Care Coordination: A Review of Human Factors Methods Applied to their Design, Development, and Evaluation

    PubMed Central

    Westbrook, J. I.

    2015-01-01

    Summary Objectives To examine if human factors methods were applied in the design, development, and evaluation of mobile applications developed to facilitate aspects of patient-centered care coordination. Methods We searched MEDLINE and EMBASE (2013-2014) for studies describing the design or the evaluation of a mobile health application that aimed to support patients’ active involvement in the coordination of their care. Results 34 papers met the inclusion criteria. Applications ranged from tools that supported self-management of specific conditions (e.g. asthma) to tools that provided coaching or education. Twelve of the 15 papers describing the design or development of an app reported the use of a human factors approach. The most frequently used methods were interviews and surveys, which often included an exploration of participants’ current use of information technology. Sixteen papers described the evaluation of a patient application in practice. All of them adopted a human factors approach, typically an examination of the use of app features and/or surveys or interviews which enquired about patients’ views of the effects of using the app on their behaviors (e.g. medication adherence), knowledge, and relationships with healthcare providers. No study in our review assessed the impact of mobile applications on health outcomes. Conclusion The potential of mobile health applications to assist patients to more actively engage in the management of their care has resulted in a large number of applications being developed. Our review showed that human factors approaches are nearly always adopted to some extent in the design, development, and evaluation of mobile applications. PMID:26293851

  11. Direct comparison of repeated soil inventory and carbon flux budget to detect soil carbon stock changes in grassland

    NASA Astrophysics Data System (ADS)

    Ammann, C.; Leifeld, J.; Neftel, A.; Fuhrer, J.

    2012-04-01

    Experimental assessment of soil carbon (C) stock changes over time is typically based on one of two methods: (i) repeated soil inventory, or (ii) determination of the ecosystem C budget or net biome productivity (NBP) by continuous measurement of CO2 exchange in combination with quantification of other C imports and exports. However, hardly any published studies have directly compared the results of both methods. Here, we applied both methods in parallel to determine C stock changes of two temperate grassland fields previously converted from long-term cropland. The grasslands differed in management intensity, with either intensive management (high fertilization, frequent cutting) or extensive management (no fertilization, less frequent cutting). Soil organic C stocks (0-45 cm depth) were quantified at the beginning (2001) and the end (2006) of a 5-year observational period using the equivalent soil mass approach. For the same period and in both fields, NBP was quantified from net CO2 fluxes monitored using eddy covariance systems, together with the measured C import by organic fertilizer and C export by harvest. Both NBP and the repeated soil inventories revealed a consistent and significant difference between management systems, of 170 ± 48 and 253 ± 182 g C m⁻² a⁻¹, respectively. For both fields, the inventory method showed a tendency towards higher C loss/smaller C gain than NBP. In the extensive field, a significant C loss was observed by the inventory but not by the NBP approach. Thus both flux measurements and repeated soil sampling seem adequate and equally suited for detecting relative management effects. However, the suitability for tracking absolute changes in SOC could not be proven for either method. Overall, our findings stress the need for more direct comparisons to evaluate whether the observed difference in the outcome of the two approaches reflects a general methodological bias, which would have important implications for regional terrestrial C budgets.
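    In compact form (sign conventions vary between groups; this follows the common micrometeorological convention in which negative NEE denotes uptake), the two estimates being compared are

    $$\mathrm{NBP} = -\mathrm{NEE} + C_{\mathrm{import}} - C_{\mathrm{export}}, \qquad \Delta \mathrm{SOC} = \frac{\mathrm{SOC}_{2006} - \mathrm{SOC}_{2001}}{\Delta t},$$

    where $C_{\mathrm{import}}$ is the organic fertilizer carbon and $C_{\mathrm{export}}$ the harvested carbon; the two methods agree when $\mathrm{NBP} \approx \Delta\mathrm{SOC}$.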

  12. Improved model quality assessment using ProQ2.

    PubMed

    Ray, Arjun; Lindahl, Erik; Wallner, Björn

    2012-09-10

    Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied on any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and the locally predicted score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org.

  13. Interferometric imaging of crustal structure from wide-angle multicomponent OBS-airgun data

    NASA Astrophysics Data System (ADS)

    Shiraishi, K.; Fujie, G.; Sato, T.; Abe, S.; Asakawa, E.; Kodaira, S.

    2015-12-01

    In wide-angle seismic surveys with ocean bottom seismographs (OBSs) and airguns, surface-related multiple reflections and upgoing P-to-S conversions are frequently observed. We applied two interferometric imaging methods to multicomponent OBS data in order to make fuller use of these seismic signals for subsurface imaging. First, seismic interferometry (SI) is applied to the vertical component to obtain a reflection profile from the multiple reflections. By correlating seismic traces on common-receiver records, pseudo seismic data are generated with virtual sources and receivers located at all original shot positions. We adopt deconvolution SI because source and receiver spectra can be canceled by spectral division. Consequently, gapless reflection images are obtained from just below the seafloor down to deeper levels. Second, receiver function (RF) imaging is applied to the multicomponent OBS data to image the P-to-S conversion boundaries. Though RFs are commonly applied to teleseismic data, our purpose is to extract upgoing PS converted waves from the wide-angle OBS data. The RF traces are synthesized by deconvolution of the radial component with the vertical component at the same OBS location for each shot. The final section obtained by stacking RF traces shows the PS conversion boundaries beneath the OBSs. The Vp/Vs ratio can then be estimated by comparing the one-way traveltime delay with the two-way traveltime of P-wave reflections. We applied these methods to two field data sets: (a) a 175 km survey in the Nankai trough subduction zone using 71 OBSs at intervals of 1 km to 10 km and 878 shots at 200 m intervals, and (b) a 237 km survey in the northwest Pacific ocean, with almost flat layers before subduction, using 25 OBSs at 6 km intervals and 1188 shots at 200 m intervals. In our study, SI imaging with multiple reflections proved highly applicable to OBS data even in a complex geological setting; the PS conversion boundaries were well imaged by RF imaging, and the Vp/Vs distribution in the sediment could be estimated in the case of simple structure.
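    The deconvolution step at the heart of this kind of interferometry can be sketched in a few lines (a minimal water-level-stabilised spectral division on synthetic traces, not the full multichannel processing):

    ```python
    import numpy as np

    def deconv_interferometry(trace_a, trace_b, eps=1e-2):
        """Spectral division of two traces sharing the same source: the
        common source wavelet cancels, leaving the relative impulse
        response between the two recordings."""
        A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
        wl = eps * np.mean(np.abs(B) ** 2)      # water-level stabilisation
        return np.fft.irfft(A * np.conj(B) / (np.abs(B) ** 2 + wl),
                            n=trace_a.size)

    rng = np.random.default_rng(7)
    src = rng.normal(0.0, 1.0, 1024)             # shared (unknown) wavelet
    g = np.zeros(1024); g[40], g[90] = 1.0, 0.5  # true relative response
    rec_a = np.fft.irfft(np.fft.rfft(src) * np.fft.rfft(g), n=1024)
    est = deconv_interferometry(rec_a, src)
    print(np.argsort(est)[-2:])                  # peaks near lags 40 and 90
    ```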

  14. Targeted next generation sequencing of mucosal melanomas identifies frequent NF1 and RAS mutations.

    PubMed

    Cosgarea, Ioana; Ugurel, Selma; Sucker, Antje; Livingstone, Elisabeth; Zimmer, Lisa; Ziemer, Mirjana; Utikal, Jochen; Mohr, Peter; Pfeiffer, Christiane; Pföhler, Claudia; Hillen, Uwe; Horn, Susanne; Schadendorf, Dirk; Griewank, Klaus G; Roesch, Alexander

    2017-06-20

    Mucosal melanoma represents ~1% of all melanomas, frequently having a poor prognosis due to diagnosis at a late stage of disease. Mucosal melanoma differs from cutaneous melanoma not only in terms of poorer clinical outcome but also on the molecular level, having, for example, fewer BRAF and more frequent KIT mutations than cutaneous melanomas. For the majority of mucosal melanomas, oncogenic driver mutations remain unknown. In our study, 75 tumor tissues from patients diagnosed with mucosal melanoma were analyzed, applying a targeted next generation sequencing panel covering 29 genes known to be recurrently mutated in melanoma. NF1 and RAS mutations were identified as the most frequently mutated genes, occurring in 18.3% and 16.9% of samples, respectively. Mutations in BRAF were identified in 8.4% and KIT in 7.0% of tumor samples. Our study identifies NF1 as the most frequently occurring driver mutation in mucosal melanoma. RAS alterations, consisting of NRAS and KRAS mutations, were the second most frequent mutation type. BRAF and KIT mutations were rare, with frequencies below 10% each. Our data indicate that RAS/NF1 alterations are frequent in mucosal melanomas, implying a significant pathogenetic role for MAPK and potentially PI3K pathway activation in these tumors.

  15. Targeted next generation sequencing of mucosal melanomas identifies frequent NF1 and RAS mutations

    PubMed Central

    Cosgarea, Ioana; Ugurel, Selma; Sucker, Antje; Livingstone, Elisabeth; Zimmer, Lisa; Ziemer, Mirjana; Utikal, Jochen; Mohr, Peter; Pfeiffer, Christiane; Pföhler, Claudia; Hillen, Uwe; Horn, Susanne; Schadendorf, Dirk

    2017-01-01

    Purpose Mucosal melanoma represents ~1% of all melanomas, frequently having a poor prognosis due to diagnosis at a late stage of disease. Mucosal melanoma differs from cutaneous melanoma not only in terms of poorer clinical outcome but also on the molecular level, having, for example, fewer BRAF and more frequent KIT mutations than cutaneous melanomas. For the majority of mucosal melanomas, oncogenic driver mutations remain unknown. Experimental Design and Results In our study, 75 tumor tissues from patients diagnosed with mucosal melanoma were analyzed, applying a targeted next generation sequencing panel covering 29 genes known to be recurrently mutated in melanoma. NF1 and RAS mutations were identified as the most frequently mutated genes, occurring in 18.3% and 16.9% of samples, respectively. Mutations in BRAF were identified in 8.4% and KIT in 7.0% of tumor samples. Conclusions Our study identifies NF1 as the most frequently occurring driver mutation in mucosal melanoma. RAS alterations, consisting of NRAS and KRAS mutations, were the second most frequent mutation type. BRAF and KIT mutations were rare, with frequencies below 10% each. Our data indicate that RAS/NF1 alterations are frequent in mucosal melanomas, implying a significant pathogenetic role for MAPK and potentially PI3K pathway activation in these tumors. PMID:28380455

  16. Rating Curve Estimation from Local Levels and Upstream Discharges

    NASA Astrophysics Data System (ADS)

    Franchini, M.; Mascellani, G.

    2003-04-01

    Current technology allows for low-cost and easy level measurements, while discharge measurements are still difficult and expensive. They are therefore rarely performed, and usually not in flood conditions, because of safety concerns and the difficulty of mobilizing a measurement team in time. As a consequence, long series of levels are frequently available without the corresponding discharge values. However, for the purposes of planning, management of water resources and real-time flood forecasting, discharge is needed, and it is therefore essential to convert local levels into discharge values by using the appropriate rating curve. Over the last decade, several methods have been proposed to relate local levels at a site of interest to data recorded at a river section located upstream where a rating curve is available. Some of these methods are based on a routing approach which uses the Muskingum model structure in different ways; others are based on entropy concepts. Lately, fuzzy logic has been applied more and more frequently to hydraulic and hydrologic problems, and this prompted the authors to use it for synthesizing the rating curves. A comparison between all these strategies is performed, highlighting the difficulties and advantages of each of them, with reference to a long reach of the Po River in Italy, where several hydrometers and the relevant rating curves are available, thus allowing for both parameterization and validation of the different strategies.
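
    For orientation, rating curves of the kind these strategies try to synthesize are often parameterized as a power law Q = a(h - h0)^b; a minimal least-squares fit to hypothetical level/discharge pairs (not data from the Po reach) might look like this:

      # Minimal sketch: fit a power-law rating curve to gauged (h, Q) pairs.
      import numpy as np
      from scipy.optimize import curve_fit

      def rating_curve(h, a, h0, b):
          # Q = a * (h - h0)^b, clipped so levels below h0 give zero flow.
          return a * np.clip(h - h0, 0.0, None) ** b

      h = np.array([0.8, 1.2, 1.8, 2.5, 3.3, 4.1])       # levels (m)
      Q = np.array([15.0, 42.0, 110.0, 240.0, 450.0, 700.0])  # discharge (m3/s)

      params, _ = curve_fit(rating_curve, h, Q, p0=(50.0, 0.5, 1.8))
      a, h0, b = params
      print(f"Q = {a:.1f} * (h - {h0:.2f})^{b:.2f}")
      print(rating_curve(np.array([2.0]), *params))      # discharge at h = 2 m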

  17. An innovative system for 3D clinical photography in the resource-limited settings

    PubMed Central

    2014-01-01

    Background Kaposi’s sarcoma (KS) is the most frequently occurring cancer in Mozambique among men and the second most frequently occurring cancer among women. Effective therapeutic treatments for KS are poorly understood in this area. There is an unmet need to develop a simple but accurate tool for improved monitoring and diagnosis in resource-limited settings. Standardized clinical photographs have been considered an essential part of the evaluation. Methods When a therapeutic response is achieved, nodular KS often exhibits a reduction in thickness without a change in the base area of the lesion. To evaluate the vertical dimension along with other characteristics of a KS lesion, we created an innovative imaging system consisting of a consumer light-field camera attached to a miniature “photography studio” adaptor. The image file can be further processed by computational methods for quantification. Results With this novel imaging system, each high-quality 3D image was consistently obtained with a single camera shot at bedside by minimally trained personnel. After computational processing, all-focused photos and measurable 3D parameters were obtained. More than 80 KS image sets were processed in a semi-automated fashion. Conclusions In this proof-of-concept study, the feasibility of using a simple, low-cost and user-friendly system has been established for a future clinical study to monitor KS therapeutic response. This 3D imaging system can also be applied to obtain standardized clinical photographs for other diseases. PMID:24929434

  18. Reduced turning frequency and delayed poultry manure addition reduces N loss from sugarcane compost.

    PubMed

    Bryndum, S; Muschler, R; Nigussie, A; Magid, J; de Neergaard, A

    2017-07-01

    Composting is an effective method to recycle biodegradable waste as soil amendment in smallholder farming systems. Although all essential plant nutrients are found in compost, a substantial amount of nitrogen is lost during composting. This study therefore investigated the potential of reducing N losses by (i) delaying the addition of nitrogen-rich substrates (i.e. poultry manure), and (ii) reducing the turning frequency during composting. Furthermore, we tested the effect of compost application method on nitrogen mineralization. Sugarcane waste was composted for 54 days with addition of poultry manure at the beginning (i.e. early addition) or after 21 days of composting (delayed addition). The compost pile was then turned either every three or every nine days. Composts were subsequently applied to soil either (i) homogeneously mixed or (ii) stratified, and incubated for 28 days to test the effect of compost application on nitrogen mineralization. The results showed that delayed addition of poultry manure reduced total nitrogen loss by 33% and increased mineral nitrogen content by >200% compared with early addition. Similarly, less frequent turning reduced total N loss by 12% compared with frequent turning. Stratified placement of compost did not enhance N mineralization compared to homogeneous mixing. Our results suggest that simple modifications of the composting process (i.e. delayed addition and/or reduced turning frequency) could significantly reduce N losses and improve the plant-nutritional value of compost. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Multi-residue determination of 47 organic compounds in water, soil, sediment and fish - Turia River as case study.

    PubMed

    Carmona, Eric; Andreu, Vicente; Picó, Yolanda

    2017-11-30

    A sensitive and reliable method based on solid-liquid extraction (SLE) using McIlvaine-Na2EDTA buffer (pH 4.5)-methanol and solid-phase extraction (SPE) clean-up prior to ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS) was applied to determine 47 organic contaminants in fish, soil and sediments. The SPE procedure used to clean up the extracts was also used as the extraction method to determine these compounds in water. Recoveries ranged from 38 to 104% for all matrices, with RSDs < 30%. Limits of quantification for the target compounds were in the range of 10-50 ng/g for soil, 2-40 ng/g for sediment, 5-30 ng/g for fish and 0.3-26 ng/L for water. Furthermore, the proposed method was compared to QuEChERS (widely used for environmental matrices), which involves extraction with buffered acetonitrile (pH 5.5) and dispersive SPE clean-up. The results obtained (recoveries > 50% for 36 compounds versus 9, matrix effect < 20% for 31 compounds versus 21, and LOQs < 25 ng/g for 38 compounds versus 22) indicate that the proposed method is more efficient than QuEChERS. The method was applied to monitor these compounds along the Turia River. In river waters, paracetamol (175 ng/L), ibuprofen (153 ng/L) and bisphenol A (41 ng/L) were the compounds most frequently detected, while in sediments the most frequently detected were vildagliptin (7 ng/g) and metoprolol (31 ng/g), and in fish, bisphenol A (33 ng/g) and sulfamethoxazole (13 ng/g). Copyright © 2017 Elsevier B.V. All rights reserved.

  20. An equation-free probabilistic steady-state approximation: dynamic application to the stochastic simulation of biochemical reaction networks.

    PubMed

    Salis, Howard; Kaznessis, Yiannis N

    2005-12-01

    Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
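
    For reference, the exact SSA whose cost the approximation reduces can be written in a few lines; the toy network below (a fast reversible pair A <-> B and a slow conversion B -> C, with made-up rate constants) reproduces the kind of stiffness described above:

      # Minimal sketch: Gillespie direct method for a stiff toy network.
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.array([100, 0, 0])                 # counts of A, B, C
      stoich = np.array([[-1, 1, 0],            # A -> B   (fast)
                         [1, -1, 0],            # B -> A   (fast)
                         [0, -1, 1]])           # B -> C   (slow)
      k = np.array([10.0, 10.0, 0.01])          # hypothetical rate constants

      t, t_end = 0.0, 10.0
      while t < t_end:
          a = k * np.array([x[0], x[1], x[1]])  # reaction propensities
          a0 = a.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)        # time to the next reaction
          j = rng.choice(3, p=a / a0)           # which reaction fires
          x = x + stoich[j]
      print(t, x)   # nearly all events are fast A <-> B flips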

  1. The application of systems thinking concepts, methods, and tools to global health practices: An analysis of case studies.

    PubMed

    Wilkinson, Jessica; Goff, Morgan; Rusoja, Evan; Hanson, Carl; Swanson, Robert Chad

    2018-06-01

    This review of systems thinking (ST) case studies seeks to compile and analyse cases from the ST literature and provide practitioners with a reference for ST in health practice. Particular attention was given to (1) reviewing the frequency and use of key ST terms, methods, and tools in the context of health, and (2) extracting and analysing longitudinal themes across cases. A systematic search of databases was conducted, and a total of 36 case studies were identified. A combination of integrative and inductive qualitative approaches to analysis was used. Most cases identified took place in high-income countries and applied ST retrospectively. The most commonly used ST terms were agent/stakeholder/actor (n = 29), interdependent/interconnected (n = 28), emergence (n = 26), and adaptability/adaptation (n = 26). Common ST methods and tools were largely underutilized. Social network analysis was the most commonly used method (n = 4), and innovation or change management history was the most frequently used tool (n = 11). Four overarching themes were identified: the importance of the interdependent and interconnected nature of a health system, the characteristics of leaders in a complex adaptive system, the benefits of using ST, and the barriers to implementing ST. This review revealed that while much has been written about the potential benefits of applying ST to health, it has yet to completely transition from theory to practice. There is, however, evidence of the practical use of an ST lens as well as of specific methods and tools. With clear examples of ST applications, the global health community will be better equipped to understand and address key health challenges. © 2017 John Wiley & Sons, Ltd.

  2. Applying 2-D resistivity imaging and ground penetrating radar (GPR) methods to identify infiltration of water in the ground surface

    NASA Astrophysics Data System (ADS)

    Yusof, Azim Hilmy Mohamad; Azman, Muhamad Iqbal Mubarak Faharul; Ismail, Nur Azwin; Ismail, Noer El Hidayah

    2017-07-01

    Infiltration of water into the soil mostly happens in areas near the ocean or areas where rain occurs frequently. This paper describes the water infiltration process occurring vertically and horizontally in the subsurface layer. Infiltration acts as an indicator of the soil's ability to allow water movement into and through the soil profile. This research took place at Teluk Kumbar, Pulau Pinang, an area located near the sea; thus, the infiltration process occurs actively. The study area consists of unconsolidated marine clay, sand and gravel deposits. The methods used in this research were 2-D resistivity imaging, using a Wenner-Schlumberger array with 2.5 m minimum electrode spacing, and ground penetrating radar (GPR) with an antenna frequency of 250 MHz. 2-D resistivity imaging is used to investigate the subsurface layer of the soil; it can also be used to investigate water infiltration that happens horizontally. GPR is used to investigate the shallow subsurface layer and water infiltration from above. The inversion model of the 2-D resistivity imaging shows that the subsurface layer at distances of 0 m to 20 m is suspected to be a salt water intrusion zone, with resistivity values of 0 Ω.m to 1 Ω.m. In the radargram results from the GPR, the anomaly appears blurry and unclear, and the EM wave signal can only penetrate to 1.5 m depth. This feature indicates that the subsurface layer is saturated with salt water. The 2-D resistivity imaging and GPR methods thus complemented each other in identifying the infiltration of water in the ground surface.

  3. Identification of flame transfer functions in the presence of intrinsic thermoacoustic feedback and noise

    NASA Astrophysics Data System (ADS)

    Jaensch, Stefan; Merk, Malte; Emmert, Thomas; Polifke, Wolfgang

    2018-05-01

    The Large Eddy Simulation/System Identification (LES/SI) approach is a general and efficient numerical method for deducing a Flame Transfer Function (FTF) from the LES of turbulent reacting flow. The method may be summarised as follows: a simulated flame is forced with a broadband excitation signal. The resulting fluctuations of the reference velocity and of the global heat release rate are post-processed via SI techniques in order to estimate a low-order model of the flame dynamics. The FTF is readily deduced from the low-order model. The SI method most frequently applied in aero- and thermo-acoustics has been Wiener-Hopf Inversion (WHI). This method is known to yield biased estimates in situations with feedback, thus it was assumed that non-reflective boundary conditions are required to generate accurate results with the LES/SI approach. Recent research has shown that the FTF is part of the so-called Intrinsic ThermoAcoustic (ITA) feedback loop. Hence, identifying an FTF from a compressible LES is always a closed-loop problem, and consequently one should expect that the WHI would yield biased results. However, several studies proved that WHI results compare favourably with validation data. To resolve this apparent contradiction, a variety of identification methods are compared against each other, including models designed for closed-loop identification. In agreement with theory, we show that the estimate given by WHI does not converge to the actual FTF. Fortunately, the error made is small if excitation amplitudes can be set such that the signal-to-noise ratio is large, but not large enough to trigger nonlinear flame dynamics. Furthermore, we conclude that non-reflective boundary conditions are not essentially necessary to apply the LES/SI approach.
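
    A minimal sketch of the WHI idea, assuming a broadband input u and noisy response q from which a finite impulse response is estimated by solving the correlation (Wiener-Hopf) normal equations; the signals and filter length here are synthetic, not LES data:

      # Minimal sketch: FIR identification via the Wiener-Hopf equations.
      import numpy as np
      from scipy.linalg import solve_toeplitz

      rng = np.random.default_rng(2)
      n, L = 5000, 30
      u = rng.normal(size=n)                       # broadband excitation
      h_true = np.exp(-np.arange(L) / 5.0)         # "true" impulse response
      q = np.convolve(u, h_true)[:n] + 0.1 * rng.normal(size=n)  # noisy output

      # Auto- and cross-correlations, then solve the Toeplitz system R h = c.
      r = np.array([np.dot(u[:n - k], u[k:]) for k in range(L)]) / n
      c = np.array([np.dot(u[:n - k], q[k:]) for k in range(L)]) / n
      h_est = solve_toeplitz(r, c)

      ftf = np.fft.rfft(h_est)                     # FTF from the estimated FIR
      print(np.abs(h_est - h_true).max())          # small for high SNR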

  4. Constituents of Music and Visual-Art Related Pleasure – A Critical Integrative Literature Review

    PubMed Central

    Tiihonen, Marianne; Brattico, Elvira; Maksimainen, Johanna; Wikgren, Jan; Saarikallio, Suvi

    2017-01-01

    The present literature review investigated how pleasure induced by music and visual art has been conceptually understood in empirical research over the past 20 years. After an initial selection of abstracts from seven databases (keywords: pleasure, reward, enjoyment, and hedonic), twenty music and eleven visual-art papers were systematically compared. The following questions were addressed: (1) What is the role of the keyword in the research question? (2) Is pleasure considered a result of variation in the perceiver’s internal or external attributes? (3) What are the most commonly employed methods and main variables in empirical settings? Based on these questions, our critical integrative analysis aimed to identify which themes and processes emerged as key features for conceptualizing art-induced pleasure. The results demonstrated great variance in how pleasure has been approached: in the music studies, pleasure was often a clear object of investigation, whereas in the visual-art studies the term was often embedded in the context of an aesthetic experience, or otherwise used in a descriptive, indirect sense. Music studies often targeted different emotions, their intensity, or anhedonia. Biographical and background variables and personality traits of the perceiver were often measured. Next to behavioral methods, a common method was brain imaging, which often targeted the reward circuitry of the brain in response to music. Visual-art pleasure was also frequently addressed using brain imaging methods, but the research focused on sensory cortices rather than the reward circuit alone. Compared with music research, visual-art research more frequently investigated pleasure in relation to conscious, cognitive processing, where variations of stimulus features and changes of viewing mode were regarded as explanatory factors of the derived experience. Despite valence being frequently applied in both domains, we conclude that in empirical music research pleasure seems to be part of core affect and hedonic tone modulated by stable personality variables, whereas in visual-art research pleasure is a result of the so-called conceptual act, depending on the chosen strategy to approach art. We encourage an integration of music and visual art into a multi-modal framework to promote a more versatile understanding of pleasure in response to aesthetic artifacts. PMID:28775697

  5. Using Continuous Glucose Monitoring Data and Detrended Fluctuation Analysis to Determine Patient Condition

    PubMed Central

    Thomas, Felicity; Signal, Matthew; Chase, J. Geoffrey

    2015-01-01

    Patients admitted to critical care often experience dysglycemia and high levels of insulin resistance; various intensive insulin therapy protocols and methods have attempted to safely normalize blood glucose (BG) levels. Continuous glucose monitoring (CGM) devices allow glycemic dynamics to be captured much more frequently (every 2-5 minutes) than traditional measures of blood glucose and have begun to be used in critical care patients and neonates to help monitor dysglycemia. In an attempt to gain better insight relating biomedical signals to patient status, some researchers have turned toward advanced time series analysis methods. In particular, Detrended Fluctuation Analysis (DFA) has been the topic of many recent studies into glycemic dynamics. DFA investigates the "complexity" of a signal, i.e., how one point in time changes relative to its neighboring points, and has been applied to signals like the inter-beat interval of the human heartbeat to differentiate healthy and pathological conditions. Analyzing the glucose metabolic system with such signal processing tools as DFA has been enabled by the emergence of high-quality CGM devices. However, there are several inconsistencies within the published work applying DFA to CGM signals. Therefore, this article presents a review and a "how-to" tutorial of DFA, and in particular its application to CGM signals, to ensure that the methods used to determine complexity are applied correctly and that any relationship between complexity and patient outcome is robust. PMID:26134835
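
    A minimal sketch of the standard DFA algorithm the tutorial reviews (integrate the mean-removed signal, detrend it in windows, regress log fluctuation on log scale); the window sizes and the test signal are illustrative:

      # Minimal sketch: DFA scaling exponent of a 1-D signal.
      import numpy as np

      def dfa_exponent(x, scales):
          y = np.cumsum(x - np.mean(x))                 # integrated profile
          F = []
          for s in scales:
              f2 = []
              for i in range(len(y) // s):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # local fit
                  f2.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(f2)))            # fluctuation F(s)
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(3)
      white = rng.normal(size=4096)
      print(dfa_exponent(white, [16, 32, 64, 128, 256]))  # ~0.5 for white noise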

  6. Analyzing Large Gene Expression and Methylation Data Profiles Using StatBicRM: Statistical Biclustering-Based Rule Mining

    PubMed Central

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers from biological datasets using integrated approaches of statistical and binary inclusion-maximal biclustering techniques. At first, a novel statistical strategy is utilized to eliminate insignificant/low-significance/redundant genes in such a way that the significance level satisfies the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to determine how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed to verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data matrix. Finally, we have also included an integrated analysis of gene expression and methylation to determine the epigenetic effect (viz., effect of methylation) on gene expression level. PMID:25830807
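
    To make the itemset terminology concrete, a brute-force sketch of mining frequent closed itemsets from a tiny discretized 0/1 dataset follows; this is not the inclusion-maximal biclustering algorithm of StatBicRM and is practical only for very small examples:

      # Minimal sketch: frequent closed itemsets by exhaustive enumeration.
      from itertools import combinations

      transactions = [frozenset(t) for t in
                      [{"g1", "g2"}, {"g1", "g2", "g3"},
                       {"g1", "g3"}, {"g1", "g2"}]]
      min_support = 2
      items = sorted(set().union(*transactions))

      def support(itemset):
          return sum(itemset <= t for t in transactions)

      frequent = {}
      for r in range(1, len(items) + 1):
          for combo in combinations(items, r):
              s = support(frozenset(combo))
              if s >= min_support:
                  frequent[frozenset(combo)] = s

      # Closed itemset: no proper superset has the same support.
      closed = [(set(i), s) for i, s in frequent.items()
                if not any(i < j and s == sj for j, sj in frequent.items())]
      print(closed)   # [({'g1'}, 4), ({'g1', 'g2'}, 3), ({'g1', 'g3'}, 2)]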

  7. Testing a novel method for improving wayfinding by means of a P3b Virtual Reality Visual Paradigm in normal aging.

    PubMed

    de Tommaso, Marina; Ricci, Katia; Delussi, Marianna; Montemurno, Anna; Vecchio, Eleonora; Brunetti, Antonio; Bevilacqua, Vitoantonio

    2016-01-01

    We propose a virtual reality (VR) model, reproducing a house environment, in which color modification of target places, obtainable by home automation in a real environment, was tested by means of a P3b paradigm. The target place (a bathroom door) was designed to be recognized during virtual wayfinding in a realistic reproduction of a house environment. Different color and luminance conditions, easily obtained in the real environment from a remote home automation control, were applied to the target and standard places, with all doors illuminated in white (W) and only target doors additionally lit with a green (G) or red (R) spotlight. Three different virtual environments (VE) were depicted, with the bathroom located in the aisle (A), living room (L) and bedroom (B). EEG was recorded from 57 scalp electrodes in 10 healthy subjects in the 60-80 year age range (O, old group) and 12 normal cases in the 20-30 year age range (Y, young group). In the young group, all the target stimuli determined a significant increase in P3b amplitude on the parietal, occipital and central electrodes compared to the frequent-stimulus condition, regardless of the color of the target door, while in the elderly group the P3b obtained with the green and red colors was significantly different from the frequent stimulus on the parietal, occipital and central derivations, whereas the white stimulus did not evoke a significantly larger P3b with respect to the frequent stimulus. The modulation of P3b amplitude obtained by color and luminance changes of the target place suggests that cortical resources able to compensate for the age-related progressive loss of cognitive performance need to be facilitated even in the normal elderly. Event-related responses obtained in virtual reality may be a reliable method to test how well an environment accommodates age-related cognitive changes.

  8. Analyzing large gene expression and methylation data profiles using StatBicRM: statistical biclustering-based rule mining.

    PubMed

    Maulik, Ujjwal; Mallik, Saurav; Mukhopadhyay, Anirban; Bandyopadhyay, Sanghamitra

    2015-01-01

    Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers from biological datasets using integrated approaches of statistical and binary inclusion-maximal biclustering techniques. At first, a novel statistical strategy is utilized to eliminate insignificant/low-significance/redundant genes in such a way that the significance level satisfies the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to determine how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with those of other rule-based classifiers. Statistical significance tests are also performed to verify the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data matrix. Finally, we have also included an integrated analysis of gene expression and methylation to determine the epigenetic effect (viz., effect of methylation) on gene expression level.

  9. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal treatment effects in drug development clinical trials, the vast volume and complex nature of EEG data make for an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
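
    A minimal sketch of the spatial smoothing idea, assuming hypothetical electrode angles on a spherical head model: one time point of multichannel data is projected by least squares onto a low-degree real spherical harmonic basis (built here from scipy's complex harmonics):

      # Minimal sketch: SPHARM smoothing of a 64-channel scalp map.
      import numpy as np
      from scipy.special import sph_harm

      def real_sph_basis(theta, phi, degree):
          cols = []
          for n in range(degree + 1):
              for m in range(-n, n + 1):
                  y = sph_harm(abs(m), n, theta, phi)  # theta azimuth, phi polar
                  if m < 0:
                      cols.append(np.sqrt(2.0) * y.imag)
                  elif m == 0:
                      cols.append(y.real)
                  else:
                      cols.append(np.sqrt(2.0) * y.real)
          return np.column_stack(cols)

      rng = np.random.default_rng(4)
      theta = rng.uniform(0.0, 2.0 * np.pi, 64)   # hypothetical electrode angles
      phi = rng.uniform(0.0, np.pi / 2.0, 64)     # upper hemisphere
      v = rng.normal(size=64)                     # one sample of 64-channel EEG

      B = real_sph_basis(theta, phi, degree=3)    # 16 basis functions
      coef, *_ = np.linalg.lstsq(B, v, rcond=None)
      v_smooth = B @ coef                         # spatially smoothed map
      print(v_smooth.shape)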

  10. Berberine

    MedlinePlus

    ... several plants including European barberry, goldenseal, goldthread, Oregon grape, phellodendron, and tree tumeric. People take berberine for heart failure. Some people apply berberine directly to the skin to treat burns and to the eye to treat trachoma, a bacterial infection that frequently ...

  11. Rapid screening of selective serotonin re-uptake inhibitors in urine samples using solid-phase microextraction gas chromatography-mass spectrometry.

    PubMed

    Salgado-Petinal, Carmen; Lamas, J Pablo; Garcia-Jares, Carmen; Llompart, Maria; Cela, Rafael

    2005-07-01

    In this paper a solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) method is proposed for the rapid analysis of some frequently prescribed selective serotonin re-uptake inhibitors (SSRIs), namely venlafaxine, fluvoxamine, mirtazapine, fluoxetine, citalopram and sertraline, in urine samples. The SPME-based method enables simultaneous determination of the target SSRIs after simple in-situ derivatization of some of the target compounds. Calibration curves in water and in urine were validated and statistically compared. This revealed the absence of a matrix effect and, in consequence, the possibility of quantifying SSRIs in urine samples by external water calibration. Intra-day and inter-day precision was satisfactory for all the target compounds (relative standard deviation, RSD, <14%) and the detection limits achieved were <0.4 ng/mL urine. The time required for the SPME step and for GC analysis (30 min each) enables high throughput. The method was applied to real urine samples from different patients being treated with some of these pharmaceuticals. Some SSRI metabolites were also detected and tentatively identified.

  12. The truth about mouse, human, worms and yeast

    PubMed Central

    2004-01-01

    Genome comparisons are behind the powerful new annotation methods being developed to find all human genes, as well as genes from other genomes. Genomes are now frequently being studied in pairs to provide cross-comparison datasets. This 'Noah's Ark' approach often reveals unsuspected genes and may support the deletion of false-positive predictions. Joining mouse and human as the cross-comparison dataset for the first two mammals are: two Drosophila species, D. melanogaster and D. pseudoobscura; two sea squirts, Ciona intestinalis and Ciona savignyi; four yeast (Saccharomyces) species; two nematodes, Caenorhabditis elegans and Caenorhabditis briggsae; and two pufferfish (Takefugu rubripes and Tetraodon nigroviridis). Even genomes like yeast and C. elegans, which have been known for more than five years, are now being significantly improved. Methods developed for yeast or nematodes will now be applied to mouse and human, and soon to additional mammals such as rat and dog, to identify all the mammalian protein-coding genes. Current large disparities between human Unigene predictions (127,835 genes) and gene-scanning methods (45,000 genes) still need to be resolved. This will be the challenge during the next few years. PMID:15601543

  13. The truth about mouse, human, worms and yeast.

    PubMed

    Nelson, David R; Nebert, Daniel W

    2004-01-01

    Genome comparisons are behind the powerful new annotation methods being developed to find all human genes, as well as genes from other genomes. Genomes are now frequently being studied in pairs to provide cross-comparison datasets. This 'Noah's Ark' approach often reveals unsuspected genes and may support the deletion of false-positive predictions. Joining mouse and human as the cross-comparison dataset for the first two mammals are: two Drosophila species, D. melanogaster and D. pseudoobscura; two sea squirts, Ciona intestinalis and Ciona savignyi; four yeast (Saccharomyces) species; two nematodes, Caenorhabditis elegans and Caenorhabditis briggsae; and two pufferfish (Takefugu rubripes and Tetraodon nigroviridis). Even genomes like yeast and C. elegans, which have been known for more than five years, are now being significantly improved. Methods developed for yeast or nematodes will now be applied to mouse and human, and soon to additional mammals such as rat and dog, to identify all the mammalian protein-coding genes. Current large disparities between human Unigene predictions (127,835 genes) and gene-scanning methods (45,000 genes) still need to be resolved. This will be the challenge during the next few years.

  14. A Double-Coil TMS Method to Assess Corticospinal Excitability Changes at a Near-Simultaneous Time in the Two Hands during Movement Preparation

    PubMed Central

    Wilhelm, Emmanuelle; Quoilin, Caroline; Petitjean, Charlotte; Duque, Julie

    2016-01-01

    Background: Many previous transcranial magnetic stimulation (TMS) studies have investigated corticospinal excitability changes occurring when choosing which hand to use for an action, one of the most frequent decisions people make in daily life. So far, these studies have applied single-pulse TMS eliciting motor-evoked potentials (MEPs) in one hand when this hand is either selected or non-selected. Using this method, hand choices were shown to entail the operation of two inhibitory mechanisms, suppressing MEPs in the targeted hand either when it is non-selected (competition resolution, CR) or selected (impulse control, IC). However, an important limitation of this “Single-Coil” method is that MEPs are elicited in the selected and non-selected conditions during separate trials, and thus the two settings may not be completely comparable. A more important problem is that the MEPs are computed in relation to the movement of different hands. The goal of the present study was to test a “Double-Coil” method to evaluate IC and CR preceding the same hand responses by applying double-coil TMS over the two primary motor cortices (M1) at a near-simultaneous time (1 ms inter-pulse interval). Methods: MEPs were obtained in the left (MEPLEFT) and right (MEPRIGHT) hands while subjects chose between left and right hand key-presses in blocks using a Single-Coil or a Double-Coil method; in the latter blocks, TMS was applied either over left M1 first (TMSLRM1 group, n = 12) or over right M1 first (TMSRLM1 group, n = 12). Results: MEPLEFT were suppressed preceding both left (IC) and right (CR) hand responses, whereas MEPRIGHT were only suppressed preceding left (CR) but not right (IC) hand responses. This result was observed regardless of whether Single-Coil or Double-Coil TMS was applied in the two subject groups. However, in the TMSLRM1 group, the MEP suppression was attenuated in Double-Coil compared to Single-Coil blocks for both IC and CR, when probed with MEPLEFT (elicited by the second pulse). Conclusions: Although Double-Coil TMS may be a reliable method to assess bilateral motor excitability provided that a RM1-LM1 pulse order is used, further experiments are required to understand the reduced MEPLEFT changes in Double-Coil blocks when the LM1-RM1 pulse order was used. PMID:27014020

  15. Ventilation in the patient with unilateral lung disease.

    PubMed

    Thomas, A R; Bryce, T L

    1998-10-01

    Severe ULD presents a challenge in ventilator management because of the marked asymmetry in the mechanics of the two lungs. The asymmetry may result from significant decreases or increases in the compliance of the involved lung. Traditional ventilator support may fail to produce adequate gas exchange in these situations and has the potential to cause further deterioration. Fortunately, conventional techniques can be safely and effectively applied in the majority of cases without having to resort to less familiar and potentially hazardous forms of support. In those circumstances when conventional ventilation is unsuccessful in restoring adequate gas exchange, lateral positioning and ILV have proved effective at improving and maintaining gas exchange. Controlled trials to guide clinical decision making are lacking. In patients who have processes associated with decreased compliance in the involved lung, lateral positioning may be a simple method of improving gas exchange but is associated with many practical limitations. ILV in these patients is frequently successful when differential PEEP is applied with the higher pressure to the involved lung. In patients in whom the pathology results in distribution of ventilation favoring the involved lung, particularly BPF, ILV can be used to supply adequate support while minimizing flow through the fistula and allowing it to close. The application of these techniques should be undertaken with an understanding of the pathophysiology of the underlying process; the reported experience with these techniques, including indications and successfully applied methods; and the potential problems encountered with their use. Fortunately, these modalities are infrequently required, but they provide a critical means of support when conventional techniques fail.

  16. Seasonal Dynamics of Microcystis spp. and Their Toxigenicity as Assessed by qPCR in a Temperate Reservoir

    PubMed Central

    Martins, António; Moreira, Cristiana; Vale, Micaela; Freitas, Marisa; Regueiras, Ana; Antunes, Agostinho; Vasconcelos, Vitor

    2011-01-01

    Blooms of toxic cyanobacteria are becoming increasingly frequent, mainly due to water quality degradation. This work applied qPCR as a tool for early warning of microcystin (MC)-producing cyanobacteria and risk assessment of water supplies. Specific marker genes for cyanobacteria, Microcystis and MC-producing Microcystis were quantified to determine the genotypic composition of the natural Microcystis population. Correlations between limnological parameters (pH, water temperature, dissolved oxygen and conductivity) and MC concentrations, as well as Microcystis abundance, were assessed. A significant negative correlation was observed between the ratio of toxic (with mcy genes) to non-toxic (without mcy genes) genotypes and the overall Microcystis density. The highest proportions of toxic Microcystis genotypes were found 4–6 weeks before and 8–10 weeks after the peak of the bloom, with the lowest being observed at its peak. These results suggest positive selection of non-toxic genotypes under favorable environmental growth conditions. Significant positive correlations were found between the quantity of toxic genotypes and MC concentration, suggesting that the method applied can be useful to predict potential MC toxicity risk. No significant correlation was found between the limnological parameters measured and MC concentrations or toxic genotype proportions, indicating that other abiotic and biotic factors may govern MC production and toxic genotype dynamics. The qPCR method applied here is useful to rapidly estimate the potential toxicity of environmental samples, and so it may contribute to more efficient management of water use in eutrophic systems. PMID:22072994

  17. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is their inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency is associated with the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the interaction of the viscous boundary layer with the flow field away from the walls. Wall-modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields, including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.
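
    As a simple illustration of an integral boundary layer model that carries streamwise history (here Thwaites' laminar method on a flat plate, not the paper's turbulent wall model), consider:

      # Minimal sketch: Thwaites' integral method for momentum thickness.
      import numpy as np

      nu = 1.5e-5                          # kinematic viscosity of air (m^2/s)
      x = np.linspace(1e-4, 1.0, 2000)     # streamwise stations (m)
      U = 10.0 * np.ones_like(x)           # edge velocity; flat plate here

      # Thwaites: theta^2(x) = 0.45 * nu / U^6 * integral of U^5 dx'
      integral = np.concatenate(([0.0],
          np.cumsum(0.5 * (U[1:] ** 5 + U[:-1] ** 5) * np.diff(x))))
      theta = np.sqrt(0.45 * nu * integral / U ** 6 + 1e-30)

      # Flat-plate check against Blasius: theta ~ 0.664 * sqrt(nu * x / U).
      print(theta[-1], 0.664 * np.sqrt(nu * x[-1] / U[-1]))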

  18. Avoid lost discoveries, because of violations of standard assumptions, by using modern robust statistical methods.

    PubMed

    Wilcox, Rand; Carlson, Mike; Azen, Stan; Clark, Florence

    2013-03-01

    Recently, there have been major advances in statistical techniques for assessing central tendency and measures of association. The practical utility of modern methods has been documented extensively in the statistics literature, but they remain underused and relatively unknown in clinical trials. Our objective was to address this issue. STUDY DESIGN AND PURPOSE: The first purpose was to review common problems associated with standard methodologies (low power, lack of control over type I errors, and incorrect assessments of the strength of the association). The second purpose was to summarize some modern methods that can be used to circumvent such problems. The third purpose was to illustrate the practical utility of modern robust methods using data from the Well Elderly 2 randomized controlled trial. In multiple instances, robust methods uncovered differences among groups and associations among variables that were not detected by classic techniques. In particular, the results demonstrated that details of the nature and strength of the association were sometimes overlooked when using ordinary least squares regression and Pearson correlation. Modern robust methods can make a practical difference in detecting and describing differences between groups and associations between variables. Such procedures should be applied more frequently when analyzing trial-based data. Copyright © 2013 Elsevier Inc. All rights reserved.
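
    As one concrete example of such a robust method (not necessarily the exact procedures used in the Well Elderly 2 analysis), Yuen's test compares 20% trimmed means using winsorized variances; a minimal sketch on synthetic heavy-tailed data:

      # Minimal sketch: Yuen's test for trimmed means of two groups.
      import numpy as np
      from scipy import stats

      def yuen_test(x, y, trim=0.2):
          h1 = len(x) - 2 * int(trim * len(x))     # effective sample sizes
          h2 = len(y) - 2 * int(trim * len(y))
          wx = np.asarray(stats.mstats.winsorize(x, limits=(trim, trim)))
          wy = np.asarray(stats.mstats.winsorize(y, limits=(trim, trim)))
          d1 = (len(x) - 1) * np.var(wx, ddof=1) / (h1 * (h1 - 1))
          d2 = (len(y) - 1) * np.var(wy, ddof=1) / (h2 * (h2 - 1))
          t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) \
              / np.sqrt(d1 + d2)
          df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
          return t, 2.0 * stats.t.sf(abs(t), df)   # statistic and p-value

      rng = np.random.default_rng(5)
      x = rng.standard_t(df=3, size=40)            # heavy-tailed control group
      y = rng.standard_t(df=3, size=40) + 0.8      # shifted treatment group
      print(yuen_test(x, y))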

  19. Dynamic Models Applied to Landslides: Study Case Angangueo, MICHOACÁN, MÉXICO.

    NASA Astrophysics Data System (ADS)

    Torres Fernandez, L.; Hernández Madrigal, V. M., , Dr; Capra, L.; Domínguez Mota, F. J., , Dr

    2017-12-01

    Most existing models for landslide zonation are of the static type: they do not consider the dynamic behavior of the triggering factor. This results in a limited representation of the actual zonation of slope instability, gives the maps only short-term validity, and makes them unsuitable for the design of early warning systems. In Mexico in particular, these models are static because they do not consider triggering factors such as precipitation. In this work, we present a numerical evaluation of landslide susceptibility based on probabilistic methods. These rely on time series generated from meteorological station records; since the available information is limited, an interpolation is performed to simulate precipitation over the zone. The resulting information is integrated in PCRaster and, in conjunction with the conditioning factors, used to generate a dynamic model. This model will be applied to landslide zoning in the municipality of Angangueo, which is characterized by frequent debris and mud flows and by translational and rotational landslides triggered by atypical precipitation, such as that recorded in 2010, which caused economic and human losses. With these models, it would be possible to generate probable scenarios that help the population of Angangueo reduce risk and carry out constant resilience activities.

  20. Applying Applied Ethics through ethics consulting.

    PubMed

    Moore, W

    2010-04-01

    Applied Ethics is frequently described as a discipline of philosophy that concerns itself with the application of moral theories such as deontology and utilitarianism to real-world dilemmas. However, these applications often remain restricted to the academic world. Since the mid-1980s, the focus of newer versions of ethics consulting has shifted from what the ethicist knows to what the ethicist does or enables. This shift remodelled the ethicist's role to that of a facilitator in an inherently social process of moral inquiry. Applying these developments in the Namibian context has already proved to be of great value to the local health care industry. (c) 2010. Published by Elsevier Ltd.

  1. Switching and optimizing control for coal flotation process based on a hybrid model

    PubMed Central

    Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang

    2017-01-01

    Flotation is an important part of coal preparation, and the flotation column is widely applied as an efficient flotation device. The process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and most frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on an expert system. Finally, a least squares support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
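
    A minimal sketch of the switching logic, with an RBF-kernel SVC standing in for the paper's LS-SVM (which scikit-learn does not provide directly); the condition parameters, labels and controller stubs are all hypothetical:

      # Minimal sketch: classify the operating condition, then dispatch to
      # one of two controller routines (placeholders for the two models).
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      X = rng.normal(size=(200, 5))                 # condition parameters
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) # 0/1 operating condition
      clf = SVC(kernel="rbf", C=10.0).fit(X, y)

      def control_froth_depth(params):
          return "optimize froth depth set point (fuzzy control model)"

      def control_reagent(params):
          return "optimize reagent dosages (expert-system model)"

      current = rng.normal(size=(1, 5))             # latest measurements
      action = (control_froth_depth if clf.predict(current)[0] == 0
                else control_reagent)(current)
      print(action)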

  2. Pelagic and benthic communities of the Antarctic ecosystem of Potter Cove: Genomics and ecological implications.

    PubMed

    Abele, D; Vazquez, S; Buma, A G J; Hernandez, E; Quiroga, C; Held, C; Frickenhaus, S; Harms, L; Lopez, J L; Helmke, E; Mac Cormack, W P

    2017-06-01

    Molecular technologies are more frequently applied in Antarctic ecosystem research and the growing amount of sequence-based information available in databases adds a new dimension to understanding the response of Antarctic organisms and communities to environmental change. We apply molecular techniques, including fingerprinting, and amplicon and metagenome sequencing, to understand biodiversity and phylogeography to resolve adaptive processes in an Antarctic coastal ecosystem from microbial to macrobenthic organisms and communities. Interpretation of the molecular data is not only achieved by their combination with classical methods (pigment analyses or microscopy), but furthermore by combining molecular with environmental data (e.g., sediment characteristics, biogeochemistry or oceanography) in space and over time. The studies form part of a long-term ecosystem investigation in Potter Cove on King-George Island, Antarctica, in which we follow the effects of rapid retreat of the local glacier on the cove ecosystem. We formulate and encourage new approaches to integrate molecular tools into Antarctic ecosystem research, environmental conservation actions, and polar ocean observatories. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Rank-based testing of equal survivorship based on cross-sectional survival data with or without prospective follow-up.

    PubMed

    Chan, Kwun Chuen Gary; Qin, Jing

    2015-10-01

    Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up, since all subjects are essentially censored. However, partial survival information is available from backward recurrence times, which are frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and to a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
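
    One simple way to combine two asymptotically independent standardized statistics is weighted-Z pooling; the sketch below is illustrative (the weights and inputs are made up, not the paper's recommended combination):

      # Minimal sketch: pool two independent standardized test statistics.
      import numpy as np
      from scipy import stats

      def combine_z(z1, z2, w1=1.0, w2=1.0):
          # Independence implies Var(w1*Z1 + w2*Z2) = w1^2 + w2^2.
          z = (w1 * z1 + w2 * z2) / np.sqrt(w1 ** 2 + w2 ** 2)
          p = 2.0 * stats.norm.sf(abs(z))   # two-sided p-value
          return z, p

      z1 = 1.7  # e.g. rank statistic from backward recurrence times
      z2 = 1.5  # e.g. conventional rank statistic from follow-up data
      print(combine_z(z1, z2))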

  4. The JAK2/STAT5 signaling pathway as a potential therapeutic target in canine mastocytoma

    PubMed Central

    Keller, Alexandra; Wingelhofer, Bettina; Peter, Barbara; Bauer, Karin; Berger, Daniela; Gamperl, Susanne; Reifinger, Martin; Cerny-Reiterer, Sabine; Moriggl, Richard; Willmann, Michael; Valent, Peter; Hadzijusufovic, Emir

    2018-01-01

    Background Mastocytomas are frequently diagnosed cutaneous neoplasms in dogs. In non-resectable mastocytoma patients, novel targeted drugs are often applied. The transcription factor STAT5 has been implicated in the survival of human neoplastic mast cells (MC). Our study evaluated the JAK2/STAT5 pathway as a novel target in canine mastocytoma. Materials and Methods We employed inhibitors of JAK2 (R763, TG101348, AZD1480, ruxolitinib) and STAT5 (pimozide, piceatannol) and evaluated their effects on 2 mastocytoma cell lines, C2 and NI-1. Results Activated JAK2 and STAT5 were detected in both cell lines. The drugs applied were found to inhibit proliferation and survival in these cells with the following rank-order of potency: R763 > TG101348 > AZD1480 > pimozide > ruxolitinib > piceatannol. Moreover, synergistic anti-neoplastic effects were obtained by combining pimozide with KIT-targeting drugs (toceranib, masitinib, nilotinib, midostaurin) in NI-1 cells. Conclusion The JAK2/STAT5 pathway is a novel potential target of therapy in canine mastocytoma. PMID:28397975

  5. Working Together to Connect Care: a metropolitan tertiary emergency department and community care program.

    PubMed

    Harcourt, Debra; McDonald, Clancy; Cartlidge-Gann, Leonie; Burke, John

    2017-03-02

    Objective Frequent attendance by people at an emergency department (ED) is a global concern. A collaborative partnership between an ED and the primary and community healthcare sectors has the potential to improve care for people who frequently attend the ED. The aims of the Working Together to Connect Care program are to decrease the number of presentations by providing focused community support and to integrate all healthcare services, with the goal of achieving positive, patient-centred and patient-directed outcomes. Methods A retrospective analysis of ED data for 2014 and 2015 was used to ascertain the characteristics of the potential program cohort. The definition used to identify a 'frequent attendee' was more than four presentations to an ED in 1 month. This analysis was used to develop the processes now known as the Working Together to Connect Care program. This program includes participant identification by applying the definition, flagging of potential participants in the ED IT system, case review and referral to community services by ED staff, case conferencing facilitated within the ED, and individualised, patient-centred case management provided by government and non-government community services. Results Two months after the commencement of the Working Together to Connect Care program there are 31 active participants in the program: 10 are on the Mental Health pathway, and one is on the No Consent pathway. On average, three people are recruited to the program every week. The establishment of a new program for supporting frequent attendees of an ED has had its challenges. Identifying systems that support people in their community has been an early positive outcome of this project. Conclusion It is expected that data regarding the number of ED presentations, potential fiscal savings and client outcomes will be available in 2017. What is known about the topic? Frequent attendance at EDs is a global issue, and although the number of 'super users' is small compared with non-frequent users, their number of presentations is high. People in the frequent attendee group will often seek care from multiple EDs, mainly for mental health issues and substance abuse. Furthermore, frequent ED users are vulnerable and experience higher mortality, more hospital admissions and more outpatient visits than non-frequent users. Aggressive and assertive outreach, intense coordination of services by integrated care teams, and the provision of non-medical resources, such as supportive housing, have positive outcomes for this group of people. What does this paper add? This study uses international research findings in an Australian setting to test the generalisability of an assertive and collaborative ED and community case management approach for supporting people who frequently attend a metropolitan ED. What are the implications for practitioners? The chronicling of a process undertaken to effect change in a healthcare setting supports practitioners in developing processes for this cohort across different ED contexts.

  6. Retained energy-based coding for EEG signals.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-09-01

    The use of long-term records in electroencephalography is becoming more frequent owing to their diagnostic potential and to the growth of novel signal processing methods that deal with these types of recordings. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode electroencephalographic (EEG) signals is proposed. Cosine modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the characteristic frequency bands of the EEG. Given that no regular pattern may be easily extracted from the signal in the time domain, a thresholding-based method is applied for quantizing samples. The method of retained energy is designed to compute the threshold efficiently in the decomposition domain and, at the same time, allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet, and the results show that the compression scheme yields better compression than other reported methods. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
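
    As a rough illustration of the retained-energy idea, the sketch below picks the smallest magnitude threshold whose surviving coefficients keep a target fraction of the total energy. In the paper this criterion is applied to the subband coefficients of the cosine modulated filter bank; here the synthetic coefficient vector and the 95% target are placeholder assumptions.

        import numpy as np

        def retained_energy_threshold(coeffs, target=0.95):
            """Smallest magnitude threshold whose surviving coefficients
            keep at least `target` of the total energy."""
            mags = np.sort(np.abs(coeffs))[::-1]          # largest first
            energy = np.cumsum(mags ** 2)
            k = np.searchsorted(energy, target * energy[-1]) + 1
            return mags[min(k, mags.size) - 1]

        rng = np.random.default_rng(0)
        c = rng.laplace(scale=1.0, size=4096)             # stand-in for subband coefficients
        thr = retained_energy_threshold(c)
        kept = np.abs(c) >= thr
        print(f"kept {kept.mean():.1%} of coefficients, "
              f"retained {np.sum(c[kept]**2) / np.sum(c**2):.1%} of the energy")

    Because the retained-energy fraction maps directly to reconstruction quality, raising or lowering the target gives the quality control described above.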

  7. Preservation of live cultures of basidiomycetes - recent methods.

    PubMed

    Homolka, Ladislav

    2014-02-01

    Basidiomycetes are used in industrial processes, in basic and applied research, in teaching, and in systematic and biodiversity studies. Efficient work with basidiomycete cultures requires a reliable source, which is ensured by safe long-term storage. Repeated subculturing, frequently used for preservation, is time-consuming, prone to contamination, and does not prevent genetic and physiological changes during long-term maintenance. Various storage methods have been developed in order to eliminate these disadvantages. Besides lyophilization (unsuitable for the majority of basidiomycetes), cryopreservation at low temperatures seems to be a very efficient way to attain this goal. Besides survival, another requirement for successful maintenance of fungal strains is the ability to preserve their features unchanged. An ideal method has not been created so far. Therefore, it is highly desirable to develop new preservation methods or improve the current ones, combining the advantages and eliminating the disadvantages of individual techniques. Many reviews on the preservation of microorganisms, including basidiomycetes, have been published, but progress in the field requires an update. Although herbarium specimens of fungi (and of basidiomycetes in particular) are very important for taxonomic and especially typological studies, this review is limited to live fungal cultures. Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  8. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
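
    To make the comparison concrete, the sketch below integrates a single gating variable dx/dt = (x_inf - x)/tau with forward Euler and with exponential Euler; the constants are illustrative, not taken from the paper. With x_inf and tau frozen over a step, EE is exact, which is why it is popular; the paper's analysis concerns the dynamic-clamp setting, where the measured voltage (and hence x_inf and tau) changes every step and carries measurement error.

        import numpy as np

        x_inf, tau = 0.8, 2.0          # illustrative steady state and time constant (ms)

        def euler_step(x, dt):
            return x + dt * (x_inf - x) / tau

        def exp_euler_step(x, dt):
            # exact if x_inf and tau are constant over the step
            return x_inf + (x - x_inf) * np.exp(-dt / tau)

        for dt in (0.1, 1.0, 4.0):     # note dt >= 2*tau makes Euler oscillate
            x_e = x_ee = 0.0
            for _ in range(int(20 / dt)):          # integrate 20 ms
                x_e, x_ee = euler_step(x_e, dt), exp_euler_step(x_ee, dt)
            exact = x_inf * (1 - np.exp(-20 / tau))
            print(f"dt={dt}: Euler={x_e:.4f}  ExpEuler={x_ee:.4f}  exact={exact:.4f}")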

  9. Reliability of the Inverse Water Volumetry Method to Measure the Volume of the Upper Limb.

    PubMed

    Beek, Martinus A; te Slaa, Alexander; van der Laan, Lijckle; Mulder, Paul G H; Rutten, Harm J T; Voogd, Adri C; Luiten, Ernest J T; Gobardhan, Paul D

    2015-06-01

    Lymphedema of the upper extremity is a common side effect of lymph node dissection or irradiation of the axilla. Several techniques are applied to examine the presence and severity of lymphedema; measurement of the circumference of the upper extremity is performed most frequently. An alternative is the water-displacement method. The aim of this study was to determine the reliability and reproducibility of the "Inverse Water Volumetry apparatus" (IWV-apparatus), which is based on the water-displacement method, for the measurement of arm volumes. Measurements were performed by three breast cancer nurse practitioners on ten healthy volunteers in three weekly sessions. The intra-class correlation coefficient, defined as the ratio of the subject variance component to the total variance, equaled 0.99. The reliability index was calculated as 0.14 kg, indicating that only a change of more than 0.14 kg in a patient's measured arm volume would represent a true change in arm volume; this is about 6% of the mean arm volume of 2.3 kg. The IWV-apparatus proved to be a reliable and reproducible method for measuring arm volume.

  10. The use of a water seal to manage air leaks after a pulmonary lobectomy: a retrospective study.

    PubMed

    Okamoto, Junichi; Okamoto, Tatsuro; Fukuyama, Yasuro; Ushijima, Chie; Yamaguchi, Masafumi; Ichinose, Yukito

    2006-08-01

    The methods for managing chest drainage tubes during the postoperative period differ among thoracic surgeons and, as a result, the optimal method remains controversial. We reviewed 170 consecutive patients undergoing a pulmonary lobectomy for either primary lung cancer or metastatic lung cancer from January 1998 to December 2002. After the operation, the chest drainage tube was placed on a suction pump with a negative pressure of -10 cmH(2)O in 120 patients before 2001, while drainage tubes were kept on water seal in 47 cases, mainly since 2001. Regarding the preoperative and postoperative variables, postoperative air leak as well as the video-assisted thoracic surgery (VATS) procedure were observed more frequently in the water seal group than in the suction group (p=0.0158 and p<0.001, respectively). In comparing these different populations, the Kaplan-Meier curves for the duration of postoperative air leak appeared similar between the two methods. These observations suggest that placing chest tubes on water seal is an effective method for managing postoperative air leak in clinical practice. However, a prospective randomized trial using a larger series of patients is warranted for this subject.

  11. Word Sense Disambiguation in Bangla Language Using Supervised Methodology with Necessary Modifications

    NASA Astrophysics Data System (ADS)

    Pal, Alok Ranjan; Saha, Diganta; Dash, Niladri Sekhar; Pal, Antara

    2018-05-01

    An attempt is made in this paper to report how a supervised methodology has been adopted for the task of word sense disambiguation in Bangla, with necessary modifications. At the initial stage, the Naïve Bayes probabilistic model, adopted as a baseline method for sense classification, yields a moderate result with 81% accuracy when applied to a database of the 19 (nineteen) most frequently used ambiguous Bangla words. On an experimental basis, the baseline method is modified with two extensions: (a) inclusion of a lemmatization process in the system, and (b) bootstrapping of the operational process. As a result, the accuracy of the method improves slightly, to 84%, which is a positive signal for the whole process of disambiguation as it opens scope for further modification of the existing method for better results. The data sets used for this experiment include the Bangla POS-tagged corpus obtained from the Indian Languages Corpora Initiative, and the Bangla WordNet, an online sense inventory developed at the Indian Statistical Institute, Kolkata. The paper also reports on the challenges and pitfalls of the work that have been closely observed and addressed to achieve the expected level of accuracy.
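
    The baseline described above is essentially a Naive Bayes classifier over bag-of-words context features. The sketch below shows that setup on invented English sentences (the study's Bangla corpus and lemmatizer are not reproduced here); in the modified system, tokens would be lemmatized before vectorization.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Context windows around an ambiguous word, labeled with the intended sense.
        train_contexts = [
            "deposited money in the bank account",     # sense: finance
            "the bank approved the loan request",      # sense: finance
            "fished from the muddy river bank",        # sense: river
            "sat on the grassy bank of the stream",    # sense: river
        ]
        train_senses = ["finance", "finance", "river", "river"]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(train_contexts, train_senses)
        print(model.predict(["opened a savings account at the bank"]))  # ['finance']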

  12. A method of assessing the efficacy of hand sanitizers: use of real soil encountered in the food service industry.

    PubMed

    Charbonneau, D L; Ponte, J M; Kochanowski, B A

    2000-04-01

    In many outbreaks of foodborne illness, the food worker has been implicated as the source of the infection. To decrease the likelihood of cross-contamination, food workers must clean and disinfect their hands frequently. To ensure their effectiveness, hand disinfectants should be tested under rigorous conditions that mimic normal use. Currently, several different methods are used to assess the efficacy of hand disinfectants. However, most of these methods were designed with the health care worker in mind and do not model the specific contamination situations encountered by the food worker. To fill this void, we developed a model that uses soil from fresh meat and a means of quantifying the bacteria that are encountered and transferred during food preparation activities. Results of studies using various doses of para-chloro-meta-xylenol and triclosan confirm that the method is reproducible and predictable in measuring the efficacy of sanitizers. Consistent, dose-dependent results were obtained with relatively few subjects. Other studies showed that washing hands with a mild soap and water for 20 s was more effective than applying a 70% alcohol hand sanitizer.

  13. Aeroacoustic directivity via wave-packet analysis of mean or base flows

    NASA Astrophysics Data System (ADS)

    Edstrand, Adam; Schmid, Peter; Cattafesta, Louis

    2017-11-01

    Noise pollution is an ever-increasing problem in society, and knowledge of the directivity patterns of the sound radiation is required for prediction and control. Directivity is frequently determined through costly numerical simulations of the flow field combined with an acoustic analogy. We introduce a new computationally efficient method of finding directivity for a given mean or base flow field using wave-packet analysis (Trefethen, PRSA 2005). Wave-packet analysis approximates the eigenvalue spectrum with spectral accuracy by modeling the eigenfunctions as wave packets. With the wave packets determined, we then follow the method of Obrist (JFM, 2009), which uses Lighthill's acoustic analogy to determine the far-field sound radiation and directivity of wave-packet modes. We apply this method to a canonical jet flow (Gudmundsson and Colonius, JFM 2011) and determine the directivity of potentially unstable wave packets. Furthermore, we generalize the method to consider a three-dimensional flow field of a trailing vortex wake. In summary, we approximate the disturbances as wave packets and extract the directivity from the wave-packet approximation in a fraction of the time of standard aeroacoustic solvers. ONR Grant N00014-15-1-2403.

  14. Motion Artefacts in MRI: a Complex Problem with Many Partial Solutions

    PubMed Central

    Zaitsev, Maxim; Maclaren, Julian; Herbst, Michael

    2015-01-01

    Subject motion during magnetic resonance imaging (MRI) has been problematic since its introduction as a clinical imaging modality. While sensitivity to particle motion or blood flow can be used to provide useful image contrast, bulk motion presents a considerable problem in the majority of clinical applications. It is one of the most frequent sources of artefacts. Over 30 years of research have produced numerous methods to mitigate or correct for motion artefacts, but no single method can be applied in all imaging situations. Instead, a ‘toolbox’ of methods exists, where each tool is suitable for some tasks, but not for others. This article reviews the origins of motion artefacts and presents current mitigation and correction methods. In some imaging situations, the currently available motion correction tools are highly effective; in other cases, appropriate tools still need to be developed. It seems likely that this multifaceted approach will be what eventually solves the motion sensitivity problem in MRI, rather than a single solution that is effective in all situations. This review places a strong emphasis on explaining the physics behind the occurrence of such artefacts, with the aim of aiding artefact detection and mitigation in particular clinical situations. PMID:25630632

  15. Motion artifacts in MRI: A complex problem with many partial solutions.

    PubMed

    Zaitsev, Maxim; Maclaren, Julian; Herbst, Michael

    2015-10-01

    Subject motion during magnetic resonance imaging (MRI) has been problematic since its introduction as a clinical imaging modality. While sensitivity to particle motion or blood flow can be used to provide useful image contrast, bulk motion presents a considerable problem in the majority of clinical applications. It is one of the most frequent sources of artifacts. Over 30 years of research have produced numerous methods to mitigate or correct for motion artifacts, but no single method can be applied in all imaging situations. Instead, a "toolbox" of methods exists, where each tool is suitable for some tasks, but not for others. This article reviews the origins of motion artifacts and presents current mitigation and correction methods. In some imaging situations, the currently available motion correction tools are highly effective; in other cases, appropriate tools still need to be developed. It seems likely that this multifaceted approach will be what eventually solves the motion sensitivity problem in MRI, rather than a single solution that is effective in all situations. This review places a strong emphasis on explaining the physics behind the occurrence of such artifacts, with the aim of aiding artifact detection and mitigation in particular clinical situations. © 2015 Wiley Periodicals, Inc.

  16. Multivariate analysis of longitudinal rates of change.

    PubMed

    Bryan, Matthew; Heagerty, Patrick J

    2016-12-10

    Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.

  17. An immune-inspired semi-supervised algorithm for breast cancer diagnosis.

    PubMed

    Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong

    2016-10-01

    Breast cancer is the most frequently diagnosed life-threatening cancer worldwide and the leading cause of cancer death among women. Early, accurate diagnosis can be a big plus in treating breast cancer. Researchers have approached this problem using various data mining and machine learning techniques, such as support vector machines and artificial neural networks. Computer immunology is also an intelligent method, inspired by the biological immune system, which has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnostic methods are supervised, and it is very expensive to obtain labeled data in biology and medicine. In this paper, we seamlessly integrate state-of-the-art research on life science with artificial intelligence and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, which are acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on those two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, which proves that it is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies have been applied more frequently, so there is greater need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in studies using the same drug combinations and the same types of bacteria, suggesting that the differences in synergy rates might be due to the different methods used to interpret the test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, namely imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activity of the two combinations was tested over the dilution range of 4 x MIC to 0.03 x MIC in 96-well checkerboard plates. The results were obtained separately using the four different interpretation methods frequently preferred by researchers, the aim being to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by the different interpretation methods. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four interpretation methods for the determination of synergistic and indifferent interactions (p< 0.0001). The highest rates of synergy were observed with both combinations by the method that uses the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p> 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to the different methods used to interpret them. Thus, there is a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies are required that investigate the clinical benefits of synergistic combinations and compare the consistency of the results with those of other standard combination tests, such as time-kill studies.
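
    For reference, the sketch below implements one widely cited reading of a checkerboard well: the fractional inhibitory concentration index (FICI) summed from the two drugs' MIC ratios, with the common <=0.5 synergy and >4.0 antagonism cut-offs. As the study stresses, both the choice of well(s) and the cut-offs differ between interpretation methods, and the MIC values here are invented.

        def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
            """FICI = FIC_A + FIC_B for a single checkerboard well."""
            return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

        def interpret(fici):
            # One frequently cited convention; thresholds vary between methods.
            if fici <= 0.5:
                return "synergy"
            if fici <= 4.0:
                return "indifference"
            return "antagonism"

        # Drug A: MIC 32 alone, 4 in combination; drug B: MIC 16 alone, 2 in combination.
        fici = fic_index(4, 32, 2, 16)
        print(fici, interpret(fici))    # 0.25 synergy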

  19. Convergence of Newton's method for a single real equation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single-point iteration method. The resulting convergence intervals are frequently considerably smaller than the actual convergence zones. For a specific single-point iteration method, such as Newton's method, better estimates of the regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
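
    The contraction-mapping view treats Newton's method as the fixed-point iteration x <- g(x) with g(x) = x - f(x)/f'(x), so |g'(x)| < 1 on an interval is a sufficient (but not necessary) condition for convergence there. A minimal sketch, with f(x) = x^2 - 2 as an illustrative example:

        def newton(f, fprime, x0, tol=1e-12, max_iter=50):
            x = x0
            for _ in range(max_iter):
                step = f(x) / fprime(x)
                x -= step
                if abs(step) < tol:
                    break
            return x

        # For f(x) = x**2 - 2: g'(x) = f(x)*f''(x)/f'(x)**2 = (x**2 - 2)/(2*x**2),
        # so |g'(x)| < 1 for all |x| > sqrt(2/3), a much wider zone than a generic
        # contraction-mapping check would certify.
        print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=10.0))  # 1.4142135623730951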

  20. MNA of Metals and In Situ Bioremediation

    EPA Science Inventory

    Monitored Natural Attenuation (MNA) is a frequently applied remediation option for organic contaminants in groundwater, especially fuel hydrocarbons and chlorinated compounds. Current lines of research examine whether or not MNA is more broadly applicable to inorganic contaminan...

  1. 43 CFR 3275.16 - What standards apply to installing and maintaining meters?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; (2) You must calibrate meters measuring steam or hot water flow with a turbine, vortex, ultrasonics... frequent; and (3) You must calibrate meters measuring steam or hot water flow with an orifice plate...

  2. 43 CFR 3275.16 - What standards apply to installing and maintaining meters?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; (2) You must calibrate meters measuring steam or hot water flow with a turbine, vortex, ultrasonics... frequent; and (3) You must calibrate meters measuring steam or hot water flow with an orifice plate...

  3. 43 CFR 3275.16 - What standards apply to installing and maintaining meters?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; (2) You must calibrate meters measuring steam or hot water flow with a turbine, vortex, ultrasonics... frequent; and (3) You must calibrate meters measuring steam or hot water flow with an orifice plate...

  4. 43 CFR 3275.16 - What standards apply to installing and maintaining meters?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; (2) You must calibrate meters measuring steam or hot water flow with a turbine, vortex, ultrasonics... frequent; and (3) You must calibrate meters measuring steam or hot water flow with an orifice plate...

  5. Mapping the structure of animal behavior

    NASA Astrophysics Data System (ADS)

    Berman, Gordon; Choi, Daniel; Bialek, William; Shaevitz, Joshua

    2014-03-01

    Most animals possess the ability to actuate a vast diversity of movements, ostensibly constrained only by morphology and physics. In practice, however, a frequent assumption in behavioral science is that most of an animal's activities can be described in terms of a small set of stereotyped motifs. Here we introduce a method for mapping the behavioral space of organisms, relying only upon the underlying structure of postural movement data to organize and classify behaviors. Applying our method to movies of six closely related species of freely behaving fruit flies, we find a wide variety of non-stereotyped and stereotyped behaviors, spanning a wide range of time scales. We observe subtle behavioral differences between these species, identifying some of the effects of phylogenetic history on behavior. Moreover, we find that the transitions between the observed behaviors display a hierarchical syntax, with similar behaviors likely to transition between each other, but with a long time scale of memory. These results suggest potential mechanisms for the evolution of behavior and for the neural control of movements.

  6. Static Buckling Model Tests and Elasto-plastic Finite Element Analysis of a Pile in Layers with Various Thicknesses

    NASA Astrophysics Data System (ADS)

    Okajima, Kenji; Imai, Junichi; Tanaka, Tadatsugu; Iida, Toshiaki

    Damage to piles in liquefied ground is frequently reported. Buckling under excess vertical load could be one cause of pile damage, along with lateral flow of the ground and lateral load at the pile head. The buckling mechanism is described as a complicated interaction between the pile deformation caused by the vertical load and the earth pressure change caused by the pile deformation. In this study, a series of static buckling model tests on a pile was carried out in dry sand with various layer thicknesses. Finite element analysis was applied to the test results to verify the effectiveness of an elasto-plastic finite element analysis, combining the implicit-explicit mixed-type dynamic relaxation method with the return mapping method, for pile buckling problems. The tests and the analysis indicated the possibility that the buckling load of a pile decreases greatly as the thickness of the layer increases.

  7. One- and two-dimensional pulse electron paramagnetic resonance spectroscopy: concepts and applications.

    PubMed

    Van Doorslaer, S; Schweiger, A

    2000-06-01

    During the last two decades, the possibilities of pulse electron paramagnetic resonance (EPR) and pulse electron nuclear double resonance (ENDOR) spectroscopy have increased tremendously. While at the beginning of the 1980s pulse-EPR and ENDOR applications were still a rarity, the techniques are now very frequently applied in chemistry, physics, materials science, biology and mineralogy. This is mainly due to the considerable efforts invested in instrument development and pulse-sequence design over the last few years. Pulse-EPR spectrometers are now commercially available, which enables many research groups to use these techniques. In this work, an overview of state-of-the-art pulse EPR and ENDOR spectroscopy is given. The rapid expansion of the field, however, does not allow us to give an exhaustive record of all the pulse methods introduced so far. After a brief and very qualitative description of the basic principles of pulse EPR, we discuss some of the experiments in more detail and illustrate the potential of the methods with a number of selected applications.

  8. Machine learning based job status prediction in scientific clusters

    DOE PAGES

    Yoo, Wucherl; Sim, Alex; Wu, Kesheng

    2016-09-01

    Large high-performance computing systems are built with an increasing number of components, with more CPU cores, more memory, and more storage space. At the same time, scientific applications have been growing in complexity. Together, these trends are leading to more frequent unsuccessful job statuses on HPC systems. From measured job statuses, 23.4% of CPU time was spent on unsuccessful jobs. Here, we set out to study whether these unsuccessful job statuses could be anticipated from known job characteristics. To explore this possibility, we have developed a job status prediction method for the execution of jobs on scientific clusters. The Random Forests algorithm was applied to extract and characterize the patterns of unsuccessful job statuses. Experimental results show that our method can predict the unsuccessful job statuses from the monitored ongoing job executions in 99.8% of the cases, with 83.6% recall and 94.8% precision. This prediction accuracy is sufficiently high that it can be used to trigger mitigation procedures for predicted failures.
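
    A hedged sketch of this kind of model is shown below: a Random Forest over per-job features, scored with precision and recall on the failure class. The feature set and the synthetic failure rule are invented stand-ins for the monitored job characteristics used in the study.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 5000
        X = np.column_stack([
            rng.integers(1, 512, n),        # requested CPU cores
            rng.exponential(3600.0, n),     # requested wall time (s)
            rng.exponential(8.0, n),        # requested memory (GB)
        ])
        # Synthetic rule: very large requests fail more often, plus background noise.
        y = (((X[:, 0] > 256) & (X[:, 2] > 8)) | (rng.random(n) < 0.05)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))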

  9. Landslides Monitoring on Salt Deposits Using Geophysical Methods, Case study - Slanic Prahova, Romania

    NASA Astrophysics Data System (ADS)

    Ovidiu, Avram; Rusu, Emil; Maftei, Raluca-Mihaela; Ulmeanu, Antonio; Scutelnicu, Ioan; Filipciuc, Constantina; Tudor, Elena

    2017-12-01

    Electrometry is the most frequently applied geophysical method for examining dynamic phenomena related to the presence of massive salt, owing to the resistivity contrasts between salt, salt breccia and the covering geological formations. On vertical resistivity sections obtained with VES devices, these three compartments are clearly differentiated: high resistivity for the massive salt, very low for the salt breccia, and variable for the covering geological formations. When the land surface is inclined, shallow formations move gravitationally on the back of the salt, producing a landslide. Landslide monitoring involves periodically repeated measurements of geoelectrical profiles on a grid covering the slipping surface, under the same conditions (climate, electrode positions, instrument and measurement parameters). The purpose of monitoring landslides in the Slanic Prahova area was to detect changes in the resistivity distribution in the upper part of the subsoil between profiles measured in 2014 and 2015. The measurement grid includes several cross-sections selected as representative from the standpoint of landslide susceptibility. The results are represented graphically as changes in topography and resistivity differences between the two sets of geophysical measurements.

  10. B-spline tight frame based force matching method

    NASA Astrophysics Data System (ADS)

    Yang, Jianbin; Zhu, Guanhua; Tong, Dudu; Lu, Lanyuan; Shen, Zuowei

    2018-06-01

    In molecular dynamics simulations, compared with popular all-atom force field approaches, coarse-grained (CG) methods are frequently used for rapid investigations of long time- and length-scale processes in many important biological and soft matter studies. The typical task in coarse-graining is to derive interaction force functions between different CG site types in terms of their distance, bond angle or dihedral angle. In this paper, an ℓ1-regularized least squares model is applied to form the force functions, making additional use of the B-spline wavelet frame transform in order to preserve the important features of the force functions. The B-spline tight frame system has a simple explicit expression, which is useful for representing our force functions. Moreover, the redundancy of the system offers more resilience to the effects of noise and is useful in the case of lossy data. Numerical results for molecular systems involving pairwise non-bonded, three-body and four-body bonded interactions are obtained to demonstrate the effectiveness of our approach.
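
    The core fitting step can be sketched as an ℓ1-regularized least-squares problem over a spline basis. Below, a plain cubic B-spline basis stands in for the tight wavelet frame of the paper, and the reference force (a Lennard-Jones-like curve with noise), knots and regularization weight are all invented for illustration.

        import numpy as np
        from scipy.interpolate import BSpline
        from sklearn.linear_model import Lasso

        k = 3                                    # cubic B-splines
        knots = np.concatenate([[2.5] * k, np.linspace(2.5, 10.0, 20), [10.0] * k])
        n_basis = len(knots) - k - 1

        r = np.linspace(2.6, 9.9, 400)           # pair distances
        design = np.column_stack(
            [BSpline(knots, np.eye(n_basis)[i], k)(r) for i in range(n_basis)]
        )
        rng = np.random.default_rng(1)
        f_ref = 24 * (2 * r**-13 - r**-7) + 0.02 * rng.normal(size=r.size)

        fit = Lasso(alpha=1e-4, max_iter=50000).fit(design, f_ref)
        f_hat = design @ fit.coef_ + fit.intercept_
        print("nonzero coefficients:", int(np.sum(fit.coef_ != 0)), "of", n_basis)
        print("RMS error:", float(np.sqrt(np.mean((f_hat - f_ref) ** 2))))

    The ℓ1 penalty drives most coefficients to zero, which is what gives the sparse, feature-preserving representation described above.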

  11. Machine learning based job status prediction in scientific clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex; Wu, Kesheng

    Large high-performance computing systems are built with an increasing number of components, with more CPU cores, more memory, and more storage space. At the same time, scientific applications have been growing in complexity. Together, these trends are leading to more frequent unsuccessful job statuses on HPC systems. From measured job statuses, 23.4% of CPU time was spent on unsuccessful jobs. Here, we set out to study whether these unsuccessful job statuses could be anticipated from known job characteristics. To explore this possibility, we have developed a job status prediction method for the execution of jobs on scientific clusters. The Random Forests algorithm was applied to extract and characterize the patterns of unsuccessful job statuses. Experimental results show that our method can predict the unsuccessful job statuses from the monitored ongoing job executions in 99.8% of the cases, with 83.6% recall and 94.8% precision. This prediction accuracy is sufficiently high that it can be used to trigger mitigation procedures for predicted failures.

  12. Determination of artificial sweeteners in beverages with green mobile phases and high temperature liquid chromatography-tandem mass spectrometry.

    PubMed

    Ordoñez, Edgar Y; Rodil, Rosario; Quintana, José Benito; Cela, Rafael

    2015-02-15

    A new analytical procedure involving the use of water and a low percentage of ethanol, combined with high temperature liquid chromatography-tandem mass spectrometry, has been developed for the determination of nine high-intensity sweeteners in a variety of drink samples. The method permitted the analysis in 23 min (including column re-equilibration) while consuming only 0.85 mL of a green organic solvent (ethanol). This methodology provided limits of detection (after 50-fold dilution) in the 0.05-10 mg/L range, with recoveries (obtained from five different types of beverages) in the 86-110% range and relative standard deviation values lower than 12%. Finally, the method was applied to 25 different samples purchased in Spain, where acesulfame and sucralose were the most frequently detected analytes (>50% of the samples) and cyclamate was found above the legislative limit set by the European Union in one sample and at the regulatory boundary in three others. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

    To support various kinds of collaboration over long distances using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the person or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area follows the movement of the human head fairly well.

  14. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks are assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  15. Predicting Physical Interactions between Protein Complexes*

    PubMed Central

    Clancy, Trevor; Rødland, Einar Andreas; Nygard, Ståle; Hovig, Eivind

    2013-01-01

    Protein complexes enact most biochemical functions in the cell. Dynamic interactions between protein complexes are frequent in many cellular processes. As they are often of a transient nature, they may be difficult to detect using current genome-wide screens. Here, we describe a method to computationally predict physical interactions between protein complexes, applied to both humans and yeast. We integrated manually curated protein complexes and physical protein interaction networks, and we designed a statistical method to identify pairs of protein complexes for which the number of protein interactions between the pair is attributable to an actual physical interaction between the complexes. An evaluation against manually curated physical complex-complex interactions in yeast revealed that 50% of these interactions could be predicted in this manner. A community network analysis of the highest scoring pairs revealed a biologically sensible organization of physical complex-complex interactions in the cell. Such analyses of proteomes may serve as a guide to the discovery of novel functional cellular relationships. PMID:23438732

  16. A Crack Growth Evaluation Method for Interacting Multiple Cracks

    NASA Astrophysics Data System (ADS)

    Kamaya, Masayuki

    When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. For crack growth processes, however, this code gives no prescription for the interference between multiple cracks. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple-crack growth behaviors for many cases (e.g., different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method is suggested that takes into account the interference between multiple cracks.

  17. Dietary patterns analysis using data mining method. An application to data from the CYKIDS study.

    PubMed

    Lazarou, Chrystalleni; Karaolis, Minas; Matalas, Antonia-Leda; Panagiotakos, Demosthenes B

    2012-11-01

    Data mining is a computational method that permits the extraction of patterns from large databases. We applied the data mining approach to data from 1140 children (9-13 years) in order to derive dietary habits related to children's obesity status. Rules that emerged via the data mining approach revealed the detrimental influence of increased consumption of soft drinks, delicatessen meat, sweets, fried food and junk food. For example, frequent (3-5 times/week) consumption of all these foods increases the risk of being obese by 75%, whereas in children who have a similar dietary pattern but eat fish and seafood >2 times/week, the risk of obesity is reduced by 33%. In conclusion, the patterns revealed by the data mining technique refer to specific groups of children and demonstrate the effect on obesity risk when a single dietary habit is modified. Thus, a more individualized approach to translating public health messages could be achieved. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. A PCR primer bank for quantitative gene expression analysis.

    PubMed

    Wang, Xiaowei; Seed, Brian

    2003-12-15

    Although gene expression profiling by microarray analysis is a useful tool for assessing global levels of transcriptional activity, variability associated with the data sets usually requires that observed differences be validated by some other method, such as real-time quantitative polymerase chain reaction (real-time PCR). However, non-specific amplification of non-target genes is frequently observed in the latter, confounding the analysis in approximately 40% of real-time PCR attempts when primer-specific labels are not used. Here we present an experimentally validated algorithm for the identification of transcript-specific PCR primers on a genomic scale that can be applied to real-time PCR with sequence-independent detection methods. An online database, PrimerBank, has been created for researchers to retrieve primer information for their genes of interest. PrimerBank currently contains 147 404 primers encompassing most known human and mouse genes. The primer design algorithm has been tested by conventional and real-time PCR for a subset of 112 primer pairs with a success rate of 98.2%.

  19. Laboratory Studies of Atmospheric Heterogeneous Chemistry

    NASA Technical Reports Server (NTRS)

    Keyser, L. F.; Leu, M-T.

    1993-01-01

    In the laboratory, ice films formed by freezing from the liquid or, more frequently, by deposition from the vapor phase have been used to simulate stratospheric cloud surfaces for measurements of reaction and uptake rates. To obtain intrinsic surface reaction probabilities that can be used in atmospheric models, the area of the film surface that actually takes part in the reaction must be known. It is important to know not only the total surface area but also the film morphology, in order to determine where and how the surface is situated and, thus, what fraction of it is available for reaction. Information on the structure of these ice films has been obtained by using several experimental methods. In the sections that follow, these methods are discussed, the results are then used to construct a working model of the ice films, and finally the model is applied to an experimental study of HCl uptake by H2O ice.

  20. Dynamic cardiac PET imaging: extraction of time-activity curves using ICA and a generalized Gaussian distribution model.

    PubMed

    Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi

    2013-01-01

    Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity and a sub-Gaussian distribution model for the tissue activity. Our approach was applied to 22 PET measurement sets from small animals, obtained with the three most frequently used cardiac radiotracers: desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to human PET measurements obtained with the rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably to those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
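
    The separation step can be sketched with a generic ICA, treating every voxel's time course as a mixture of a blood-like and a tissue-like kinetic; the synthetic curves below are placeholders, and plain FastICA stands in for the paper's estimator with its explicit super-/sub-Gaussian priors.

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0.1, 60, 120)                  # frame times (min)
        blood = t * np.exp(-t / 1.5)                   # sharp early peak (IF-like)
        tissue = 1 - np.exp(-t / 8.0)                  # slow uptake (myocardium-like)

        rng = np.random.default_rng(3)
        weights = rng.random((500, 2))                 # 500 voxels, nonnegative mixing
        frames = weights @ np.vstack([blood, tissue])
        frames += 0.01 * rng.normal(size=frames.shape)

        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(frames.T)          # rows = time, columns = components
        print(sources.shape)                           # (120, 2): each column ~ one kinetic, up to sign/scale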

  1. Simultaneous quantification of poly-dispersed anionic, amphoteric and nonionic surfactants in simulated wastewater samples using C18 high-performance liquid chromatography-quadrupole ion-trap mass spectrometry

    NASA Technical Reports Server (NTRS)

    Levine, Lanfang H.; Garland, Jay L.; Johnson, Jodie V.

    2005-01-01

    This paper describes the development of a quantitative method for the direct and simultaneous determination of three frequently encountered surfactants, amphoteric (cocoamphoacetate, CAA), anionic (sodium laureth sulfate, SLES), and nonionic (alcohol ethoxylate, AE), using reversed-phase C18 HPLC coupled with an ESI ion-trap mass spectrometer (MS). The chemical composition, ionization characteristics and fragmentation pathways of the surfactants are presented. Positive ESI was effective for all three surfactants in aqueous methanol buffered with ammonium acetate. The method enables rapid determinations in small sample volumes containing inorganic salts (up to 3.5 g L(-1)) and multiple classes of surfactants with high specificity by applying surfactant-specific tandem mass spectrometric strategies. It has dynamic linear ranges of 2-60, 1.5-40, and 0.8-56 mg L(-1), with R2 equal to or greater than 0.999, 0.98 and 0.999 (10 microL injection) for CAA, SLES, and AE, respectively.

  2. hctsa: A Computational Framework for Automated Time-Series Phenotyping Using Massive Feature Extraction.

    PubMed

    Fulcher, Ben D; Jones, Nick S

    2017-11-22

    Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Simultaneous quantitation of acetylsalicylic acid and clopidogrel along with their metabolites in human plasma using liquid chromatography tandem mass spectrometry.

    PubMed

    Chhonker, Yashpal S; Pandey, Chandra P; Chandasana, Hardik; Laxman, Tulsankar Sachin; Prasad, Yarra Durga; Narain, V S; Dikshit, Madhu; Bhatta, Rabi S

    2016-03-01

    Interest in therapeutic drug monitoring has increased over the last few years. Inter- and intra-patient variability in pharmacokinetics, and the dependence of toxicity and therapeutic success on plasma concentration, have stressed the need for frequent therapeutic drug monitoring. A sensitive, selective and rapid liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) method was developed for the simultaneous quantification of acetylsalicylic acid (aspirin), salicylic acid, clopidogrel and the carboxylic acid metabolite of clopidogrel in human plasma. The chromatographic separations were achieved on a Waters Symmetry Shield(TM) C18 column (150 × 4.6 mm, 5 µm) using 3.5 mM ammonium acetate (pH 3.5)-acetonitrile (10:90, v/v) as the mobile phase at a flow rate of 0.75 mL/min. The method was successfully applied to therapeutic drug monitoring of aspirin and clopidogrel in 67 patients with coronary artery disease. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Metal-free, flexible triboelectric generator based on MWCNT mesh film and PDMS layers

    NASA Astrophysics Data System (ADS)

    Hwang, Hayoung; Lee, Kang Yeol; Shin, Dongjoon; Shin, Jungho; Kim, Sangtae; Choi, Wonjoon

    2018-06-01

    We demonstrate a metal-free triboelectric energy harvester consisting of an MWCNT mesh film and a PDMS layer. Upon touch from a finger, the single-electrode-mode energy harvester generates up to 27.0 W/m2 output power at 10 MΩ matched impedance. The device generates stable power upon touch by bare fingers or gloved fingers. Using a copper counter electrode results in decreased power output, owing to copper's weaker tendency toward triboelectrification. The power output also scales with the pressure applied by the finger. The intertwined, condensed MWCNT network acts as a flexible yet effective current collector, with a resistance across the device of less than 10 Ω. This current collector possesses strong corrosion resistance and stability against potential oxidation, whereas a metal counterpart may undergo oxidation over extended exposure to air or frequent fracture upon straining. The flexible device form may be applied to various curved or irregular surfaces that undergo frequent human touches.

  5. The Behavioral Health Role in Nursing Facility Social Work.

    PubMed

    Myers, Dennis R; Rogers, Robin K; LeCrone, Harold H; Kelley, Katherine

    2017-09-01

    The types of compromised resident behaviors licensed nursing facility social workers encounter, the behavioral health role they enact, and the effective practices they apply have not been the subject of systematic investigation. Analyses of 20 in-depth interviews with Bachelor of Social Work (BSW)/Master of Social Work (MSW) social workers averaging 8.8 years of experience identified frequently occurring resident behaviors: physical and verbal aggression/disruption, passive disruption, and social and sexual inappropriateness. Six functions of the behavioral health role were care management, educating, investigating, preventing, mediating, and advocating. The skills most frequently applied were attention/affirmation/active listening, assessment, behavior management, relationship building, teamwork, and redirection. Narratives revealed role rewards as well as knowledge deficits, organizational barriers, personal maltreatment, and frustrations. Respondents offered perspectives on and prescriptions for behavioral health practice in this setting. The findings expand understanding of the behavioral health role and provide an empirical basis for more research in this area. Recommendations, including educational competencies, are offered.

  6. Food applications of natural antimicrobial compounds.

    PubMed

    Lucera, Annalisa; Costa, Cristina; Conte, Amalia; Del Nobile, Matteo A

    2012-01-01

    In agreement with the current trend of giving value to natural and renewable resources, the use of natural antimicrobial compounds, particularly in food and biomedical applications, has become very frequent. The direct addition of natural compounds to food is the most common method of application, even though numerous efforts have been made to find alternative solutions with the aim of avoiding undesirable inactivation. Dipping, spraying, and coating treatments of food with active solutions, applied prior to packaging, are currently valid options. The aim of the current work is to give an overview of the use of natural compounds in the food sector. In particular, the review gathers numerous case studies of meat, fish, dairy products, minimally processed fruit and vegetables, and cereal-based products where these compounds have found application.

  7. Food applications of natural antimicrobial compounds

    PubMed Central

    Lucera, Annalisa; Costa, Cristina; Conte, Amalia; Del Nobile, Matteo A.

    2012-01-01

    In agreement with the current trend of giving value to natural and renewable resources, the use of natural antimicrobial compounds, particularly in food and biomedical applications, has become very frequent. The direct addition of natural compounds to food is the most common method of application, even though numerous efforts have been made to find alternative solutions with the aim of avoiding undesirable inactivation. Dipping, spraying, and coating treatments of food with active solutions, applied prior to packaging, are currently valid options. The aim of the current work is to give an overview of the use of natural compounds in the food sector. In particular, the review gathers numerous case studies of meat, fish, dairy products, minimally processed fruit and vegetables, and cereal-based products where these compounds have found application. PMID:23060862

  8. Information security of power enterprises of North-Arctic region

    NASA Astrophysics Data System (ADS)

    Sushko, O. P.

    2018-05-01

    The role of information technologies in providing technological security for energy enterprises is a component of the economic security of the northern Arctic region in general. Applying instruments and methods of information protection modelling to the business processes of energy enterprises in the northern Arctic region (such as Arkhenergo and Komienergo), the authors analysed and identified the most frequent information security risks. Using the analytic hierarchy process with weighting-factor estimations, the information risks of the energy enterprises' technological processes were ranked. The economic estimation of information security within an energy enterprise considers the weighting-factor-adjusted variables (risks). Investments in the information security systems of energy enterprises in the northern Arctic region relate to the installation of the necessary security elements, while current operating expenses on business process protection systems are weighed against the economic damage that would otherwise materialize.

  9. Bayesian Inference of Natural Rankings in Incomplete Competition Networks

    PubMed Central

    Park, Juyong; Yook, Soon-Hyung

    2014-01-01

    Competition between a complex system's constituents and a corresponding reward mechanism based on it have profound influence on the functioning, stability, and evolution of the system. But determining the dominance hierarchy or ranking among the constituent parts from the strongest to the weakest – essential in determining reward and penalty – is frequently an ambiguous task due to the incomplete (partially filled) nature of competition networks. Here we introduce the “Natural Ranking,” an unambiguous ranking method applicable to a round robin tournament, and formulate an analytical model based on the Bayesian formula for inferring the expected mean and error of the natural ranking of nodes from an incomplete network. We investigate its potential and uses in resolving important issues of ranking by applying it to real-world competition networks. PMID:25163528

  10. Multichannel techniques for motion artifacts removal from electrocardiographic signals.

    PubMed

    Milanesi, M; Martini, N; Vanello, N; Positano, V; Santarelli, M F; Paradiso, R; De Rossi, D; Landini, L

    2006-01-01

    Electrocardiographic (ECG) signals are affected by several kinds of artifacts that may hide vital signs of interest. Motion artifacts, due to the motion of the electrodes in relation to the patient's skin, are particularly frequent in bioelectrical signals acquired by wearable systems. In this paper we propose different approaches to getting rid of motion confounds. The first approach starts from a measurement of electrode motion provided by an accelerometer placed on the electrode and uses this measurement in an adaptive filtering system to remove the noise present in the ECG. The second approach is based on independent component analysis methods applied to multichannel ECG recordings; we propose to use both an instantaneous model and a frequency-domain implementation of the convolutive model that accounts for the different paths of the source signals to the electrodes.
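
    The first approach amounts to adaptive noise cancellation with the accelerometer as the reference input. A minimal LMS sketch is given below; the signals, filter length and step size are invented, and the error output e[n] is the cleaned ECG sample.

        import numpy as np

        def lms_cancel(primary, reference, n_taps=16, mu=0.01):
            """Remove the reference-correlated component from the primary signal."""
            w = np.zeros(n_taps)
            out = np.zeros(primary.size)
            for n in range(n_taps, primary.size):
                x = reference[n - n_taps:n][::-1]      # most recent reference samples first
                e = primary[n] - w @ x                 # error = cleaned ECG sample
                w += 2 * mu * e * x                    # LMS weight update
                out[n] = e
            return out

        t = np.arange(0, 10, 1 / 250)                  # 10 s at 250 Hz
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 31        # crude spiky stand-in for an ECG
        accel = np.random.default_rng(7).normal(size=t.size)
        artifact = np.convolve(accel, [0.5, 0.3, 0.1], mode="same")
        cleaned = lms_cancel(ecg + artifact, accel)
        print("residual artifact power:", float(np.mean((cleaned[500:] - ecg[500:]) ** 2)))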

  11. Methods for trend analysis: Examples with problem/failure data

    NASA Technical Reports Server (NTRS)

    Church, Curtis K.

    1989-01-01

    Statistics play an important role in quality control and reliability. Accordingly, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of this working handbook, which uses data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard, along with some different techniques, and to identify patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
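
    Of the recommended techniques, Kendall's rank correlation is the simplest to show: correlate the time index with the monthly counts, and a positive tau with a small p-value indicates a monotonic upward trend. The counts below are invented, and, as noted above, small samples and frequent zero reports weaken such tests.

        from scipy.stats import kendalltau

        months = list(range(1, 13))
        reports = [0, 1, 0, 2, 1, 3, 2, 4, 3, 5, 4, 6]    # monthly problem/failure counts
        tau, p = kendalltau(months, reports)
        print(f"Kendall tau = {tau:.2f}, p = {p:.4f}")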

  12. Menzerath-Altmann Law: Statistical Mechanical Interpretation as Applied to a Linguistic Organization

    NASA Astrophysics Data System (ADS)

    Eroglu, Sertac

    2014-10-01

    The distribution behavior described by the empirical Menzerath-Altmann law is frequently encountered during the self-organization of linguistic and non-linguistic natural organizations at various structural levels. This study presents a statistical mechanical derivation of the law based on the analogy between the classical particles of a statistical mechanical organization and the distinct words of a textual organization. The derived model, a transformed (generalized) form of the Menzerath-Altmann model, is termed the statistical mechanical Menzerath-Altmann model. The derived model allows the model parameters to be interpreted in terms of physical concepts. We also propose that many organizations exhibiting Menzerath-Altmann behavior, whether linguistic or not, can be methodically examined by the transformed distribution model through a properly defined structure-dependent parameter and the energy-associated states.
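
    For reference, the classical Menzerath-Altmann relation between construct size x and mean constituent size y is commonly written as

    $$y(x) = a\,x^{b} e^{-c x},$$

    with empirical parameters a, b and c; the transformed model derived in the paper generalizes this form and is not reproduced here.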

  13. Detection of Salmonella typhimurium in retail chicken meat and chicken giblets

    PubMed Central

    El-Aziz, Doaa M Abd

    2013-01-01

    Objective To detect Salmonella typhimurium (S. typhimurium), one of the most frequently isolated serovars from food-borne outbreaks throughout the world, in retail raw chicken meat and giblets. Methods One hundred samples of retail raw chicken meat and giblets (liver, heart and gizzard) were collected from Assiut city markets and examined for the organism using duplex PCR amplification of DNA targeting the rfbJ and fliC genes. Results S. typhimurium was detected at rates of 44%, 40% and 48% in chicken meat, liver and heart, respectively, but was not detected in gizzard. Conclusions The results showed a high incidence of S. typhimurium in the examined samples; greater emphasis should be placed on the prevention and control of contamination during processing to reduce food-borne risks to consumers. PMID:23998006

  14. DRIFTSEL: an R package for detecting signals of natural selection in quantitative traits.

    PubMed

    Karhunen, M; Merilä, J; Leinonen, T; Cano, J M; Ovaskainen, O

    2013-07-01

    Approaches and tools to differentiate between natural selection and genetic drift as causes of population differentiation are of frequent demand in evolutionary biology. Based on the approach of Ovaskainen et al. (2011), we have developed an R package (DRIFTSEL) that can be used to differentiate between stabilizing selection, diversifying selection and random genetic drift as causes of population differentiation in quantitative traits when neutral marker and quantitative genetic data are available. Apart from illustrating the use of this method and the interpretation of results using simulated data, we apply the package on data from three-spined sticklebacks (Gasterosteus aculeatus) to highlight its virtues. DRIFTSEL can also be used to perform usual quantitative genetic analyses in common-garden study designs. © 2013 John Wiley & Sons Ltd.

  15. Melodic cues for metre.

    PubMed

    Vos, P G; van Dijk, A; Schomaker, L

    1994-01-01

    A method of time-series analysis and a time-beating experiment were used to test the structural and perceptual validity of notated metre. Autocorrelation applied to the flow of melodic intervals between notes from thirty fragments of compositions for solo instruments by J S Bach strongly supported the validity of bar length specifications. Time-beating data, obtained with four stimuli from the same set, played in an expressionless mode, and presented under categorically distinct tempos to different subgroups of musically trained subjects, were rather inconsistent with respect to tapped bar lengths. However, taps were most frequently given to the events in the stimuli that corresponded with the first beats according to the score notations. No significant effects of tempo on tapping patterns were observed. The findings are discussed in comparison with other examinations of metre inference from musical compositions.
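
    The autocorrelation step can be sketched directly: compute the normalized autocorrelation of the melodic-interval series and look for a peak at the lag corresponding to the notated bar length. The toy series below is hypothetical, not one of the Bach fragments.

    ```python
    import numpy as np

    def interval_autocorrelation(intervals, max_lag=16):
        """Autocorrelation of a melodic-interval series; a peak at a lag
        equal to the number of intervals per bar supports the notated metre."""
        x = np.asarray(intervals, float) - np.mean(intervals)
        denom = float(np.dot(x, x))
        return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                         for k in range(1, max_lag + 1)])

    # Toy series with a period-4 pattern (hypothetical data)
    melody_intervals = [2, -1, -1, 4] * 8
    acf = interval_autocorrelation(melody_intervals)
    print(np.argmax(acf) + 1)   # expected dominant lag: 4
    ```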

  16. Perturbed effects at radiation physics

    NASA Astrophysics Data System (ADS)

    Külahcı, Fatih; Şen, Zekâi

    2013-09-01

    Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section when random components are present in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. The perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (Beer-Lambert's law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
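
    A Monte Carlo stand-in for the analytic perturbation expressions illustrates the idea: perturb the attenuation coefficient in Beer-Lambert's law and report the mean, standard deviation and correlation of the transmitted intensity. All numerical values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Beer-Lambert law I = I0 * exp(-mu * x) with a randomly perturbed
    # linear attenuation coefficient mu (illustrative values).
    I0, x = 1.0, 2.0             # incident intensity, thickness (cm)
    mu_mean, mu_std = 0.2, 0.02  # mean and perturbation of mu (1/cm)

    mu = rng.normal(mu_mean, mu_std, 10000)
    I = I0 * np.exp(-mu * x)

    print(I.mean(), I.std())          # perturbed mean and spread of I
    print(np.corrcoef(mu, I)[0, 1])   # correlation between mu and I
    ```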

  17. Bayesian Inference of Natural Rankings in Incomplete Competition Networks

    NASA Astrophysics Data System (ADS)

    Park, Juyong; Yook, Soon-Hyung

    2014-08-01

    Competition between a complex system's constituents and a corresponding reward mechanism based on it have profound influence on the functioning, stability, and evolution of the system. But determining the dominance hierarchy or ranking among the constituent parts from the strongest to the weakest - essential in determining reward and penalty - is frequently an ambiguous task due to the incomplete (partially filled) nature of competition networks. Here we introduce the "Natural Ranking," an unambiguous ranking method applicable to a round robin tournament, and formulate an analytical model based on the Bayesian formula for inferring the expected mean and error of the natural ranking of nodes from an incomplete network. We investigate its potential and uses in resolving important issues of ranking by applying it to real-world competition networks.

  18. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
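
    A minimal sketch of the posterior-mean substitution in the normal-normal mixture follows; the bootstrap step for interval estimates is omitted, and a crude moment estimator stands in for the prior-parameter estimation. All numbers are hypothetical.

    ```python
    import numpy as np

    # Normal-normal model: trend estimates x_i ~ N(theta_i, s_i^2) with
    # theta_i ~ N(mu, tau^2). Empirical Bayes plugs in estimates of
    # (mu, tau^2) and shrinks each x_i toward mu.
    x = np.array([0.8, -0.3, 1.5, 0.1, -0.9])   # hypothetical trend estimates
    s2 = np.array([0.2, 0.3, 0.5, 0.1, 0.4])    # their sampling variances

    mu_hat = np.average(x, weights=1 / s2)
    tau2_hat = max(np.var(x) - s2.mean(), 0.0)  # crude moment estimate

    shrink = tau2_hat / (tau2_hat + s2)         # weight placed on the data
    posterior_mean = shrink * x + (1 - shrink) * mu_hat
    print(posterior_mean)
    ```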

  19. The influence of parametric and external noise in act-and-wait control with delayed feedback.

    PubMed

    Wang, Jiaxing; Kuske, Rachel

    2017-11-01

    We apply several novel semi-analytic approaches for characterizing and calculating the effects of noise in a system with act-and-wait control. For concrete illustration, we apply these to a canonical balance model for an inverted pendulum to study the combined effect of delay and noise within the act-and-wait setting. While the act-and-wait control facilitates strong stabilization through deadbeat control, a comparison of different models with continuous vs. discrete updating of the control strategy in the active period illustrates how delays combined with the imprecise application of the control can seriously degrade the performance. We give several novel analyses of a generalized act-and-wait control strategy, allowing flexibility in the updating of the control strategy, in order to understand the sensitivities to delays and random fluctuations. In both the deterministic and stochastic settings, we give analytical and semi-analytical results that characterize and quantify the dynamics of the system. These results include the size and shape of stability regions, densities for the critical eigenvalues that capture the rate of reaching the desired stable equilibrium, and amplification factors for sustained fluctuations in the context of external noise. They also provide the dependence of these quantities on the length of the delay and the active period. In particular, we see that the combined influence of delay, parametric error, or external noise and on-off control can qualitatively change the dynamics, thus reducing the robustness of the control strategy. We also capture the dependence on how frequently the control is updated, allowing an interpolation between continuous and frequent updating. In addition to providing insights for these specific models, the methods we propose are generalizable to other settings with noise, delay, and on-off control, where analytical techniques are otherwise severely scarce.
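
    The act-and-wait idea can be sketched with a toy simulation: delayed PD feedback on a linearized inverted pendulum, switched on only during the active phase of each period. All parameters below are illustrative, not those of the paper's models.

    ```python
    import numpy as np

    # Act-and-wait control of a linearized inverted pendulum
    # x'' = lam * x + u, with delayed PD feedback applied only during
    # the "act" phase of each period.
    lam, tau = 1.0, 0.1          # instability rate, feedback delay (s)
    kp, kd = -4.0, -2.0          # PD gains
    t_act, t_wait = 0.3, 0.2     # act and wait durations (s)
    dt, T = 1e-3, 10.0

    n = int(T / dt)
    d = int(tau / dt)
    x, v = np.zeros(n), np.zeros(n)
    x[0] = 0.1
    for i in range(1, n):
        phase = (i * dt) % (t_act + t_wait)
        on = phase < t_act                  # control only while acting
        j = max(i - 1 - d, 0)               # index of the delayed state
        u = (kp * x[j] + kd * v[j]) if on else 0.0
        v[i] = v[i - 1] + (lam * x[i - 1] + u) * dt
        x[i] = x[i - 1] + v[i - 1] * dt
    print(abs(x[-1]))   # decays toward 0 for stabilizing parameter choices
    ```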

  20. Prostitution use has non sexual functions - case report of a depressed psychiatric out-patient

    PubMed Central

    Gysin, Fátima; Gysin, François

    2013-01-01

    Case: A shy, depressed 30 year old male discussed his frequent ego-syntonic indoor prostitution consumption in small peer groups. Several distinctive non-sexual functions of this paid sex habit were identified. Design and method: The patient had 40 hourly psychiatric sessions in the private practice setting over 14 months. The Arizona Sexual Experience Scale was applied to compare the subjective appraisal of both paid sex and sex in a relationship. The informal Social Atom elucidates social preferences, and the Operationalized Psychodynamic Diagnostic procedure was applied to describe a dominant relationship pattern. Results: The paid sex consumption functioned as a proud male lifestyle choice to reinforce the patient's fragile identity. The effect on self-esteem was a release similar to his favorite pastime of kick-boxing. With paid sex asserted as a group ritual, it was practiced even with frequent erectile dysfunction and even when sex with a stable romantic partner was more enjoyable and satisfying. The therapeutic attitude of the female psychiatrist, with her own ethical values, is put into context with two opposing theories about prostitution: the ‘Sex-Work-model’ and the ‘Oppression-model’. The therapist’s reaction to the patient’s information was seen as a starting point for understanding the intrapsychic function of paid sex as a coping mechanism against depressive feelings. Conclusions: Exploring and understanding prostitution consumption patterns in young men can benefit the treatment of psychiatric disorders in the private practice setting. It is the psychiatrist’s task to investigate the patient’s hidden motives behind paid sex use to help patients achieve greater inner and relational freedom. PMID:24627772

  1. Current status on the application of image processing of digital intraoral radiographs amongst general dental practitioners.

    PubMed

    Tohidast, Parisa; Shi, Xie-Qi

    2016-01-01

    The objectives of this study were to present the subjective knowledge level and the use of image processing on digital intraoral radiographs amongst general dental practitioners at Distriktstandvården AB, Stockholm. A questionnaire consisting of 12 questions was sent to 12 dental practices in Stockholm. Additionally, 2000 radiographs were randomly selected from these clinics for evaluation of applied image processing and its effect on image quality. Descriptive and analytical statistical methods were applied to present the current status of the use of image processing alternatives in the dentists' daily clinical work. 50 out of 53 dentists participated in the survey. The survey showed that most of the dentists in this study had received education on image processing at some stage of their career. No correlations were found between the application of image processing on one side and the education received with regard to image processing, previous working experience, age and gender on the other. Image processing in terms of adjusting brightness and contrast was frequently used. Overall, in this study 24.5% of the 200 images had actually been image processed in practice, and in 90% of these the image quality was improved or maintained. According to our survey, image processing is experienced as frequently used by the dentists at Distriktstandvården AB for diagnosing anatomical and pathological changes using intraoral radiographs. 24.5% of the 200 images were actually image processed in terms of adjusting brightness and/or contrast. In the present study we did not find that the dentists' age, gender, previous working experience or education in image processing influenced their viewpoint towards the application of image processing.

  2. The Thermodynamic Structure of Arctic Coastal Fog Occurring During the Melt Season over East Greenland

    NASA Astrophysics Data System (ADS)

    Gilson, Gaëlle F.; Jiskoot, Hester; Cassano, John J.; Gultepe, Ismail; James, Timothy D.

    2018-05-01

    An automated method to classify Arctic fog into distinct thermodynamic profiles using historic in-situ surface and upper-air observations is presented. This classification is applied to low-resolution Integrated Global Radiosonde Archive (IGRA) soundings and high-resolution Arctic Summer Cloud Ocean Study (ASCOS) soundings in low- and high-Arctic coastal and pack-ice environments. Results allow investigation of fog macrophysical properties and processes in coastal East Greenland during melt seasons 1980-2012. Integrated with fog observations from three synoptic weather stations, 422 IGRA soundings are classified into six fog thermodynamic types based on surface saturation ratio, type of temperature inversion, fog-top height relative to inversion-base height and stability using the virtual potential temperature gradient. Between 65-80% of fog observations occur with a low-level inversion, and statically neutral or unstable surface layers occur frequently. Thermodynamic classification is sensitive to the assigned dew-point depression threshold, but categorization is robust. Despite differences in the vertical resolution of radiosonde observations, IGRA and ASCOS soundings yield the same six fog classes, with fog-class distribution varying with latitude and environmental conditions. High-Arctic fog frequently resides within an elevated inversion layer, whereas low-Arctic fog is more often restricted to the mixed layer. Using supplementary time-lapse images, ASCOS microwave radiometer retrievals and airmass back-trajectories, we hypothesize that the thermodynamic classes represent different stages of advection fog formation, development, and dissipation, including stratus-base lowering and fog lifting. This automated extraction of thermodynamic boundary-layer and inversion structure can be applied to radiosonde observations worldwide to better evaluate fog conditions that affect transportation and lead to improvements in numerical models.

  3. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Summary Background Consensus-based studies are increasingly used as decision-making methods, for they have lower production costs than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main methods, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failing to implement these basic principles and failing to describe the methods used to reach the consensus were both frequent shortcomings that raise suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences in terms of decision making, a consensus method should first be validated. PMID:19013039

  4. Spectral embedding finds meaningful (relevant) structure in image and microarray data

    PubMed Central

    Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L

    2006-01-01

    Background Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific approach, for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
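
    A minimal diffusion-map-style sketch conveys the core of such a Laplacian-based embedding; this is a generic illustration under simple assumptions, not Lafon's exact implementation: Gaussian affinities, random-walk normalization, and the leading non-trivial eigenvectors as coordinates.

    ```python
    import numpy as np

    def spectral_embedding(X, sigma=1.0, n_components=2):
        """Embed rows of X via the weighted graph Laplacian: Gaussian
        affinities, random-walk normalization, leading eigenvectors."""
        # Pairwise squared distances and Gaussian kernel weights
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        # Random-walk normalized operator P = D^{-1} W
        P = W / W.sum(axis=1, keepdims=True)
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)      # largest eigenvalues first
        # Skip the trivial constant eigenvector (eigenvalue 1)
        return vecs[:, order[1:n_components + 1]].real

    # Toy usage: 40 points in 5 dimensions (illustrative)
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 5))
    Y = spectral_embedding(X)
    ```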

  5. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
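
    Generically, such an extrapolation replaces repeated thermodynamic evaluations with a truncated Taylor series around the last reference composition c_0; the paper's exact multicomponent formulation is not reproduced here:

    $$\Delta G(c) \approx \Delta G(c_0) + \left.\frac{\partial \Delta G}{\partial c}\right|_{c_0}(c - c_0) + \frac{1}{2}\left.\frac{\partial^2 \Delta G}{\partial c^2}\right|_{c_0}(c - c_0)^2,$$

    where a first-order method keeps only the linear term and the second-order method adds the quadratic correction.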

  6. Perceived Stress and Coping Styles among Malay Caregivers of Children with Learning Disabilities in Kelantan

    PubMed Central

    Isa, Siti Nor Ismalina; Ishak, Ismarulyusda; Rahman, Azriani Ab; Saat, Nur Zakiah Mohd; Din, Normah Che; Lubis, Syarif Husin; Ismail, Muhammad Faiz Mohd

    2017-01-01

    Background Caregivers of children with learning disabilities have been shown to experience increased stress and greater negative caregiving consequences than those with typically developing children. There remains a lack of studies focusing on stress and coping mechanisms among caregivers of a wider age group and diagnosis of individuals with disabilities in Asian countries. The current study examines levels of perceived stress and associated child and caregiver factors among caregivers of children with learning disabilities in the Malaysian context. An additional aim was to determine whether caregiver coping styles may be predictors of perceived stress. Methods The Malay version of the Perceived Stress Scale with 10 items and the Brief COPE Scale were administered to a sample of 190 Malay caregivers of children with learning disabilities registered with community-based rehabilitation centres in Kelantan, a state in Peninsular Malaysia. Multiple linear regression analysis was applied to determine the predictors of perceived stress. Results The mean total perceived stress score of caregivers was 16.96 (SD = 4.66). The most frequently used coping styles found among caregivers included religion, acceptance and positive reframing, while substance use and behavioural disengagement were least frequently used. Higher perceived stress was significantly predicted among caregivers with fewer children, frequent use of instrumental support and behavioural disengagement coping, and lack of emotional support and religious coping. Conclusion Findings indicate that perceived stress levels among caregivers were significantly predicted by different coping styles. It is vital to help caregivers strengthen their adaptive coping styles in order to reduce their stress levels. PMID:28381931

  7. Using Claims Data to Generate Clinical Flags Predicting Short-term Risk of Continued Psychiatric Hospitalizations

    PubMed Central

    Stein, Bradley D.; Pangilinan, Maria; Sorbero, Mark J; Marcus, Sue; Donahue, Sheila; Xu, Yan; Smith, Thomas E; Essock, Susan M

    2014-01-01

    Objective As health information technology advances, efforts to use administrative data to inform real-time treatment planning for individuals are increasing, despite few empirical studies demonstrating that such administrative data predict subsequent clinical events. Medicaid claims for individuals with frequent psychiatric hospitalizations were examined to test how well patterns of service use predict subsequent high short-term risk of continued psychiatric hospitalizations. Methods Medicaid claims files from New York and Pennsylvania were used to identify Medicaid recipients aged 18-64 with two or more inpatient psychiatric admissions during a target year ending March 31, 2009. Definitions from a quality-improvement initiative were used to identify patterns of inpatient and outpatient service use and prescription fills suggestive of clinical concerns. Generalized estimating equations and Markov models were applied to examine claims through March, 2011, to see what patterns of service use were sufficiently predictive of additional hospitalizations to be clinically useful. Results 11,801 unique individuals in New York and 1,859 in Pennsylvania met the cohort definition. In both Pennsylvania and New York, multiple recent hospitalizations, but not failure to use outpatient services or failure to fill medication prescriptions, were significant predictors of high risk of continued frequent hospitalizations, with odds ratios greater than 4.0. Conclusions Administrative data can be used to identify individuals at high risk of continued frequent hospitalizations. Such information could be used by payers and system administrators to authorize special services (e.g., mobile outreach) for such individuals as part of efforts to promote service engagement and prevent rapid rehospitalizations. PMID:25022360

  8. Factors Affecting the Occurrence and Distribution of Pesticides in the Yakima River Basin, Washington, 2000

    USGS Publications Warehouse

    Johnson, Henry M.

    2007-01-01

    The Yakima River Basin is a major center of agricultural production. With a cultivated area of about 450,000 ha (hectares), the region is an important producer of tree fruit, grapes, hops, and dairy products as well as a variety of smaller production crops. To control pest insects, weeds, and fungal infections, about 146 pesticide active ingredients were applied in various formulations during the 2000 growing season. Forty-six streams or drains in the Yakima River Basin were sampled for pesticides in July and October of 2000. Water samples also were collected from 11 irrigation canals in July. The samples were analyzed for 75 of the pesticide active ingredients applied during the 2000 growing season - 63 percent of the pesticides were detected. An additional 14 pesticide degradates were detected, including widespread occurrence of 2 degradates of DDT. The most frequently detected herbicide was 2,4-D, which was used on a variety of crops and along rights-of-way. It was detected in 82 percent of the samples collected in July. The most frequently detected insecticide was azinphos-methyl, which was used primarily on tree fruit. It was detected in 37 percent of the samples collected in July. All occurrences of azinphos-methyl exceeded the Environmental Protection Agency recommended chronic concentration for the protection of aquatic organisms. More than 90 percent of the July samples and 79 percent of the October samples contained two or more pesticides, with a median of nine in July and five in October. The most frequently occurring herbicides in mixtures were atrazine, 2,4-D, and the degradate deethylatrazine. The most frequently occurring insecticides in mixtures were azinphos-methyl, carbaryl, and p,p'-DDE (a degradate of DDT). A greater number of pesticides and higher concentrations were found in July than in October, reflecting greater usage and water availability for transport during the summer growing and irrigation season. Most of the samples collected in October (baseflow conditions) contained at least one pesticide. The mass ratio of instream pesticide load and application (pesticide loss) was used to explore spatial and temporal patterns of pesticide occurrence. Losses of pesticides with large organic carbon-water partitioning coefficients (Koc) values, which adhere strongly to sediment and plant surfaces, were smallest in catchments where sprinkler and drip irrigation systems were widely used. In contrast, losses of pesticides with low Koc values did not relate well with irrigation method.

  9. Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies

    PubMed Central

    Geiser, Christian; Burns, G. Leonard; Servera, Mateu

    2014-01-01

    Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of measurement invariance (MI) across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods – 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods. PMID:25400603

  10. [Comparative studies on fissure sealing: composite versus Cermet cement].

    PubMed

    Hickel, R; Voss, A

    1989-06-01

    Fifty-two molars sealed with either composite or Cermet cement were compared. The composite sealant was applied after enamel etching using a rubber dam. Before sealing with Cermet cement, the enamel was only cleaned with pumice powder and sodium hypochlorite, and the material was applied without enamel etching. After an average follow-up of 1.6 years, composite sealants proved to be significantly more reliable. Cermet cement sealings showed defects more frequently.

  11. ACETANILIDE HERBICIDE DEGRADATION PRODUCTS BY LC/MS

    EPA Science Inventory

    Acetanilide herbicides are frequently applied in the U.S. on crops (corn, soybeans, popcorn, etc.) to control broadleaf and annual weeds. The acetanilide and acetamide herbicides currently registered for use in the U.S. are alachlor, acetochlor, metolachlor, propachlor, flufen...

  12. Characterization of passive polymer optical waveguides

    NASA Astrophysics Data System (ADS)

    Joehnck, Matthias; Kalveram, Stefan; Lehmacher, Stefan; Pompe, Guido; Rudolph, Stefan; Neyer, Andreas; Hofstraat, Johannes W.

    1999-05-01

    The characterization of monomode passive polymer optical devices fabricated according to the POPCORN technology by methods originating from electron, ion and optical spectroscopy is summarized. The impacts of observed waveguide perturbations on the optical characteristics of the waveguide are evaluated. In the POPCORN approach, optical components for telecommunication applications are fabricated by photo-curing of liquid halogenated (meth)acrylates which have been applied on moulded thermoplastic substrates. For tuning the refractive indices of the waveguide material with respect to the substrate refractive index, comonomer mixtures are frequently used. The polymerization characteristics, especially the polymerization kinetics of the individual monomers, determine the formation of copolymers. Therefore, the unsaturation as a function of UV-illumination time during the formation of halogenated homo- and copolymers has been examined. From the suitable copolymer systems, after characterization of their glass transition temperatures, their curing behavior and their refractive indices as functions of the monomer ratios, monomode waveguides on PMMA substrates have been fabricated. To examine the materials composition in the 6 × 6 µm² waveguides as well, they have been visualized by transmission electron microscopy. With this method, segregation phenomena could also be observed in the waveguide cross-section. The optical losses in monomode waveguides caused by segregation and by other material-induced defects, such as microbubbles formed as a result of shrinkage, have been quantified by return-loss measurements. Defects causing scattering could be observed by confocal laser scanning microscopy and by conventional light microscopy.

  13. A Rapid Assay to Detect Toxigenic Penicillium spp. Contamination in Wine and Musts

    PubMed Central

    Sanzani, Simona Marianna; Miazzi, Monica Marilena; di Rienzo, Valentina; Fanelli, Valentina; Gambacorta, Giuseppe; Taurino, Maria Rosaria; Montemurro, Cinzia

    2016-01-01

    Wine and fermenting musts are grape products widely consumed worldwide. Since the presence of mycotoxin-producing fungi may greatly compromise their quality characteristics and safety, there is an increasing need for relatively rapid “user friendly” quantitative assays to detect fungal contamination both in grapes delivered to wineries and in final products. Although other fungi are most frequently involved in grape deterioration, secondary infections by Penicillium spp. are quite common, especially in cool areas with high humidity and in wines obtained from partially dried grapes. In this work, a single-tube nested real-time PCR approach—successfully applied to hazelnut and peanut allergen detection—was tested for the first time to trace Penicillium spp. in musts and wines. The method consisted of two sets of primers specifically designed to target the β-tubulin gene, to be applied simultaneously with the aim of lowering the detection limit of conventional real-time PCR. The assay was able to detect as little as 1 fg of Penicillium DNA. As confirmation, the patulin content of representative samples was determined. Most of the analyzed wines/musts returned contaminated results at >50 ppb, and a 76% accordance with the molecular assay was observed. Although further large-scale trials are needed, these results encourage the use of the newly developed method in the pre-screening of fresh and processed grapes for the presence of Penicillium DNA before the evaluation of related toxins. PMID:27509524

  14. STUDYING TRAVEL-RELATED INDIVIDUAL ASSESSMENTS AND DESIRES BY COMBINING HIERARCHICALLY STRUCTURED ORDINAL VARIABLES

    PubMed Central

    Song, Tingting; Wittkowski, Knut M.

    2010-01-01

    Ordinal measures are frequently encountered in travel behavior research. This paper presents a new method for combining them when a hierarchical structure of the data can be presumed. This method is applied to study the subjective assessment of the amount of travel by different transportation modes among a group of French clerical workers, along with the desire to increase or decrease the use of such modes. Some advantages of this approach over traditional data reduction techniques such as factor analysis when applied to ordinal data are then illustrated. In this study, combining evidence from several variables sheds light on the observed moderately negative relationship between the personal assessment of the amount of travel and the desire to increase or decrease it, thus integrating previous partial (univariate) results. We find a latent demand for travel, thus contributing to clarifying the behavioral mechanisms behind the induced-traffic phenomenon. Categorizing the above relationship by transportation mode shows a desire for a less environmentally friendly mix of modes (i.e., a greater desire to use heavy motorized modes and a lower desire to use two-wheeled modes) whenever the respondents do not feel that they travel extensively. This result, combined with previous theoretical investigations concerning the determinants of the desire to alter trip consumption levels, shows the importance of making people aware of how much they travel. PMID:20953273

  15. Monte Carlo role in radiobiological modelling of radiotherapy outcomes

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Pater, Piotr; Seuntjens, Jan

    2012-06-01

    Radiobiological models are essential components of modern radiotherapy. They are increasingly applied to optimize and evaluate the quality of different treatment planning modalities, and they are frequently used in designing new radiotherapy clinical trials by estimating the expected therapeutic ratio of new protocols. In radiobiology, the therapeutic ratio is estimated from the expected gain in tumour control probability (TCP) relative to the risk of normal tissue complication probability (NTCP). However, estimates of TCP/NTCP are currently based on the deterministic and simplistic linear-quadratic formalism, which has limited predictive power when applied prospectively. Given the complex and stochastic nature of the physical, chemical and biological interactions associated with spatial and temporal radiation-induced effects in living tissues, it is conjectured that methods based on Monte Carlo (MC) analysis may provide better estimates of TCP/NTCP for radiotherapy treatment planning and trial design. Indeed, over the past few decades, methods based on MC have demonstrated superior performance for accurate simulation of radiation transport, tumour growth and particle track structures; however, successful application of MC to modelling radiobiological response and outcomes in radiotherapy is still hampered by several challenges. In this review, we provide an overview of some of the main techniques used in radiobiological modelling for radiotherapy, with a focus on the role of MC as a promising computational vehicle. We highlight the current challenges, issues and future potential of the MC approach towards a comprehensive systems-based framework in radiobiological modelling for radiotherapy.
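
    For reference, the linear-quadratic formalism mentioned above writes the surviving fraction after dose D, and the Poisson-statistics TCP built on it, as

    $$S(D) = e^{-\alpha D - \beta D^{2}}, \qquad \mathrm{TCP} = \exp\!\left(-N_{0}\,S(D)\right),$$

    where α and β are tissue-specific radiosensitivity parameters and N_0 is the initial number of clonogenic cells.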

  16. Automatic Evaluation of Collagen Fiber Directions from Polarized Light Microscopy Images.

    PubMed

    Novak, Kamil; Polzer, Stanislav; Tichy, Michal; Bursa, Jiri

    2015-08-01

    Mechanical properties of the arterial wall depend largely on orientation and density of collagen fiber bundles. Several methods have been developed for observation of collagen orientation and density; the most frequently applied collagen-specific manual approach is based on polarized light (PL). However, it is very time consuming and the results are operator dependent. We have proposed a new automated method for evaluation of collagen fiber direction from two-dimensional polarized light microscopy images (2D PLM). The algorithm has been verified against artificial images and validated against manual measurements. Finally the collagen content has been estimated. The proposed algorithm was capable of estimating orientation of some 35 k points in 15 min when applied to aortic tissue and over 500 k points in 35 min for Achilles tendon. The average angular disagreement between each operator and the algorithm was -9.3±8.6° and -3.8±8.6° in the case of aortic tissue and -1.6±6.4° and 2.6±7.8° for Achilles tendon. Estimated mean collagen content was 30.3±5.8% and 94.3±2.7% for aortic media and Achilles tendon, respectively. The proposed automated approach is operator independent and several orders faster than manual measurements and therefore has the potential to replace manual measurements of collagen orientation via PLM.

  17. Spatio-temporal variability of dryness/wetness in the middle and lower reaches of the Yangtze River Basin and correlation with large-scale climatic factors

    NASA Astrophysics Data System (ADS)

    Chen, Xinchi; Zhang, Liping; Zou, Lei; Shan, Lijie; She, Dunxian

    2018-02-01

    The middle and lower reaches of the Yangtze River Basin (MLYR) are greatly affected by frequent drought/flooding events and abrupt alternations of these events in China. The purpose of this study is to analyze the spatial and temporal variability of dryness/wetness based on the data obtained from 75 meteorological stations in the MLYR for the period 1960-2015 and investigate the correlations between dryness/wetness and atmospheric circulation factors. The empirical orthogonal function method was applied in this study based on the monthly Standardized Precipitation Index at a 12-month time scale. The first leading pattern captured the same characteristics of dryness/wetness over the entire MLYR area and accounted for 40.87% of the total variance. Both the second and third leading patterns manifested as regional features of variability over the entire MLYR. The cross-wavelet transform method was applied to explore the potential relationship between the three leading patterns and the large-scale climate factors, and finally the relationships between drought/wetness events and climate factors were also analyzed. Our results indicated that the main patterns of dryness/wetness were primarily associated with the Niño 3.4, Indian Ocean Dipole, Southern Oscillation Index and Northern Oscillation Index, with the first pattern exhibiting noticeable periods and remarkable changes in phase with the indices.
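
    The EOF step reduces to a singular value decomposition of the station-anomaly matrix. Below is a minimal sketch with random placeholder data dimensioned like the study (56 years by 75 stations of 12-month SPI values); it is illustrative only.

    ```python
    import numpy as np

    def eof_analysis(spi, n_modes=3):
        """EOF analysis of a (time x station) SPI matrix via SVD.
        Returns spatial patterns, principal components, explained variance."""
        anomalies = spi - spi.mean(axis=0)       # remove station means
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        variance = s ** 2 / np.sum(s ** 2)
        patterns = Vt[:n_modes]                  # spatial loading patterns
        pcs = U[:, :n_modes] * s[:n_modes]       # time coefficients
        return patterns, pcs, variance[:n_modes]

    # Placeholder data: 56 years x 75 stations
    rng = np.random.default_rng(2)
    spi = rng.normal(size=(56, 75))
    patterns, pcs, var = eof_analysis(spi)
    print(var)   # fraction of variance captured by each leading mode
    ```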

  18. Calibration of Gephyrocapsa Coccolith Abundance in Holocene Sediments for Paleo-temperature Assessment

    NASA Astrophysics Data System (ADS)

    Bollmann, J.; Brabec, B.

    2001-12-01

    Abundance and assemblage compositions of microplankton, together with their chemical and stable isotopic compositions, have been among the most successful tools in paleoceanography. One of the most frequently applied techniques for the reconstruction of paleo-temperature is a transfer function using the relative abundance of planktic foraminifera in sediment samples. Here we present evidence suggesting that the absolute sea surface temperature for a given location can also be calculated from the relative abundance of Gephyrocapsa morphotypes in sediment samples, with an accuracy comparable to foraminifera transfer functions. By extrapolating this finding, paleo-environmental interpretations can be obtained for the Late Pleistocene, and discrepancies between the different currently used methods (e.g., foraminifer, alkenone and Ca/Mg derived temperature estimates) might be resolved. Eighty-one Holocene sediment samples were selected from the Pacific, Indian and Atlantic Oceans, covering a temperature gradient from 13.4 °C to 29.4 °C, a salinity gradient from 32.21 to 37.34 and a productivity gradient from 0.045 to 0.492 µg chlorophyll/L. Standard multiple linear regression analyses were applied to this data set, linking the relative abundance of Gephyrocapsa morphotypes to mean sea surface temperature. The best model revealed an r² of 0.8 with a standard residual error of 1.8 °C for the calculation of mean sea surface temperature.
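
    The transfer-function fit is ordinary multiple linear regression. A minimal sketch with synthetic data shaped like the study's (81 samples, four morphotype abundances; one abundance column is dropped because compositional fractions sum to one):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic stand-in: relative abundances of 4 morphotypes in 81
    # samples, and SST values loosely spanning the study's range.
    abund = rng.dirichlet(np.ones(4), size=81)
    sst = 13.4 + 16.0 * abund[:, 0] + rng.normal(0.0, 1.8, 81)

    # Drop one morphotype column: fractions sum to 1, so keeping all four
    # alongside an intercept would make the design matrix singular.
    X = np.column_stack([np.ones(81), abund[:, :3]])
    coef, *_ = np.linalg.lstsq(X, sst, rcond=None)

    pred = X @ coef
    resid_se = np.sqrt(((sst - pred) ** 2).sum() / (len(sst) - X.shape[1]))
    print(coef, resid_se)   # regression coefficients, residual standard error
    ```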

  19. A randomized, controlled cross-over trial of dermally-applied lavender (Lavandula angustifolia) oil as a treatment of agitated behaviour in dementia

    PubMed Central

    2013-01-01

    Background Lavender essential oil shows evidence of sedative properties in neurophysiological and animal studies but clinical trials of its effectiveness as a treatment of agitation in people with dementia have shown mixed results. Study methods have varied widely, however, making comparisons hazardous. To help remedy previous methodological shortcomings, we delivered high grade lavender oil in specified amounts to nursing home residents whose agitated behaviours were recorded objectively. Methods 64 nursing home residents with frequent physically agitated behaviours were entered into a randomized, single-blind cross-over trial of dermally-applied, neurophysiologically active, high purity 30% lavender oil versus an inactive control oil. A blinded observer counted the presence or absence of target behaviours and rated participants’ predominant affect during each minute for 30 minutes prior to exposure and for 60 minutes afterwards. Results Lavender oil did not prove superior to the control oil in reducing the frequency of physically agitated behaviours or in improving participants’ affect. Conclusions Studies of essential oils are constrained by their variable formulations and uncertain pharmacokinetics and so optimal dosing and delivery regimens remain speculative. Notwithstanding this, topically delivered, high strength, pure lavender oil had no discernible effect on affect and behaviour in a well-defined clinical sample. Trial registration Australian and New Zealand Clinical Trials Registry (ACTRN 12609000569202) PMID:24219098

  20. SegAuth: A Segment-based Approach to Behavioral Biometric Authentication

    PubMed Central

    Li, Yanyan; Xie, Mengjun; Bian, Jiang

    2016-01-01

    Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective—behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user’s distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user’s authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets. PMID:28573214
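
    A much-simplified reconstruction of the segment-token idea is sketched below; the tokenization and scoring rules are illustrative assumptions, not the published SegAuth algorithm.

    ```python
    import numpy as np

    def tokenize_gesture(signal, seg_len=20, n_bins=5):
        """Partition a 1-D gesture time series into fixed-length segments
        and map each to a string token (a coarse mean-quantized shape code)."""
        tokens = []
        edges = np.linspace(signal.min(), signal.max(), n_bins + 1)[1:-1]
        for start in range(0, len(signal) - seg_len + 1, seg_len):
            seg = signal[start:start + seg_len]
            symbols = np.digitize(seg[::5], edges)   # subsample, quantize
            tokens.append("".join(map(str, symbols)))
        return tokens

    def genuine_score(tokens, user_counts, others_counts):
        """Score tokens by how distinctively they belong to the claimed user."""
        score = 0.0
        for t in tokens:
            u = user_counts.get(t, 0) + 1        # add-one smoothing
            o = others_counts.get(t, 0) + 1
            score += np.log(u / (u + o))         # higher = more user-specific
        return score / len(tokens)

    # Toy gesture trace (hypothetical)
    rng = np.random.default_rng(5)
    swipe = np.cumsum(rng.normal(size=200))
    print(tokenize_gesture(swipe)[:3])
    ```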

  1. A Rapid Assay to Detect Toxigenic Penicillium spp. Contamination in Wine and Musts.

    PubMed

    Sanzani, Simona Marianna; Miazzi, Monica Marilena; di Rienzo, Valentina; Fanelli, Valentina; Gambacorta, Giuseppe; Taurino, Maria Rosaria; Montemurro, Cinzia

    2016-08-08

    Wine and fermenting musts are grape products widely consumed worldwide. Since the presence of mycotoxin-producing fungi may greatly compromise their quality characteristics and safety, there is an increasing need for relatively rapid "user friendly" quantitative assays to detect fungal contamination both in grapes delivered to wineries and in final products. Although other fungi are most frequently involved in grape deterioration, secondary infections by Penicillium spp. are quite common, especially in cool areas with high humidity and in wines obtained from partially dried grapes. In this work, a single-tube nested real-time PCR approach, successfully applied to hazelnut and peanut allergen detection, was tested for the first time to trace Penicillium spp. in musts and wines. The method consisted of two sets of primers specifically designed to target the β-tubulin gene, to be applied simultaneously with the aim of lowering the detection limit of conventional real-time PCR. The assay was able to detect as little as 1 fg of Penicillium DNA. As confirmation, the patulin content of representative samples was determined. Most of the analyzed wines/musts returned contaminated results at >50 ppb, and a 76% accordance with the molecular assay was observed. Although further large-scale trials are needed, these results encourage the use of the newly developed method in the pre-screening of fresh and processed grapes for the presence of Penicillium DNA before the evaluation of related toxins.

  2. Geometrical optics analysis of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Davis, Christopher C.

    2013-09-01

    2D phase screen methods have frequently been applied to estimate atmospheric turbulence in free-space optic communication and imaging systems. In situations where turbulence is "strong" enough to cause severe discontinuity of the wavefront (small Fried coherence length), the transmitted optic signal behaves more like "rays" than "waves". However, achieving accurate simulation results through ray modeling requires both a high density of rays and a large number of eddies, and their complicated interactions require significant computational resources. Thus, we introduce a 3D ray model based on simple characteristics of turbulent eddies regardless of their particular geometry. The observed breakup of a beam wave into patches at a receiver, together with the theoretical description, indicates that rays passing through the same sequence of turbulent eddies show "group" behavior, and their wavefront can still be regarded as continuous. Thus, in our approach, we have divided the curved trajectory of rays into finite line segments and intuitively related their redirections to the refractive properties of large turbulent eddies. As a result, our proposed treatment gives a quick and effective high-density ray simulation of a turbulent channel which only requires knowledge of the magnitude of the refractive index deviations. Our method also points out a potential correction for reducing the equivalent Cn2 by applying adaptive optics. This treatment also shows the possibility of extending 2D phase screen simulations into more general 3D treatments.

  3. SegAuth: A Segment-based Approach to Behavioral Biometric Authentication.

    PubMed

    Li, Yanyan; Xie, Mengjun; Bian, Jiang

    2016-10-01

    Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective: behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user's distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user's authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets.

  4. Methods for environmental change; an exploratory study

    PubMed Central

    2012-01-01

    Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change (‘Bundling’) and how within one environmental level, organizations, methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712

  5. Differentially Private Frequent Subgraph Mining

    PubMed Central

    Xu, Shengzhi; Xiong, Li; Cheng, Xiang; Xiao, Ke

    2016-01-01

    Mining frequent subgraphs from a collection of input graphs is an important topic in data mining research. However, if the input graphs contain sensitive information, releasing frequent subgraphs may pose considerable threats to individuals' privacy. In this paper, we study the problem of frequent subgraph mining (FGM) under the rigorous differential privacy model. We introduce a novel differentially private FGM algorithm, which is referred to as DFG. In this algorithm, we first privately identify frequent subgraphs from input graphs, and then compute the noisy support of each identified frequent subgraph. In particular, to privately identify frequent subgraphs, we present a frequent subgraph identification approach which can improve the utility of frequent subgraph identification through candidate pruning. Moreover, to compute the noisy support of each identified frequent subgraph, we devise a lattice-based noisy support derivation approach, where a series of methods is proposed to improve the accuracy of the noisy supports. Through formal privacy analysis, we prove that our DFG algorithm satisfies ε-differential privacy. Extensive experimental results on real datasets show that the DFG algorithm can privately find frequent subgraphs with high data utility. PMID:27616876
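
    The noisy-support step rests on the standard Laplace mechanism; the sketch below shows only that basic mechanism (the paper's lattice-based derivation and candidate pruning are not reproduced), with hypothetical supports and ε.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def noisy_support(true_support, epsilon, sensitivity=1.0):
        """Laplace mechanism: adding or removing one input graph changes a
        subgraph's support by at most `sensitivity`, so Laplace noise with
        scale sensitivity/epsilon yields epsilon-differential privacy."""
        return true_support + rng.laplace(0.0, sensitivity / epsilon)

    # Hypothetical identified frequent subgraphs and their true supports
    supports = {"g1": 412, "g2": 298, "g3": 157}
    released = {g: noisy_support(s, epsilon=0.5) for g, s in supports.items()}
    print(released)
    ```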

  6. The simultaneous isolation of multiple high and low frequent T-cell populations from donor peripheral blood mononuclear cells using the major histocompatibility complex I-Streptamer isolation technology.

    PubMed

    Roex, Marthe C J; Hageman, Lois; Heemskerk, Matthias T; Veld, Sabrina A J; van Liempt, Ellis; Kester, Michel G D; Germeroth, Lothar; Stemberger, Christian; Falkenburg, J H Frederik; Jedema, Inge

    2018-04-01

    Adoptive transfer of donor-derived T cells can be applied to improve immune reconstitution in immune-compromised patients after allogeneic stem cell transplantation. The separation of beneficial T cells from potentially harmful T cells can be achieved by using the major histocompatibility complex (MHC) I-Streptamer isolation technology, which has proven its feasibility for the fast and pure isolation of T-cell populations with a single specificity. We have analyzed the feasibility of the simultaneous isolation of multiple antigen-specific T-cell populations in one procedure by combining different MHC I-Streptamers. First, the effect of combining different amounts of MHC I-Streptamers used in the isolation procedure on the isolation efficacy of target antigen-specific T cells and on the number of off-target co-isolated contaminating cells was assessed. The feasibility of this approach was demonstrated in large-scale validation procedures targeting both high and low frequent T-cell populations using the Good Manufacturing Practice (GMP)-compliant CliniMACS Plus device. T-cell products targeting up to 24 different T-cell populations could be isolated in one simultaneous MHC I-Streptamer procedure by adjusting the amount of MHC I-Streptamers per target antigen-specific T-cell population. Concurrently, the co-isolation of potentially harmful contaminating T cells remained below our safety limit. This technology allows the reproducible isolation of high and low frequent T-cell populations. However, the expected therapeutic relevance of direct clinical application without in vitro expansion of these low frequent T-cell populations is questionable. This study provides a feasible, fast and safe method for the generation of highly personalized MHC I-Streptamer isolated T-cell products for adoptive immunotherapy. Copyright © 2018 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  7. Isotonic Regression-Based Method in Quantitative High-Throughput Screenings for Genotoxicity

    PubMed Central

    Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunichi

    2015-01-01

    Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type and DNA repair gene knockout mutants, using model-fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS, where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with HE under multiple data conditions. When dose-response data were suitable for drawing HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were large, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better in suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
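
    To make the comparison concrete, the sketch below fits the same noisy dose-response data both ways: a four-parameter Hill fit via nonlinear least squares and a monotone isotonic fit that assumes only that viability decreases with dose. All data and parameter values are synthetic stand-ins, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.isotonic import IsotonicRegression

def hill(dose, bottom, top, ec50, n):
    """Four-parameter Hill equation: viability falls from top to bottom."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** n)

dose = np.logspace(-2, 2, 15)
rng = np.random.default_rng(1)
viability = hill(dose, 0.1, 1.0, 5.0, 1.5) + rng.normal(0.0, 0.08, dose.size)

# Parametric route: model-fitting to the Hill equation.
popt, _ = curve_fit(hill, dose, viability, p0=[0.0, 1.0, 1.0, 1.0], maxfev=10000)

# Nonparametric route: isotonic regression assumes only monotonicity,
# so it still behaves when the data lack clear upper/lower asymptotes.
iso = IsotonicRegression(increasing=False)
viability_iso = iso.fit_transform(dose, viability)
```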

  8. Chemometrics-assisted cyclodextrin-enhanced excitation-emission fluorescence spectroscopy for the simultaneous green determination of bisphenol A and nonylphenol in plastics.

    PubMed

    Vidal, Rocío B Pellegrino; Ibañez, Gabriela A; Escandar, Graciela M

    2015-10-01

    The aim of this work was to quantify two relevant priority chemicals, bisphenol A (BPA) and 4-nonylphenol (NP), coupling the sensitivity of fluorescence in organized media with the selectivity of multivariate calibration, measuring excitation-emission fluorescence matrices in an aqueous methyl-β-cyclodextrin solution. The studied priority pollutants are two of the xenoestrogens most frequently found in the environment, and are therefore of public health concern. The data were successfully processed by applying unfolded partial least-squares coupled to residual bilinearization (U-PLS/RBL), which provided the selectivity required to overcome the severe spectral overlapping among the analyte spectra and those of the interferents present in real samples. A rigorous International Union of Pure and Applied Chemistry (IUPAC)-consistent approach was applied for the calculation of the limits of detection. Values in the ranges of 1-2 and 4-14 ng mL(-1) were obtained in validation samples for BPA and NP, respectively, and low relative prediction errors between 3% and 8% were achieved. The proposed method was successfully applied to the determination of BPA and NP in different plastics. In positive samples, after a simple treatment with a small volume of ethanol at 35°C, concentrations were found to range from 26 to 199 ng g(-1) for BPA, and from 95 to 30,000 ng g(-1) for NP. Copyright © 2015 Elsevier B.V. All rights reserved.
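
    The "unfolded" half of U-PLS can be sketched in a few lines: each excitation-emission matrix is vectorized and a latent-variable regression is calibrated against known analyte levels. Everything below (array shapes, component count, concentration values) is synthetic; the residual bilinearization step that handles uncalibrated interferents in test samples is not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
# Synthetic stand-ins: 20 calibration samples, each an excitation-emission
# matrix of 30 excitation x 40 emission wavelengths, with known BPA levels.
eems = rng.random((20, 30, 40))
bpa_ng_ml = rng.random(20) * 10.0  # hypothetical concentrations

# Unfold each second-order EEM into a first-order vector, then calibrate.
X = eems.reshape(20, -1)
model = PLSRegression(n_components=3).fit(X, bpa_ng_ml)

# Prediction for a new sample (valid only if no new interferents appear;
# that is exactly the case RBL exists to handle).
test_eem = rng.random((30, 40))
print(model.predict(test_eem.reshape(1, -1)))
```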

  9. Three-step approach for prediction of limit cycle pressure oscillations in combustion chambers of gas turbines

    NASA Astrophysics Data System (ADS)

    Iurashev, Dmytro; Campa, Giovanni; Anisimov, Vyacheslav V.; Cosatto, Ezio

    2017-11-01

    Currently, gas turbine manufacturers frequently face the problem of strong acoustic combustion-driven oscillations inside combustion chambers. These combustion instabilities can cause extensive wear and sometimes even catastrophic damage to combustion hardware. Preventing them requires reliable and fast predictive tools. This work presents a three-step method to find the stability margins within which gas turbines can be operated without going into self-excited pressure oscillations. As a first step, a set of unsteady Reynolds-averaged Navier-Stokes simulations with the Flame Speed Closure (FSC) model implemented in the OpenFOAM® environment is performed to obtain the flame describing function of the combustor set-up. The standard FSC model is extended in this work to take into account the combined effect of strain and heat losses on the flame. As a second step, a linear three-time-lag-distributed model for a perfectly premixed swirl-stabilized flame is extended to the nonlinear regime. The factors causing changes in the model parameters when high-amplitude velocity perturbations are applied are analysed. As a third step, time-domain simulations employing a low-order network model implemented in Simulink® are performed. In this work, the proposed method is applied to a laboratory test rig. The method permits not only the frequencies of unstable acoustic oscillations to be computed, but also their amplitudes. Knowing the amplitudes of unstable pressure oscillations, it is possible to determine how harmful these oscillations are to the combustor equipment. The proposed method has a low cost because it does not require any license for computational fluid dynamics software.
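
    For orientation, a single distributed-time-lag flame model can be written in a few lines; the gain-and-phase curve it produces is the linear skeleton of a flame describing function. The paper's model superposes three such lag distributions and adds amplitude dependence, neither of which is reproduced here; all parameter values below are invented.

```python
import numpy as np

def flame_transfer(freq_hz, n=1.0, tau=3e-3, sigma=5e-4):
    """Gain and phase of F(w) = n * exp(-i*w*tau) * exp(-(w*sigma)^2 / 2),
    a single Gaussian-distributed time-lag flame model. The spread sigma
    damps the flame response at high frequency."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    F = n * np.exp(-1j * w * tau) * np.exp(-0.5 * (w * sigma) ** 2)
    return np.abs(F), np.angle(F)

gain, phase = flame_transfer(np.linspace(10.0, 1000.0, 100))
```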

  10. Exploring spatial patterns of sudden cardiac arrests in the city of Toronto using Poisson kriging and Hot Spot analyses

    PubMed Central

    2017-01-01

    Introduction Our study looked at out-of-hospital sudden cardiac arrest events in the City of Toronto. These are relatively rare events, yet they present a serious global clinical and public health problem. We report on the application of spatial methods and tools that, although relatively well known to geographers and natural resource scientists, need to become better known and used more frequently by health care researchers. Materials and methods Our data came from the population-based Rescu Epistry cardiac arrest database. We limited it to residents of the City of Toronto who experienced sudden arrest in 2010. The data were aggregated at the Dissemination Area level, and population rates were calculated. Poisson kriging was carried out on one year of data using three different spatial weights. Kriging estimates were then compared in Hot Spot analyses. Results Spatial analysis revealed that Poisson kriging can yield reliable rates using limited data of high quality. We observed the highest rates of sudden arrests in the north and central parts of Etobicoke, western parts of North York, and the central and southwestern parts of Scarborough, while the lowest rates were found in north and eastern parts of Scarborough, downtown Toronto, and East York, as well as east central parts of North York. The influence of spatial neighbours on the results did not extend past two rings of adjacent units. Conclusions Poisson kriging has the potential to be applied to a wide range of healthcare research, particularly on rare events. This approach can be successfully combined with other spatial methods. More applied research is needed to establish wider acceptance for this method, especially among healthcare researchers and epidemiologists. PMID:28672029

  11. Numerical investigation of a modified family of centered schemes applied to multiphase equations with nonconservative sources

    NASA Astrophysics Data System (ADS)

    Crochet, M. W.; Gonthier, K. A.

    2013-12-01

    Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
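
    As a point of reference for the scheme being modified, below is a minimal semidiscrete Kurganov-Tadmor step for the scalar Burgers equation on a periodic grid, with minmod-limited reconstruction and forward-Euler time stepping for brevity. The multiphase system and the nonconservative nozzling sources that are the subject of the paper are not included.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: zero at sign changes, else the smaller-magnitude slope."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kt_step(u, dx, dt):
    """One forward-Euler step of the semidiscrete Kurganov-Tadmor scheme for
    Burgers' equation u_t + (u^2/2)_x = 0 on a periodic grid."""
    f = lambda v: 0.5 * v * v
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))    # limited cell slopes
    ul = u + 0.5 * s                                     # state left of face j+1/2
    ur = np.roll(u, -1) - 0.5 * np.roll(s, -1)           # state right of face j+1/2
    a = np.maximum(np.abs(ul), np.abs(ur))               # local wave speed (f'(u) = u)
    H = 0.5 * (f(ul) + f(ur)) - 0.5 * a * (ur - ul)      # central-upwind numerical flux
    return u - dt / dx * (H - np.roll(H, 1))

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]
dt = 0.4 * dx                 # CFL-limited step for |u| <= 1
for _ in range(200):
    u = kt_step(u, dx, dt)    # steepens into a shock, captured without oscillations
```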

  12. Using Check-All-That-Apply (CATA) method for determining product temperature-dependent sensory-attribute variations: A case study of cooked rice.

    PubMed

    Pramudya, Ragita C; Seo, Han-Seok

    2018-03-01

    Temperatures of most hot or cold meal items change over the period of consumption, possibly influencing sensory perception of those items. Unlike temporal variations in sensory attributes, product temperature-induced variations have not received much attention. Using a Check-All-That-Apply (CATA) method, this study aimed to characterize variations in sensory attributes over a wide range of temperatures at which hot or cold foods and beverages may be consumed. Cooked milled rice, typically consumed at temperatures between 70 and 30°C in many rice-eating countries, was used as a target sample in this study. Two brands of long-grain milled rice were cooked and randomly presented at 70, 60, 50, 40, and 30°C. Thirty-five CATA terms for cooked milled rice were generated. Eighty-eight untrained panelists were asked to quickly select all the CATA terms that they considered appropriate to characterize sensory attributes of cooked rice samples presented at each temperature. Proportions of selection by panelists for 13 attributes differed significantly among the five temperature conditions. "Product temperature-dependent sensory-attribute variations" differed between the two brands of milled rice grains. Such variations in sensory attributes, resulting from both product temperature and rice brand, were more pronounced among panelists who consumed rice more frequently. In conclusion, the CATA method can be useful for characterizing "product temperature-dependent sensory-attribute variations" in cooked milled-rice samples. Further study is needed to examine whether the CATA method is also effective in capturing such variations in other hot or cold foods and beverages. Published by Elsevier Ltd.
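
    Testing whether an attribute's selection proportion differs across the five serving temperatures is commonly done with Cochran's Q on the panelist-by-temperature 0/1 matrix. The abstract does not state the exact test used, so the sketch below is illustrative, with invented data.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(X):
    """Cochran's Q test on a panelists-by-temperatures 0/1 CATA matrix:
    does the selection proportion of one attribute differ across the
    k serving temperatures? Returns (Q, p-value); df = k - 1."""
    X = np.asarray(X)
    n, k = X.shape
    col = X.sum(axis=0)          # selections per temperature
    row = X.sum(axis=1)          # selections per panelist
    q = (k - 1) * (k * np.sum(col ** 2) - col.sum() ** 2) / (
        k * row.sum() - np.sum(row ** 2))
    return q, chi2.sf(q, k - 1)

# Hypothetical: 6 panelists x 5 temperatures; 1 = attribute was checked.
X = [[1, 1, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [1, 1, 1, 1, 0],
     [1, 0, 0, 0, 0],
     [1, 1, 1, 0, 0],
     [0, 1, 0, 0, 0]]
print(cochrans_q(X))
```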

  13. Transformation-cost time-series method for analyzing irregularly sampled data

    NASA Astrophysics Data System (ADS)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations (with associated costs) to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
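
    The core idea (replacing interpolation with a cost of transforming one segment into the next) can be caricatured as follows. The real TACTS cost optimizes the matching between points and weighs time shifts, amplitude changes, and point creation/deletion; this greedy in-order version, with made-up weights, only conveys the flavor.

```python
import numpy as np

def transform_cost(seg_a, seg_b, lam_t=1.0, lam_x=1.0, lam_0=2.0):
    """Simplified transformation cost between two time-series segments.

    Each segment is a list of (time, value) pairs. Points are matched in
    order; matched pairs pay for time shifts and amplitude changes, and
    unmatched points pay a fixed creation/deletion cost lam_0. The actual
    TACTS cost optimizes the matching, which this greedy sketch does not.
    """
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    m = min(len(a), len(b))
    cost = np.sum(lam_t * np.abs(a[:m, 0] - b[:m, 0])
                  + lam_x * np.abs(a[:m, 1] - b[:m, 1]))
    return cost + lam_0 * abs(len(a) - len(b))

# Sliding over an irregular series yields one cost per segment pair, i.e.
# a regularly sampled cost series analyzable with standard methods.
print(transform_cost([(0.0, 1.2), (0.7, 0.4)],
                     [(0.1, 1.0), (0.9, 0.5), (1.4, 0.2)]))
```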

  14. A parametric method for assessing diversification-rate variation in phylogenetic trees.

    PubMed

    Shah, Premal; Fitzpatrick, Benjamin M; Fordyce, James A

    2013-02-01

    Phylogenetic hypotheses are frequently used to examine variation in rates of diversification across the history of a group. Patterns of diversification-rate variation can be used to infer underlying ecological and evolutionary processes responsible for patterns of cladogenesis. Most existing methods examine rate variation through time. Methods for examining differences in diversification among groups are more limited. Here, we present a new method, parametric rate comparison (PRC), that explicitly compares diversification rates among lineages in a tree using a variety of standard statistical distributions. PRC can identify subclades of the tree where diversification rates are at variance with the remainder of the tree. A randomization test can be used to evaluate how often such variance would appear by chance alone. The method also allows for comparison of diversification rate among a priori defined groups. Further, the application of the PRC method is not restricted to monophyletic groups. We examined the performance of PRC using simulated data, which showed that PRC has acceptable false-positive rates and statistical power to detect rate variation. We apply the PRC method to the well-studied radiation of North American Plethodon salamanders, and support the inference that the large-bodied Plethodon glutinosus clade has a higher historical rate of diversification compared to other Plethodon salamanders. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.
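
    A stripped-down analogue of the PRC recipe: fit a standard distribution (here exponential) to branching waiting times inside and outside a focal group, form a likelihood-ratio statistic for "two rates" versus "one rate", and calibrate it by randomization. PRC itself supports several candidate distributions and non-monophyletic groupings; the waiting times below are invented.

```python
import numpy as np

def rate_loglik(waits):
    """Maximized log-likelihood of exponential waiting times (rate = 1/mean)."""
    waits = np.asarray(waits, dtype=float)
    rate = 1.0 / waits.mean()
    return np.sum(np.log(rate) - rate * waits)

def rate_comparison(clade, rest, n_perm=2000, seed=0):
    """Likelihood-ratio statistic for 'two rates' vs 'one rate', with a
    permutation p-value standing in for PRC's randomization test."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([clade, rest])
    obs = rate_loglik(clade) + rate_loglik(rest) - rate_loglik(pooled)
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(pooled)
        stat = (rate_loglik(p[:len(clade)]) + rate_loglik(p[len(clade):])
                - rate_loglik(pooled))
        hits += stat >= obs
    return obs, hits / n_perm

# Hypothetical waiting times (Myr) between splits: a fast focal clade vs the rest.
print(rate_comparison(np.array([0.5, 0.4, 0.7, 0.3]),
                      np.array([1.8, 2.2, 1.5, 2.9, 2.1])))
```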

  15. Transformation-cost time-series method for analyzing irregularly sampled data.

    PubMed

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations (with associated costs) to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  16. Vineyard Colonization by Hyalesthes obsoletus (Hemiptera: Cixiidae) Induced by Stinging Nettle Cut Along Surrounding Ditches.

    PubMed

    Mori, N; Pozzebon, A; Duso, C; Reggiani, N; Pavan, F

    2016-02-01

    Stinging nettle (Urtica dioica L.) is the most important host plant both for the phytoplasma associated with Bois noir disease of the grapevine and for its vector Hyalesthes obsoletus Signoret (Hemiptera: Cixiidae). Vector abundance in vineyards is favored by stinging nettle growing in surrounding areas. Nettle control by herbicides or cutting can reduce the vector population in vineyards. However, chemical weeding can cause environmental problems. Many authors suggest that stinging nettle control applied during the H. obsoletus flight could force adults to migrate into vineyards. We evaluated whether cutting nettle growing along ditches during adult flight favors vineyard colonization by H. obsoletus. Three different weed management regimes ("no cuts," "one cut" just before the beginning of adult flight, and "frequent cuts" over the whole vegetative season) were applied to the herbaceous vegetation in ditches bordering two vineyards. The flight dynamics of H. obsoletus were recorded by placing yellow sticky traps on the vegetation along the ditches and at different positions in the vineyards. Frequent stinging nettle cuts (compared with a single cut) in surrounding areas favored the dispersion of vectors inside the vineyards. Stinging nettle control should therefore be based on the integration of a single herbicide application before H. obsoletus emergence followed by frequent cuts, to minimize the negative side effects of chemical weeding. In organic viticulture, a frequent-cuts strategy should avoid cuts during the H. obsoletus flight period, at least in the first year of adoption. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Mixed-Methods Research in the Discipline of Nursing.

    PubMed

    Beck, Cheryl Tatano; Harrison, Lisa

    2016-01-01

    In this review article, we examined the prevalence and characteristics of 294 mixed-methods studies in the discipline of nursing. Creswell and Plano Clark's typology was most frequently used along with concurrent timing. Bivariate statistics was most often the highest level of statistics reported in the results. As for qualitative data analysis, content analysis was most frequently used. The majority of nurse researchers did not specifically address the purpose, paradigm, typology, priority, timing, interaction, or integration of their mixed-methods studies. Strategies are suggested for improving the design, conduct, and reporting of mixed-methods studies in the discipline of nursing.

  18. Combining Ordinary Kriging with wind directions to identify sources of industrial odors in Portland, Oregon.

    PubMed

    Eckmann, Ted C; Wright, Samantha G; Simpson, Logan K; Walker, Joe L; Kolmes, Steven A; Houck, James E; Velasquez, Sandra C

    2018-01-01

    This study combines Ordinary Kriging, odor monitoring, and wind direction data to demonstrate how these elements can be applied to identify the source of an industrial odor. The specific case study used as an example of how to address this issue was the University Park neighborhood of Portland, Oregon (USA), where residents frequently complain about industrial odors and suspect the main source to be a nearby Daimler Trucks North America LLC manufacturing plant. We collected 19,665 odor observations plus 105,120 wind measurements, using an automated weather station to measure winds in the area at five-minute intervals, logging continuously from December 2014 through November 2015, while we also measured odors at 19 locations, three times per day, using methods from ASTM International (formerly the American Society for Testing and Materials). Our results quantify how winds vary with season and time of day when industrial odors were observed versus when they were not observed, while also mapping spatiotemporal patterns in these odors using Ordinary Kriging. Our analyses show that industrial odors were detected most frequently to the northwest of the Daimler plant, mostly when winds blew from the southeast, suggesting Daimler's facility is a likely source for much of this odor.
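
    For readers who want to reproduce the interpolation step, Ordinary Kriging of point observations onto a grid is available in the pykrige package. The coordinates, odor fractions, and variogram choice below are placeholders, not the study's data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

# Placeholder odor observations: site coordinates (km) and the fraction
# of monitoring checks at each site on which an odor was detected.
x = np.array([0.2, 1.1, 1.9, 0.7, 2.4, 1.5])
y = np.array([0.4, 0.9, 0.3, 1.8, 1.2, 2.1])
odor = np.array([0.10, 0.35, 0.20, 0.55, 0.15, 0.40])

ok = OrdinaryKriging(x, y, odor, variogram_model="spherical")
gridx = np.arange(0.0, 2.6, 0.1)
gridy = np.arange(0.0, 2.6, 0.1)
z, ss = ok.execute("grid", gridx, gridy)  # interpolated surface + kriging variance
```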

  19. Rapid identification of Enterobacter hormaechei and Enterobacter cloacae genetic cluster III.

    PubMed

    Ohad, S; Block, C; Kravitz, V; Farber, A; Pilo, S; Breuer, R; Rorman, E

    2014-05-01

    Enterobacter cloacae complex bacteria are of both clinical and environmental importance. Phenotypic methods are unable to distinguish between some of the species in this complex, which often renders their identification incomplete. The goal of this study was to develop molecular assays to identify Enterobacter hormaechei and Ent. cloacae genetic cluster III, which are relatively frequently encountered in clinical material. The molecular assays developed in this study are based on qPCR technology and serve to identify both Ent. hormaechei and Ent. cloacae genetic cluster III. qPCR results were compared to hsp60 sequence analysis. Most clinical isolates were assigned to Ent. hormaechei subsp. steigerwaltii and Ent. cloacae genetic cluster III. The latter was proportionately more frequently isolated from bloodstream infections than from other material (P < 0.05). The qPCR assays detecting Ent. hormaechei and Ent. cloacae genetic cluster III demonstrated high sensitivity and specificity. The presented qPCR assays allow accurate and rapid identification of clinical isolates of the Ent. cloacae complex. The improved identifications obtained can specifically assist analysis of Ent. hormaechei and Ent. cloacae genetic cluster III in nosocomial outbreaks and can promote rapid environmental monitoring. An association was observed between Ent. cloacae cluster III and systemic infection that deserves further attention. © 2014 The Society for Applied Microbiology.

  20. Successful Strategies to Engage Research Partners for Translating Evidence into Action in Community Health: A Critical Review

    PubMed Central

    Salsberg, Jon; Parry, David; Pluye, Pierre; Macridis, Soultana; Herbert, Carol P.; Macaulay, Ann C.

    2015-01-01

    Objectives. To undertake a critical review describing key strategies supporting the development of participatory research (PR) teams to engage partners for the creation and translation of action-oriented knowledge. Methods. Sources are four leading PR practitioners identified via bibliometric analysis. The authors' publications from January 1995 to October 2009 were identified in PubMed, Embase, ISI Web of Science and CAB databases, and books. Works were limited to those containing a process description of a research project in which the practitioners were first, second, third, or last author. Results. Adapting and applying the "Reliability Tested Guidelines for Assessing Participatory Research Projects" to the retained records identified five key strategies: developing advisory committees of researchers and intended research users; developing research agreements; using formal and informal group facilitation techniques; hiring co-researchers/partners from the community; and ensuring frequent communication. Other, less frequently mentioned strategies were also identified. Conclusion. This review is the first use of these guidelines to identify key strategies supporting PR projects. They proved effective at identifying and evaluating engagement strategies as reported by completed research projects. Adapting these guidelines identified gaps where the tool was unable to assess fundamental PR elements of power dynamics, equity of resources, and member turnover. Our resulting template serves as a new tool to measure partnerships. PMID:25815016

  1. Tracing the evolution of critical evaluation skills in students' use of the Internet.

    PubMed

    Blumberg, P; Sparks, J

    1999-04-01

    This paper documents the evolving uses of the Internet made by public health graduate students and traces the development of their search methods and critical evaluative criteria. Early in the first semester and again six months later, twenty-four graduate students in a problem-based learning curriculum, which emphasizes evidence-based critical thinking skills, were required to describe their most helpful resources and to evaluate these resources critically. The answers were coded for the types of resources the students used, how frequently they were used, and why they were used. Student perception of the usefulness of resources, especially the Internet, and ability to evaluate these resources critically changed greatly. Initially, 96% of the students stated that the Internet was their most helpful resource. Six months later, these students continued to use the Internet; however, it was not their most useful source. At the later point, students had very specific uses for the Internet. Their most frequently used evaluation criterion was the reliability and objectivity of the source of the information. By the end of the first year of study, the majority of the students demonstrated an understanding of the principles of evidence-based practice and applied them to their research and analysis of information resources.

  2. Fetal gender and pregnancy outcomes in Libya: a retrospective study

    PubMed Central

    Khalil, Mounir M.; Alzahra, Esgair

    2013-01-01

    Objective The relationship between pregnancy outcomes and fetal gender is well reported from different areas of the world, but not from Africa. In this study, we try to understand whether the recorded phenomenon of association of adverse pregnancy outcomes with a male fetus applies to our population. Materials and methods A total of 29,140 patient records from 2009 and 2010 were retrieved from Aljalaa Maternity Hospital, Tripoli, Libya. Analysis was carried out to find the correlation between fetal gender and different pregnancy outcomes. Results A male fetus was associated with an increased incidence of gestational diabetes mellitus (odds ratio 1.4), preterm delivery (6.7% for males, 5.5% for females, odds ratio 1.24), cesarean section (23.9% for males, 20% for females, odds ratio 1.25), and instrumental vaginal delivery (4.4% for males, 3.1% for females, odds ratio 1.48), p<0.005. Preeclampsia was more frequent among preterm females and postterm males, p<0.005. It was also more frequent in male-bearing primigravids, p<0.01. Conclusion We confirm the existence of an adverse effect of a male fetus on pregnancy and labor in our population. We recommend further research to understand the mechanisms and clinical implications of this phenomenon. PMID:23308081

  3. Combining Ordinary Kriging with wind directions to identify sources of industrial odors in Portland, Oregon

    PubMed Central

    Kolmes, Steven A.; Houck, James E.; Velasquez, Sandra C.

    2018-01-01

    This study combines Ordinary Kriging, odor monitoring, and wind direction data to demonstrate how these elements can be applied to identify the source of an industrial odor. The specific case study used as an example of how to address this issue was the University Park neighborhood of Portland, Oregon (USA), where residents frequently complain about industrial odors and suspect the main source to be a nearby Daimler Trucks North America LLC manufacturing plant. We collected 19,665 odor observations plus 105,120 wind measurements, using an automated weather station to measure winds in the area at five-minute intervals, logging continuously from December 2014 through November 2015, while we also measured odors at 19 locations, three times per day, using methods from ASTM International (formerly the American Society for Testing and Materials). Our results quantify how winds vary with season and time of day when industrial odors were observed versus when they were not observed, while also mapping spatiotemporal patterns in these odors using Ordinary Kriging. Our analyses show that industrial odors were detected most frequently to the northwest of the Daimler plant, mostly when winds blew from the southeast, suggesting Daimler's facility is a likely source for much of this odor. PMID:29385136

  4. Characterizing urban hydrodynamic models in densely settled river-corridors: Lessons from Jakarta

    NASA Astrophysics Data System (ADS)

    Shaad, K.; Ninsalam, Y.; Padawangi, R.; Burlando, P.

    2016-12-01

    The nature and pace of urbanization in South and Southeast Asia has created unique circumstances for the interaction between social and ecological systems linked to water resources: growing population density, frequent and extensive modification of the floodplain, and governance challenges leave large segments of the settled regions exposed to water security issues and flooding risks. The densely settled river corridor in Jakarta, with nearly 590 km of waterfront exposed to frequent flooding, captures the scale and complexity typical of these systems. Developing models that can help improve our insights into these urban areas remains a challenge. Here, we present our attempts to apply high-resolution aerial and ground-based mapping methods, alongside shallow groundwater monitoring and household surveys, to characterize hydrodynamic models of varying complexity for a 7 km stretch of the Ciliwung River in the center of Jakarta. We explore the uncertainty associated with obtaining a "hydraulically representative" ground description and the influence of the representation of structures on flood propagation over the short term, while linking it to the diffusive forcings from settlement acting on the floodplain-river interaction over the long term. Connecting flooding with water availability and contamination in this way, we speculate on the ability to scale these approaches and technologies beyond the limits of the test site.

  5. Molecular typing, antibiotic resistance, virulence gene and biofilm formation of different Salmonella enterica serotypes.

    PubMed

    Turki, Yousra; Mehr, Ines; Ouzari, Hadda; Khessairi, Amel; Hassen, Abdennaceur

    2014-01-01

    Salmonella enterica isolates representing serotypes commonly isolated in Tunisia were analyzed using genotyping and phenotyping methods. ERIC- and ITS-PCR applied to 48 Salmonella spp. isolates revealed the presence of 12 and 10 different profiles, respectively. The distribution of profiles among serotypes demonstrated the presence of strains showing an identical fingerprinting pattern. All Salmonella strains used in this study were positive for the sdiA gene. Three Salmonella isolates belonging to serotypes Anatum, Enteritidis and Amsterdam were negative for the invA gene. The spvC gene was detected in thirteen isolates belonging to serotypes Anatum, Typhimurium, Enteritidis, Gallinarum and Montevideo. Antibiotic resistance was frequent among the recovered Salmonella isolates belonging to serotypes Anatum, Typhimurium, Enteritidis, Zanzibar and Derby. The majority of these isolates exhibited resistance to at least two antibiotic families. Four multidrug-resistant isolates were recovered from food animals and poultry products. These isolates exhibited resistance not only to tetracycline, sulphonamides, and ampicillin, but also to fluoroquinolones. Common resistance to nalidixic acid, ciprofloxacin and ofloxacin was also observed in two S. Anatum and S. Zanzibar strains isolated from raw meat and poultry. Furthermore, wastewater and human isolates exhibited frequent resistance to nalidixic acid and tetracycline. Of all isolates, 33.5% were able to form biofilm.

  6. Surface Plasmon Resonance or Biocompatibility—Key Properties for Determining the Applicability of Noble Metal Nanoparticles

    PubMed Central

    Craciun, Ana Maria; Focsan, Monica; Vulpoi, Adriana

    2017-01-01

    Metal nanoparticles, and noble metal nanoparticles in particular, represent a special class of materials which can be applied as prepared or as composite materials. In most cases, two main properties are exploited in a vast number of publications: biocompatibility and surface plasmon resonance (SPR). For instance, these two important properties are exploitable in plasmonic diagnostics, bioactive glasses/glass ceramics and catalysis. The most frequently applied noble metal nanoparticle that is universally applicable in all the previously mentioned research areas is gold, although in the case of bioactive glasses/glass ceramics, silver and copper nanoparticles are more frequently applied. The composite partners/supports/matrix/scaffolds for these nanoparticles can vary depending on the chosen application (biopolymers; semiconductor-based composites: TiO2, WO3, Bi2WO6; biomaterials: SiO2- or P2O5-based glasses and glass ceramics; polymers: polyvinyl alcohol (PVA), gelatin, polyethylene glycol (PEG), polylactic acid (PLA), etc.). The present review targets the scientific work on these materials' applicability and the development of new approaches, focusing in several cases on the functioning mechanism and on the role of the noble metal. PMID:28773196

  7. Tuboimpedance: A New Test of Eustachian Tube Function.

    PubMed

    Smith, Matthew E; Zou, Charlie C; Blythe, Andrew J C; Tysome, James R

    2017-04-01

    Objective Eustachian tube (ET) dysfunction is most frequently caused by a failure of the ET to adequately open; however, there is currently no reliable method of assessing this. Tubomanometry has recently shown good interindividual repeatability as a measure of ET function by measuring middle ear pressure after the application of regulated nasopharyngeal pressures during swallowing. We present the first reports of a novel test: middle ear impedance measurements during standard nasopharyngeal pressure application (tuboimpedance). We assess repeatability in healthy ears and any advantages over tubomanometry. Study Design Exploratory cohort diagnosis study. Setting Tertiary referral center. Subjects Twenty screened, healthy ears (10 volunteers). Methods Tubomanometry and tuboimpedance tests were performed while individuals swallowed with applied nasopharyngeal pressures of 20, 30, 40, and 50 mbar. Eustachian tube opening detection rate and test repeatability (measured by intraclass correlation coefficient [ICC]) for immediate and delayed repeats at each pressure were compared. Results ET opening was detected more frequently using tuboimpedance, with a 100% detection rate using a nasopharyngeal pressure of 30 mbar or more, compared to 88% to 96% with tubomanometry. Detection of ET opening at 20 mbar was possible with tuboimpedance. Repeatability of both tests was mostly strong (ICC >0.7) for both immediate and delayed repeats. Repeatability for the tubomanometry R value was only fair to moderate. Conclusion Tuboimpedance may provide a repeatable measure of ET opening that is easier to perform due to lower nasopharyngeal pressures required and fewer issues with poor ear-probe sealing. Further assessment in patients with different forms of ET dysfunction is required.

  8. Dermoscopy of pigmented lesions on mucocutaneous junction and mucous membrane.

    PubMed

    Lin, J; Koga, H; Takata, M; Saida, T

    2009-12-01

    The dermoscopic features of pigmented lesions on the mucocutaneous junction and mucous membrane are different from those on hairy skin, and differentiation between benign lesions and malignant melanomas of these sites is often difficult. Our aims were to define the dermoscopic patterns of lesions on the mucocutaneous junction and mucous membrane and to assess the applicability of standard dermoscopic algorithms to these lesions. An unselected consecutive series of 40 lesions on the mucocutaneous junction and mucous membrane was studied. All the lesions were imaged using dermoscopy devices, analysed for dermoscopic patterns and scored with algorithms including the ABCD rule, Menzies method, 7-point checklist, 3-point checklist and the CASH algorithm. Benign pigmented lesions of the mucocutaneous junction and mucous membrane frequently presented a dotted-globular pattern (25%), a homogeneous pattern (25%), a fish scale-like pattern (18.8%) and a hyphal pattern (18.8%), while melanomas of these sites showed a multicomponent pattern (75%) and a homogeneous pattern (25%). The fish scale-like pattern and hyphal pattern were considered to be variants of the ring-like pattern. The sensitivities of the ABCD rule, Menzies method, 7-point checklist, 3-point checklist and CASH algorithm in diagnosing mucosal melanomas were 100%, 100%, 63%, 88% and 100%, and the specificities were 100%, 94%, 100%, 94% and 100%, respectively. The ring-like pattern and its variants (fish scale-like pattern and hyphal pattern) are frequently observed, as are the dotted-globular pattern and homogeneous pattern, in mucosal melanotic macules. The algorithms for pigmented lesions on hairy skin also apply to those on the mucocutaneous junction and mucous membrane with high sensitivity and specificity.

  9. Secondary invasions of noxious weeds associated with control of invasive Tamarix are frequent, idiosyncratic and persistent

    USGS Publications Warehouse

    González, Eduardo; Sher, Anna A.; Anderson, Robert M.; Bay, Robin F.; Bean, Daniel W.; Bissonnete, Gabriel J.; Cooper, David J.; Dohrenwend, Kara; Eichhorst, Kim D.; El Waer, Hisham; Kennard, Deborah K.; Harms-Weissinger, Rebecca; Henry, Annie L.; Makarick, Lori J.; Ostoja, Steven M.; Reynolds, Lindsay V.; Robinson, W. Wright; Shafroth, Patrick B.; Tabacchi, Erich

    2017-01-01

    Control of invasive species within ecosystems may induce secondary invasions of non-target invaders replacing the first alien. We used four plant species listed as noxious by local authorities in riparian systems to discern 1) whether the severity of these secondary invasions was related to the control method applied to the first alien, and 2) which secondary-invader species persisted over time. In a collaborative study by 16 research institutions, we monitored plant species composition following control of non-native Tamarix trees along southwestern U.S. rivers using defoliation by an introduced biocontrol beetle and three physical removal methods (mechanical removal using saws, heavy machinery, and burning) in 244 treated and 79 untreated sites across six U.S. states. Physical removal favored secondary invasions immediately after Tamarix removal (0-3 yrs.), while in the biocontrol treatment, secondary invasions manifested later (>5 yrs.). Within this general trend, the response of weeds to control was idiosyncratic, dependent on treatment type and invader. Two annual tumbleweeds that reproduce only by seed (Bassia scoparia and Salsola tragus) peaked immediately after physical Tamarix removal and persisted over time, even after herbicide application. Acroptilon repens, a perennial forb that vigorously reproduces by rhizomes, and Bromus tectorum, a very frequent annual grass before removal that reproduces only by seed, were most successful at biocontrol sites and progressively spread as the canopy layer opened. These results demonstrate that strategies to control Tamarix affect secondary invasions differently among species and that time since disturbance is an important, generally overlooked, factor affecting response.

  10. Analysis of Trace Siderophile Elements at High Spatial Resolution Using Laser Ablation ICP-MS

    NASA Astrophysics Data System (ADS)

    Campbell, A. J.; Humayun, M.

    2006-05-01

    Laser ablation inductively coupled plasma mass spectrometry is an increasingly important method of performing spatially resolved trace element analyses. Over the last several years we have applied this technique to measure siderophile element distributions at the ppm level in a variety of natural and synthetic samples, especially metallic phases in meteorites and experimental run products intended for trace element partitioning studies. These samples frequently require trace element analyses to be made at a finer spatial resolution (25 microns or better) than is typically attained using LA-ICP-MS. In this presentation we review analytical protocols that were developed to optimize the LA-ICP-MS measurements for high spatial resolution. Particular attention is paid to the trade-offs involving sensitivity, ablation pit depth and diameter, background levels, and number of elements measured. To maximize signal/background ratios and avoid difficulties associated with ablating to depths greater than the ablation pit diameter, measurement involved integration of rapidly varying, transient but well-behaved signals. The abundances of platinum group elements and other siderophile elements in ferrous metals were calibrated against well-characterized standards, including iron meteorites and NIST certified steels. The calibrations can be set against the known abundance of an independently determined element, but normalization to 100 percent can also be employed and was more useful in many circumstances. Evaluation of uncertainties incorporated counting statistics as well as a measure of instrumental uncertainty, determined by replicate analyses of the standards. These methods have led to a number of insights into the formation and chemical processing of metal in the early solar system.
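
    The normalization-to-100-percent calibration route mentioned above reduces to simple arithmetic: divide background-corrected count rates by per-element sensitivities derived from standards, then scale so the quantified elements sum to 100 wt%. The counts and sensitivities below are invented for illustration.

```python
import numpy as np

def quantify(counts, sensitivity):
    """Convert background-corrected count rates to concentrations and
    normalize to 100 wt%. An internal-standard route would instead anchor
    the scale to one independently determined element."""
    conc = np.asarray(counts, dtype=float) / np.asarray(sensitivity, dtype=float)
    return 100.0 * conc / conc.sum()

# Hypothetical Fe/Ni/Co counts and counts-per-wt% sensitivities for a metal grain.
print(quantify([9.1e6, 6.4e5, 4.8e4], [1.0e5, 1.1e5, 0.9e5]))
```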

  11. Methylation profile analysis of DNA repair genes in hepatocellular carcinoma with MS-MLPA.

    PubMed

    Ozer, Ozge; Bilezikci, Banu; Aktas, Sema; Sahin, Feride I

    2013-12-01

    Hepatocellular carcinoma (HCC) is one of the rare tumors with well-defined risk factors. The multifactorial etiology of HCC can be explained by its complex molecular pathogenesis. In the current study, the methylation status of 7 genes involved in DNA repair mechanisms, namely MLH1, PMS2, MSH6, MSH2, MGMT, MSH3, and MLH3, was investigated in tumor samples from HCC patients using the methylation-specific multiplex ligation-dependent probe amplification (MS-MLPA) method, and the results were correlated with available clinical findings. The most common etiological factor in these cases was the presence of hepatitis B alone (47.2%). Among the 56 cases studied, promoter methylation was detected in at least one of the genes in 27 (48.2%) cases, in only 1 gene in 13 (23.2%) cases, and in >1 gene in 14 (25%) cases. Of the 7 genes investigated, methylation was most frequently observed in MSH3, in 14 (25%) cases. Methylation of at least 1 gene was significantly more frequent in patients with single tumors than with multifocal tumors. There were significant differences regarding hepatitis B status, Child class, tumor number, grade, and TNM stage in cases where PMS2 methylation was detected. Our results suggest that methylation of genes involved in mismatch repair may play a role in the pathogenesis of HCC, and that evaluating changes in multiple genes in these pathways simultaneously would be more informative. Despite being a robust and relatively inexpensive method, the MS-MLPA assay could be applied more extensively with improvements in its currently intricate data analysis component.

  12. Assessment of Ionospheric Spatial Decorrelation for CAT I GBAS in Equatorial Region at Nominal days: Data Selection and Bias Removal

    NASA Astrophysics Data System (ADS)

    Chang, H.; Lee, J.

    2017-12-01

    Ground-based augmentation systems (GBAS) for the Global Positioning System provide the user with an integrity parameter, the standard deviation of the vertical ionospheric gradient (σvig), to ensure integrity. The σvig value currently used in CAT I GBAS is derived from data collected at reference stations on the US mainland and is 4 mm/km. However, because the ionosphere near the geomagnetic equator is more active than at mid-latitudes, there are limits to applying the mid-latitude σvig to the equatorial region. Also, because daytime and nighttime ionospheric behavior differ significantly in the equatorial region, σvig must be applied separately by time of day. This study presents a method for obtaining the standard deviation of the vertical ionospheric gradient in the equatorial region on nominal days, taking the equatorial ionosphere environment into account. We used data collected on nominal days in the Brazilian region near the geomagnetic equator. One feature distinguishing the equatorial ionosphere from the mid-latitude ionosphere is that scintillation events occur frequently. Therefore, the days used for the analysis were selected not only by the geomagnetic indices Kp (planetary K index) and Dst (disturbance storm time index), but also by the S4 scintillation index. In addition, unlike the ionospheric delay bias elimination method used in the mid-latitude region, the Long-Term Ionospheric Anomaly Monitor (LTIAM) used in this study applies different bias removal standards according to the ionospheric pierce point (IPP) distance, in consideration of ionospheric activity. As a result, σvig values conservative enough to bound ionospheric spatial decorrelation for the equatorial region on nominal days are 8 mm/km for daytime and 19 mm/km for nighttime. Therefore, for CAT I GBAS operation in the equatorial region, a daytime σvig twice as large as the mid-latitude value is needed, and a nighttime σvig about twice the daytime value is needed.
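
    A bare-bones version of the σvig estimate looks like this: difference the ionospheric delays seen at two stations, map slant to vertical with a thin-shell obliquity factor, divide by the baseline, and take the standard deviation. Real processing (as in LTIAM) also removes receiver and satellite biases and screens the data, which is omitted; all numbers below are invented.

```python
import numpy as np

RE_KM, SHELL_KM = 6371.0, 350.0  # Earth radius and an assumed thin-shell height

def sigma_vig(delay_a_m, delay_b_m, baseline_km, elevation_deg):
    """Spread (mm/km) of the vertical ionospheric gradient between two
    stations: difference the slant delays, convert slant to vertical with
    a thin-shell obliquity factor, divide by the baseline."""
    cos_e = np.cos(np.radians(elevation_deg))
    obliq = np.sqrt(1.0 - (RE_KM * cos_e / (RE_KM + SHELL_KM)) ** 2)
    grads = (np.asarray(delay_a_m) - np.asarray(delay_b_m)) * obliq / baseline_km
    return np.std(grads * 1000.0)  # convert m/km -> mm/km

# Invented epochs of slant delays (m) at two stations 50 km apart, 45 deg elevation.
print(sigma_vig([5.1, 5.3, 5.0, 5.6], [4.9, 5.2, 4.6, 5.1], 50.0, 45.0))
```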

  13. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    PubMed Central

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
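
    To give a flavor of frequent-pattern mining over trees, the toy below treats each tree as its set of clades (taxon subsets) and reports the maximal clades occurring in at least a given fraction of the trees. True maximal frequent subtrees are induced subtrees rather than bare clade sets, so this is only an analogy; the three input "trees" are invented.

```python
def frequent_clades(trees_as_clades, min_support=0.5):
    """Toy frequent-pattern miner over phylogenetic trees.

    Each tree is represented by its set of clades (each clade a frozenset
    of taxa). A clade is 'frequent' if it appears in at least min_support
    of the trees; maximal frequent clades are those not contained in a
    larger frequent one.
    """
    n = len(trees_as_clades)
    counts = {}
    for clades in trees_as_clades:
        for c in clades:
            counts[c] = counts.get(c, 0) + 1
    freq = {c for c, k in counts.items() if k / n >= min_support}
    return {c for c in freq if not any(c < d for d in freq)}  # keep maximal only

trees = [
    {frozenset("AB"), frozenset("ABC"), frozenset("ABCD")},
    {frozenset("AB"), frozenset("ABD"), frozenset("ABCD")},
    {frozenset("AB"), frozenset("ABC"), frozenset("ABCD")},
]
print(frequent_clades(trees, 0.6))  # -> {frozenset({'A', 'B', 'C', 'D'})}
```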

  14. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    PubMed

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  15. Developing "Personality" Taxonomies: Metatheoretical and Methodological Rationales Underlying Selection Approaches, Methods of Data Generation and Reduction Principles.

    PubMed

    Uher, Jana

    2015-12-01

    Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".

  16. Applying a gender lens on human papillomavirus infection: cervical cancer screening, HPV DNA testing, and HPV vaccination

    PubMed Central

    2013-01-01

    Background Our aim is to provide a state-of-the-art overview of knowledge on sex (biological) and gender (sociocultural) aspects of Human papillomavirus (HPV) and cervical cancer for educational purposes. Considerable disparities exist in cervical cancer incidence between different subgroups of women. We provide an outline of the crucial issues and debates based on the recent literature published in leading gender medicine journals. Intersectionality was applied in order to help categorise the knowledge. Methods Key terms (HPV, cervical cancer) were screened in Gender Medicine, Journal of Women's Health and Women & Health from January 2005 to June 2012. Additional searches were conducted for topics insufficiently covered, such as HPV vaccination of boys. In total, 71 publications were included (56 original papers, four reviews, six reports, three commentaries, one editorial and one policy statement). Results Research reveals complexity in the way various subgroups of women adhere to cervical screening. Less educated women, older women, uninsured women, homeless women, migrant women facing language barriers, women who have sex with women and obese women participate in Pap smears less frequently. A series of barriers can act to impede decisions to vaccinate against HPV. Conclusions Both male- and female-controlled preventive methods and treatment measures should be developed in order to tackle HPV infection, and different strategies are needed for different subgroups. Substantial discussion of and research on alternative methods of prevention has been, and remains, lacking. In future research, sex and gender aspects of HPV-related diseases of boys and men as well as subgroup differences in HPV risk need to be addressed. PMID:23394214

  17. Evaluation of a Mobile Application for Multiplier Method Growth and Epiphysiodesis Timing Predictions.

    PubMed

    Wagner, Pablo; Standard, Shawn C; Herzenberg, John E

    The multiplier method (MM) is frequently used to predict limb-length discrepancy and timing of epiphysiodesis. The traditional MM uses complex formulae and requires a calculator. A mobile application was developed in an attempt to simplify and streamline these calculations. We compared the accuracy and speed of using the traditional pencil and paper technique with that using the Multiplier App (MA). After attending a training lecture and a hands-on workshop on the MM and MA, 30 resident surgeons were asked to apply the traditional MM and the MA at different weeks of their rotations. They were randomized as to the method they applied first. Subjects performed calculations for 5 clinical exercises that involved congenital and developmental limb-length discrepancies and timing of epiphysiodesis. The amount of time required to complete the exercises and the accuracy of the answers were evaluated for each subject. The test subjects answered 60% of the questions correctly using the traditional MM and 80% of the questions correctly using the MA (P=0.001). The average amount of time to complete the 5 exercises with the MM and MA was 22 and 8 minutes, respectively (P<0.0001). Several reports state that the traditional MM is quick and easy to use. Nevertheless, even in the most experienced hands, performing the calculations in clinical practice can be time-consuming. Errors may result from choosing the wrong formulae and from performing the calculations by hand. Our data show that the MA is simpler, more accurate, and faster than the traditional MM from a practical standpoint. Level II.
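
    The arithmetic behind the multiplier method is short enough to show: for congenital discrepancies, the discrepancy at maturity is the current discrepancy times the multiplier M for the child's age and sex, and epiphysiodesis is timed so that the growth remaining in the chosen physis matches the desired correction. M and the physis growth fraction come from published tables; the values below are illustrative only, not taken from those tables.

```python
def lld_at_maturity(current_lld_cm, multiplier):
    """Predicted congenital limb-length discrepancy at skeletal maturity:
    the current discrepancy scales by the age- and sex-specific multiplier M."""
    return current_lld_cm * multiplier

def growth_remaining(current_length_cm, multiplier, physis_fraction):
    """Growth remaining from one physis: total remaining growth L*(M-1)
    times the fraction of limb growth that physis contributes."""
    return current_length_cm * (multiplier - 1.0) * physis_fraction

# Illustrative numbers only (not from the multiplier tables):
print(lld_at_maturity(3.0, 1.43))          # 3 cm now -> 4.29 cm at maturity
print(growth_remaining(70.0, 1.43, 0.37))  # remaining growth at one physis
```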

  18. Real-Time Airborne Gamma-Ray Background Estimation Using NASVD with MLE and Radiation Transport for Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.

    2015-06-01

    Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background frequently changes during the course of the flight.
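
    The NASVD core is compact enough to sketch: scale the spectral channels so the Poisson noise is roughly uniform, factor the record-by-channel matrix with an SVD, and keep only the leading spectral shapes. The K/U/T calibration by radiation transport and the MLE fitting described above happen downstream and are not shown; the data here are synthetic.

```python
import numpy as np

def nasvd_smooth(spectra, n_components=3):
    """Noise-adjusted SVD smoothing of airborne gamma-ray spectra.

    spectra: (n_records, n_channels) raw counts. Channels are divided by
    the square root of the mean spectrum so Poisson noise becomes roughly
    uniform, the matrix is factored by SVD, and only the leading spectral
    shapes are kept before undoing the scaling.
    """
    S = np.asarray(spectra, dtype=float)
    w = np.sqrt(S.mean(axis=0)).clip(min=1e-12)   # noise-adjustment weights
    U, s, Vt = np.linalg.svd(S / w, full_matrices=False)
    k = n_components
    return (U[:, :k] * s[:k]) @ Vt[:k] * w        # back to count space

# Synthetic records: 500 one-second spectra drawn from a smooth truth.
rng = np.random.default_rng(3)
truth = np.outer(np.ones(500), 200.0 * np.exp(-np.linspace(0.0, 5.0, 256)))
smoothed = nasvd_smooth(rng.poisson(truth), n_components=2)
```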

  19. [Efficacy of transforaminal lumbar epidural steroid injections in patients with lumbar radiculopathy].

    PubMed

    Çetin, Mehmet Fatih; Karaman, Haktan; Ölmez Kavak, Gönül; Tüfek, Adnan; Baysal Yildirim, Zeynep

    2012-01-01

    This study examined the efficacy and safety of transforaminal lumbar epidural steroid injection (TLESI) in patients with radiculopathy due to lumbar disk herniation. The files of patients who had received TLESI were reviewed retrospectively. Patients who had not responded to one month of conservative treatment and in whom imaging showed a bulging or protruding lumbar disk herniation were included in the study. All procedures were performed on an outpatient basis under local anesthesia with C-arm fluoroscopy. In all cases, a mixture of 80 mg triamcinolone and 0.25% bupivacaine was injected transforaminally into the anterior epidural space. Initial VAS pain scores were compared with the values at 1, 3, and 6 months after the procedure, and patient satisfaction was assessed with a rating score. Early and late complications were also recorded. A total of 222 patients received 460 TLESI procedures (average 2.1 per patient; range 1-6). The injections were performed most frequently at the L4-L5 and L5-S1 levels. The mean initial VAS score was 8.2±0.7; after TLESI it was 5.0±1.6, 4.8±1.5, and 5.1±1.5 at 1, 3, and 6 months, respectively. Of the patients, 63.9% (n=142) rated the treatment as 'good' or 'excellent'. No major complications occurred, and the overall minor complication rate was 11.1%. TLESI appears to be an effective and safe method in the short and medium term.

  20. Autologous Blood Transfusion in Sports: Emerging Biomarkers.

    PubMed

    Salamin, Olivier; De Angelis, Sara; Tissot, Jean-Daniel; Saugy, Martial; Leuenberger, Nicolas

    2016-07-01

    Despite being prohibited by the World Anti-Doping Agency, blood doping through erythropoietin injection or blood transfusion is frequently used by athletes to increase oxygen delivery to muscles and enhance performance. In contrast with allogeneic blood transfusion and erythropoietic stimulants, there is presently no direct method of detecting autologous blood transfusion (ABT) doping. Blood reinfusion is currently monitored through individual follow-up of hematological variables via the athlete biological passport, which requires further improvement: microdosing is undetectable, and suspicious profiles in athletes are often attributed to altitude exposure, heat stress, or illness. Additional indirect biomarkers may increase the sensitivity and specificity of this longitudinal approach. The emergence of "-omics" strategies provides new opportunities to discover biomarkers for the indirect detection of ABT. With the development of direct quantitative methods, transcriptomics based on microRNA or messenger RNA expression is a promising approach. Because blood donation and reinfusion alter iron metabolism, quantification of proteins involved in iron metabolism, such as hepcidin, may be applied in an "ironomics" strategy to improve ABT detection. As red blood cell (RBC) storage triggers changes in membrane proteins, proteomic methods have the potential to identify the presence of stored RBCs in blood. Alternatively, the urine matrix can be used to quantify the plasticizer di(2-ethylhexyl) phthalate and its metabolites, which originate from blood storage bags and suggest recent transfusion, with a high degree of sensitivity and specificity. This review proposes that various indirect biomarkers be applied in combination with mathematical approaches for longitudinal monitoring aimed at improving ABT detection.
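
    To make the "longitudinal monitoring with mathematical approaches" idea concrete, the sketch below flags a new biomarker reading that deviates strongly from an athlete's own baseline using a simple per-athlete adaptive z-score. This is only an illustration of the longitudinal principle; the actual athlete biological passport uses a Bayesian adaptive model, which this does not reproduce, and `flag_sample` and its threshold are hypothetical.

```python
# Minimal sketch of a longitudinal biomarker flag, assuming a simple
# per-athlete adaptive z-score. The real athlete biological passport
# uses a Bayesian adaptive model; this sketch does NOT reproduce it.
import math

def flag_sample(history: list[float], new_value: float,
                z_threshold: float = 3.0) -> bool:
    """Flag a new biomarker reading (e.g., hepcidin or a candidate miRNA)
    that deviates strongly from the athlete's own historical baseline."""
    if len(history) < 3:
        return False                        # too little baseline to judge
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    sd = math.sqrt(var) or 1e-9             # avoid division by zero
    return abs(new_value - mean) / sd > z_threshold

# Example: a sudden jump relative to a stable personal baseline is flagged.
print(flag_sample([12.1, 11.8, 12.4, 12.0], 18.5))  # True
```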
