Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
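The TASS ingredients described above can be illustrated with a toy one-dimensional model: an auxiliary variable harmonically coupled to the collective variable and thermostatted at a higher temperature (the temperature-accelerated part), with an umbrella restraint defining one "slice" and metadynamics hills acting along the auxiliary direction. This is a minimal sketch with invented parameters, not the authors' implementation.

```python
import numpy as np

# Toy TASS-style sampler: physical coordinate x on a double-well surface,
# auxiliary variable z coupled to x and kept "hotter" (temperature
# acceleration), plus an umbrella restraint (one slice) and metadynamics
# hills on z. All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
kT_x, kT_z = 1.0, 3.0
k_couple, k_umb, z0 = 50.0, 5.0, 0.0
dt, gamma = 1e-3, 1.0
hill_h, hill_w, hill_stride = 0.1, 0.2, 500

def dU_dx(x):                       # double well U(x) = (x^2 - 1)^2
    return 4.0 * x * (x**2 - 1.0)

hills = []                          # (center, height) of deposited Gaussians

def hill_force(z):                  # -dV_bias/dz for the Gaussian hills
    return sum(h * (z - c) / hill_w**2 * np.exp(-(z - c)**2 / (2 * hill_w**2))
               for c, h in hills)

x, z = -1.0, -1.0                   # overdamped Langevin updates below
for step in range(200_000):
    f_x = -dU_dx(x) - k_couple * (x - z)
    f_z = k_couple * (x - z) - k_umb * (z - z0) + hill_force(z)
    x += dt * f_x / gamma + np.sqrt(2 * kT_x * dt / gamma) * rng.normal()
    z += dt * f_z / gamma + np.sqrt(2 * kT_z * dt / gamma) * rng.normal()
    if step % hill_stride == 0:
        hills.append((z, hill_h))
```

In the full method, one such umbrella window is run per slice and the windows are combined (e.g., by WHAM) into a free energy surface over many collective variables.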
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, and we take the union of the variables selected by the top-performing methods, based on a user-defined percentage of variants selected from each method, into downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high-effect-size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, even in low-effect-size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with body mass index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). In this study, we were able to show via simulation that selecting variables using a collective feature selection approach could help in selecting true-positive epistatic variables more frequently than applying any single method for feature selection. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis, and we applied our method to identify non-linear networks associated with obesity.
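A minimal sketch of the "union of top-ranked features" idea, using three scikit-learn selectors as stand-ins (the paper's actual panel includes methods such as MDR and Ranger that are not scikit-learn components); the 1% default mirrors the application described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def collective_select(X, y, feature_names, top_frac=0.01):
    """Return the union of the top `top_frac` features ranked by each method."""
    n_keep = max(1, int(top_frac * X.shape[1]))
    rankings = []
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    rankings.append(np.argsort(rf.feature_importances_)[::-1][:n_keep])
    gb = GradientBoostingClassifier(random_state=0).fit(X, y)
    rankings.append(np.argsort(gb.feature_importances_)[::-1][:n_keep])
    lr = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
    rankings.append(np.argsort(np.abs(lr.coef_).ravel())[::-1][:n_keep])
    union = sorted(set(np.concatenate(rankings)))   # union across methods
    return [feature_names[i] for i in union]
```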
Taxonomies of Organizational Change: Literature Review and Analysis
1978-09-01
operational terms presented a significant problem. The redundancy and circularity in discussions of variable groups reflects this dilemma. Behavioral event and structured interview protocols to be used to collect data from internal Army OE change agent and client subjects are presented, together with the data collection method proposed for each intervention variable. This report presents a taxonomy and data collection method
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.
Gil-Ley, Alejandro; Bussi, Giovanni
2015-03-10
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.
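The replica-exchange ingredient can be sketched as follows: when replicas share the physical force field and differ only in bias strength, the physical energy cancels from the Metropolis swap criterion. A hedged illustration, not the authors' code; the quadratic "bias shapes" are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def swap_accepted(bias_i, bias_j, x_i, x_j, beta=1.0):
    """Metropolis criterion for exchanging configurations between two
    replicas that share the physical potential but carry different bias
    potentials: the physical energy cancels, leaving only bias terms."""
    delta = beta * (bias_i(x_j) + bias_j(x_i) - bias_i(x_i) - bias_j(x_j))
    return rng.random() < np.exp(-delta)

# A ladder of bias strengths: replica 0 is neutral (unbiased), higher
# replicas feel a progressively stronger scaled bias (toy quadratic shape).
scales = [0.0, 0.5, 1.0, 2.0]
biases = [lambda v, s=s: -s * 0.5 * v**2 for s in scales]

# Attempt a swap between neighboring replicas holding CV values 0.3 and -0.8:
print(swap_accepted(biases[1], biases[2], 0.3, -0.8))
```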
Using collective variables to drive molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Fiorin, Giacomo; Klein, Michael L.; Hénin, Jérôme
2013-12-01
A software framework is introduced that facilitates the application of biasing algorithms to collective variables of the type commonly employed to drive massively parallel molecular dynamics (MD) simulations. The modular framework that is presented enables one to combine existing collective variables into new ones, and combine any chosen collective variable with available biasing methods. The latter include the classic time-dependent biases referred to as steered MD and targeted MD, the temperature-accelerated MD algorithm, as well as the adaptive free-energy biases called metadynamics and adaptive biasing force. The present modular software is extensible, and portable between commonly used MD simulation engines.
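The modular design described above can be caricatured in a few lines: any object exposing a value and a gradient can serve as a collective variable, new variables are built by composing existing ones, and any bias couples to a variable through the chain rule. Class and function names here are hypothetical, not the actual Colvars API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np

@dataclass
class Colvar:
    """A collective variable: a differentiable function of coordinates."""
    value: Callable[[np.ndarray], float]
    gradient: Callable[[np.ndarray], np.ndarray]

def linear_combination(cvs: Sequence[Colvar], weights: Sequence[float]) -> Colvar:
    """Compose existing CVs into a new one, as the modular framework allows."""
    return Colvar(
        value=lambda x: sum(w * cv.value(x) for w, cv in zip(weights, cvs)),
        gradient=lambda x: sum(w * cv.gradient(x) for w, cv in zip(weights, cvs)),
    )

def harmonic_bias_force(cv: Colvar, center: float, k: float, x: np.ndarray):
    """Any bias acting on a CV needs only its value and gradient
    (chain rule): F = -k (s(x) - s0) * ds/dx."""
    return -k * (cv.value(x) - center) * cv.gradient(x)
```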
Delay correlation analysis and representation for VITAL-compliant VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
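A minimal sketch of the collect/sort/de-duplicate/group pipeline the claim describes, with SDF parsing omitted and the input data structure assumed.

```python
from collections import defaultdict

def build_correlation_sets(sdf_delays):
    """sdf_delays: iterable of (generic_name, delay_value) pairs extracted
    from a VHDL standard delay file (parsing not shown). Collect, sort,
    and de-duplicate the values per generic into correlation sets."""
    collected = defaultdict(list)
    for generic, delay in sdf_delays:
        collected[generic].append(delay)
    correlation_sets = {}
    for generic, delays in collected.items():
        correlation_sets[generic] = sorted(set(delays))  # sort + remove dups
    return correlation_sets
```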
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are used in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower-quality collective variables [McCarty, J.; Parrinello, M. J. Chem. Phys. 2017, 147, 204109]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables, able to describe in exquisite detail the peptide's complex free energy surface in a low-dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the arrangement of the dihedral angles.
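A minimal sketch of the time-lagged independent component analysis step that this line of work relies on: estimate instantaneous and time-lagged covariances of the CV time series and solve the generalized eigenproblem (periodic dihedrals would first be expanded in a sine/cosine basis). This is a generic TICA implementation, not the authors' improved estimator.

```python
import numpy as np
from scipy.linalg import eigh

def tica(features, lag):
    """features: (n_frames, n_features) trajectory of CV time series
    (e.g., sines/cosines of dihedral angles). Returns eigenvalues and
    linear combinations ordered by slowness."""
    x = features - features.mean(axis=0)
    c0 = x.T @ x / len(x)                          # instantaneous covariance
    c0 += 1e-10 * np.eye(c0.shape[0])              # tiny regularization
    ct = x[:-lag].T @ x[lag:] / (len(x) - lag)     # time-lagged covariance
    ct = 0.5 * (ct + ct.T)                         # symmetrize
    evals, evecs = eigh(ct, c0)                    # generalized eigenproblem
    order = np.argsort(evals)[::-1]                # slowest modes first
    return evals[order], evecs[:, order]
```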
NASA Astrophysics Data System (ADS)
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2013-12-01
Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
Chen, Changjun
2016-03-31
The free energy landscape is the most important information in the study of the reaction mechanisms of molecules, but it is difficult to calculate. In a large collective variable space, a molecule needs a long simulation time to achieve sufficient sampling. To reduce the computational cost, it is necessary in practice to restrict the sampling region and construct a local free energy landscape. However, the restricted region in the collective variable space may have an irregular shape, so simply restraining one or more collective variables of the molecule cannot satisfy the requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region by a set of hyperplanes and connects the centers of the hyperplanes with a curve. Second, it forces the molecule to sample on the curve and on the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. By ignoring the area outside the restricted region, this free energy calculation can be more efficient. With this method, one can further optimize a path quickly in the collective variable space.
Path Finding on High-Dimensional Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Díaz Leines, Grisell; Ensing, Bernd
2012-07-01
We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.
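For orientation, the path-variable definitions of Branduardi and co-workers, on which flexible-path metadynamics builds, are reproduced below; the adaptive formalism of this paper generalizes the reference path itself, so these expressions are background rather than the authors' exact working equations.

```latex
% Progress along a reference path {R_k}, k = 1..P, and distance from it,
% where d(R, R_k) is a metric between configuration R and path node R_k:
s(R) = \frac{\sum_{k=1}^{P} k\, e^{-\lambda\, d(R, R_k)}}
            {\sum_{k=1}^{P} e^{-\lambda\, d(R, R_k)}},
\qquad
z(R) = -\frac{1}{\lambda} \ln \sum_{k=1}^{P} e^{-\lambda\, d(R, R_k)}
```

Biasing s drives progress along the path, while the treatment of the orthogonal coordinate z controls whether the average path or the minimum free energy path is recovered, as the abstract notes.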
Constructing a multidimensional free energy surface like a spider weaving a web.
Chen, Changjun
2017-10-15
A complete free energy surface in the collective variable space provides important information about the reaction mechanisms of a molecule, but sufficient sampling of the collective variable space is not easy: the space expands quickly with the number of collective variables. To address this problem, many methods use artificial biasing potentials to flatten the original free energy surface of the molecule during the simulation. Their performance is sensitive to the definition of the biasing potential: a fast-growing biasing potential accelerates sampling but decreases the accuracy of the free energy result, whereas a slow-growing biasing potential gives an optimized result but needs more simulation time. In this article, we propose an alternative method. It adds the biasing potential to a representative point of the molecule in the collective variable space to improve conformational sampling, and the free energy surface is calculated from the free energy gradient in the constrained simulation, rather than from the negative of the biasing potential as in previous methods. The presented method therefore does not require the biasing potential to remove all barriers and basins on the free energy surface exactly. Practical applications show that the method produces accurate free energy surfaces for different molecules in a short time period, and the free energy errors remain small under various biasing potentials. © 2017 Wiley Periodicals, Inc.
Assessing the Efficiency of HIV Prevention around the World: Methods of the PANCEA Project
Marseille, Elliot; Dandona, Lalit; Saba, Joseph; McConnel, Coline; Rollins, Brandi; Gaist, Paul; Lundberg, Mattias; Over, Mead; Bertozzi, Stefano; Kahn, James G
2004-01-01
Objective To develop data collection methods suitable to obtain data to assess the costs, cost-efficiency, and cost-effectiveness of eight types of HIV prevention programs in five countries. Data Sources/Study Setting Primary data collection from prevention programs for 2002–2003 and prior years, in Uganda, South Africa, India, Mexico, and Russia. Study Design This study consisted of a retrospective review of HIV prevention programs covering one to several years of data. Key variables include services delivered (outputs), quality indicators, and costs. Data Collection/Extraction Methods Data were collected by trained in-country teams during week-long site visits, by reviewing service and financial records and interviewing program managers and clients. Principal Findings Preliminary data suggest that the unit cost of HIV prevention programs may be both higher and more variable than previous studies suggest. Conclusions A mix of standard data collection methods can be successfully implemented across different HIV prevention program types and countries. These methods can provide comprehensive services and cost data, which may carry valuable information for the allocation of HIV prevention resources. PMID:15544641
Effects of electrofishing gear type on spatial and temporal variability in fish community sampling
Meador, M.R.; McIntyre, J.P.
2003-01-01
Fish community data collected from 24 major river basins between 1993 and 1998 as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess multiple-reach (three consecutive reaches) and multiple-year (three consecutive years) variability in samples collected at a site. Variability was assessed using the coefficient of variation (CV; SD/mean) of species richness, the Jaccard index (JI), and the percent similarity index (PSI). Data were categorized by three electrofishing sample collection methods: backpack, towed barge, and boat. Overall, multiple-reach CV values were significantly lower than those for multiple years, whereas multiple-reach JI and PSI values were significantly greater than those for multiple years. Multiple-reach and multiple-year CV values did not vary significantly among electrofishing methods, although JI and PSI values were significantly greatest for backpack electrofishing across multiple reaches and multiple years. The absolute difference between mean species richness for multiple-reach samples and mean species richness for multiple-year samples was 0.8 species (9.5% of total species richness) for backpack samples, 1.7 species (10.1%) for towed-barge samples, and 4.5 species (24.4%) for boat-collected samples. Review of boat-collected fish samples indicated that representatives of four taxonomic families - Catostomidae, Centrarchidae, Cyprinidae, and Ictaluridae - were collected at all sites. Of these, catostomids exhibited greater interannual variability than centrarchids, cyprinids, or ictalurids. Caution should be exercised when combining boat-collected fish community data from different years because of relatively high interannual variability, which is primarily due to certain relatively mobile species. Such variability may obscure longer-term trends.
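The three variability measures named above have simple definitions; a minimal sketch (presence/absence sets for JI, relative-abundance dictionaries for PSI):

```python
import numpy as np

def coefficient_of_variation(richness):
    """CV = SD / mean of species richness across reaches or years."""
    richness = np.asarray(richness, dtype=float)
    return richness.std(ddof=1) / richness.mean()

def jaccard_index(sample_a, sample_b):
    """Proportion of species shared between two samples (presence/absence)."""
    a, b = set(sample_a), set(sample_b)
    return len(a & b) / len(a | b)

def percent_similarity(counts_a, counts_b):
    """PSI: sum over shared species of the minimum relative abundance."""
    pa = {s: n / sum(counts_a.values()) for s, n in counts_a.items()}
    pb = {s: n / sum(counts_b.values()) for s, n in counts_b.items()}
    return 100 * sum(min(pa[s], pb[s]) for s in set(pa) & set(pb))
```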
Villaverde-Morcillo, S; Esteso, M C; Castaño, C; Santiago-Moreno, J
2016-02-01
Many post-mortem sperm collection techniques have been described for mammalian species, but their use in birds is scarce. This paper compares the efficacy of two post-mortem sperm retrieval techniques - the flushing and float-out methods - in the collection of rooster sperm, in conjunction with the use of two extenders, i.e., L&R-84 medium and Lake 7.1 medium. To determine whether the protective effects of these extenders against refrigeration are different for post-mortem and ejaculated sperm, pooled ejaculated samples (procured via the massage technique) were also diluted in the above extenders. Post-mortem and ejaculated sperm variables were assessed immediately at room temperature (0 h), and after refrigeration at 5°C for 24 and 48 h. The flushing method retrieved more sperm than the float-out method (596.5 ± 75.4 million sperm vs 341.0 ± 87.6 million sperm; p < 0.05); indeed, the number retrieved by the former method was similar to that obtained by massage-induced ejaculation (630.3 ± 78.2 million sperm). For sperm collected by all methods, the L&R-84 medium provided an advantage in terms of sperm motility variables at 0 h. In the refrigerated sperm samples, however, the Lake 7.1 medium was associated with higher percentages of viable sperm, and had a greater protective effect (p < 0.05) with respect to most motility variables. In conclusion, the flushing method is recommended for collecting sperm from dead birds. If this sperm needs to be refrigerated at 5°C until analysis, Lake 7.1 medium is recommended as an extender. © 2015 Blackwell Verlag GmbH.
Replicates in high dimensions, with applications to latent variable graphical models.
Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han
2016-12-01
In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
Design of A Cyclone Separator Using Approximation Method
NASA Astrophysics Data System (ADS)
Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee
2017-12-01
A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by performing CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impractical to obtain the optimal solution by directly coupling a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted by using the commercial CFD software ANSYS-CFX.
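A hedged sketch of the kriging-metamodel step, using scikit-learn's Gaussian process regressor as the kriging surrogate; the 18 design points and efficiencies are random stand-ins for the L18 orthogonal array and the CFD results.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# X_doe: 18 design points over 6 shape variables (L18 orthogonal array),
# y_doe: collection efficiencies from the corresponding CFD runs.
# Values here are random stand-ins; real data would come from ANSYS-CFX.
rng = np.random.default_rng(2)
X_doe = rng.uniform(0.0, 1.0, size=(18, 6))
y_doe = rng.uniform(0.7, 0.99, size=18)

kriging = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(6)),
    normalize_y=True,
).fit(X_doe, y_doe)

# The cheap surrogate replaces CFD during the optimization search:
candidates = rng.uniform(0.0, 1.0, size=(10_000, 6))
pred, std = kriging.predict(candidates, return_std=True)
best = candidates[np.argmax(pred)]
```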
Cretini, Kari F.; Visser, Jenneke M.; Krauss, Ken W.; Steyer, Gregory D.
2011-01-01
This document identifies the main objectives of the Coastwide Reference Monitoring System (CRMS) vegetation analytical team, which are to provide (1) collection and development methods for vegetation response variables and (2) the ways in which these response variables will be used to evaluate restoration project effectiveness. The vegetation parameters (that is, response variables) collected in CRMS and other coastal restoration projects funded under the Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) are identified, and the field collection methods for these parameters are summarized. Existing knowledge on community and plant responses to changes in environmental drivers (for example, flooding and salinity) from published literature and from the CRMS and CWPPRA monitoring dataset are used to develop a suite of indices to assess wetland condition in coastal Louisiana. Two indices, the floristic quality index (FQI) and a productivity index, are described for herbaceous and forested vegetation. The FQI for herbaceous vegetation is tested with a long-term dataset from a CWPPRA marsh creation project. Example graphics for this index are provided and discussed. The other indices, an FQI for forest vegetation (that is, trees and shrubs) and productivity indices for herbaceous and forest vegetation, are proposed but not tested. New response variables may be added or current response variables removed as data become available and as our understanding of restoration success indicators develops. Once indices are fully developed, each will be used by the vegetation analytical team to assess and evaluate CRMS/CWPPRA project and program effectiveness. The vegetation analytical teams plan to summarize their results in the form of written reports and/or graphics and present these items to CRMS Federal and State sponsors, restoration project managers, landowners, and other data users for their input.
NASA Astrophysics Data System (ADS)
Soszyński, I.; Pawlak, M.; Pietrukowicz, P.; Udalski, A.; Szymański, M. K.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Kozłowski, S.; Skowron, D. M.; Skowron, J.; Mróz, P.; Hamanowicz, A.
2016-12-01
We present a collection of 450 598 eclipsing and ellipsoidal binary systems detected in the OGLE fields toward the Galactic bulge. The collection consists of binary systems of all types: detached, semi-detached, and contact eclipsing binaries, RS CVn stars, cataclysmic variables, HW Vir binaries, double periodic variables, and even planetary transits. For all stars we provide the I- and V-band time-series photometry obtained during the OGLE-II, OGLE-III, and OGLE-IV surveys. We discuss methods used to identify binary systems in the OGLE data and present several objects of particular interest.
NASA Astrophysics Data System (ADS)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
2017-08-01
Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain properties efficiently because we need to run microseconds or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling of the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.
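The weighted-ensemble bookkeeping underlying CAS-style methods can be sketched as a split/merge resampling step that keeps a target number of walkers per macrostate while conserving total probability weight. A generic illustration, with the macrostate assignment (clustering) stubbed out:

```python
import numpy as np

def resample(walkers, assign_macrostate, target_per_state=4):
    """walkers: list of (configuration, weight). Split walkers in
    under-populated macrostates and merge them in over-populated ones so
    each macrostate keeps ~target_per_state walkers; weight is conserved."""
    bins = {}
    for conf, w in walkers:
        bins.setdefault(assign_macrostate(conf), []).append((conf, w))
    rng = np.random.default_rng()
    new_walkers = []
    for members in bins.values():
        while len(members) < target_per_state:        # split the heaviest
            members.sort(key=lambda cw: cw[1])
            conf, w = members.pop()
            members += [(conf, w / 2), (conf, w / 2)]
        while len(members) > target_per_state:        # merge the two lightest
            members.sort(key=lambda cw: cw[1], reverse=True)
            (c1, w1), (c2, w2) = members.pop(), members.pop()
            keep = c1 if rng.random() < w1 / (w1 + w2) else c2
            members.append((keep, w1 + w2))           # survivor carries both
        new_walkers += members
    return new_walkers
```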
Bowler, Michael G; Bowler, Matthew W
2014-01-01
The advent of micro-focused X-ray beams has led to the development of a number of advanced methods of sample evaluation and data collection. In particular, multiple-position data-collection and helical oscillation strategies are now becoming commonplace in order to alleviate the problems associated with radiation damage. However, intra-crystal and inter-crystal variation means that it is not always obvious on which crystals or on which region or regions of a crystal these protocols should be performed. For the automation of this process for large-scale screening, and to provide an indication of the best strategy for data collection, a metric of crystal variability could be useful. Here, measures of the intrinsic variability within protein crystals are presented and their implications for optimal data-collection strategies are discussed.
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
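A minimal sketch of the ES-K idea: score every K-subset of explanatory variables by least squares and keep the full list of scores, from which a density of states over subsets can be built. It is brute force by construction, which is why the replica-exchange AES-K variant is needed for large problems.

```python
import itertools
import numpy as np

def es_k(X, y, K):
    """Exhaustively fit least squares on every K-subset of the columns of X
    and return all (subset, residual sum of squares) pairs; the full list of
    scores is what a density of states over subsets is built from."""
    results = []
    for subset in itertools.combinations(range(X.shape[1]), K):
        Xs = X[:, subset]
        coef, rss, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        # lstsq returns an empty residual array in degenerate cases
        rss = rss[0] if rss.size else np.sum((y - Xs @ coef) ** 2)
        results.append((subset, rss))
    return sorted(results, key=lambda t: t[1])   # best subsets first
```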
Quantifying the isotopic composition of NOx emission sources: An analysis of collection methods
NASA Astrophysics Data System (ADS)
Fibiger, D.; Hastings, M.
2012-04-01
We analyze various collection methods for nitrogen oxides, NOx (NO2 and NO), used to evaluate the nitrogen isotopic composition (δ15N). Atmospheric NOx is a major contributor to acid rain deposition upon its conversion to nitric acid; it also plays a significant role in determining air quality through the production of tropospheric ozone. NOx is released by both anthropogenic (fossil fuel combustion, biomass burning, aircraft emissions) and natural (lightning, biogenic production in soils) sources. Global concentrations of NOx are rising because of increased anthropogenic emissions, while natural source emissions also contribute significantly to the global NOx burden. The contributions of both natural and anthropogenic sources and their considerable variability in space and time make it difficult to attribute local NOx concentrations (and, thus, nitric acid) to a particular source. Several recent studies suggest that variability in the isotopic composition of nitric acid deposition is related to variability in the isotopic signatures of NOx emission sources. Nevertheless, the isotopic composition of most NOx sources has not been thoroughly constrained. Ultimately, the direct capture and quantification of the nitrogen isotopic signatures of NOx sources will allow for the tracing of NOx emissions sources and their impact on environmental quality. Moreover, this will provide a new means by which to verify emissions estimates and atmospheric models. We present laboratory results of methods used for capturing NOx from air into solution. A variety of methods have been used in field studies, but no independent laboratory verification of the efficiencies of these methods has been performed. When analyzing isotopic composition, it is important that NOx be collected quantitatively or the possibility of fractionation must be constrained. We have found that collection efficiency can vary widely under different conditions in the laboratory and fractionation does not vary predictably with collection efficiency. For example, prior measurements frequently utilized triethanolamine solution for collecting NOx, but the collection efficiency was found to drop quickly as the solution aged. The most promising method tested is a NaOH/KMnO4 solution (Margeson and Knoll, Anal. Chem., 1985) which can collect NOx quantitatively from the air. Laboratory tests of previously used methods, along with progress toward creating a suitable and verifiable field deployable collection method will be presented.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures, which are inaccessible to an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to the 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
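A minimal sketch of the dimensionality-reduction step using scikit-learn's Isomap on random stand-in data; out-of-sample frames are mapped with transform(), standing in for the smooth out-of-sample mapping the paper uses for biasing.

```python
import numpy as np
from sklearn.manifold import Isomap

# conformations: (n_samples, 72) flattened coordinates of the ring
# (the ad hoc ensemble); random stand-in data here.
conformations = np.random.default_rng(3).normal(size=(2000, 72))

embedder = Isomap(n_neighbors=12, n_components=3).fit(conformations)
cv_values = embedder.embedding_        # 3D embeddings used as CVs

# Out-of-sample conformations encountered during a biased run are mapped
# with transform(); a smooth, differentiable interpolation of this map is
# what a metadynamics bias would act on.
new_frame = np.random.default_rng(4).normal(size=(1, 72))
s = embedder.transform(new_frame)
```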
A new method and application for determining the nitrogen isotopic composition of NOx
NASA Astrophysics Data System (ADS)
Hastings, M. G.; Miller, D. J.; Wojtal, P.; O'Connor, M.
2015-12-01
Atmospheric nitrogen oxides (NOx = NO + NO2) play key roles in atmospheric chemistry, air quality, and radiative forcing, and contribute to nitric acid deposition. Sources of NOx include both natural and anthropogenic emissions, which vary significantly in space and time. NOx isotopic signatures offer a potentially valuable tool to trace source impacts on atmospheric chemistry and regional acid deposition. Previous work on NOx isotopic signatures suggests large ranges in values, even from the same emission source, as well as overlapping ranges amongst different sources, making it difficult to use the isotopic composition as a quantitative tracer of source influences. These prior measurements have utilized a variety of methods for collecting the NOx as nitrate or nitrite for isotopic analysis, and testing of some of these methods (including active and passive collections) reveal inconsistencies in efficiency of collection, as well as issues related to changes in conditions such as humidity, temperature, and NOx fluxes. A recently developed method allows for accurately measuring the nitrogen isotopic composition of NOx (NOx = NO + NO2) after capturing the NOx in a potassium permanganate/sodium hydroxide solution as nitrate (Fibiger et al., Anal. Chem., 2014). The method has been thoroughly tested in the laboratory and field, and efficiently collects NO and NO2 under a variety of conditions. There are several advantages to collecting NOx actively, including the ability to collect over minutes to hourly time scales, and the ability to collect in environments with highly variable NOx sources and concentrations. Challenges include a nitrate background present in potassium permanganate (solid and liquid forms), accurately deriving ambient NOx concentrations based upon flow rate and solution concentrations above this variable background, and potential interferences from other nitrogen species. This method was designed to collect NOx in environments with very different emission source loadings in an effort to isotopically characterize NOx sources. Results to date suggest very different values, and less variability than previous work, particularly for vehicle emissions. Ultimately, we aim to determine whether the influence of NOx sources can be quantitatively tracked in the environment.
González, Nerea; Iloro, Ibon; Durán, Juan A.; Elortza, Félix
2012-01-01
Purpose To characterize the tear film peptidome and low molecular weight protein profiles of healthy control individuals, and to evaluate changes due to day-to-day and individual variation and tear collection methods, by using solid phase extraction coupled to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) profiling. Methods The tear protein profiles of six healthy volunteers were analyzed over seven days and inter-day and inter-individual variability was evaluated. The bilaterality of tear film and the effect of tear collection methods on protein profiles were also analyzed in some of these patients. MALDI-TOF MS analyses were performed on tear samples purified by using a solid phase extraction (SPE) method based on C18 functionalized magnetic beads for peptide and low molecular weight protein enrichment, focusing spectra acquisition on the 1 to 20 kDa range. Spectra were analyzed using principal component analysis (PCA) with MultiExperiment Viewer (TMeV) software. Volunteers were examined in terms of tear production status (Schirmer I test), clinical assessment of palpebral lids and meibomian glands, and a subjective OSD questionnaire before tear collection by a glass micro-capillary. Results Analysis of peptides and proteins in the 1–20 kDa range showed no significant inter-day differences in tear samples collected from six healthy individuals during seven days of monitoring, but revealed subtle intrinsic inter-individual differences. Profile analyses of tears collected from the right and left eyes confirmed tear bilaterality in four healthy patients. The addition of physiologic serum for tear sample collection did not affect the peptide and small protein profiles with respect to the number of resolved peaks, but it did reduce the signal intensity of the peaks, and increased variability. Magnetic beads were found to be a suitable method for tear film purification for the profiling study. Conclusions No significant variability in tear peptide and protein profiles below 20 kDa was found in healthy controls over a seven day period, nor in right versus left eye profiles from the same individual. Subtle inter-individual differences can be observed upon tear profiling analysis and confirm intrinsic variability between control subjects. Addition of physiologic serum for tear collection affects the proteome and peptidome in terms of peak intensities, but not in the composition of the profiles themselves. This work shows that MALDI-TOF MS coupled with C18 magnetic beads is an effective and reproducible methodology for tear profiling studies in the clinical monitoring of patients. PMID:22736947
Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.
Bonomi, M; Barducci, A; Parrinello, M
2009-08-01
Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two and even one dimensional metadynamics simulation. 2009 Wiley Periodicals, Inc.
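The reweighting described above assigns each frame a weight proportional to exp([V(s_t, t) - c(t)]/kT), where V is the bias felt by the frame when it was sampled and c(t) is the time-dependent offset estimated from the evolving bias. A minimal sketch (kT given in kJ/mol at 300 K):

```python
import numpy as np

def reweight(obs, bias, c_t, kT=2.494):
    """obs: observable time series; bias: V(s_t, t) felt by each frame;
    c_t: time-dependent offset. Returns the unbiased average of obs."""
    w = np.exp((np.asarray(bias) - np.asarray(c_t)) / kT)
    return np.sum(w * np.asarray(obs)) / np.sum(w)

def unbiased_histogram(x, bias, c_t, bins=50, kT=2.494):
    """Unbiased probability distribution of any variable x recorded
    along the well-tempered metadynamics trajectory."""
    w = np.exp((np.asarray(bias) - np.asarray(c_t)) / kT)
    return np.histogram(x, bins=bins, weights=w, density=True)
```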
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Metabolomics study of Populus type propolis.
Anđelković, Boban; Vujisić, Ljubodrag; Vučković, Ivan; Tešević, Vele; Vajs, Vlatka; Gođevac, Dejan
2017-02-20
Herein, we propose rapid and simple spectroscopic methods to determine the chemical composition of propolis derived from various Populus species using a metabolomics approach. In order to correlate variability in Populus type propolis composition with the altitude of its collection, NMR, IR, and UV spectroscopy followed by OPLS was conducted. The botanical origin of propolis was established by comparing propolis spectral data to those of buds of various Populus species. An O2PLS method was utilized to integrate two blocks of data. According to OPLS and O2PLS, the major compounds in propolis samples, collected from temperate continental climate above 500m, were phenolic glycerides originating from P. tremula buds. Flavonoids were predominant in propolis samples collected below 400m, originating from P. nigra and P. x euramericana buds. Samples collected at 400-500m were of mixed origin, with variable amounts of all detected metabolites. Copyright © 2016 Elsevier B.V. All rights reserved.
Castrillo, Azucena; Jimenez-Marco, Teresa; Arroyo, José L; Jurado, María L; Larrea, Luis; Maymo, Rosa M; Monge, Jorge; Muñoz, Carmen; Pajares, Ángel; Yáñez, Marta
2017-06-01
Diverse variables are involved in apheresis platelet collection, processing and storage. This survey shows how these are realized in Spain. An analysis of collected data was performed in a questionnaire completed by ten Transfusion Centers (TC) which perform between 50 and 520 apheresis procedures per month. This information comprises the procedures used to collect, inspect and store apheresis platelet concentrates (PC), and quality control data. Macroscopic inspection of PC is performed in all TC, especially during the first few hours post-collection and before distribution. The type of processor, duration of post-collection resting periods and temperature from the time of collection until distribution are similar in all TC. In 80% of TC, PC with small and scarce aggregates are distributed to transfusion services. The presence of clumps is influenced by type of processor, female donor, cold ambient temperature and collection of hyperconcentrated platelets, and is often recurrent in the same donor, although some TC have not found any influential variables. Overall, no objective inspection methods are followed, although there are exceptions. The degree of compliance with quality control parameters, such as the number of units studied, mean platelet yield, residual leukocyte counts and pH at expiry date, is acceptable in all TC. Compliance in terms of number of microbiological culture samples is variable. The usual practice in Spanish TC with respect to the collection, post-collection handling and storage of apheresis PC can be considered uniform, although some specific aspects of analyses should follow more objective methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multivariate analysis: greater insights into complex systems
USDA-ARS?s Scientific Manuscript database
Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
NASA Astrophysics Data System (ADS)
Miura, Yasunari; Sugiyama, Yuki
2017-12-01
We present a general method for analyzing macroscopic collective phenomena observed in many-body systems. For this purpose, we employ diffusion maps, which are one of the dimensionality-reduction techniques, and systematically define a few relevant coarse-grained variables for describing macroscopic phenomena. The time evolution of macroscopic behavior is described as a trajectory in the low-dimensional space constructed by these coarse variables. We apply this method to the analysis of the traffic model, called the optimal velocity model, and reveal a bifurcation structure, which features a transition to the emergence of a moving cluster as a traffic jam.
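A minimal sketch of the diffusion-map construction used to define the coarse variables: a Gaussian kernel on pairwise distances, density normalization, and the leading nontrivial eigenvectors of the resulting Markov matrix. Parameter choices are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def diffusion_map(snapshots, epsilon, n_coords=2):
    """snapshots: (n_samples, n_dims) states of the many-body system.
    Returns the leading nontrivial eigenvectors, used as coarse variables."""
    d2 = squareform(pdist(snapshots, "sqeuclidean"))
    K = np.exp(-d2 / epsilon)                    # Gaussian kernel
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                       # density normalization
    d = K.sum(axis=1)
    # Symmetric conjugate S = D^-1/2 K D^-1/2 of the Markov matrix D^-1 K
    S = K / np.sqrt(np.outer(d, d))
    evals, evecs = np.linalg.eigh(0.5 * (S + S.T))
    order = np.argsort(evals)[::-1]
    psi = evecs[:, order] / np.sqrt(d)[:, None]  # back-transform to P's basis
    return psi[:, 1:n_coords + 1]                # skip the trivial mode
```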
NATURAL AND HUMAN FACTORS STRUCTURING FISH ASSEMBLAGES IN WEST VIRGINIA WADEABLE STREAMS
We surveyed fishes and environmental variables in 119 stream basins to identify natural and anthropogenic factors structuring fish assemblages. We collected fishes and physico-chemical variables using standardized EPA methods and compiled basin characteristics (e.g., land cover)...
Applied statistics in agricultural, biological, and environmental sciences.
USDA-ARS?s Scientific Manuscript database
Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...
Solving Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1987-01-01
Initial-value ordinary differential equation solution via variable order Adams method (SIVA/DIVA) package is collection of subroutines for solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
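SIVA/DIVA itself is variable-order FORTRAN; a fixed second-order Adams predictor/corrector conveys the idea of reusing previous derivative evaluations (AB2 predictor, trapezoidal AM2 corrector):

```python
def adams_pc2(f, t0, y0, h, n_steps):
    """Second-order Adams predictor-corrector for y' = f(t, y).
    SIVA/DIVA varies the order adaptively; a fixed order is shown for
    clarity. Only two f evaluations per step after startup."""
    ts, ys = [t0], [y0]
    f_prev = f(t0, y0)
    ts.append(t0 + h); ys.append(y0 + h * f_prev)      # Euler startup step
    for _ in range(1, n_steps):
        t, y = ts[-1], ys[-1]
        f_curr = f(t, y)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)      # AB2 predictor
        y_corr = y + 0.5 * h * (f_curr + f(t + h, y_pred))  # AM2 corrector
        ts.append(t + h); ys.append(y_corr)
        f_prev = f_curr
    return ts, ys

# Example: integrate y' = -y from y(0) = 1 over 100 steps of h = 0.01.
ts, ys = adams_pc2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```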
Manges, Amee R; Tellis, Patricia A; Vincent, Caroline; Lifeso, Kimberley; Geneau, Geneviève; Reid-Smith, Richard J; Boerlin, Patrick
2009-11-01
Discriminatory genotyping methods for the analysis of Escherichia coli other than O157:H7 are necessary for public health-related activities. A new multi-locus variable number tandem repeat analysis protocol is presented; this method achieves an index of discrimination of 99.5% and is reproducible and valid when tested on a collection of 836 diverse E. coli.
Arrossi, Silvina; Ramos, Silvina; Straw, Cecilia; Thouyaret, Laura; Orellana, Liliana
2016-08-19
HPV test self-collection has been shown to reduce barriers to cervical screening and increase uptake. However, little is known about women's preferences when given the choice between self-collected and clinician-collected tests. This paper aims to describe experiences with HPV self-collection among women in Jujuy, the first Argentinean province to have introduced HPV testing as the primary screening method, provided free of cost in all public health centers. Between July and December 2012, data on acceptability of HPV self-collection and several social variables including past screening were collected from 2616 self-collection accepters and 433 non-accepters, and were analyzed using multivariate regression. In addition, in-depth interviews (n = 30) and 2 focus groups were carried out and analyzed using thematic analysis. Quantitative findings indicate that main reasons for choosing self-collection are those reducing barriers related to women's roles of responsibility for domestic work and work/family organization, and to health care services' organization. No social variables were significantly associated with acceptability. Among those who preferred clinician-collection, the main reasons were trust in health professionals and fear of hurting themselves. Qualitative findings also showed that self-collection allows women to overcome barriers related to the health system (i.e. long wait times), without sacrificing time devoted to work/domestic responsibilities. Findings have implications for self-collection recommendations, as they show it is the preferred method when women are given the choice, even if they are not screening non-attenders. Findings also highlight the importance of incorporating women's needs/preferences in HPV screening recommendations.
Ge, Tian; Nichols, Thomas E.; Ghosh, Debashis; Mormino, Elizabeth C.
2015-01-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. PMID:25600633
Work zone variable speed limit systems: Effectiveness and system design issues.
DOT National Transportation Integrated Search
2010-03-01
Variable speed limit (VSL) systems have been used in a number of countries, particularly in Europe, as a method to improve flow and increase safety. VSLs use detectors to collect data on current traffic and/or weather conditions. Posted speed limits ...
McCarty, James; Parrinello, Michele
2017-11-28
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then applies variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to handle missing values. Second, the key variables are identified via factor analysis, and the unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the models using the full set of variables. In addition, the experiments show that the proposed variable selection helps the five forecasting methods used here to improve their forecasting capability.
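A minimal scikit-learn sketch of the impute-then-select-then-forecast pipeline; the column names, synthetic data, and choice of mean imputation are illustrative assumptions, not the study's actual dataset or settings.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

# Hypothetical daily dataset: atmospheric inputs plus reservoir level.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 3.0, n),
    "temperature": 20 + 8 * np.sin(np.arange(n) * 2 * np.pi / 365),
    "humidity": rng.uniform(40, 95, n),
})
df["level"] = (240 + 0.3 * df["rainfall"].rolling(7, min_periods=1).mean()
               - 0.05 * df["temperature"] + rng.normal(0, 0.2, n))
df.loc[rng.choice(n, 50, replace=False), "rainfall"] = np.nan  # missing data

# Step 1: impute missing predictors (mean imputation, one of several options).
X = SimpleImputer(strategy="mean").fit_transform(df.drop(columns="level"))
y = df["level"].shift(-1).ffill().to_numpy()   # target: next day's level

# Steps 2-3: fit a Random Forest; feature importances can then guide
# the sequential deletion of unimportant variables.
split = int(0.8 * n)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
pred = rf.predict(X[split:])
print("RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 3))
print("importances:", np.round(rf.feature_importances_, 2))
```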
NASA Astrophysics Data System (ADS)
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
Collection Evaluation for Interdisciplinary Fields: A Comprehensive Approach.
ERIC Educational Resources Information Center
Dobson, Cynthia; And Others
1996-01-01
Collection development for interdisciplinary areas is more complex than for traditionally well-defined disciplines, so new evaluation methods are needed. This article identifies variables in interdisciplinary fields and presents a model of their typical information components. Traditional use-centered and materials-centered evaluation methods…
From metadynamics to dynamics.
Tiwary, Pratyush; Parrinello, Michele
2013-12-06
Metadynamics is a commonly used and successful enhanced sampling method. By introducing a history-dependent bias that depends on a restricted number of collective variables, it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
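The rate recovery rests on rescaling biased simulation time by the exponential of the instantaneous bias experienced before the transition. A schematic sketch follows; the bias trace, time step, temperature, and kcal/mol units are invented for illustration.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

def rescaled_time(bias_kcal_mol, dt_ps, temperature=300.0):
    """Map biased metadynamics time to physical time (sketch).

    Each step of length dt is stretched by exp(beta * V(s, t)), where
    V is the bias acting on the collective variables at that step.
    """
    beta = 1.0 / (KB * temperature)
    return float(np.sum(dt_ps * np.exp(beta * np.asarray(bias_kcal_mol))))

# Toy usage: bias grows to ~5 kcal/mol before the escape event is observed.
bias_trace = np.linspace(0.0, 5.0, 10_000)   # hypothetical V(s, t) series
t_phys_ps = rescaled_time(bias_trace, dt_ps=0.1)
print(f"1 ns of biased MD ~ {t_phys_ps / 1e6:.2f} us of unbiased dynamics")
```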
Scaling the Library Collection; A Simplified Method for Weighing the Variables
ERIC Educational Resources Information Center
Vagianos, Louis
1973-01-01
On the assumption that the physical properties of any information stock (book, etc.) offer the best foundation on which to develop satisfactory measurements for assessing library operations and developing library procedures, weight is suggested as the most useful variable for assessment and standardization. Advantages of this approach are…
Paretti, Nicholas; Coes, Alissa L.; Kephart, Christopher M.; Mayo, Justine
2018-03-05
Tumacácori National Historical Park protects the culturally important Mission, San José de Tumacácori, while also managing a portion of the ecologically diverse riparian corridor of the Santa Cruz River. This report describes the methods and quality assurance procedures used in the collection of water samples for the analysis of Escherichia coli (E. coli), microbial source tracking markers, suspended sediment, water-quality parameters, turbidity, and the data collection for discharge and stage; the process for data review and approval is also described. Finally, this report provides a quantitative assessment of the quality of the E. coli, microbial source tracking, and suspended sediment data. The data-quality assessment revealed that bias attributed to field and laboratory contamination was minimal, with E. coli detections in only 3 out of 33 field blank samples analyzed. Concentrations in the field blanks were several orders of magnitude lower than environmental concentrations. The microbial source tracking (MST) field blank was below the detection limit for all MST markers analyzed. Laboratory blanks for E. coli at the USGS Arizona Water Science Center and laboratory blanks for MST markers at the USGS Ohio Water Microbiology Laboratory were all below the detection limit. Replicate data for E. coli and suspended sediment indicated that bias was not introduced to the data by combining samples collected using discrete sampling methods with samples collected using automatic sampling methods. The split and sequential E. coli replicate data showed consistent analytical variability, and a single equation was developed to explain the variability of E. coli concentrations. An additional analysis of analytical variability for E. coli indicated analytical variability around 18 percent relative standard deviation, and no trend was observed in the concentration during the processing and analysis of multiple split-replicates. Two replicate samples were collected for MST, and individual markers were compared for a base flow and a flood sample. For the markers found in common between the two types of samples, the relative standard deviation for the base flow sample was more than 3 times greater than that for the markers in the flood sample. Sequential suspended sediment replicates had a relative standard deviation of about 1.3 percent, indicating that environmental and analytical variability was minimal. A holding time review and laboratory study analysis supported the extended holding times required for this investigation. Most concentrations for flood and base-flow samples were within the theoretical variability specified in the most probable number approach, suggesting that extended hold times did not overly influence the final concentrations reported.
Hua, Bin; Abbas, Estelle; Hayes, Alan; Ryan, Peter; Nelson, Lisa; O'Brien, Kylie
2012-11-01
Chinese medicine (CM) has its own diagnostic indicators that are used as evidence of change in a patient's condition. The majority of studies investigating efficacy of Chinese herbal medicine (CHM) have utilized biomedical diagnostic endpoints. For CM clinical diagnostic variables to be incorporated into clinical trial designs, there would need to be evidence that these diagnostic variables are reliable. Previous studies have indicated that the reliability of CM syndrome diagnosis is variable. Little is known about where the variability stems from: the basic data collection level, the synthesis of diagnostic data, or both. No previous studies have investigated systematically the reliability of all four diagnostic methods used in the CM diagnostic process (Inquiry, Inspection, Auscultation/Olfaction, and Palpation). The objective of this study was to assess the inter-rater reliability of data collected using the four diagnostic methods of CM in Australian patients with knee osteoarthritis (OA), in order to investigate whether CM variables could be used with confidence as diagnostic endpoints in a clinical trial investigating the efficacy of a CHM in treating OA. An inter-rater reliability study was conducted as a substudy of a clinical trial investigating the treatment of knee OA with Chinese herbal medicine. Two experienced CM practitioners conducted a CM examination separately, within 2 hours of each other, in 40 participants. A CM assessment form was utilized to record the diagnostic data. Cohen's κ coefficient was used as a measure of the level of agreement between the 2 practitioners. There was a relatively good level of agreement for Inquiry and Auscultation variables and, in general, a low level of agreement for (visual) Inspection and Palpation variables. There was variation in the level of agreement between the 2 practitioners on clinical information collected using the Four Diagnostic Methods of a CM examination. Some aspects of CM diagnosis appear to be reliable, while others are not. Based on these results, it was inappropriate to use CM diagnostic variables as diagnostic endpoints in the main study, which was an investigation of efficacy of CHM treatment of knee OA.
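Cohen's κ, the agreement statistic used in this study, is simple to compute from paired ratings; below is a self-contained sketch with hypothetical presence/absence ratings of a single diagnostic sign.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical ratings of one CM sign in 10 patients by two practitioners.
a = ["present", "absent", "present", "present", "absent",
     "present", "absent", "absent", "present", "present"]
b = ["present", "absent", "absent", "present", "absent",
     "present", "absent", "present", "present", "present"]
print(f"kappa = {cohens_kappa(a, b):.2f}")   # ~0.58, moderate agreement
```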
Evaluating the interior thermal performance of mosques in the tropical environment
NASA Astrophysics Data System (ADS)
Nordin, N. I.; Misni, A.
2018-02-01
This study describes the methodology applied in data collection and analysis. Data collection is the process of gathering and measuring information on targeted variables in an established systematic manner. Qualitative and quantitative methods were combined to collect data from government departments, site experiments, and observation. Indoor thermal performance data for the heritage and new mosques were gathered through thermal monitoring tests and validated against meteorological data. Origin 8 software was used to analyse all the data. Comparison techniques were applied to analyse several factors that influence the indoor thermal performance of mosques, namely building envelope characteristics including floor area, openings, and materials used. Building orientation, location, surrounding vegetation, and water elements were also recorded as supporting primary data. The primary data for four mosques, comprising both heritage and new buildings, were compared using these variables.
A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.
Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu
2015-12-01
Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.
Ge, Tian; Nichols, Thomas E; Ghosh, Debashis; Mormino, Elizabeth C; Smoller, Jordan W; Sabuncu, Mert R
2015-04-01
Measurements derived from neuroimaging data can serve as markers of disease and/or healthy development, are largely heritable, and have been increasingly utilized as (intermediate) phenotypes in genetic association studies. To date, imaging genetic studies have mostly focused on discovering isolated genetic effects, typically ignoring potential interactions with non-genetic variables such as disease risk factors, environmental exposures, and epigenetic markers. However, identifying significant interaction effects is critical for revealing the true relationship between genetic and phenotypic variables, and shedding light on disease mechanisms. In this paper, we present a general kernel machine based method for detecting effects of the interaction between multidimensional variable sets. This method can model the joint and epistatic effect of a collection of single nucleotide polymorphisms (SNPs), accommodate multiple factors that potentially moderate genetic influences, and test for nonlinear interactions between sets of variables in a flexible framework. As a demonstration of application, we applied the method to the data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to detect the effects of the interactions between candidate Alzheimer's disease (AD) risk genes and a collection of cardiovascular disease (CVD) risk factors, on hippocampal volume measurements derived from structural brain magnetic resonance imaging (MRI) scans. Our method identified that two genes, CR1 and EPHA1, demonstrate significant interactions with CVD risk factors on hippocampal volume, suggesting that CR1 and EPHA1 may play a role in influencing AD-related neurodegeneration in the presence of CVD risks. Copyright © 2015 Elsevier Inc. All rights reserved.
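To make the kernel-machine idea concrete, here is a heavily simplified sketch: the interaction between a SNP set and a risk-factor set is encoded as the elementwise (Hadamard) product of two RBF kernel matrices, and a permutation test on a quadratic-form statistic stands in for the calibrated variance-component score test of the paper. All data are synthetic, and the test shown assesses the joint kernel effect rather than an interaction-only effect.

```python
import numpy as np

def rbf_kernel(X, gamma=None):
    """Gaussian (RBF) kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    gamma = 1.0 / X.shape[1] if gamma is None else gamma
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
n = 200
G = rng.binomial(2, 0.3, size=(n, 20)).astype(float)  # toy SNP dosages
E = rng.normal(size=(n, 5))                           # toy CVD risk factors
y = 0.4 * G[:, 0] * E[:, 0] + rng.normal(size=n)      # phenotype with GxE
y = (y - y.mean()) / y.std()

K_int = rbf_kernel(G) * rbf_kernel(E)  # Hadamard product encodes interaction

def score(y, K):
    return float(y @ K @ y)            # quadratic-form statistic

obs = score(y, K_int)
perms = [score(rng.permutation(y), K_int) for _ in range(500)]
pval = (1 + sum(p >= obs for p in perms)) / 501
print(f"interaction-kernel score = {obs:.1f}, permutation p = {pval:.3f}")
```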
Sampling methods for the study of pneumococcal carriage: a systematic review.
Gladstone, R A; Jefferies, J M; Faust, S N; Clarke, S C
2012-11-06
Streptococcus pneumoniae is an important pathogen worldwide. Accurate sampling of S. pneumoniae carriage is central to surveillance studies before and following conjugate vaccination programmes to combat pneumococcal disease. Any bias introduced during sampling will affect downstream recovery and typing. Many variables exist for the method of collection and initial processing, which can make inter-laboratory or international comparisons of data complex. In February 2003, a World Health Organisation working group published a standard method for the detection of pneumococcal carriage for vaccine trials to reduce or eliminate variability. We sought to describe the variables associated with the sampling of S. pneumoniae from collection to storage in the context of the methods recommended by the WHO and those used in pneumococcal carriage studies since its publication. A search of published literature in the online PubMed database was performed on the 1st June 2012, to identify published studies that collected pneumococcal carriage isolates, conducted after the publication of the WHO standard method. After undertaking a systematic analysis of the literature, we show that a number of differences in pneumococcal sampling protocol continue to exist between studies since the WHO publication. The majority of studies sample from the nasopharynx, but the choice of swab and swab transport media is more variable between studies. At present there are insufficient experimental data supporting the optimal sensitivity of any standard method. This may have contributed to incomplete adoption of the primary stages of the WHO detection protocol, alongside pragmatic or logistical issues associated with study design. Consequently, studies may not provide a true estimate of pneumococcal carriage. Optimal sampling of carriage could lead to improvements in downstream analysis and in the evaluation of pneumococcal vaccine impact and extrapolation to pneumococcal disease control; therefore, further in-depth comparisons would be of value. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impact of Preadmission Variables on USMLE Step 1 and Step 2 Performance
ERIC Educational Resources Information Center
Kleshinski, James; Khuder, Sadik A.; Shapiro, Joseph I.; Gold, Jeffrey P.
2009-01-01
Purpose: To examine the predictive ability of preadmission variables on United States Medical Licensing Examinations (USMLE) step 1 and step 2 performance, incorporating the use of a neural network model. Method: Preadmission data were collected on matriculants from 1998 to 2004. Linear regression analysis was first used to identify predictors of…
Flagging versus dragging as sampling methods for nymphal Ixodes scapularis (Acari: Ixodidae)
Rulison, Eric L.; Kuczaj, Isis; Pang, Genevieve; Hickling, Graham J.; Tsao, Jean I.; Ginsberg, Howard S.
2013-01-01
The nymphal stage of the blacklegged tick, Ixodes scapularis (Acari: Ixodidae), is responsible for most transmission of Borrelia burgdorferi, the etiologic agent of Lyme disease, to humans in North America. From 2010 to fall of 2012, we compared two commonly used techniques, flagging and dragging, as sampling methods for nymphal I. scapularis at three sites, each with multiple sampling arrays (grids), in the eastern and central United States. Flagging and dragging collected comparable numbers of nymphs, with no consistent differences between methods. Dragging collected more nymphs than flagging in some samples, but these differences were not consistent among sites or sampling years. The ratio of nymphs collected by flagging vs dragging was not significantly related to shrub density, so habitat type did not have a strong effect on the relative efficacy of these methods. Therefore, although dragging collected more ticks in a few cases, the numbers collected by each method were so variable that neither technique had a clear advantage for sampling nymphal I. scapularis.
Hinckley, A; Bachand, A; Nuckols, J; Reif, J
2005-01-01
Background and Aims: Epidemiological studies of disinfection by-products (DBPs) and reproductive outcomes have been hampered by misclassification of exposure. In most epidemiological studies conducted to date, all persons living within the boundaries of a water distribution system have been assigned a common exposure value based on facility-wide averages of trihalomethane (THM) concentrations. Since THMs do not develop uniformly throughout a distribution system, assignment of facility-wide averages may be inappropriate. One approach to mitigate this potential for misclassification is to select communities for epidemiological investigations that are served by distribution systems with consistently low spatial variability of THMs. Methods and Results: A feasibility study was conducted to develop methods for community selection using the Information Collection Rule (ICR) database, assembled by the US Environmental Protection Agency. The ICR database contains quarterly DBP concentrations collected between 1997 and 1998 from the distribution systems of 198 public water facilities with minimum service populations of 100 000 persons. Facilities with low spatial variation of THMs were identified using two methods; 33 facilities were found with low spatial variability based on one or both methods. Because brominated THMs may be important predictors of risk for adverse reproductive outcomes, sites were categorised into three exposure profiles according to proportion of brominated THM species and average TTHM concentration. The correlation between THMs and haloacetic acids (HAAs) in these facilities was evaluated to see whether selection by total trihalomethanes (TTHMs) corresponds to low spatial variability for HAAs. TTHMs were only moderately correlated with HAAs (r = 0.623). Conclusions: Results provide a simple method for a priori selection of sites with low spatial variability from state or national public water facility datasets as a means to reduce exposure misclassification in epidemiological studies of DBPs. PMID:15961627
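The facility-screening step can be reduced to computing a spatial-variability index per distribution system and thresholding it; the sketch below uses the coefficient of variation across within-system sampling points, with invented TTHM values and a hypothetical 0.15 cutoff.

```python
import numpy as np

# Hypothetical TTHM measurements (ug/L) at several sampling points
# within each facility's distribution system.
facilities = {
    "A": [42, 45, 44, 41, 43, 46],
    "B": [12, 55, 30, 80, 22, 64],
    "C": [70, 72, 69, 71, 73, 70],
}

def spatial_cv(values):
    """Coefficient of variation as a simple spatial-variability index."""
    v = np.asarray(values, dtype=float)
    return v.std(ddof=1) / v.mean()

for name, tthm in facilities.items():
    cv = spatial_cv(tthm)
    label = "low" if cv < 0.15 else "high"      # hypothetical cutoff
    print(f"facility {name}: CV = {cv:.2f} ({label} spatial variability)")
```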
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
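The greedy graph-selection stage can be sketched with networkx's planarity test (a library choice of this sketch, not the paper's implementation): candidate edges are tried in order of decreasing |correlation| and kept only if the graph remains planar. Fitting the optimal planar Ising model on the selected graph and computing its exact partition function are separate steps not shown.

```python
import numpy as np
import networkx as nx

def greedy_planar_graph(corr):
    """Greedily build a planar graph from pairwise correlations (sketch)."""
    n = corr.shape[0]
    pairs = sorted(((abs(corr[i, j]), i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    G = nx.empty_graph(n)
    for _, i, j in pairs:
        G.add_edge(i, j)
        if not nx.check_planarity(G)[0]:
            G.remove_edge(i, j)        # this edge would break planarity
    return G

rng = np.random.default_rng(3)
votes = rng.choice([-1, 1], size=(500, 8))        # toy binary variables
corr = np.corrcoef(votes, rowvar=False)
G = greedy_planar_graph(corr)
print(f"selected {G.number_of_edges()} edges on {G.number_of_nodes()} nodes")
```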
ERIC Educational Resources Information Center
Science Teacher, 1990
1990-01-01
Eight activities for use in the science classroom are presented. Included are insect collecting, laboratory procedures and safety, recycling, current events, variable manipulation, scientific method, electricity, and mechanics (Newton's Second Law of Motion). (KR)
Analysis of the semi-permanent house in Merauke city in terms of aesthetic value in architecture
NASA Astrophysics Data System (ADS)
Topan, Anton; Octavia, Sari; Soleman, Henry
2018-05-01
Semi-permanent houses, also called “Rumah Kancingan”, are common in Merauke city. They are called semi-permanent because the main structure is made of wood even though the walls are made of brick. This research analyzes the semi-permanent house in terms of aesthetic value. It is a qualitative study with data collected through questionnaires, direct field observation, and a literature review. The questionnaire data were processed using SPSS to obtain the influence of the independent variables on the dependent variable. Color, ornament, door-window shape, and roof shape (independent variables) together account for 97.1% of the influence on the aesthetics of the semi-permanent house, and based on the SPSS coefficient output each independent variable has p-value < 0.05, meaning the independent variables have a significant effect on the aesthetic variable. The semi-permanent construction and wooden structure variables account for 98.6% of the influence on aesthetics, and based on the SPSS coefficient results these variables also have p-value < 0.05, again indicating a significant effect on the aesthetic variable.
Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner
2004-01-01
Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a largescale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...
Pancreatic islet isolation variables in non-human primates (rhesus macaques).
Andrades, P; Asiedu, C K; Gansuvd, B; Inusah, S; Goodwin, K J; Deckard, L A; Jargal, U; Thomas, J M
2008-07-01
Non-human primates (NHPs) are important preclinical models for pancreatic islet transplantation (PIT) because of their close phylogenetic and immunological relationship with humans. However, low availability of NHP tissue, long learning curves and prohibitive expenses constrain the consistency of isolated NHP islets for PIT studies. To advance preclinical studies, we attempted to identify key variables that consistently influence the quantity and quality of NHP islets. Seventy-two consecutive pancreatic islet isolations from rhesus macaques were reviewed retrospectively. A scaled down, semi-automated islet isolation method was used, and monkeys with streptozotocin-induced diabetes, weighing 3-7 kg, served as recipients for allotransplantation. We analysed the effects of 22 independent variables grouped as donor factors, surgical factors and isolation technique factors. Islet yields, success of isolation and transplantation results were used as quantitative and qualitative outcomes. In the multivariate analysis, variables that significantly affected islet yield were the type of monkey, pancreas preservation, enzyme lot and volume of enzyme delivered. The variables associated with successful isolation were the enzyme lot and volume delivered. The transplant result was correlated with pancreas preservation, enzyme lot, endotoxin levels and COBE collection method. Islet quantity and quality are highly variable between isolations. The data reviewed suggest that future NHP isolations should use bilayer preservation, infuse more than 80 ml of Liberase into the pancreas, collect non-fractioned tissue from the COBE, and strictly monitor for infection.
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
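The role of the uncertainty indicator can be illustrated with a toy bootstrap ensemble: disagreement among models trained on the data gathered so far is large precisely where the collective-variable space is unexplored, and that is where new data would be collected. Polynomial fits stand in for the paper's deep networks, and all numbers, including the 0.1 flagging threshold, are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
s_seen = rng.uniform(-1.0, 1.0, size=40)           # visited CV values
f_seen = s_seen ** 2 + 0.05 * rng.normal(size=40)  # noisy free-energy data

ensemble = []
for _ in range(8):                                 # bootstrap ensemble
    idx = rng.integers(0, len(s_seen), len(s_seen))
    ensemble.append(np.polyfit(s_seen[idx], f_seen[idx], deg=4))

s_grid = np.linspace(-2.0, 2.0, 9)                 # extends past visited range
preds = np.array([np.polyval(c, s_grid) for c in ensemble])
uncertainty = preds.std(axis=0)                    # ensemble disagreement
for s, u in zip(s_grid, uncertainty):
    flag = "  <- collect more data here" if u > 0.1 else ""
    print(f"s = {s:+.2f}  uncertainty = {u:7.3f}{flag}")
```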
Stability of Measures from Children's Interviews: The Effects of Time, Sample Length, and Topic
ERIC Educational Resources Information Center
Heilmann, John; DeBrock, Lindsay; Riley-Tillman, T. Chris
2013-01-01
Purpose: The purpose of this study was to examine the reliability of, and sources of variability in, language measures from interviews collected from young school-age children. Method: Two 10-min interviews were collected from 20 at-risk kindergarten children by an examiner using a standardized set of questions. Test-retest reliability…
Kogut, Katherine; Eisen, Ellen A.; Jewell, Nicholas P.; Quirós-Alcalá, Lesliam; Castorina, Rosemary; Chevrier, Jonathan; Holland, Nina T.; Barr, Dana Boyd; Kavanagh-Baird, Geri; Eskenazi, Brenda
2012-01-01
Background: Dialkyl phosphate (DAP) metabolites in spot urine samples are frequently used to characterize children’s exposures to organophosphorous (OP) pesticides. However, variable exposure and short biological half-lives of OP pesticides could result in highly variable measurements, leading to exposure misclassification. Objective: We examined within- and between-child variability in DAP metabolites in urine samples collected during 1 week. Methods: We collected spot urine samples over 7 consecutive days from 25 children (3–6 years of age). On two of the days, we collected 24-hr voids. We assessed the reproducibility of urinary DAP metabolite concentrations and evaluated the sensitivity and specificity of spot urine samples as predictors of high (top 20%) or elevated (top 40%) weekly average DAP metabolite concentrations. Results: Within-child variance exceeded between-child variance by a factor of two to eight, depending on metabolite grouping. Although total DAP concentrations in single spot urine samples were moderately to strongly associated with concentrations in same-day 24-hr samples (r ≈ 0.6–0.8, p < 0.01), concentrations in spot samples collected > 1 day apart and in 24-hr samples collected 3 days apart were weakly correlated (r ≈ –0.21 to 0.38). Single spot samples predicted high (top 20%) and elevated (top 40%) full-week average total DAP excretion with only moderate sensitivity (≈ 0.52 and ≈ 0.67, respectively) but relatively high specificity (≈ 0.88 and ≈ 0.78, respectively). Conclusions: The high variability we observed in children’s DAP metabolite concentrations suggests that single-day urine samples provide only a brief snapshot of exposure. Sensitivity analyses suggest that classification of cumulative OP exposure based on spot samples is prone to type 2 classification errors. PMID:23052012
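The within- versus between-child comparison corresponds to a one-way random-effects variance decomposition. Below is a balanced-design sketch on simulated data; the sample sizes mirror the study (25 children, 7 days), but the distributional parameters are invented.

```python
import numpy as np

def variance_components(groups):
    """One-way random-effects decomposition for a balanced design (sketch).

    groups : list of equal-length arrays of repeated measurements per child
    Returns (between-child, within-child) variance estimates.
    """
    k, n = len(groups), len(groups[0])
    means = np.array([np.mean(g) for g in groups])
    ms_between = n * np.sum((means - means.mean()) ** 2) / (k - 1)
    ms_within = np.mean([np.var(g, ddof=1) for g in groups])
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between, ms_within

# Toy data: 25 children x 7 daily log-metabolite values, within >> between.
rng = np.random.default_rng(4)
children = [rng.normal(loc=rng.normal(0.0, 0.5), scale=1.2, size=7)
            for _ in range(25)]
vb, vw = variance_components(children)
print(f"between-child = {vb:.2f}, within-child = {vw:.2f}, ratio = {vw / vb:.1f}")
```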
Saleh-Lakha, S.; Allen, V. G.; Li, J.; Pagotto, F.; Odumeru, J.; Taboada, E.; Lombos, M.; Tabing, K. C.; Blais, B.; Ogunremi, D.; Downing, G.; Lee, S.; Gao, A.; Nadon, C.
2013-01-01
Listeria monocytogenes is responsible for severe and often fatal food-borne infections in humans. A collection of 2,421 L. monocytogenes isolates originating from Ontario's food chain between 1993 and 2010, along with Ontario clinical isolates collected from 2004 to 2010, was characterized using an improved multilocus variable-number tandem-repeat analysis (MLVA). The MLVA method was established based on eight primer pairs targeting seven variable-number tandem-repeat (VNTR) loci in two 4-plex fluorescent PCRs. Diversity indices and amplification rates of the individual VNTR loci ranged from 0.38 to 0.92 and from 0.64 to 0.99, respectively. MLVA types and pulsed-field gel electrophoresis (PFGE) patterns were compared using Comparative Partitions analysis involving 336 clinical and 99 food and environmental isolates. The analysis yielded Simpson's diversity index values of 0.998 and 0.992 for MLVA and PFGE, respectively, and adjusted Wallace coefficients of 0.318 when MLVA was used as a primary subtyping method and 0.088 when PFGE was a primary typing method. Statistical data analysis using BioNumerics allowed for identification of at least 8 predominant and persistent L. monocytogenes MLVA types in Ontario's food chain. The MLVA method correctly clustered epidemiologically related outbreak strains and separated unrelated strains in a subset analysis. An MLVA database was established for the 2,421 L. monocytogenes isolates, which allows for comparison of data among historical and new isolates of different sources. The subtyping method coupled with the MLVA database will help in effective monitoring/prevention approaches to identify environmental contamination by pathogenic strains of L. monocytogenes and investigation of outbreaks. PMID:23956391
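Simpson's diversity index, reported above for both MLVA and PFGE, is computed directly from the subtype counts; a short sketch with hypothetical MLVA type assignments:

```python
from collections import Counter

def simpsons_diversity(labels):
    """Simpson's index of diversity: D = 1 - sum n_i(n_i - 1) / (N(N - 1))."""
    counts = Counter(labels).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical MLVA types assigned to 12 isolates.
mlva_types = ["T1", "T1", "T2", "T3", "T3", "T3",
              "T4", "T5", "T6", "T7", "T8", "T8"]
print(f"Simpson's diversity = {simpsons_diversity(mlva_types):.3f}")
```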
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
ERIC Educational Resources Information Center
McClean, Brian; Grey, Ian
2012-01-01
Background: Positive behaviour support emphasises the impact of contextual variables to enhance participation, choice, and quality of life. This study evaluates a sequence for implementing changes to key contextual variables for 4 individuals. Interventions were maintained and data collection continued over a 3-year period. Method: Functional…
ERIC Educational Resources Information Center
Akgün, Ismail Hakan
2016-01-01
The aim of this research is to determine Social Studies teacher candidates' intended uses of social networks in terms of various variables. The research was carried out by using screening model of quantitative research methods. In the study, "The Social Network Intended Use Scale" was used as a data collection tool. As a result of the…
Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J
2015-07-01
Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. Lung VOIs showed high SDs in the range of 15% for all methods. For VOIs with elevated uptake, mean and SD were less than 5% except in the thorax. The use of a dedicated atlas for the pediatric patient collective resulted in improved attenuation map prediction in osseous regions and reduced interpatient bias variation in femur-adjacent VOIs. For the lungs, in which intrapatient variation was higher for the pediatric collective, a patient- or group-specific attenuation coefficient might improve attenuation map accuracy. Mean errors of -14% and -23% in bone marrow and femur-adjacent VOIs can affect PET quantification in these regions when bone tissue is ignored. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Sources of variability in collection and preparation of paint and lead-coating samples.
Harper, S L; Gutknecht, W F
2001-06-01
Chronic exposure of children to lead (Pb) can result in permanent physiological impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The sources of variability in the collection and preparation of samples were investigated to improve the performance and comparability of methods and to ensure that data generated will be adequate for their intended use. Paint samples of varying sizes (areas and masses) were collected at different locations across a variety of surfaces including metal, plaster, concrete, and wood. A variety of grinding techniques were compared. Manual mortar-and-pestle grinding for at least 1.5 min and mechanized grinding techniques were found to generate similarly homogeneous particle size distributions, as required for aliquots as small as 0.10 g. When 342 samples were evaluated for sample weight loss during mortar-and-pestle grinding, 4% had a loss of 20% or greater, with a high of 41%. Homogenization and sub-sampling steps were found to be the principal sources of variability related to the size of the sample collected. Samples from different locations on apparently identical surfaces were found to vary by more than a factor of two both in Pb concentration (mg/cm² or %) and areal coating density (g/cm²). Analyses of substrates were performed to determine the Pb remaining after coating removal. Levels as high as 1% Pb were found in some substrate samples, corresponding to more than 35 mg/cm² Pb. In conclusion, these sources of variability must be considered in the development and/or application of any sampling and analysis methodologies.
Efficient method for assessing channel instability near bridges
Robinson, Bret A.; Thompson, R.E.
1993-01-01
Efficient methods for data collection and processing are required to complete channel-instability assessments at 5,600 bridge sites in Indiana at an affordable cost and within a reasonable time frame while maintaining the quality of the assessments. To provide this needed efficiency and quality control, a data-collection form was developed that specifies the data to be collected and the order of data collection. This form represents a modification of previous forms that grouped variables according to type rather than by order of collection. Assessments completed during two field seasons showed that greater efficiency was achieved by using a fill-in-the-blank form that organizes the data to be recorded in a specified order: in the vehicle, from the roadway, in the upstream channel, under the bridge, and in the downstream channel.
COMMUNICATING PROBABILISTIC RISK OUTCOMES TO RISK MANAGERS
Increasingly, risk assessors are moving away from simple deterministic assessments to probabilistic approaches that explicitly incorporate ecological variability, measurement imprecision, and lack of knowledge (collectively termed "uncertainty"). While the new methods provide an...
Conformational Entropy as Collective Variable for Proteins.
Palazzesi, Ferruccio; Valsson, Omar; Parrinello, Michele
2017-10-05
Many enhanced sampling methods rely on the identification of appropriate collective variables. For proteins, even small ones, finding appropriate descriptors has proven challenging. Here we suggest that the NMR S2 order parameter can be used to this effect. We trace the validity of this statement to the suggested relation between S2 and conformational entropy. Using the S2 order parameter and a surrogate for the protein enthalpy in conjunction with metadynamics or variationally enhanced sampling, we are able to reversibly fold and unfold a small protein and draw its free energy at a fraction of the time that is needed in unbiased simulations. We also use S2 in combination with the free energy flooding method to compute the unfolding rate of this peptide. We repeat this calculation at different temperatures to obtain the unfolding activation energy.
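For reference, the S2 order parameter of a single bond vector follows from its second orientational moments, S2 = (3 Σab ⟨ua ub⟩² − 1)/2. The sketch below is a generic implementation of this standard formula on synthetic orientations, not the authors' collective-variable code.

```python
import numpy as np

def s2_order_parameter(unit_vectors):
    """NMR-style S2 order parameter for one bond vector (sketch).

    unit_vectors : (T, 3) array of normalized bond orientations over time.
    S2 ranges from 0 (fully disordered) to 1 (rigid); lower S2 implies
    higher conformational entropy.
    """
    u = np.asarray(unit_vectors)
    m = np.einsum('ta,tb->ab', u, u) / len(u)   # second moments <u_a u_b>
    return 1.5 * np.sum(m ** 2) - 0.5

rng = np.random.default_rng(5)
rigid = np.tile([0.0, 0.0, 1.0], (1000, 1))             # frozen bond
v = rng.normal(size=(1000, 3))
floppy = v / np.linalg.norm(v, axis=1, keepdims=True)   # isotropic tumbling
print(f"rigid S2 = {s2_order_parameter(rigid):.2f}, "
      f"floppy S2 = {s2_order_parameter(floppy):.2f}")
```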
Collective coordinates and constrained hamiltonian systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-07-01
A general method of incorporating collective coordinates (transformation of fields into an overcomplete basis) with constrained hamiltonian systems is given, where the original phase space variables and collective coordinates can be bosonic or/and fermionic. This method is illustrated by applying it to the SU(2) Yang-Mills-Higgs theory, and its BFV-BRST quantization is discussed. Moreover, this formalism is used to give a systematic way of converting second class constraints into effectively first class ones, by considering second class constraints as first class constraints and gauge fixing conditions. This approach is applied to the massive superparticle, the Proca Lagrangian, and some topological quantum field theories.
Comparison of dialysis membrane diffusion samplers and two purging methods in bedrock wells
Imbrigiotta, T.E.; Ehlke, T.A.; Lacombe, P.J.; Dale, J.M.
2002-01-01
Collection of ground-water samples from bedrock wells using low-flow purging techniques is problematic because of the random spacing, variable hydraulic conductivity, and variable contamination of contributing fractures in each well's open interval. To test alternatives to this purging method, a field comparison of three ground-water-sampling techniques was conducted on wells in fractured bedrock at a site contaminated primarily with volatile organic compounds. Constituent concentrations in samples collected with a diffusion sampler constructed from dialysis membrane material were compared to those in samples collected from the same wells with a standard low-flow purging technique and a hybrid (high-flow/low-flow) purging technique. Concentrations of trichloroethene, cis-1,2-dichloroethene, vinyl chloride, calcium, chloride, and alkalinity agreed well among samples collected with all three techniques in 9 of the 10 wells tested. Iron concentrations varied more than those of the other parameters, but their pattern of variation was not consistent. Overall, the results of nonparametric analysis of variance testing on the nine wells sampled twice showed no statistically significant difference at the 95-percent confidence level among the concentrations of volatile organic compounds or inorganic constituents recovered by use of any of the three sampling techniques.
Prodhan, Mohammad Dalower Hossain; Papadakis, Emmanouil-Nikolaos; Papadopoulou-Mourkidou, Euphemia
2018-04-01
Variability of pesticide residues among food items is very important when assessing the risks and food safety for the consumers. Therefore, the present study was undertaken to estimate the unit-to-unit residue variability factors for eggplant. In total, 120 samples from a trial field and 142 samples from different marketplaces in Thessaloniki, Greece, were collected to estimate the variability of pesticide residues in eggplant units. They were extracted by the QuEChERS method and the residues were determined by LC-MS/MS. For the field samples, the unit-to-unit variability factors (VFs) obtained for cypermethrin and deltamethrin residues were 2.54 and 2.51, respectively. The mean residue levels of both pesticides were higher in the composite samples than in the individual samples. The average VF for the marketplace samples was 3.89. The eggplant units exposed to pesticides were higher in residues than the non-exposed units. The variability factors obtained in the marketplace samples were higher than those in the samples collected from the field trial. A default VF value of 3 for field trials is appropriate for use when assessing the acute dietary intake, but the VF for marketplace samples should be reconsidered with a larger dataset. © 2017 Society of Chemical Industry.
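A unit-to-unit variability factor of this kind is conventionally computed as the 97.5th percentile of single-unit residues divided by their mean; a sketch with simulated residue data (the lognormal parameters are invented):

```python
import numpy as np

def variability_factor(unit_residues):
    """VF = 97.5th percentile / mean of single-unit residues (sketch)."""
    r = np.asarray(unit_residues, dtype=float)
    return np.percentile(r, 97.5) / r.mean()

# Hypothetical residues (mg/kg) measured in individual eggplant units.
rng = np.random.default_rng(6)
residues = rng.lognormal(mean=-1.0, sigma=0.6, size=120)
print(f"VF = {variability_factor(residues):.2f}")
```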
ERIC Educational Resources Information Center
Abry, Tashia D. S.; Rimm-Kaufman, Sara E.; Larsen, Ross A.; Brewer, Alix J.
2011-01-01
The present study examines data collected during the second year of a three-year longitudinal cluster randomized controlled trial, the Responsive Classroom Efficacy Study (RCES). In the context of an RCT, the research questions address naturally occurring variability in the independent variables of interest (i.e., teachers' fidelity of…
ERIC Educational Resources Information Center
Forsyth, Rob; McNally, Richard; James, Peter; Crossland, Kevin; Woolley, Mark; Colver, Allan
2010-01-01
Aim: The aim of this study was to examine geographical variability in the support for families caring for children with severe disabilities as well as the relationships between this variability and local government social and educational performance indicators. Method: Data were collected from a cross-sectional, self-completed postal survey of the…
Classification of collected trot, passage and piaffe based on temporal variables.
Clayton, H M
1997-05-01
The objective was to determine whether collected trot, passage and piaffe could be distinguished as separate gaits on the basis of temporal variables. Sagittal plane, 60 Hz videotapes of 10 finalists in the dressage competitions at the 1992 Olympic Games were analysed to measure the temporal variables in absolute terms and as percentages of stride duration. Classification was based on analysis of variance, a graphical method and discriminant analysis. Stride duration was sufficient to distinguish collected trot from passage and piaffe in all horses. The analysis of variance showed that the mean values of most variables differed significantly between passage and piaffe. When hindlimb stance percentage was plotted against diagonal advanced placement percentage, some overlap was found between all 3 movements indicating that individual horses could not be classified reliably in this manner. Using hindlimb stance percentage and diagonal advanced placement percentage as input in a discriminant analysis, 80% of the cases were classified correctly, but at least one horse was misclassified in each movement. When the absolute, rather than percentage, values of the 2 variables were used as input in the discriminant analysis, 90% of the cases were correctly classified and the only misclassifications were between passage and piaffe. However, the 2 horses in which piaffe was misclassified as passage were the gold and silver medallists. In general, higher placed horses tended toward longer diagonal advanced placements, especially in collected trot and passage, and shorter hindlimb stance percentages in passage and piaffe.
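A discriminant analysis of this kind can be reproduced schematically with scikit-learn; the class centers and spreads below are invented placeholders rather than the measured Olympic data, so only the workflow, not the reported accuracy, carries over.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical temporal stride variables for the three movements:
# [hindlimb stance duration (s), diagonal advanced placement (s)].
rng = np.random.default_rng(7)
def simulate(center, spread, n=30):
    return np.asarray(center) + np.asarray(spread) * rng.normal(size=(n, 2))

X = np.vstack([simulate([0.33, 0.02], [0.02, 0.01]),   # collected trot
               simulate([0.45, 0.04], [0.03, 0.01]),   # passage
               simulate([0.50, 0.03], [0.03, 0.01])])  # piaffe
y = np.repeat(["trot", "passage", "piaffe"], 30)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"cross-validated classification accuracy: {scores.mean():.0%}")
```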
Khan, Jenna; Lieberman, Joshua A; Lockwood, Christina M
2017-05-01
microRNAs (miRNAs) hold promise as biomarkers for a variety of disease processes and for determining cell differentiation. These short RNA species are robust, survive harsh treatment and storage conditions and may be extracted from blood and tissue. Pre-analytical variables are critical confounders in the analysis of miRNAs: we elucidate these and identify best practices for minimizing sample variation in blood and tissue specimens. Pre-analytical variables addressed include patient-intrinsic variation, time and temperature from sample collection to storage or processing, processing methods, contamination by cells and blood components, RNA extraction method, normalization, and storage time/conditions. For circulating miRNAs, hemolysis and blood cell contamination significantly affect profiles; samples should be processed within 2 h of collection; ethylene diamine tetraacetic acid (EDTA) is preferred while heparin should be avoided; samples should be "double spun" or filtered; room temperature or 4 °C storage for up to 24 h is preferred; miRNAs are stable for at least 1 year at -20 °C or -80 °C. For tissue-based analysis, warm ischemic time should be <1 h; cold ischemic time (4 °C) <24 h; common fixative used for all specimens; formalin fix up to 72 h prior to processing; enrich for cells of interest; validate candidate biomarkers with in situ visualization. Most importantly, all specimen types should have standard and common workflows with careful documentation of relevant pre-analytical variables.
Turbidity threshold sampling: Methods and instrumentation
Rand Eads; Jack Lewis
2001-01-01
Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua
2017-10-01
Wet gluten is a useful quality indicator for wheat, and short-wave near-infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid, and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method; 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline was the optimal pretreatment method, and the all-variable models performed better than the GA models. The correct classification rates for the three classes of wet gluten content (<24%, 24-30%, and >30%) were 95.45%, 84.52%, and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
Lovell, David P; Fellows, Mick; Marchetti, Francesco; Christiansen, Joan; Elhajouji, Azeddine; Hashimoto, Kiyohiro; Kasamoto, Sawako; Li, Yan; Masayasu, Ozaki; Moore, Martha M; Schuler, Maik; Smith, Robert; Stankowski, Leon F; Tanaka, Jin; Tanir, Jennifer Y; Thybaud, Veronique; Van Goethem, Freddy; Whitwell, James
2018-01-01
The recent revisions of the Organisation for Economic Co-operation and Development (OECD) genetic toxicology test guidelines emphasize the importance of historical negative controls both for data quality and interpretation. The goal of a HESI Genetic Toxicology Technical Committee (GTTC) workgroup was to collect data from participating laboratories and to conduct a statistical analysis to understand and publish the range of values that are normally seen in experienced laboratories using TK6 cells to conduct the in vitro micronucleus assay. Data from negative control samples from in vitro micronucleus assays using TK6 cells from 13 laboratories were collected using a standard collection form. Although statistically significant differences were seen in some cases within laboratories for different test conditions, they were very small. The mean incidence of micronucleated cells/1000 cells ranged from 3.2/1000 to 13.8/1000. These almost four-fold differences in micronucleus levels cannot be explained by differences in scoring method, presence or absence of exogenous metabolic activation (S9), length of treatment, presence or absence of cytochalasin B, or different solvents used as vehicles. The range of means from the four laboratories using flow cytometry methods (3.7-fold: 3.5-12.9 micronucleated cells/1000 cells) was similar to that from the nine laboratories using other scoring methods (4.3-fold: 3.2-13.8 micronucleated cells/1000 cells). No laboratory could be identified as an outlier or as showing unacceptably high variability. Quality control (QC) methods applied to analyse the intra-laboratory variability showed evidence of inter-experimental variability greater than would be expected by chance (i.e. over-dispersion); however, in general, this was low. This study demonstrates the value of QC methods in helping to analyse the reproducibility of results, build up a 'normal' range of values, and identify variability within a laboratory in order to implement processes to maintain and improve uniformity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Quality of nutrient data from streams and ground water sampled during water years 1992-2001
Mueller, David K.; Titus, Cindy J.
2005-01-01
Proper interpretation of water-quality data requires consideration of the effects that bias and variability might have on measured constituent concentrations. In this report, methods are described to estimate the bias due to contamination of samples in the field or laboratory and the variability due to sample collection, processing, shipment, and analysis. Contamination can adversely affect interpretation of measured concentrations in comparison to standards or criteria. Variability can affect interpretation of small differences between individual measurements or mean concentrations. Contamination and variability are determined for nutrient data from quality-control samples (field blanks and replicates) collected as part of the National Water-Quality Assessment (NAWQA) Program during water years 1992-2001. Statistical methods are used to estimate the likelihood of contamination and variability in all samples. Results are presented for five nutrient analytes from stream samples and four nutrient analytes from ground-water samples. Ammonia contamination can add at least 0.04 milligram per liter in up to 5 percent of all samples. This could account for more than 22 percent of measured concentrations at the low range of aquatic-life criteria (0.18 milligram per liter). Orthophosphate contamination, at least 0.019 milligram per liter in up to 5 percent of all samples, could account for more than 38 percent of measured concentrations at the limit to avoid eutrophication (0.05 milligram per liter). Nitrite-plus-nitrate and Kjeldahl nitrogen contamination is less than 0.4 milligram per liter in 99 percent of all samples; thus there is no significant effect on measured concentrations of environmental significance. Sampling variability has little or no effect on reported concentrations of ammonia, nitrite-plus-nitrate, orthophosphate, or total phosphorus sampled after 1998. The potential errors due to sampling variability are greater for the Kjeldahl nitrogen analytes and for total phosphorus sampled before 1999. The uncertainty in a mean of 10 concentrations caused by sampling variability is within a small range (1 to 7 percent) for all nutrients. These results can be applied to interpretation of environmental data collected during water years 1992-2001 in 52 NAWQA study units.
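As a check on the percentages reported above, the contamination bounds can be restated as fractions of the cited criteria (a worked restatement of the abstract's own figures, not additional data):

```latex
\frac{0.04\ \text{mg/L}}{0.18\ \text{mg/L}} \approx 22\%
\qquad
\frac{0.019\ \text{mg/L}}{0.05\ \text{mg/L}} = 38\%
```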
Berrozpe, Pablo; Lamattina, Daniela; Santini, María Soledad; Araujo, Analía Vanesa; Utgés, María Eugenia; Salomón, Oscar Daniel
2017-01-01
BACKGROUND Visceral leishmaniasis (VL) is an endemic disease in northeastern Argentina, including the Corrientes province, where the presence of the vector and canine cases of VL were confirmed in December 2008. OBJECTIVES The objective of this study was to model micro- and macro-habitat variables in order to evaluate urban environmental suitability for the spatial distribution of Lutzomyia longipalpis presence and abundance. METHODS Sampling of 45 sites distributed throughout Corrientes city (Argentina) was carried out using REDILA-BL minilight traps in December 2013. The sampled specimens were identified according to methods described by Galati (2003). The analysis of variables derived from the processing of satellite images (macro-habitat variables) and from the entomological sampling and surveys (micro-habitat variables) was performed using the statistical software R. Three generalised linear models composed of micro- and macro-habitat variables were constructed to explain the spatial distribution of the abundance of Lu. longipalpis, and one composed of micro-habitat variables to explain the occurrence of the vector. FINDINGS A total of 609 phlebotominae belonging to five species were collected, of which 56% were Lu. longipalpis. In addition, the presence of Nyssomyia neivai and Migonemyia migonei, which are vectors of tegumentary leishmaniasis, was also documented; these species represented 34.81% and 6.74% of the collections, respectively. The explanatory variable normalised difference vegetation index (NDVI) described the abundance distribution, whereas the presence of farmyard animals was important for explaining both the abundance and the occurrence of the vector. MAIN CONCLUSIONS The results contribute to the identification of variables that can be used to establish priority areas for entomological surveillance and provide an efficient transfer tool for the control and prevention of vector-borne diseases. PMID:28953995
Chapin, Thomas
2015-01-01
Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.
New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise
NASA Astrophysics Data System (ADS)
Pal'a, Jozef; Ušák, Elemír
2016-03-01
A new method of magnetic Barkhausen noise (MBN) measurement, together with optimization of the measured data processing, was tested with respect to non-destructive evaluation of ferromagnetic materials. Using this method, we examined whether it is possible to enhance the sensitivity and stability of measurement results by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a complex set of MBN data from minor hysteresis loops is measured. Afterward, the MBN data are collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are searched for. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes of the applied field amplitude and at the same time noticeably more sensitive to the evaluated variable.
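For reference, the traditional MBN parameter referred to above is the root mean square of the sampled noise signal; its standard definition, with x_i the digitized MBN voltage samples, is:

```latex
\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^{2}}
```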
ERIC Educational Resources Information Center
Finch, W. Holmes; Shim, Sungok Serena
2018-01-01
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
Cost Modeling for Space Telescope
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
Parametric cost models are an important tool for planning missions, comparing concepts, and justifying technology investments. This paper presents ongoing efforts to develop single-variable and multivariable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive cost estimating relationships (CERs) for OTA cost versus aperture diameter and mass. The results are compared with previously published models.
Probabilistic structural analysis methods of hot engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Hopkins, D. A.
1989-01-01
Development of probabilistic structural analysis methods for hot engine structures at Lewis Research Center is presented. Three elements of the research program are: (1) composite load spectra methodology; (2) probabilistic structural analysis methodology; and (3) probabilistic structural analysis application. Recent progress includes: (1) quantification of the effects of uncertainties for several variables on high pressure fuel turbopump (HPFT) turbine blade temperature, pressure, and torque of the space shuttle main engine (SSME); (2) the evaluation of the cumulative distribution function for various structural response variables based on assumed uncertainties in primitive structural variables; and (3) evaluation of the failure probability. Collectively, the results demonstrate that the structural durability of hot engine structural components can be effectively evaluated in a formal probabilistic/reliability framework.
Van Norman, Ethan R; Christ, Theodore J
2016-10-01
Curriculum based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving - continue the intervention, or the student is not improving - discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but improved the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.
Burrow, J Gordon
2016-05-01
This small-scale study examined the role that bare footprint collection and measurement processes play in the Reel method of measurement in forensic podiatry and its use in the Criminal Justice System. Previous research indicated that the Reel method was a valid and reliable measurement system for bare footprint analysis, but various collection systems have been used to collect footprint data, and both manual and digital measurement processes have been utilized in forensic podiatry and other disciplines. This study contributes to the debate about collecting bare footprints and the techniques employed to quantify the various Reel measurements, and considered whether there was asymmetry between the feet and footprints of the same person. Within an inductive, quantitative paradigm, the Podotrack gathering procedure was used for footprint collection, and the resulting dynamic footprints were subjected to Adobe Photoshop techniques for calculating the Reel linear variables. Statistical analyses using paired-sample t tests were conducted to test hypotheses and compare data sets. The standard error of the mean (SEM) showed variation between feet, and the findings provide support for the Reel study and measurement method. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
Johnson, Margaret E.; Hummer, Gerhard
2012-01-01
We explore the theoretical foundation of different string methods used to find dominant reaction pathways in high-dimensional configuration spaces. Pathways are assessed by the amount of reactive flux they carry and by their orientation relative to the committor function. By examining the effects of transforming between different collective coordinates that span the same underlying space, we unmask artificial coordinate dependences in strings optimized to follow the free energy gradient. In contrast, strings optimized to follow the drift vector produce reaction pathways that are significantly less sensitive to reparameterizations of the collective coordinates. The differences in these paths arise because the drift vector depends on both the free energy gradient and the diffusion tensor of the coarse collective variables. Anisotropy and position dependence of diffusion tensors arise commonly in spaces of coarse variables, whose generally slow dynamics are obtained by nonlinear projections of the strongly coupled atomic motions. We show here that transition paths constructed to account for dynamics by following the drift vector will (to a close approximation) carry the maximum reactive flux both in systems with isotropic position dependent diffusion, and in systems with constant but anisotropic diffusion. We derive a simple method for calculating the committor function along paths that follow the reactive flux. Lastly, we provide guidance for the practical implementation of the dynamic string method. PMID:22616575
Synthesis study on transverse variable asphalt application rates for seal coats.
DOT National Transportation Integrated Search
2009-06-01
This report documents a cooperative effort to collect, process, and make available information about successful methods of varying seal coat asphalt application rates across treated roadways to optimize aggregate retention and avoid wheel path flushi...
System and method for anomaly detection
Scherrer, Chad
2010-06-15
A system and method for detecting one or more anomalies in a plurality of observations is provided. In one illustrative embodiment, the observations are real-time network observations collected from a stream of network traffic. The method includes performing a discrete decomposition of the observations, and introducing derived variables to increase storage and query efficiencies. A mathematical model, such as a conditional independence model, is then generated from the formatted data. The formatted data is also used to construct frequency tables which maintain an accurate count of specific variable occurrence as indicated by the model generation process. The formatted data is then applied to the mathematical model to generate scored data. The scored data is then analyzed to detect anomalies.
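A minimal Python sketch of the frequency-table scoring idea described above, not the patented implementation: discretized observations are counted in pairwise frequency tables, and a record is scored by the negative log of the empirical frequency of its variable combinations. All field names and the unseen-combination penalty are illustrative assumptions.

```python
from collections import Counter
import math

def build_tables(records, pairs):
    """Frequency tables: count pairwise occurrences of discretized variables."""
    tables = {p: Counter() for p in pairs}
    for rec in records:
        for a, b in pairs:
            tables[(a, b)][(rec[a], rec[b])] += 1
    return tables

def score(rec, tables, total):
    """Anomaly score: sum of negative log empirical frequencies; rare
    variable combinations receive high scores."""
    s = 0.0
    for (a, b), counts in tables.items():
        p = counts[(rec[a], rec[b])] / total
        s += -math.log(p) if p > 0 else 50.0  # arbitrary cap for unseen pairs
    return s

# Hypothetical discretized network observations.
records = [{"proto": "tcp", "port": "80", "size": "small"}] * 99 \
        + [{"proto": "udp", "port": "31337", "size": "large"}]
pairs = [("proto", "port"), ("port", "size")]
tables = build_tables(records, pairs)
print(score(records[0], tables, len(records)))   # common record: low score
print(score(records[-1], tables, len(records)))  # rare record: high score
```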
Ecological Momentary Assessment is a Neglected Methodology in Suicidology.
Davidson, Collin L; Anestis, Michael D; Gutierrez, Peter M
2017-01-02
Ecological momentary assessment (EMA) is a group of research methods that collect data frequently, in many contexts, and in real-world settings. EMA has been fairly neglected in suicidology. The current article provides an overview of EMA for suicidologists including definitions, data collection considerations, and different sampling strategies. Next, the benefits of EMA in suicidology (i.e., reduced recall bias, accurate tracking of fluctuating variables, testing assumptions of theories, use in interventions), participant safety considerations, and examples of published research that investigate self-directed violence variables using EMA are discussed. The article concludes with a summary and suggested directions for EMA research in suicidology with the particular aim to spur the increased use of this methodology among suicidologists.
Data preparation techniques for a perinatal psychiatric study based on linked data.
Xu, Fenglian; Hilder, Lisa; Austin, Marie-Paule; Sullivan, Elizabeth A
2012-06-08
In recent years there has been an increase in the use of population-based linked data. However, there is little literature that describes the method of linked data preparation. This paper describes the method for merging data, calculating the statistical variable (SV), recoding psychiatric diagnoses, and summarizing hospital admissions for a perinatal psychiatric study. The data preparation techniques described in this paper are based on linked birth data from the New South Wales (NSW) Midwives Data Collection (MDC), the Register of Congenital Conditions (RCC), the Admitted Patient Data Collection (APDC) and the Pharmaceutical Drugs of Addiction System (PHDAS). The master dataset is the meaningfully linked dataset that includes all, or the major, study data collections. The master dataset can be used to improve data quality and calculate the SV, and can be tailored for different analyses. To identify hospital admissions in the periods before pregnancy, during pregnancy, and after birth, a statistical variable of time interval (SVTI) needs to be calculated. The methods and SPSS syntax for building a master dataset, calculating the SVTI, recoding the principal diagnoses of mental illness, and summarizing hospital admissions are described. Linked data preparation, including building the master dataset and calculating the SV, can improve data quality and enhance data function.
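The study's syntax was written for SPSS; the following pandas sketch illustrates the same idea of deriving an SVTI and classifying admissions relative to the pregnancy period. All column names, dates, and the gestation-based conception estimate are hypothetical illustrations, not the paper's variables.

```python
import pandas as pd

# Hypothetical linked records: one row per hospital admission.
linked = pd.DataFrame({
    "admission_date": pd.to_datetime(["2005-01-10", "2005-09-01", "2006-03-15"]),
    "birth_date": pd.to_datetime(["2005-12-01", "2005-12-01", "2005-12-01"]),
    "gestation_weeks": [40, 40, 40],
})

# SVTI: signed time interval (days) from admission to birth.
linked["svti_days"] = (linked["birth_date"] - linked["admission_date"]).dt.days

# Approximate conception from gestation, then classify each admission.
conception = linked["birth_date"] - pd.to_timedelta(linked["gestation_weeks"] * 7, unit="D")
linked["period"] = "during pregnancy"
linked.loc[linked["admission_date"] < conception, "period"] = "before pregnancy"
linked.loc[linked["admission_date"] > linked["birth_date"], "period"] = "after birth"

print(linked[["admission_date", "svti_days", "period"]])
```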
Variables affecting learning in a simulation experience: a mixed methods study.
Beischel, Kelly P
2013-02-01
The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.
Clinical Significance of Mobile Health Assessed Sleep Duration and Variability in Bipolar Disorder
Kaufmann, Christopher N.; Gershon, Anda; Eyler, Lisa T.; Depp, Colin A.
2016-01-01
OBJECTIVE Sleep disturbances are prevalent, persistent, and impairing features of bipolar disorder. However, the near-term and cumulative impact of the severity and variability of sleep disturbances on symptoms and functioning remains unclear. We examined self-reported daily sleep duration and variability in relation to mood symptoms, medication adherence, cognitive functioning, and concurrent daily affect. METHODS Forty-one outpatients diagnosed with bipolar disorder were asked to provide daily reports of sleep duration and affect, collected via ecological momentary assessment with smartphones, over eleven weeks. Measures of depressive and manic symptoms, medication adherence, and cognitive function were collected at baseline, and concurrent assessments of affect were collected daily. Analyses examined whether sleep duration or variability was associated with baseline measures and with changes in same-day or next-day affect. RESULTS Greater sleep duration variability (but not average sleep duration) was associated with greater depressive and manic symptom severity and lower medication adherence at baseline, and with lower and more variable ratings of positive affect and higher ratings of negative affect. Sleep durations shorter than 7-8 hours were associated with lower same-day ratings of positive and higher same-day ratings of negative affect; however, this did not extend to next-day affect. CONCLUSIONS Greater cumulative day-to-day sleep duration variability, but not average sleep duration, was related to more severe mood symptoms, lower self-reported medication adherence, and higher levels of negative affect. Bouts of short- or long-duration sleep had a transient impact on affect. Day-to-day sleep variability may be important to incorporate into clinical assessment of sleep disturbances in bipolar disorder. PMID:27451108
Oviedo-Ocaña, E R; Torres-Lozada, P; Marmolejo-Rebellon, L F; Torres-López, W A; Dominguez, I; Komilis, D; Sánchez, A
2017-04-01
Biowaste is commonly the largest fraction of municipal solid waste (MSW) in developing countries. Although composting is an effective method to treat source-separated biowaste (SSB), there are certain limitations in terms of operation, partly due to insufficient control of the variability of SSB quality, which affects process kinetics and product quality. This study assesses the variability of the SSB physicochemical quality in a composting facility located in a small town of Colombia, in which SSB collection was performed twice a week. Likewise, the influence of the SSB physicochemical variability on the variability of compost parameters was assessed. Parametric and non-parametric tests (i.e. Student's t-test and the Mann-Whitney test) showed no significant differences in the quality parameters of SSB among collection days, and therefore it was unnecessary to establish specific operation and maintenance regulations for each collection day. Significant variability was found in eight of the twelve quality parameters analyzed in the inlet stream, with corresponding coefficients of variation (CV) higher than 23%. The CVs for the eight parameters analyzed in the final compost (i.e. pH, moisture, total organic carbon, total nitrogen, C/N ratio, total phosphorus, total potassium, and ash) ranged from 9.6% to 49.4%, with significant variations in five of those parameters (CV > 20%). These results indicate that variability in the inlet stream can affect the variability of the end-product and suggest the need to consider inlet-stream variability in the performance of composting facilities to achieve a compost of consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cuq, Benoît; Blois, Shauna L; Wood, R Darren; Monteith, Gabrielle; Abrams-Ogg, Anthony C; Bédard, Christian; Wood, Geoffrey A
2018-06-01
Thrombin plays a central role in hemostasis and thrombosis. Calibrated automated thrombography (CAT), a thrombin generation assay, may be a useful test for hemostatic disorders in dogs. To describe CAT results in a group of healthy dogs, and assess preanalytical variables and biological variability. Forty healthy dogs were enrolled. Lag time (Lag), time to peak (ttpeak), peak thrombin generation (peak), and endogenous thrombin potential (ETP) were measured. Direct jugular venipuncture and winged-needle catheter-assisted saphenous venipuncture were used to collect samples from each dog, and results were compared between methods. Sample stability at -80°C was assessed over 12 months in a subset of samples. Biological variability of CAT was assessed via nested ANOVA using samples obtained weekly from a subset of 9 dogs for 4 consecutive weeks. Samples for CAT were stable at -80°C over 12 months of storage. Samples collected via winged-needle catheter venipuncture showed poor repeatability compared to direct venipuncture samples; there was also poor agreement between the 2 sampling methods. Intra-individual variability of CAT parameters was below 25%; inter-individual variability ranged from 36.9% to 78.5%. Measurement of thrombin generation using CAT appears to be repeatable in healthy dogs, and samples are stable for at least 12 months when stored at -80°C. Direct venipuncture sampling is recommended for CAT. Low indices of individuality suggest that subject-based reference intervals are more suitable when interpreting CAT results. © 2018 American Society for Veterinary Clinical Pathology.
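The "index of individuality" invoked in the conclusion above is, in standard clinical-chemistry usage, the ratio of within-subject to between-subject variation; assuming that convention in its simplest form, and using the variabilities reported in the abstract:

```latex
II = \frac{CV_{\text{intra}}}{CV_{\text{inter}}},
\qquad \text{e.g.}\ \frac{25\%}{78.5\%} \approx 0.32
```

Values well below roughly 0.6 are conventionally taken to favor subject-based over population-based reference intervals, consistent with the authors' recommendation.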
Case Study Research Methodology in Nursing Research.
Cope, Diane G
2015-11-01
Through data collection methods using a holistic approach that focuses on variables in a natural setting, qualitative research methods seek to understand participants' perceptions and interpretations. Common qualitative research methods include ethnography, phenomenology, grounded theory, and historic research. Another type of methodology that has a similar qualitative approach is case study research, which seeks to understand a phenomenon or case from multiple perspectives within a given real-world context.
Impact of Sampling and Cellular Separation on Amino Acid Determinations in Drosophila Hemolymph.
Cabay, Marissa R; Harris, Jasmine C; Shippy, Scott A
2018-04-03
The fruit fly is a frequently used model system with a high degree of human disease-related genetic homology. The quantitative chemical analysis of fruit fly tissues and hemolymph uniquely brings chemical signaling and compositional information to fly experimentation. The work here explores the impact of three aspects of sample collection and preparation on the measured chemical content of hemolymph. Cellular content of hemolymph was quantitated and removed to determine hemolymph composition changes for seven primary amine analytes. Hemolymph sampling methods were adapted to determine differences in primary amine composition of hemolymph collected from the head, antenna, and abdomen. Also, three types of anesthesia were employed with hemolymph collection to quantitate effects on measured amino acid content. Cell content was found to be 45.4 ± 22.1 cells/nL of hemolymph collected from both adult and larval flies. Cell-concentrated fractions of adult, but not larval, hemolymph were found to have higher and more variable amine content. Amino acid content differences were found among all three body regions, indicating a robust method for characterizing chemical markers from specific regions of a fly; these differences appear related to physiological activity. Methods of anesthesia affect hemolymph amino acid composition in line with their overall physiological impact on the fly, including higher amino acid content variability and oxygen deprivation effects. Together, these analyses identify potential complications with Drosophila hemolymph analysis and opportunities for future studies to relate hemolymph content with model physiological activity.
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection and analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning of future research. Holo-analyses of the effects of exogenous phytases on broiler feed intake and live weight gain are cited as examples; these account for 70% of the variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
Sotelo, Pablo H.; Collazo, Noberto; Zuñiga, Roberto; Gutiérrez-González, Matías; Catalán, Diego; Ribeiro, Carolina Hager; Aguillón, Juan Carlos; Molina, María Carmen
2012-01-01
Phage display library technology is a common method to produce human antibodies. In this technique, the immunoglobulin variable regions are displayed in a bacteriophage in such a way that each filamentous virus displays the product of a single antibody gene on its surface. From the collection of different phages, it is possible to isolate those that recognize specific targets. The most common form in which to display antibody variable regions in the phage is the single-chain variable fragment format (scFv), which requires assembly of the heavy and light immunoglobulin variable regions into a single gene. In this work, we describe a simple and efficient method for the assembly of immunoglobulin heavy and light chain variable regions in an scFv format. This procedure involves a two-step reaction: (1) DNA amplification to produce the single-strand form of the heavy or light chain gene required for the fusion; and (2) mixture of both single-strand products followed by an assembly reaction to construct a complete scFv gene. Using this method, we produced 6-fold more scFv-encoding DNA than the commonly used splicing by overlap extension PCR (SOE-PCR) approach. The scFv gene produced by this method also proved to be efficient in generating a diverse scFv phage display library. From this scFv library, we obtained phages that bound several non-related antigens, including recombinant proteins and rotavirus particles. PMID:22692130
Longitudinal Urinary Protein Variability in Participants of the Space Flight Simulation Program.
Khristenko, Nina A; Larina, Irina M; Domon, Bruno
2016-01-04
Urine is a valuable material for the diagnosis of renal pathologies and to investigate the effects of their treatment. However, the variability in protein abundance in the context of normal homeostasis remains a major challenge in urinary proteomics. In this study, the analysis of urine samples collected from healthy individuals, rigorously selected to take part in the MARS-500 spaceflight simulation program, provided a unique opportunity to estimate normal concentration ranges for an extended set of urinary proteins. In order to systematically identify and reliably quantify peptides/proteins across a large sample cohort, a targeted mass spectrometry method was developed. The performance of parallel reaction monitoring (PRM) analyses was improved by implementing tight control of the monitoring windows during LC-MS/MS runs, using an on-the-fly correction routine. Matching the experimentally obtained MS/MS spectra with reference fragmentation patterns allowed dependable peptide identifications to be made. Following optimization and evaluation, the targeted method was applied to investigate protein abundance variability in 56 urine samples, collected from six volunteers participating in the MARS-500 program. The intrapersonal protein concentration ranges were determined for each individual and showed unexpectedly high abundance variation, with an average difference of 1 order of magnitude.
ERIC Educational Resources Information Center
Jay, Tim
2012-01-01
Verbal reports are a common method of data collection in studies of mathematics learning, often in studies with a longitudinal component or those employing microgenetic methods where several observations of problem-solving are made over a short period of time. Whilst there is a fairly substantial literature on reactivity to verbal reports,…
Metherel, Adam H; Aristizabal Henao, Juan J; Ciobanu, Flaviu; Taha, Ameer Y; Stark, Ken D
2015-09-01
Dried blood spots (DBS) by fingertip prick collection for fatty acid profiling are becoming increasingly popular due to ease of collection, minimal invasiveness and its amenability to high-throughput analyses. Herein, we assess a microwave-assisted direct transesterification method for the production of fatty acid methyl esters (FAME) from DBS. Technical replicates of human whole blood were collected and 25-μL aliquots were applied to chromatography strips prior to analysis by a standard 3-h transesterification method or microwave-assisted direct transesterification method under various power (variable vs constant), time (1-5 min) and reagent (1-10% H2SO4 in methanol) conditions. In addition, a standard method was compared to a 5-min, 30-W power microwave in 1% H2SO4 method for FAME yield from whole blood sphingomyelin, and sphingomyelin standards alone and spiked in whole blood. Microwave-assisted direct transesterification yielded no significant differences in both quantitative (nmol/100 µL) and qualitative (mol%) fatty acid assessments after as little as 1.5- and 1-min reaction times, respectively, using the variable power method and 5% H2SO4 in methanol. However, 30-W power for 5 min increased total FAME yield of the technical replicates by 14%. This increase appears largely due to higher sphingomyelin-derived FAME yield of up to 109 and 399% compared to the standard method when determined from whole blood or pure standards, respectively. In conclusion, microwave-assisted direct transesterification of DBS achieved in as little as 1-min, and 5-min reaction times increase total fatty acids primarily by significantly improving sphingomyelin-derived fatty acid yield.
NASA Astrophysics Data System (ADS)
Nepomuceno, Miguel C. S.; Lopes, Sérgio M. R.
2017-10-01
Non-destructive tests (NDT) have been used in recent decades for the assessment of the in-situ quality and integrity of concrete elements. An important step in the application of NDT methods concerns the interpretation and validation of the test results. In general, interpretation of NDT results should involve three distinct phases leading to the development of conclusions: processing of the collected data, analysis of within-test variability, and quantitative evaluation of the property under investigation. The analysis of within-test variability can provide valuable information, since it can be compared with the within-test variability associated with the NDT method in use, either to provide a measure of quality control or to detect the presence of abnormal circumstances during the in-situ application. This paper reports the analysis of experimental results on the within-test variability of NDT obtained for normal vibrated concrete and self-compacting concrete. The NDT methods reported include the surface hardness test, ultrasonic pulse velocity test, penetration resistance test, pull-off test, pull-out test, and maturity test. The obtained results are discussed and conclusions are presented.
AIC identifies optimal representation of longitudinal dietary variables.
VanBuren, John; Cavanaugh, Joseph; Marshall, Teresa; Warren, John; Levy, Steven M
2017-09-01
The Akaike Information Criterion (AIC) is a well-known tool for variable selection in multivariable modeling as well as a tool to help identify the optimal representation of explanatory variables. However, it has been discussed infrequently in the dental literature. The purpose of this paper is to demonstrate the use of AIC in determining the optimal representation of dietary variables in a longitudinal dental study. The Iowa Fluoride Study enrolled children at birth and dental examinations were conducted at ages 5, 9, 13, and 17. Decayed or filled surfaces (DFS) trend clusters were created based on age 13 DFS counts and age 13-17 DFS increments. Dietary intake data (water, milk, 100 percent-juice, and sugar sweetened beverages) were collected semiannually using a food frequency questionnaire. Multinomial logistic regression models were fit to predict DFS cluster membership (n=344). Multiple approaches could be used to represent the dietary data including averaging across all collected surveys or over different shorter time periods to capture age-specific trends or using the individual time points of dietary data. AIC helped identify the optimal representation. Averaging data for all four dietary variables for the whole period from age 9.0 to 17.0 provided a better representation in the multivariable full model (AIC=745.0) compared to other methods assessed in full models (AICs=750.6 for age 9 and 9-13 increment dietary measurements and AIC=762.3 for age 9, 13, and 17 individual measurements). The results illustrate that AIC can help researchers identify the optimal way to summarize information for inclusion in a statistical model. The method presented here can be used by researchers performing statistical modeling in dental research. This method provides an alternative approach for assessing the propriety of variable representation to significance-based procedures, which could potentially lead to improved research in the dental community. © 2017 American Association of Public Health Dentistry.
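For reference, the criterion compared above is the standard Akaike form, where L-hat is the maximized likelihood of a candidate model and k the number of estimated parameters; lower values are preferred, consistent with selecting the AIC = 745.0 representation over the 750.6 and 762.3 alternatives:

```latex
\mathrm{AIC} = -2\ln\hat{L} + 2k
```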
2009-10-01
...was performed during 6 to 8 weeks. The experimental procedure consisted of collecting (i) psychological data (resilience, well-being, anxiety), (ii) 12h-night urines to assess... For cardiovascular regulation, spectral analysis of heart rate variability (HRV) is usually proposed as a method to assess vagal tone [7,2,8...
da Costa Pereira, A; Olsen, J; Ogston, S
1993-01-01
STUDY OBJECTIVE--To describe the intra-subject variability of self reported maternal alcohol consumption using different ways of collecting information and to analyse the implications of this variability for research into the effect of low to moderate maternal alcohol consumption on birth weight. DESIGN--This was a longitudinal study. Self reported maternal alcohol consumption before, during, and after pregnancy was assessed on four occasions over two years. The data were collected by two self administered questionnaires and during two personal interviews (one by phone and another face to face). SETTINGS--The Obstetrics Department, Odense University Hospital, Odense, Fünen, Denmark. PARTICIPANTS--A total of 2880 pregnant women were recruited consecutively from the hospital catchment area. Altogether 328 pregnant women and their babies were selected. All women who reported an average alcohol consumption of five drinks or more per week were recruited to the study (164 women) and a 1:1 control group was selected from the remaining women based upon two matching criteria: expected date of delivery and the women's year of birth. Some 279 women (85%) completed the study. MEASUREMENTS AND MAIN RESULTS--Self reported alcohol consumption (number of drinks per week) and birth weight (g) were the main outcomes. Women's self reported alcohol consumption varied over time and according to the data collection method. When different methods of data collection were used to assess alcohol intake in similar periods of time, significant differences in reporting were found despite the relatively high correlations between the measurements. Although a consistent reduction in birth weight with increasing consumption of alcohol was found, there were differences in the shape and strength of this association when comparing the six available alcohol measurements. CONCLUSIONS--The type of questions used, the way the data were collected, the period of time referred to, and the time the questions were asked, should be taken into consideration when describing the drinking pattern of pregnant women. Furthermore, birth weight results from studies that have used different alcohol measures should be interpreted or compared with caution because of possible large differences resulting from the differing methods of assessing fetal exposure to alcohol. PMID:8228772
Integrated Data Collection and Analysis Project: Friction Correlation Study
2015-08-01
...Friction test methods authorized in AOP-7 include Pendulum Friction, Rotary Friction, Sliding Friction (such as the ABL), BAM Friction, and Steel/Fiber Shoe methods. ... Figure 4.16: a variable compressive force is applied downward through the wheel hydraulically (50-1995 psi); the 5 kg pendulum impacts (8 ft/sec is the...
Battistoni, Andrea; Bencivenga, Filippo; Fioretto, Daniele; Masciovecchio, Claudio
2014-10-15
In this Letter, we present a simple method to avoid the well-known spurious contributions in the Brillouin light scattering (BLS) spectrum arising from the finite aperture of collection optics. The method relies on the use of special spatial filters able to select the scattered light with arbitrary precision around a given value of the momentum transfer (Q). We demonstrate the effectiveness of such filters by analyzing the BLS spectra of a reference sample as a function of scattering angle. This practical and inexpensive method could be an extremely useful tool to fully exploit the potentiality of Brillouin acoustic spectroscopy, as it will easily allow for effective Q-variable experiments with unparalleled luminosity and resolution.
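For context, the momentum transfer in a light-scattering geometry follows the standard relation below, with n the refractive index of the sample, lambda_0 the vacuum wavelength, and theta the scattering angle; a finite collection aperture admits a spread in theta and hence in Q, which is the spurious contribution the spatial filters suppress:

```latex
Q = \frac{4\pi n}{\lambda_0}\,\sin\!\left(\frac{\theta}{2}\right)
```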
Sun, Rui; Dama, James F; Tan, Jeffrey S; Rose, John P; Voth, Gregory A
2016-10-11
Metadynamics is an important enhanced sampling technique in molecular dynamics simulation to efficiently explore potential energy surfaces. The recently developed transition-tempered metadynamics (TTMetaD) has been proven to converge asymptotically without sacrificing exploration of the collective variable space in the early stages of simulations, unlike other convergent metadynamics (MetaD) methods. We have applied TTMetaD to study the permeation of drug-like molecules through a lipid bilayer to further investigate the usefulness of this method as applied to problems of relevance to medicinal chemistry. First, ethanol permeation through a lipid bilayer was studied to compare TTMetaD with nontempered metadynamics and well-tempered metadynamics. The bias energies computed from various metadynamics simulations were compared to the potential of mean force calculated from umbrella sampling. Though all of the MetaD simulations agree with one another asymptotically, TTMetaD is able to predict the most accurate and reliable estimate of the potential of mean force for permeation in the early stages of the simulations and is robust to the choice of required additional parameters. We also show that using multiple randomly initialized replicas allows convergence analysis and also provides an efficient means to converge the simulations in shorter wall times and, more unexpectedly, in shorter CPU times; splitting the CPU time between multiple replicas appears to lead to less overall error. After validating the method, we studied the permeation of a more complicated drug-like molecule, trimethoprim. Three sets of TTMetaD simulations with different choices of collective variables were carried out, and all converged within feasible simulation time. The minimum free energy paths showed that TTMetaD was able to predict almost identical permeation mechanisms in each case despite significantly different definitions of collective variables.
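As background to the comparison above, the metadynamics bias is a history-dependent sum of Gaussians deposited along the collective variables s; in well-tempered MetaD the Gaussian height decays with the bias already accumulated. This is a simplified one-dimensional statement; TTMetaD instead tempers the heights based on the bias accumulated at transition regions between specified basins.

```latex
V(s,t) = \sum_{t' < t} w(t')\,
\exp\!\left(-\frac{\bigl(s - s(t')\bigr)^{2}}{2\sigma^{2}}\right),
\qquad
w(t') = w_0\, e^{-V(s(t'),\,t')/k_B \Delta T}
```

Here w(t') is constant in non-tempered MetaD, while the exponential damping with bias temperature Delta T gives the well-tempered variant its asymptotic convergence.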
Cleanliness Policy Implementation: Evaluating Retribution Model to Rise Public Satisfaction
NASA Astrophysics Data System (ADS)
Dailiati, Surya; Hernimawati; Prihati; Chintia Utami, Bunga
2018-05-01
This research addresses the principal issues in evaluating the cleanliness retribution policy, which has not been able to optimally improve the local revenue (PAD) of Pekanbaru City and has not improved the city's cleanliness. This was estimated to be caused by the performance of the Garden and Sanitation Department not meeting the requirements of the people of Pekanbaru City. The research method used in this study is a mixed method with a sequential exploratory strategy. Data were collected through observation, interviews, and documentation for the qualitative research, and through questionnaires for the quantitative research. The collected data were analyzed with the interactive model of Miles and Huberman for the qualitative research and with multiple regression analysis for the quantitative research. The results indicated that the model of cleanliness policy implementation that can increase the PAD of Pekanbaru City and improve public satisfaction divides into two parts: the evaluation model and the public satisfaction model. The evaluation model is influenced by the criteria/variables of effectiveness, efficiency, adequacy, equity, responsiveness, and appropriateness, while the public satisfaction model is influenced by the variables of public satisfaction, intentions, goals, plans, programs, and the appropriateness of the cleanliness retribution collection policy.
Emerson, Douglas G.; Vecchia, Aldo V.; Dahl, Ann L.
2005-01-01
The drainage-area ratio method commonly is used to estimate streamflow for sites where no streamflow data were collected. To evaluate the validity of the drainage-area ratio method and to determine if an improved method could be developed to estimate streamflow, a multiple-regression technique was used to determine if drainage area, main channel slope, and precipitation were significant variables for estimating streamflow in the Red River of the North Basin. A separate regression analysis was performed for streamflow for each of three seasons-- winter, spring, and summer. Drainage area and summer precipitation were the most significant variables. However, the regression equations generally overestimated streamflows for North Dakota stations and underestimated streamflows for Minnesota stations. To correct the bias in the residuals for the two groups of stations, indicator variables were included to allow both the intercept and the coefficient for the logarithm of drainage area to depend on the group. Drainage area was the only significant variable in the revised regression equations. The exponents for the drainage-area ratio were 0.85 for the winter season, 0.91 for the spring season, and 1.02 for the summer season.
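In its usual form, the drainage-area ratio method scales a gauged streamflow by the ratio of drainage areas; the seasonal regression results above amount to season-specific exponents:

```latex
Q_u = Q_g \left(\frac{A_u}{A_g}\right)^{\phi},
\qquad
\phi = 0.85\ (\text{winter}),\quad 0.91\ (\text{spring}),\quad 1.02\ (\text{summer})
```

where Q_u is the estimated streamflow at the ungauged site, Q_g the measured streamflow at the gauged site, and A_u, A_g the corresponding drainage areas.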
tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.
M Sultan, Mohammad; Pande, Vijay S
2017-06-13
Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selective collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in the Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for metadynamics. We show that linear and nonlinear tICA-Metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes, and can drive transitions along the slowest modes even when no such transitions are observed in unbiased simulations.
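For reference, tICA coordinates are obtained by solving a generalized eigenvalue problem on mean-free input features x(t), where C(tau) is the time-lagged covariance matrix (in practice estimated in symmetrized form); the eigenvectors with the largest eigenvalues define the slowest coordinates used as CVs:

```latex
C(\tau)\, v_i = \lambda_i\, C(0)\, v_i,
\qquad
C(\tau) = \left\langle x(t)\, x(t+\tau)^{\top} \right\rangle
```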
Application of the Newtonian nudging data assimilation method for the Biebrza River flow model
NASA Astrophysics Data System (ADS)
Miroslaw-Swiatek, Dorota
2010-05-01
Data assimilation provides a tool for integrating observations of spatially distributed environmental variables with model predictions. In this paper a simple data assimilation technique, the Newtonian nudging to individual observations method, has been implemented in the 1D St. Venant equations. The method involves adding a term to the prognostic equation. This term is proportional to the difference between the value calculated in the model at a given point in time and space and the one resulting from observations. Improving the model with available measurement observations is accomplished by adequate weighting functions that can incorporate prior knowledge about the spatial and temporal variability of the state variables being assimilated. The article contains a description of the numerical model, which employs the finite element method (FEM) to solve the 1D St. Venant equations modified by the 'nudging' method. The developed model was applied to the Biebrza River, situated in the north-eastern part of Poland, flowing through the last extensive, fairly undisturbed river-marginal peatland in Europe. A 41 km long reach of the Lower Biebrza River described by 68 cross-sections was selected for the study. The observed water stage collected by automatic sensors was the subject of the data assimilation in the Newtonian nudging to individual observations method. The water level observation data were collected at four observation points along the river at 6-hour intervals for one year. The obtained results compare the prediction with no nudging against that with nudging, showing the influence of the nudging term on the water stage forecast. The developed model enables integrating water stage observations with an unsteady river flow model for improved water level prediction.
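Schematically, and consistent with the description above, a nudged prognostic equation for a state variable u takes the generic form below, where F(u) is the model dynamics (here the St. Venant equations), G a nudging coefficient, W(x,t) the space-time weighting function, and u_obs the observed water stage. This is a generic sketch of Newtonian nudging, not the paper's exact formulation:

```latex
\frac{\partial u}{\partial t} = F(u) + G\, W(x,t)\,\bigl(u_{\mathrm{obs}} - u\bigr)
```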
Matsche, Mark A; Rosemary, Kevin; Stence, Charles P
2017-09-01
Declines in Hickory shad (Alosa mediocris) populations in Chesapeake Bay have prompted efforts at captive propagation of wild broodfish for stock enhancement and research. The objectives of this study were to evaluate injuries sustained, and immediate and delayed (24 hours) effects on blood variables related to 2 fish capturing methods (electrofishing [EF] and angling). Blood specimens were collected from fish immediately following capture by EF and angling (n = 40 per sex and capture method) from the Susquehanna River (MD, USA). Additional fish (n = 25 per sex and capture method) were collected on the same day, placed in holding tanks and bled 24 hours following capture. Blood data that were non-Gaussian in distribution were transformed (Box-Cox), and effects of sex, method of capture, and holding time were tested using ANOVA with general linear models. Fish were evaluated for injuries by necropsy and radiography. Sex-specific differences were observed for RBC, HGB, PCV, MCH, MCHC, total proteins (TP), globulins, glucose, calcium, AST, CK, and lactate, while RBC, HGB, PCV, MCV, MCH, MCHC, TP, albumin, globulins, glucose, potassium, sodium, AST, CK, and lactate differed significantly by fish capturing method. Electrofishing may have induced greater disruption in blood variables, but mortality (4%) was not significantly different compared to angling. Electrofishing for Hickory shad using a constant DC voltage resulted in numerous hematologic and biochemical changes, with no additional injuries or deaths compared to angling. Capture method must be considered when evaluating fish condition, and blood variables should be partitioned by sex during spawning season. © 2017 American Society for Veterinary Clinical Pathology.
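The Box-Cox transformation applied above to the non-Gaussian blood data is the standard power family, with lambda typically chosen by maximum likelihood to bring the data closer to normality before ANOVA:

```latex
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[2mm]
\ln y, & \lambda = 0.
\end{cases}
```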
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-04-20
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was also observed within point cloud estimates but was on average two to eight times lower than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data are essential.
Gouvea, Dayana Rubio; Meloni, Fernando; Ribeiro, Arthur de Barros Bello; Lopes, João Luis Callegari; Lopes, Norberto Peporine
2012-10-20
Lychnophora salicifolia Mart., which occurs in the Brazilian Cerrado in the states of Bahia and Minas Gerais as well as in the southeast of the state of Goiás, is the most widely distributed and also the most polymorphic species of the genus. This plant is popularly known to have anti-inflammatory and analgesic activities. In this work, we have studied the variation in polar metabolites of ninety-three Lychnophora salicifolia Mart. specimens collected from different regions of the Brazilian Cerrado. Identification of the constituents of this mixture was carried out by analysis of the UV spectra and MS data after chromatographic separation. Twenty substances were identified, including chlorogenic acid derivatives, a flavonoid C-glucoside, and sesquiterpenes. The analytical method was validated, and the reliability and credibility of the results were ensured for the purposes of this study. The concentration range required for analysis of content variability within the analyzed group of specimens was covered with appropriate limits of detection and quantitation, as well as satisfactory precision and recovery. Quantitative variability was observed among specimens collected from the same location, but on average they were similar from a chemical viewpoint. Regarding specimens from different locations, there were both qualitative and quantitative differences among plants collected from different regions of Brazil. Statistical analysis revealed a correlation between geographical location and polar metabolite profile for specimens collected from different locations. This is evidence that the pattern of metabolite concentrations depends on the geographical distribution of the specimens. Copyright © 2012 Elsevier B.V. All rights reserved.
Miller, W B; Pasta, D J
2001-01-01
In this study we develop and then test a couple model of contraceptive method choice decision-making following a pregnancy scare. The central constructs in our model are satisfaction with one's current method and confidence in using it. Downstream in the decision sequence, satisfaction and confidence predict desires and intentions to change methods. Upstream, they are predicted by childbearing motivations, contraceptive attitudes, and the residual effects of the couples' previous method decisions. We collected data from 175 mostly unmarried and racially/ethnically diverse couples who were seeking pregnancy tests. We used LISREL and its latent variable capacity to estimate a structural equation model of the couple decision-making sequence leading to a change (or not) in contraceptive method. Results confirm most elements in our model and demonstrate a number of important cross-partner effects. Almost one-half of the sample had positive pregnancy tests, and the base model fitted to this subsample indicates less accuracy in partner perception and greater influence of the female partner on method change decision-making. The introduction of some hypothesis-generating exogenous variables to our base couple model, together with some unexpected findings for the contraceptive attitude variables, suggests interesting questions that require further exploration.
Rouger, Amélie; Remenant, Benoit; Prévost, Hervé; Zagorec, Monique
2017-04-17
Influenced by production and storage processes and by seasonal changes, the microbiota of meat products can be highly variable. Because microbiotas influence meat quality and safety, characterizing and understanding their dynamics during processing and storage is important for proposing innovative and efficient storage conditions. Challenge tests are usually performed using meat from the same batch, inoculated at high levels with one or a few strains. Such experiments do not reflect the true microbial situation, and the global ecosystem is not taken into account. Our purpose was to assemble live stocks of chicken meat microbiotas to create standard and reproducible ecosystems. We searched for the best method to collect contaminating bacterial communities from chicken cuts for storage as frozen aliquots. We tested several methods to extract DNA from these stored communities for subsequent PCR amplification. We determined the best moment during the product shelf life to collect bacteria in sufficient amounts. Results showed that the rinsing method combined with the MoBio DNA extraction kit was the most reliable way to collect bacteria and obtain DNA for subsequent PCR amplification. Then, 23 different chicken meat microbiotas were collected using this procedure. Microbiota aliquots were stored at -80°C without substantial loss of viability. Their characterization by cultural methods confirmed the large variability (richness and abundance) of bacterial communities present on chicken cuts. Four of these bacterial communities were used to estimate their ability to regrow on meat matrices. Challenge tests performed on sterile matrices showed that these microbiotas were successfully inoculated and could overgrow the natural microbiota of chicken meat. They can therefore be used to perform reproducible challenge tests mimicking a true meat ecosystem, making it possible to test the influence of various processing or storage conditions on complex meat matrices. Copyright © 2016 Elsevier B.V. All rights reserved.
Vogel, Laura J; Edge, Thomas A; O'Carroll, Denis M; Solo-Gabriele, Helena M; Kushnir, Caitlin S E; Robinson, Clare E
2017-09-15
Fecal indicator bacteria (FIB) are known to accumulate in foreshore beach sand and pore water (referred to as foreshore reservoir) where they act as a non-point source for contaminating adjacent surface waters. While guidelines exist for sampling surface waters at recreational beaches, there is no widely-accepted method to collect sand/sediment or pore water samples for FIB enumeration. The effect of different sampling strategies in quantifying the abundance of FIB in the foreshore reservoir is unclear. Sampling was conducted at six freshwater beaches with different sand types to evaluate sampling methods for characterizing the abundance of E. coli in the foreshore reservoir as well as the partitioning of E. coli between different components in the foreshore reservoir (pore water, saturated sand, unsaturated sand). Methods were evaluated for collection of pore water (drive point, shovel, and careful excavation), unsaturated sand (top 1 cm, top 5 cm), and saturated sand (sediment core, shovel, and careful excavation). Ankle-depth surface water samples were also collected for comparison. Pore water sampled with a shovel resulted in the highest observed E. coli concentrations (only statistically significant at fine sand beaches) and lowest variability compared to other sampling methods. Collection of the top 1 cm of unsaturated sand resulted in higher and more variable concentrations than the top 5 cm of sand. There were no statistical differences in E. coli concentrations when using different methods to sample the saturated sand. Overall, the unsaturated sand had the highest amount of E. coli when compared to saturated sand and pore water (considered on a bulk volumetric basis). The findings presented will help determine the appropriate sampling strategy for characterizing FIB abundance in the foreshore reservoir as a means of predicting its potential impact on nearshore surface water quality and public health risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING
Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong
2017-01-01
Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected exceed what simulation analysis alone can handle, since simulation models must be run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
NASA Astrophysics Data System (ADS)
Hu, Guanyu; Fang, Zhou; Liu, Bilin; Chen, Xinjun; Staples, Kevin; Chen, Yong
2018-04-01
The cephalopod beak is a vital hard structure with a stable configuration and has been widely used for the identification of cephalopod species. This study was conducted to determine the best standardization method for identifying different species by measuring 12 morphological variables of the beaks of Illex argentinus, Ommastrephes bartramii, and Dosidicus gigas collected by Chinese jigging vessels. To remove the effects of size, these morphometric variables were standardized using three methods. The average ratios of the upper beak morphological variables to upper crest length of O. bartramii and D. gigas were found to be greater than those of I. argentinus. However, for lower beaks, only the averages of LRL (lower rostrum length)/LCL (lower crest length), LRW (lower rostrum width)/LCL, and LLWL (lower lateral wall length)/LCL of O. bartramii and D. gigas were greater than those of I. argentinus. The ratios of beak morphological variables to crest length were all significantly different among the three species (P < 0.001). Among the three standardization methods, the correct classification rate of stepwise discriminant analysis (SDA) was highest when using the ratios of beak morphological variables to crest length. Compared with standardization by hood length, the correct classification rate was slightly higher when beak variables were standardized by crest length using an allometric model. The correct classification rate of the lower beak was also found to be greater than that of the upper beak. This study indicates that the ratios of beak morphological variables to crest length could be used for interspecies and intraspecies identification. Meanwhile, the lower beak variables were found to be more effective than upper beak variables in classifying beaks found in the stomachs of predators.
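A sketch of the ratio standardization followed by discriminant classification (synthetic measurements; plain LDA stands in for stepwise discriminant analysis, which scikit-learn does not provide):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical beak data: rows are specimens, columns are raw
# morphometric lengths [mm]; crest length is the size reference.
rng = np.random.default_rng(1)
n = 90
crest = rng.normal(30, 3, n)
raw = np.column_stack([
    crest * rng.normal(0.45, 0.03, n),   # e.g. rostrum length
    crest * rng.normal(0.30, 0.02, n),   # e.g. rostrum width
    crest * rng.normal(0.85, 0.05, n),   # e.g. lateral wall length
])
species = rng.integers(0, 3, n)          # labels for three species

# Size removal by ratio standardization: divide each variable
# by crest length, as in the study's best-performing method.
X = raw / crest[:, None]

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, species, cv=5).mean())
```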
DOT National Transportation Integrated Search
2012-06-01
Our current ability to forecast demand on tolled facilities has not kept pace with advances in decision sciences and technological innovation. The current forecasting methods suffer from lack of descriptive power of actual behavior because of the...
Cardiorespiratory Variability and Synchronization in Critical Illness
2008-03-08
Sedating drugs routinely prescribed to relieve patient anxiety were discontinued prior to the data collection. All patients were awake and responsive to…
…P.; Tarvainen, M. P.; Ranta-Aho, P. O.; Karjalainen, P. A. Software for advanced HRV analysis. Computer Methods and Programs in Biomedicine 2004.
Maps of seagrass beds are useful for monitoring estuarine condition, managing habitats, and modeling estuarine processes. We recently developed inexpensive methods for collecting and classifying sidescan sonar (SSS) imagery for seagrass presence in turbid waters as shallow as 1-...
The focus on sample quality: Influence of colon tissue collection on reliability of qPCR data
Korenkova, Vlasta; Slyskova, Jana; Novosadova, Vendula; Pizzamiglio, Sara; Langerova, Lucie; Bjorkman, Jens; Vycital, Ondrej; Liska, Vaclav; Levy, Miroslav; Veskrna, Karel; Vodicka, Pavel; Vodickova, Ludmila; Kubista, Mikael; Verderio, Paolo
2016-01-01
Successful molecular analyses of human solid tissues require intact biological material with well-preserved nucleic acids, proteins, and other cell structures. Pre-analytical handling, comprising the collection of material at the operating theatre, is among the first critical steps that influence sample quality. The aim of this study was to compare the experimental outcomes obtained from samples collected and stored by the conventional means of snap freezing and by the PAXgene Tissue System (Qiagen). These approaches were evaluated by measuring rRNA and mRNA integrity of the samples (RNA Quality Indicator and Differential Amplification Method) and by gene expression profiling. The collection procedures were implemented in two hospitals during colon cancer surgery in order to identify the impact of the collection method on the experimental outcome. Our study shows that pre-analytical sample handling has a significant effect on the quality of RNA and on the variability of qPCR data. The PAXgene collection mode proved easier to implement in the operating room; moreover, the quality of RNA obtained from human colon tissues by this method is superior to that obtained by snap freezing. PMID:27383461
Variationally Optimized Free-Energy Flooding for Rate Calculation.
McCarty, James; Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele
2015-08-14
We propose a new method to obtain kinetic properties of infrequent events from molecular dynamics simulation. The procedure employs a recently introduced variational approach [Valsson and Parrinello, Phys. Rev. Lett. 113, 090601 (2014)] to construct a bias potential as a function of several collective variables that is designed to flood the associated free energy surface up to a predefined level. The resulting bias potential effectively accelerates transitions between metastable free energy minima while ensuring bias-free transition states, thus allowing accurate kinetic rates to be obtained. We test the method on a few illustrative systems for which we obtain an order of magnitude improvement in efficiency relative to previous approaches and several orders of magnitude relative to unbiased molecular dynamics. We expect an even larger improvement in more complex systems. This and the ability of the variational approach to deal efficiently with a large number of collective variables will greatly enhance the scope of these calculations. This work is a vindication of the potential that the variational principle has if applied in innovative ways.
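The flooding idea can be illustrated on a toy one-dimensional landscape (this sketch shows only the level-capped bias concept, not the variational construction used in the paper; all values are hypothetical):

```python
import numpy as np

# Toy double-well free energy surface over one collective variable s.
s = np.linspace(-2.0, 2.0, 400)
F = 5.0 * (s**2 - 1.0)**2          # minima at s = +/-1, barrier at s = 0

# Flooding-style bias: fill each basin up to a preset level E_cut,
# leaving the region above E_cut (the transition state) bias-free.
E_cut = 3.0                         # same energy units as F
V_bias = np.where(F < E_cut, E_cut - F, 0.0)

# The biased landscape is flat inside the basins up to E_cut, so escape
# is accelerated while barrier-top dynamics remain unbiased.
F_biased = F + V_bias
print(F_biased[np.abs(s) < 0.05].round(2))  # unchanged near the barrier
```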
Galloway, Gantt P; Didier, Ryne; Garrison, Kathleen; Mendelson, John
2008-01-01
Background Predictors of relapse to methamphetamine use are poorly understood. State variables may play an important role in relapse, but they have been difficult to measure at frequent intervals in outpatients. Methods We conducted a feasibility study of the use of cellular telephones to collect state variable data from outpatients. Six subjects in treatment for methamphetamine dependence were called three times per weekday for approximately seven weeks. Seven questionnaires were administered that assessed craving, stress, affect and current type of location and social environment. Results 395/606 (65%) of calls attempted were completed. The mean time to complete each call was 4.9 (s.d. 1.8) minutes and the mean time to complete each item was 8.4 (s.d. 4.8) seconds. Subjects rated the acceptability of the procedures as good. All six cellular phones and battery chargers were returned undamaged. Conclusion Cellular telephones are a feasible method for collecting state data from methamphetamine dependent outpatients. PMID:19997532
Chen, Jianjun; Frey, H Christopher
2004-12-15
Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
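The SO-versus-SP comparison that yields the value of information can be sketched with a toy two-design example (all numbers hypothetical; this is the textbook expected-value-of-information computation, not the paper's coupled method):

```python
import numpy as np

# Rows: equally likely uncertainty scenarios; columns: candidate designs.
# Entries: annualized net benefit in dollars.
benefit = np.array([
    [1.2e6, 0.9e6],
    [0.4e6, 0.8e6],
    [0.7e6, 0.6e6],
])
p = np.full(3, 1.0 / 3.0)

# Stochastic optimization: commit to one design before uncertainty
# resolves, so pick the design with the best expected benefit.
so_value = (p @ benefit).max()

# Stochastic programming with perfect information: choose the best
# design in each scenario, then take the expectation.
sp_value = p @ benefit.max(axis=1)

# The gap is the expected value of reducing uncertainty before design.
print(f"value of information: ${sp_value - so_value:,.0f}")
```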
Fault Diagnosis for Rolling Bearings under Variable Conditions Based on Visual Cognition
Cheng, Yujie; Zhou, Bo; Lu, Chen; Yang, Chao
2017-01-01
Fault diagnosis for rolling bearings has attracted increasing attention in recent years. However, few studies have focused on fault diagnosis for rolling bearings under variable conditions. This paper introduces a fault diagnosis method for rolling bearings under variable conditions based on visual cognition. The proposed method includes the following steps. First, the vibration signal data are transformed into a recurrence plot (RP), which is a two-dimensional image. Then, inspired by the visual invariance characteristic of the human visual system (HVS), we utilize speeded-up robust features (SURF) to extract fault features from the two-dimensional RP and generate a 64-dimensional feature vector, which is invariant to image translation, rotation, scaling variation, etc. Third, based on the manifold perception characteristic of HVS, isometric mapping, a manifold learning method that can reflect the intrinsic manifold embedded in the high-dimensional space, is employed to obtain a low-dimensional feature vector. Finally, a classical classification method, support vector machine, is utilized to realize fault diagnosis. Verification data were collected from the Case Western Reserve University Bearing Data Center, and the experimental results indicate that the proposed fault diagnosis method based on visual cognition is highly effective for rolling bearings under variable conditions, thus providing a promising approach from the cognitive computing field. PMID:28772943
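A simplified sketch of the RP-to-manifold-to-SVM pipeline (synthetic signals; simple RP row statistics replace SURF, which is image-based and not available in scikit-learn, and the manifold embedding is fit on all data for brevity):

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def recurrence_plot(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(float)

# Hypothetical vibration segments: 60 segments of 200 samples each,
# two synthetic "conditions" standing in for bearing health states.
rng = np.random.default_rng(2)
X_sig = np.array([np.sin(np.linspace(0, (10 + 5 * (i % 2)) * np.pi, 200))
                  + 0.3 * rng.standard_normal(200) for i in range(60)])
y = np.arange(60) % 2

# Feature extraction from each segment's recurrence plot.
feats = np.array([recurrence_plot(x, eps=0.3).mean(axis=1) for x in X_sig])

# Manifold reduction with isometric mapping, then SVM classification.
emb = Isomap(n_neighbors=8, n_components=5).fit_transform(feats)
print(cross_val_score(SVC(), emb, y, cv=5).mean())
```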
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust for variability in ion intensities that is caused not by biological differences but by experimental bias. There are different sources of bias, including variability during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EICs), with the parameters treated as a linear mixed-effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and uses the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods.
ERIC Educational Resources Information Center
Henning, Margaret; Chi, Chunheui; Khanna, Sunil K.
2011-01-01
Objective: The purpose of this study was to evaluate the socio-cultural variables that may influence teachers' adoption of classroom-based HIV/AIDS education within the school setting and among school types in Zambia's Lusaka Province. Method: Mixed methods were used to collect original data. Using semi-structured interviews (n=11) and a survey…
Serious Game and Virtual World Training: Instrumentation and Assessment
2012-12-10
[Report excerpt: table of contents and figure list, covering the effectiveness of EEG neurofeedback training for ADHD in a clinical setting as measured by changes in T.O.V.A. scores, behavioral ratings, and WISC-R; human physiological data collection methods, including electroencephalography (EEG), galvanic skin response (GSR), and heart rate variability; and figures showing a participant wearing a 32-channel EEG cap and Future Force Warrior example combat armor.]
Hill, Kylie
2010-01-01
Purpose: The aims of this study were (1) to describe the cardiorespiratory physiotherapy weekend service (PWS) at three tertiary hospitals in the Greater Toronto Area (GTA) and (2) to compare measures of staff burden among the clinical service areas in one of the hospitals that had a programme-based management structure. Method: Two focus-group meetings were held with physiotherapists from hospitals within the GTA. Thereafter, variables characterizing the PWS were collected over 8 months, using a standardized data-collection form. Results: A total of 632 data-collection forms were received. Response rates exceeded 75% at each hospital. Workload variables, including the number of patient visits, new referrals per hour, and the proportion of staff completing unpaid overtime, differed between the hospitals (p<0.002). There was no difference in any variable when data were compared between Saturday, Sunday, and statutory holidays (p>0.13). Workload measures varied between clinical service areas at the hospital that provided PWS using a programme-based approach. Conclusions: These findings highlight the important shortcomings of a programme-based management approach to providing PWS and may constitute a catalyst for change. PMID:21359048
SOURCES OF VARIABILITY IN COLLECTION AND PREPARATION OF PAINT AND LEAD-COATING SAMPLES
Chronic exposure of children to lead can result in permanent physiologic impairment. Since surfaces coated with lead-containing paints and varnishes are potential sources of exposure, it is extremely important that reliable methods for sampling and analysis be available. The so...
Content Structure as a Design Strategy Variable in Concept Acquisition.
ERIC Educational Resources Information Center
Tennyson, Robert D.; Tennyson, Carol L.
Three methods of sequencing coordinate concepts (simultaneous, collective, and successive) were investigated with a Bayesian, computer-based, adaptive control system. The data analysis showed that when coordinate concepts are taught simultaneously (contextually similar concepts presented at the same time), student performance is superior to either…
Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott
2014-01-01
With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings, including electroencephalography (EEG), has become of increasing interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs is represented as a color-coded horizontal line; these lines are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example by subject reaction time, by context-related information such as inter-stimulus interval, or by some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source), can reveal aspects of the multifold complexity of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
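A minimal numpy sketch of the ERP-image construction (synthetic trials; EEGLAB provides the full-featured implementation):

```python
import numpy as np

# Hypothetical single-trial data: 300 trials x 500 time points, with an
# evoked deflection whose latency tracks a per-trial "reaction time".
rng = np.random.default_rng(3)
n_trials, n_times = 300, 500
rt = rng.uniform(150, 350, n_trials)                  # sorting variable
t = np.arange(n_times)
epochs = (np.exp(-0.5 * ((t[None, :] - rt[:, None]) / 20.0) ** 2)
          + 0.8 * rng.standard_normal((n_trials, n_times)))

# ERP-image construction: sort trials by reaction time, then smooth
# with a moving window across neighboring trials to expose the
# latency-tracking structure hidden by single-trial noise.
order = np.argsort(rt)
sorted_epochs = epochs[order]
win = 30                                               # trials per window
kernel = np.ones(win) / win
image = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="valid"), 0, sorted_epochs)

print(image.shape)  # (n_trials - win + 1, n_times): rows = trial windows
# Display with e.g. matplotlib's imshow to get the color-coded image.
```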
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability, because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
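A generic pooled relative standard deviation computation, sketched on made-up replicate sets (this shows the degrees-of-freedom-weighted pooling idea, not necessarily the report's exact estimator):

```python
import numpy as np

# Hypothetical replicate sets: each inner list holds measured
# concentrations [ug/L] from field replicates of one sample.
replicates = [
    [0.010, 0.012, 0.009],
    [0.105, 0.118],
    [1.02, 0.95, 1.08, 1.00],
]

# Pool the squared per-set relative standard deviations, weighting
# each set by its degrees of freedom (n_i - 1).
num, den = 0.0, 0
for r in replicates:
    r = np.asarray(r)
    rsd = r.std(ddof=1) / r.mean()        # per-set relative std dev
    num += (len(r) - 1) * rsd**2
    den += len(r) - 1
pooled_rsd = np.sqrt(num / den)
print(f"pooled RSD: {100 * pooled_rsd:.1f}%")
```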
Peart, Daniel J; Balsalobre-Fernández, Carlos; Shaw, Matthew P
2017-11-22
Mobile devices are ubiquitous in the population, and most have the capacity to download applications (apps). Some apps have been developed to collect physiological, kinanthropometric, and performance data, but the validity and reliability of such data are often unknown. An appraisal of such apps is warranted, as mobile apps may offer an alternative method of data collection for practitioners and athletes with money, time, and space constraints. This article identifies and critically reviews the commercially available apps that have been tested in the scientific literature, finding evidence to support the measurement of resting heart rate through photoplethysmography, heart rate variability, range of motion, barbell velocity, vertical jump, mechanical variables during running, and distances covered during walking, jogging, and running. The specific apps with supporting evidence, along with reported measurement errors, are summarised in the review. While mobile apps may have the potential to collect data in the field, athletes and practitioners should exercise caution when implementing them into practice, as not all apps have support from the literature and the performance of a number of apps has only been tested on one device.
Statokinesigram normalization method.
de Oliveira, José Magalhães
2017-02-01
Stabilometry is a technique that aims to study the body sway of human subjects, employing a force platform. The signal obtained from this technique refers to the position of the foot base ground-reaction vector, known as the center of pressure (CoP). The parameters calculated from the signal are used to quantify the displacement of the CoP over time; there is a large variability, both between and within subjects, which prevents the definition of normative values. The intersubject variability is related to differences between subjects in terms of their anthropometry, in conjunction with their muscle activation patterns (biomechanics); and the intrasubject variability can be caused by a learning effect or fatigue. Age and foot placement on the platform are also known to influence variability. Normalization is the main method used to decrease this variability and to bring distributions of adjusted values into alignment. In 1996, O'Malley proposed three normalization techniques to eliminate the effect of age and anthropometric factors from temporal-distance parameters of gait. These techniques were adopted to normalize the stabilometric signal by some authors. This paper proposes a new method of normalization of stabilometric signals to be applied in balance studies. The method was applied to a data set collected in a previous study, and the results of normalized and nonnormalized signals were compared. The results showed that the new method, if used in a well-designed experiment, can eliminate undesirable correlations between the analyzed parameters and the subjects' characteristics and show only the experimental conditions' effects.
Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.
Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert
2016-08-01
We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). Many jobs contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves sampling the lifting tasks performed by a worker over a shift, calculating the Frequency Independent Lift Index (FILI) for each sampled lift, and aggregating the FILI values into six categories. The Composite Lift Index (CLI) equation is then used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically apply the VLI method in a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.
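As a rough illustration of the CLI-style aggregation that the VLI builds on (a simplified toy; real use requires the RNLE frequency-multiplier tables and the full VLI categorization procedure, and all numbers here are hypothetical):

```python
import numpy as np

# Representative FILI value for each of the six categories, sorted in
# descending order of physical stress.
fili = np.array([2.4, 1.9, 1.5, 1.1, 0.8, 0.5])
# Hypothetical frequency multipliers for the cumulative lifting
# frequency of categories 0..k (decreasing as frequency accumulates).
fm_cum = np.array([0.85, 0.75, 0.65, 0.55, 0.50, 0.45])

# CLI-style sum: start from the most stressful category's index, then
# add the increment each additional category contributes as the total
# lifting frequency grows.
vli = fili[0] / fm_cum[0]
for j in range(1, len(fili)):
    vli += fili[j] * (1.0 / fm_cum[j] - 1.0 / fm_cum[j - 1])
print(round(vli, 2))
```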
Quintana, D S; Alvares, G A; Heathers, J A J
2016-01-01
The number of publications investigating heart rate variability (HRV) in psychiatry and the behavioral sciences has increased markedly in the last decade. In addition to the significant debates surrounding ideal methods to collect and interpret measures of HRV, standardized reporting of methodology in this field is lacking. Commonly cited recommendations were designed well before recent calls to improve research communication and reproducibility across disciplines. In an effort to standardize reporting, we propose the Guidelines for Reporting Articles on Psychiatry and Heart rate variability (GRAPH), a checklist with four domains: participant selection, interbeat interval collection, data preparation and HRV calculation. This paper provides an overview of these four domains and why their standardized reporting is necessary to suitably evaluate HRV research in psychiatry and related disciplines. Adherence to these communication guidelines will help expedite the translation of HRV research into a potential psychiatric biomarker by improving interpretation, reproducibility and future meta-analyses. PMID:27163204
ERIC Educational Resources Information Center
Ajayi, A. O.
2006-01-01
This study assessed farmers' willingness to pay (WTP) for extension services. The Contingent Valuation Method (CVM) was used to assess the amount which farmers are willing to pay. Primary data on the demographic, socio-economic variables of farmers and their WTP were collected from 228 farmers selected randomly in a stage-wise sampling procedure…
Variability of Hormonal Stress Markers Collected from a Managed Dolphin Population
2014-09-30
…physiological indicators of stress in wild marine mammals and the interrelationships between different stress markers can be used to estimate the impact…
…and thyroid hormones via radioimmunoassay (RIA). The methods have been validated for cortisol and aldosterone in this species (Houser et al., 2011)…
…measurement methods. Metabolites of cortisol, aldosterone and thyroid hormone will be extracted from fecal samples and measured via RIA using established…
NASA Astrophysics Data System (ADS)
Barbieri, L.; Adair, C.; Galford, G. L.; Wyngaard, J.
2017-12-01
We present a full season of low-cost sUAS agricultural monitoring for improved GHG emissions accounting and mitigation. Agriculture contributes 10-12% of global anthropogenic GHG emissions, and roughly half are from agricultural soils. A variety of land management strategies can be implemented to reduce GHG emissions, but agricultural lands are complex and heterogeneous. Nutrient cycling processes that ultimately regulate GHG emission rates are affected by environmental and management dynamics that vary spatially and temporally (e.g., soil properties, manure spreading). Thus, GHG mitigation potential is also variable, and determining best practices for mitigation is challenging, especially considering potentially conflicting pressure to manage agricultural lands for other objectives (e.g., decreasing agricultural runoff). Monitoring this complexity across agricultural lands is critical for regional GHG accounting and decision making, but current methods (e.g., static chambers) are time-intensive, expensive, and rely on in-situ equipment. These methods lack the spatio-temporal flexibility necessary to reduce the high uncertainty in regional emissions estimates, while traditional remote sensing methods often do not provide adequate spatio-temporal resolution for robust field-level monitoring. Small Unmanned Aerial Systems (sUAS) provide the range and the rapid-response data collection needed to monitor key variables on the landscape (imagery) and in the atmosphere (CO2 concentrations), and can bridge in-situ and remote sensing data. Initial results show good agreement between sUAS CO2 sensors and more traditional equipment, at a fraction of the cost. We present results from test flights over managed agricultural landscapes in Vermont, showcasing capabilities from both sUAS imagery and atmospheric data collected from on-board sensors (CO2, PTH). We then compare results from two different in-flight data collection methods: vertical profiles and horizontal surveys. We conclude with results from the integration of these sUAS data with concurrently collected in-field measurements from static chambers and Landsat imagery, demonstrating enhanced understanding of agricultural landscapes and improved GHG emissions monitoring with the addition of sUAS-collected data.
Al-Askar, Abdulaziz A; Ghoneem, Khalid M; Rashad, Younes M; Abdulkhair, Waleed M; Hafez, Elsayed E; Shabana, Yasser M; Baka, Zakaria A
2014-01-01
One hundred samples of tomato seeds were collected in 2011 and 2012 from tomato-cultivated fields in Saudi Arabia and screened for their seed-borne mycoflora. A total of 30 genera and 57 species of fungi were recovered from the collected seed samples using agar plate and deep-freezing blotter methods. The two methods differed in the frequency of recovered seed-borne fungi. Seven of the recovered fungi, known plant pathogens, were tested for their pathogenicity and transmission on tomato seedlings. The recovery rate of these pathogens gradually decreased from the root up to the upper stem and did not reach the stem apex. The distribution of tomato seed-borne fungi was also investigated throughout Saudi Arabia; Al-Madena governorate recorded the highest incidence of fungal flora associated with tomato seeds. The impact of meteorological variables on the distribution of tomato seed-borne mycoflora was explored using an ordination technique (canonical correspondence analysis). Among all climatic factors, relative humidity was the most influential variable. Our findings may provide a valuable contribution to our understanding of future global disease change and may also be used to predict disease occurrence and fungal transfer to new uninfected areas. PMID:24964218
Andersen, Ole Juul; Grouleff, Julie; Needham, Perri; Walker, Ross C; Jensen, Frank
2015-11-19
Current enhanced sampling molecular dynamics methods for studying large conformational changes in proteins suffer from certain limitations. These include, among others, the need for user defined collective variables, the prerequisite of both start and end point structures of the conformational change, and the need for a priori knowledge of the amount by which to boost specific parts of the potential. In this paper, a framework is proposed for a molecular dynamics method for studying ligand-induced conformational changes, in which the nonbonded interactions between the ligand and the protein are used to calculate a biasing force. The method requires only a single input structure, and does not entail the use of collective variables. We provide a proof-of-concept for accelerating conformational changes in three simple test molecules, as well as promising results for two proteins known to undergo domain closure upon ligand binding. For the ribose-binding protein, backbone root-mean-square deviations as low as 0.75 Å compared to the crystal structure of the closed conformation are obtained within 50 ns simulations, whereas no domain closures are observed in unbiased simulations. A skewed closed structure is obtained for the glutamine-binding protein at high bias values, indicating that specific protein-ligand interactions might suppress important protein-protein interactions.
Recipes for free energy calculations in biomolecular systems.
Moradi, Mahmoud; Babin, Volodymyr; Sagui, Celeste; Roland, Christopher
2013-01-01
During the last decade, several methods for sampling phase space and calculating various free energies in biomolecular systems have been devised or refined for molecular dynamics (MD) simulations. Thus, state-of-the-art methodology and ever-increasing computer power allow calculations that were out of reach a decade ago. These calculations, however, are not trivial, as they require knowledge of the methods, insight into the system under study, and, quite often, an artful combination of different methodologies in order to avoid the various traps inherent in an unknown free energy landscape. In this chapter, we illustrate some of these concepts with two relatively simple systems, a sugar ring and proline oligopeptides, whose free energy landscapes still offer considerable challenges. In order to explore the configurational space of these systems, and to surmount the various free energy barriers, we combine three complementary methods: a nonequilibrium umbrella sampling method (adaptively biased MD, or ABMD), replica-exchange molecular dynamics (REMD), and steered molecular dynamics (SMD). In particular, ABMD is used to compute the free energy surface of a set of collective variables; REMD is used to improve the performance of ABMD, to carry out sampling in space complementary to the collective variables, and to sample equilibrium configurations directly; and SMD is used to study different transition mechanisms.
Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design
ERIC Educational Resources Information Center
Schlueter, Mark A.; D'Costa, Allison R.
2013-01-01
Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…
Identifying Behavioral Measures of Stress in Individuals with Aphasia
ERIC Educational Resources Information Center
Laures-Gore, Jacqueline S.; DuBay, Michaela F.; Duff, Melissa C.; Buchanan, Tony W.
2010-01-01
Purpose: To develop valid indicators of stress in individuals with aphasia (IWA) by examining the relationship between certain language variables (error frequency [EF] and word productivity [WP]) and cortisol reactivity. Method: Fourteen IWA and 10 controls participated in a speaking task. Salivary cortisol was collected pre- and posttask. WP and…
Kusumaningrum, Dewi; Lee, Hoonsoo; Lohumi, Santosh; Mo, Changyeun; Kim, Moon S; Cho, Byoung-Kwan
2018-03-01
The viability of seeds is important for determining their quality. A high-quality seed has a high capacity for germination, which is necessary to ensure high productivity. Hence, developing technology for the detection of seed viability is a high priority in agriculture. Fourier transform near-infrared (FT-NIR) spectroscopy is one of the most popular vibrational spectroscopy techniques. This study aimed to use FT-NIR spectroscopy to determine the viability of soybean seeds. Viable seeds and artificially aged (non-viable) seeds were used in this research. The FT-NIR spectra of soybean seeds were collected and analysed using partial least-squares discriminant analysis (PLS-DA) to classify viable and non-viable soybean seeds. Moreover, the variable importance in projection (VIP) method for variable selection was employed in combination with the PLS-DA. The most effective wavelengths were selected by the VIP method, which retained 146 optimal variables from the full set of 1557 variables. The results demonstrated that FT-NIR spectral analysis with the PLS-DA method, using either all variables or the selected variables, showed good performance, with prediction accuracy for soybean viability close to 100%. Hence, FT-NIR techniques with chemometric analysis have the potential for rapidly measuring soybean seed viability. © 2017 Society of Chemical Industry.
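The PLS-DA-plus-VIP selection can be sketched as follows (synthetic spectra with an artificial informative band; the VIP formula is the standard one, not necessarily the paper's exact implementation):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical spectra: 80 seeds x 1557 wavelength variables, with a
# binary viability label (1 = viable, 0 = non-viable) encoded as the
# PLS response, i.e. PLS-DA.
rng = np.random.default_rng(4)
X = rng.standard_normal((80, 1557))
y = rng.integers(0, 2, 80).astype(float)
X[y == 1, 100:120] += 0.5          # synthetic informative band

pls = PLSRegression(n_components=5).fit(X, y)

# Variable importance in projection (VIP): weight each X-variable by
# how much explained Y-variance the components carrying it account for.
T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
p = W.shape[0]
ssy = np.sum(T**2, axis=0) * Q.ravel()**2          # per-component SSY
wnorm2 = (W / np.linalg.norm(W, axis=0))**2
vip = np.sqrt(p * (wnorm2 @ ssy) / ssy.sum())

# Keep variables with VIP > 1 (a common cutoff) for a reduced model.
selected = np.where(vip > 1.0)[0]
print(len(selected), "variables selected")
```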
Mudge, Elizabeth; Applequist, Wendy L.; Finley, Jamie; Lister, Patience; Townesmith, Andrew K.; Walker, Karen M.; Brown, Paula N.
2016-01-01
American elderberries are commonly collected from wild plants for use as food and medicinal products. The degree of phytochemical variation amongst wild populations has not been established and might affect the overall quality of elderberry dietary supplements. The three major flavonols identified in elderberries are rutin, quercetin, and isoquercetin. Variation in the flavonols and chlorogenic acid was determined for 107 collections of elderberries from throughout the eastern United States using an optimized high-performance liquid chromatography method with ultraviolet detection. The mean content was 71.9 mg per 100 g fresh weight, with variation ranging from 7.0 to 209.7 mg per 100 g fresh weight within the collected population. Elderberries collected from southeastern regions had significantly higher contents than those from more northern regions. The variability of the individual flavonol and chlorogenic acid profiles of the berries was complex and likely influenced by multiple factors. Several outliers were identified based on unique phytochemical profiles in comparison with average populations. This is the first study to determine the inherent variability of American elderberries from wild collections, and it can be used to identify potential new cultivars that may produce fruits of unique or high-quality phytochemical content for the food and dietary supplement industries. PMID:26877585
Rivera-Sandoval, Javier; Monsalve, Timisay; Cattaneo, Cristina
2018-01-01
Studying bone collections with known data has proven useful for assessing the reliability and accuracy of the biological profile reconstruction methods used in forensic anthropology. It is therefore necessary to calibrate these methods to clarify issues such as population variability and the accuracy of estimations for the elderly. This work considers observations of morphological features examined by four innominate bone age assessment methods: (1) the Suchey-Brooks pubic symphysis method, (2) the Lovejoy iliac auricular surface method, (3) the Buckberry and Chamberlain iliac auricular surface method, and (4) the Rougé-Maillart iliac auricular surface and acetabulum method. This study conducted a blind test on a sample of 277 individuals from two contemporary skeletal collections from the Universal and San Pedro cemeteries in Medellin, for which known pre-mortem data supported the statistical analysis of results obtained using the four age assessment methods. Results from every method show a tendency toward increased bias and inaccuracy with age, but the Buckberry-Chamberlain and Rougé-Maillart methods are the most precise for this particular Colombian population, with Buckberry-Chamberlain performing best for older individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Validity of a portable glucose, total cholesterol, and triglycerides multi-analyzer in adults.
Coqueiro, Raildo da Silva; Santos, Mateus Carmo; Neto, João de Souza Leal; Queiroz, Bruno Morbeck de; Brügger, Nelson Augusto Jardim; Barbosa, Aline Rodrigues
2014-07-01
This study investigated the accuracy and precision of the Accutrend Plus system to determine blood glucose, total cholesterol, and plasma triglycerides in adults and evaluated its efficiency in measuring these blood variables. The sample consisted of 53 subjects (≥ 18 years). For blood variable laboratory determination, venous blood samples were collected and processed in a Labmax 240 analyzer. To measure blood variables with the Accutrend Plus system, samples of capillary blood were collected. In the analysis, the following tests were included: Wilcoxon and Student's t-tests for paired samples, Lin's concordance coefficient, Bland-Altman method, receiver operating characteristic curve, McNemar test, and k statistics. The results show that the Accutrend Plus system provided significantly higher values (p ≤ .05) of glucose and triglycerides but not of total cholesterol (p > .05) as compared to the values determined in the laboratory. However, the system showed good reproducibility (Lin's coefficient: glucose = .958, triglycerides = .992, total cholesterol = .940) and high concordance with the laboratory method (Lin's coefficient: glucose = .952, triglycerides = .990, total cholesterol = .944) and high sensitivity (glucose = 80.0%, triglycerides = 90.5%, total cholesterol = 84.4%) and specificity (glucose = 100.0%, triglycerides = 96.9%, total cholesterol = 95.2%) in the discrimination of high values of the three blood variables analyzed. It could be concluded that despite the tendency to overestimate glucose and triglyceride levels, a portable multi-analyzer is a valid alternative for the monitoring of metabolic disorders and cardiovascular risk factors. © The Author(s) 2013.
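Since the study relies on Bland-Altman agreement analysis, a minimal sketch of that computation (synthetic paired measurements, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements: portable analyzer vs laboratory
# reference for one analyte (e.g. glucose, mg/dL) in the same subjects.
rng = np.random.default_rng(5)
lab = rng.normal(95, 15, 53)
portable = lab + rng.normal(4, 6, 53)     # small positive bias + noise

# Bland-Altman agreement: bias is the mean difference and the limits
# of agreement are bias +/- 1.96 SD of the differences.
diff = portable - lab
mean_pair = (portable + lab) / 2          # x-axis of the classic plot
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f}; limits of agreement = "
      f"[{bias - loa:.1f}, {bias + loa:.1f}]")
```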
NASA Astrophysics Data System (ADS)
McMillan, N. J.; Chavez, A.; Chanover, N.; Voelz, D.; Uckert, K.; Tawalbeh, R.; Gariano, J.; Dragulin, I.; Xiao, X.; Hull, R.
2014-12-01
Rapid, in-situ methods for identification of biologic and non-biologic mineral precipitation sites permit mapping of biological hot spots. Two portable spectrometers, Laser-Induced Breakdown Spectroscopy (LIBS) and Acousto-Optic Tunable Filter Reflectance Spectroscopy (AOTFRS), were used to differentiate between bacterially influenced and inorganically precipitated calcite specimens from Fort Stanton Cave, NM, USA. LIBS collects light emitted from the decay of excited electrons in a laser ablation plasma; the spectrum is a chemical fingerprint of the analyte. AOTFRS collects light reflected from the surface of a specimen and provides structural information about the material (i.e., the presence of O-H bonds). These orthogonal data sets provide a rigorous method to determine the origin of calcite in cave deposits. This study used a set of 48 calcite samples collected from Fort Stanton Cave. Samples were examined by SEM for the presence of biologic markers; these data were used to separate the samples into biologic and non-biologic groups. Spectra were modeled using the multivariate technique Partial Least Squares Regression (PLSR). Half of the spectra were used to train a PLSR model, in which biologic samples were assigned the independent variable value 0 and non-biologic samples the value 1. Values of the independent variable were calculated for each of the training samples and were close to 0 for the biologic samples (-0.09 to 0.23) and close to 1 for the non-biologic samples (0.57 to 1.14). A Value of Apparent Distinction (VAD) of 0.55 was used to numerically distinguish between the two groups; any sample with an independent variable value < 0.55 was classified as having a biologic origin, while a sample with a value > 0.55 was classified as non-biologic in origin. After the model was trained, independent variable values for the remaining half of the samples were calculated, and biologic or non-biologic origin was assigned by comparison to the VAD. Using LIBS data alone, the model has a 92% success rate, correctly identifying 23 of 25 samples. Modeling of AOTFRS spectra and of the combined LIBS-AOTFRS data set gives similar success rates. This study demonstrates that rapid, portable LIBS and AOTFRS instruments can be used to map the spatial distribution of biologic precipitation in caves.
Comparison of laboratory and field remote sensing methods to measure forage quality.
Guo, Xulin; Wilmshurst, John F; Li, Zhaoqin
2010-09-01
Recent research in range ecology has emphasized the importance of forage quality as a key indicator of rangeland condition. However, we lack tools to evaluate forage quality at scales appropriate for management. Canopy reflectance data have been used to measure forage quality at the laboratory and field levels separately, but little work has evaluated these methods simultaneously. The objective of this study is to find a reliable way of assessing grassland quality by measuring forage chemistry with reflectance. We studied a mixed grass ecosystem in Grasslands National Park of Canada and surrounding pastures, located in southern Saskatchewan. Spectral reflectance was collected both in situ at the field level and in the laboratory. Vegetation samples were collected at each site, sorted into the green grass portion, and then sent to a chemical company for measurement of forage quality variables, including protein, lignin, ash, moisture at 135 °C, Neutral Detergent Fiber (NDF), Acid Detergent Fiber (ADF), Total Digestible, Digestible Energy, Net Energy for Lactation, Net Energy for Maintenance, and Net Energy for Gain. Reflectance data were processed with a first derivative transformation and the continuum removal method. Correlation analysis was conducted on spectral and forage quality variables. A regression model was further built to investigate the possibility of using canopy spectral measurements to predict grassland quality. Results indicated that field-level prediction of protein in mixed grass species was possible (r² = 0.63). However, the relationship between canopy reflectance and the other forage quality variables was not strong.
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as those found in karst terrain. Many projects are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling paired with continuous flow measurement to be a viable data collection method for estimating mass load in the study watershed. Of weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important: failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
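A minimal sketch of the simplest estimator implied above: one grab concentration applied to a continuous (here, daily) flow record. The discharge values and concentration are synthetic, and the unit conversion assumes mg/L and m³/s.

    import numpy as np

    rng = np.random.default_rng(1)
    q_daily = 2.0 + rng.gamma(2.0, 0.5, size=30)  # mean daily discharge, m^3/s (synthetic)
    c_grab = 3.4                                   # monthly grab concentration, mg/L

    # mg/L * m^3/s * 86400 s/day * 1000 L/m^3 = mg/day; divide by 1e6 for kg
    load_kg = np.sum(c_grab * q_daily * 86400 * 1000) / 1e6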
Bordallo, P N; Monteiro, A M R; Sousa, J A; Aragão, F A S
2017-02-23
Morinda citrifolia L., commonly known as noni, has been used for the treatment of various diseases for over two centuries. It was introduced and widely disseminated in Brazil because of its high market value and ease of adaptation to the soil and climatic conditions of the country. The aim of this study was to estimate the genetic variability of noni accessions from the collection of Embrapa Agroindústria Tropical in Brazil. We evaluated 36 plants from the 13 accessions of noni in the M. citrifolia germplasm collection. Several methods of DNA extraction were tested. After the method was defined, the DNA of each sample was subjected to polymerase chain reactions using 20 random amplified polymorphic DNA primers. The band patterns on agarose gel were converted into a binary data matrix, which was used to estimate the genetic distances between the plants and to perform the cluster analyses. Of the total number of markers used in this study, 125 (81.1%) were polymorphic. The genetic distances between the genotypes ranged from 0.04 to 0.49. Despite the high number of polymorphic bands, the genetic variability of the noni plants evaluated was low, since most of the genotypes belonged to the same cluster as shown by the dendrogram and Tocher's cluster analysis. The low genetic diversity among the studied noni individuals indicates that additional variability should be introduced into the germplasm collection by gathering new individuals and/or by hybridizing contrasting individuals.
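A compact sketch of the binary-marker workflow: presence/absence bands to pairwise genetic distances to a dendrogram. Jaccard distance and UPGMA linkage are common choices for dominant markers such as RAPD (and are named explicitly in the Capsicum study later in this collection); the band matrix here is random stand-in data.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(2)
    bands = rng.integers(0, 2, size=(36, 154))  # 36 plants x 154 band presence/absence

    d = pdist(bands, metric="jaccard")          # distance = 1 - Jaccard similarity
    tree = linkage(d, method="average")         # UPGMA
    # dendrogram(tree) renders the cluster diagram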
SSAGES: Software Suite for Advanced General Ensemble Simulations.
Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J
2018-01-28
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques-including adaptive biasing force, string methods, and forward flux sampling-that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
Social Network Analysis for Assessing College-Aged Adults' Health: A Systematic Review.
Patterson, Megan S; Goodson, Patricia
2018-04-13
Social network analysis (SNA) is a useful, emerging method for studying health. College students are especially prone to social influence when it comes to health. This review aimed to identify network variables related to college student health and determine how SNA was used in the literature. A systematic review of relevant literature was conducted in October 2015. Studies employing egocentric or whole network analysis to study college student health were included. We used Garrard's Matrix Method to extract data from reviewed articles (n = 15). Drinking, smoking, aggression, homesickness, and stress were predicted by network variables in the reviewed literature. Methodological inconsistencies concerning boundary specification, data collection, nomination limits, and statistical analyses were revealed across studies. Results show the consistent relationship between network variables and college health outcomes, justifying further use of SNA to research college health. Suggestions and considerations for future use of SNA are provided.
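A minimal sketch of the kind of network variables such studies derive, using the networkx package; the peer nominations below are hypothetical.

    import networkx as nx

    # Hypothetical whole-network data: edges are peer nominations among students
    G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

    degree = nx.degree_centrality(G)         # normalized tie counts per student
    between = nx.betweenness_centrality(G)   # brokerage positions
    density = nx.density(G)                  # overall network cohesion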
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies to be combined often does not measure the same set of variables, which creates missing data. When the studies to be combined are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
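As an illustration of the sequential conditional (chained-equations) idea, here is a sketch using scikit-learn's IterativeImputer; the joint-modeling alternative discussed in the paper is not part of scikit-learn, and the data below are synthetic.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(3)
    X = rng.multivariate_normal([0, 0, 0], 0.5 + 0.5 * np.eye(3), size=200)
    X[rng.random(X.shape) < 0.2] = np.nan   # 20% of values missing at random

    # m = 5 imputed datasets via conditional draws (sample_posterior=True)
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
        for m in range(5)
    ]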
Fatty acid methyl ester analysis to identify sources of soil in surface water.
Banowetz, Gary M; Whittaker, Gerald W; Dierksen, Karen P; Azevedo, Mark D; Kennedy, Ann C; Griffith, Stephen M; Steiner, Jeffrey J
2006-01-01
Efforts to improve land-use practices to prevent contamination of surface waters with soil are limited by an inability to identify the primary sources of soil present in these waters. We evaluated the utility of fatty acid methyl ester (FAME) profiles of dry reference soils for multivariate statistical classification of soils collected from surface waters adjacent to agricultural production fields and a wooded riparian zone. Trials that compared approaches to concentrate soil from surface water showed that aluminum sulfate precipitation provided yields comparable to those obtained by vacuum filtration and was more suitable for handling large numbers of samples. FAME profiles were developed from reference soils collected from contrasting land uses in different seasons to determine whether specific fatty acids would consistently serve as variables in multivariate statistical analyses to permit reliable classification of soils. We used a Bayesian method and an independent iterative process to select appropriate fatty acids and found that variable selection was strongly affected by the season during which soil was collected. The apparent seasonal variation in the occurrence of marker fatty acids in FAME profiles from reference soils prevented preparation of a standardized set of variables. Nevertheless, accurate classification of soil in surface water was achieved using fatty acid variables identified in seasonally matched reference soils. Correlation analysis of entire chromatograms and subsequent discriminant analyses using a restricted number of fatty acid variables showed that FAME profiles of soils exposed to the aquatic environment still had utility for classification at least 1 wk after submersion.
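A minimal sketch of the discriminant-analysis step: classifying waterborne soils against seasonally matched reference profiles. The FAME feature matrix and land-use labels are synthetic stand-ins.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(7)
    # Rows = soil samples, columns = relative abundances of selected FAME markers
    X_ref = rng.normal(size=(60, 8))              # seasonally matched reference soils
    y_ref = np.repeat(["field", "riparian"], 30)  # land-use classes

    lda = LinearDiscriminantAnalysis().fit(X_ref, y_ref)
    X_water = rng.normal(size=(10, 8))            # soils recovered from surface water
    print(lda.predict(X_water))                   # classify by likely source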
Martinez, A L A; Araújo, J S P; Ragassi, C F; Buso, G S C; Reifschneider, F J B
2017-07-06
Capsicum peppers are native to the Americas, with Brazil being a significant diversity center. Capsicum baccatum accessions at Instituto Federal (IF) Goiano represent a portion of the species' genetic resources from central Brazil. We aimed to characterize a C. baccatum working collection comprising 27 accessions and 3 commercial cultivars using morphological traits and molecular markers, to describe its genetic and morphological variability, and to verify the occurrence of duplicates. This set included 1 C. baccatum var. praetermissum and 29 C. baccatum var. pendulum with potential for use in breeding programs. Twenty-two morphological descriptors, 57 inter-simple sequence repeat markers, and 34 random amplified polymorphic DNA markers were used. Genetic distance was calculated through the Jaccard similarity index and genetic variability through cluster analysis using the unweighted pair group method with arithmetic mean, resulting in dendrograms for both the morphological and the molecular analysis. Genetic variability was found among C. baccatum var. pendulum accessions, and the distinction between the two C. baccatum varieties was evident in both the morphological and molecular analyses. The 29 C. baccatum var. pendulum genotypes clustered in four groups according to fruit type in the morphological analysis. They formed seven groups in the molecular analysis, without a clear correspondence with morphology. No duplicates were found. The results describe the genetic and morphological variability, provide a detailed characterization of genotypes, and rule out the possibility of duplicates within the IF Goiano C. baccatum L. collection. This study will promote the use of this germplasm collection in C. baccatum breeding programs.
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in settings where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to the growth in data requirements as the number of design variables increases, and to methods for predicting model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
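A minimal sketch of the polynomial response-surface idea in one design variable: fit a low-order polynomial to noisy objective evaluations, then optimize the surrogate instead of the raw data. The objective and samples are synthetic.

    import numpy as np

    rng = np.random.default_rng(9)
    x = rng.uniform(-2, 2, 40)                   # sampled design variable
    y = (x - 0.5) ** 2 + rng.normal(0, 0.1, 40)  # noisy objective evaluations

    coef = np.polynomial.polynomial.polyfit(x, y, deg=2)  # quadratic response surface
    # For c0 + c1*x + c2*x^2, the surrogate optimum is -c1 / (2*c2)
    x_opt = -coef[1] / (2 * coef[2])             # close to 0.5 despite the noise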
Covert, S. Alex
2001-01-01
The U.S. Geological Survey (USGS) and Ohio Environmental Protection Agency (OEPA) collected data on fish from 10 stream sites in 1996 and 3 stream sites in 1997 as part of a comparative study of fish community assessment methods. The sites sampled represent a wide range of basin sizes (132–6,330 square kilometers) and surrounding land-use types (urban, agricultural, and mixed). Each agency used its own fish-sampling protocol. Using the Index of Biotic Integrity and Modified Index of Well-Being, differences between data sets were tested for significance by means of the Wilcoxon signed-ranks test (α = 0.05). Results showed that the median of Index of Biotic Integrity differences between data sets was not significantly different from zero (p = 0.2521); however, the same statistical test showed the median differences in the Modified Index of Well-Being scores to be significantly different from zero (p = 0.0158). The differences observed in the Index of Biotic Integrity scores are likely due to natural variability, increased variability at sites with degraded water quality, differences in sampling methods, and low-end adjustments in the Index of Biotic Integrity calculation when fewer than 50 fish were collected. The Modified Index of Well-Being scores calculated by OEPA were significantly higher than those calculated by the USGS. This finding was attributed to the comparatively large numbers and biomass of fish collected by the OEPA. By combining the two indices and viewing them in terms of the percentage attainment of Ohio Warmwater Habitat criteria, the two agencies' data seemed comparable, although the Index of Biotic Integrity scores were more similar than the Modified Index of Well-Being scores.
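A minimal sketch of the paired test used above, via scipy; the paired index scores are hypothetical.

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical paired Index of Biotic Integrity scores at the same sites
    ibi_usgs = np.array([38, 44, 30, 50, 42, 36, 28, 46, 40, 34])
    ibi_oepa = np.array([41, 42, 33, 55, 40, 39, 31, 44, 43, 37])

    stat, p = wilcoxon(ibi_usgs, ibi_oepa)  # H0: median paired difference is zero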
Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert
2015-01-01
Objective: We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). Background: There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. Method: In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. Results: The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. Conclusions: The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. Application: The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. PMID:26646300
Islas-Granillo, H; Borges-Yañez, SA; Medina-Solís, CE; Galan-Vidal, CA; Navarrete-Hernández, JJ; Escoffié-Ramirez, M; Maupomé, G
2014-01-01
Objective: To compare a limited array of chewing-stimulated saliva features (salivary flow, pH and buffer capacity) in a sample of elderly Mexicans with clinical, sociodemographic and socio-economic variables. Subjects and Methods: A cross-sectional study was carried out in 139 adults, 60 years old and older, from two retirement homes and a senior day care centre in the city of Pachuca, Mexico. Socio-demographic, socio-economic and behavioural variables were collected through a questionnaire. A trained and standardized examiner obtained the oral clinical variables. Chewing-stimulated saliva (paraffin method) was collected and the salivary flow rate, pH and buffer capacity were measured. The analysis was performed using non-parametric tests in Stata 9.0. Results: Mean age was 79.1 ± 9.8 years. Most of the subjects included were women (69.1%). Mean chewing-stimulated salivary flow was 0.75 ± 0.80 mL/minute, and the pH and buffer capacity were 7.88 ± 0.83 and 4.20 ± 1.24, respectively. Mean chewing-stimulated salivary flow varied (p < 0.05) across type of retirement home, tooth brushing frequency, number of missing teeth and use of dental prostheses. pH varied across the type of retirement home (p < 0.05) and marginally by age (p = 0.087); buffer capacity (p < 0.05) varied across type of retirement home, tobacco consumption and the number of missing teeth. Conclusions: These exploratory data add to the body of knowledge with regard to chewing-stimulated salivary features (salivary flow rate, pH and buffer capacity) and outline the variability of those features across selected sociodemographic, socio-economic and behavioural variables in a group of Mexican elders. PMID:25867562
Enhancing Important Fluctuations: Rare Events and Metadynamics from a Conceptual Viewpoint
NASA Astrophysics Data System (ADS)
Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele
2016-05-01
Atomistic simulations play a central role in many fields of science. However, their usefulness is often limited by the fact that many systems are characterized by several metastable states separated by high barriers, leading to kinetic bottlenecks. Transitions between metastable states are thus rare events that occur on significantly longer timescales than one can simulate in practice. Numerous enhanced sampling methods have been introduced to alleviate this timescale problem, including methods based on identifying a few crucial order parameters or collective variables and enhancing the sampling of these variables. Metadynamics is one such method that has proven successful in a great variety of fields. Here we review the conceptual and theoretical foundations of metadynamics. As demonstrated, metadynamics is not just a practical tool but can also be considered an important development in the theory of statistical mechanics.
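To make the idea concrete, here is a minimal one-dimensional metadynamics sketch: overdamped Langevin dynamics on a double-well potential, with Gaussian hills deposited along the trajectory so that the accumulated bias approximates the negative free energy. The potential and all parameters are illustrative, not drawn from the review.

    import numpy as np

    rng = np.random.default_rng(4)

    def grad_U(x):                          # double-well potential U(x) = (x^2 - 1)^2
        return 4.0 * x * (x * x - 1.0)

    height, width, centers = 0.1, 0.2, []   # Gaussian hill height, width, locations

    def grad_bias(x):                       # derivative of the sum of deposited hills
        if not centers:
            return 0.0
        c = np.asarray(centers)
        return float(np.sum(-height * (x - c) / width**2
                            * np.exp(-((x - c) ** 2) / (2 * width**2))))

    x, dt, beta = -1.0, 2e-3, 5.0           # overdamped Langevin dynamics
    for step in range(50_000):
        x += -(grad_U(x) + grad_bias(x)) * dt + np.sqrt(2 * dt / beta) * rng.normal()
        if step % 500 == 0:
            centers.append(x)               # deposit a hill at the current CV value

    # The accumulated bias approximates -F(x) up to an additive constant
    grid = np.linspace(-2, 2, 200)
    bias = (height * np.exp(-(grid[:, None] - np.asarray(centers)) ** 2
                            / (2 * width**2))).sum(axis=1)
    free_energy = bias.max() - bias         # estimated free energy profile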
Christensen, Bruce W; Asa, Cheryl S; Wang, Chong; Vansandt, Lindsey; Bauman, Karen; Callahan, Margaret; Jens, Jackie K; Ellinwood, N Matthew
2011-09-15
Genetic management of Mexican gray wolves includes semen banking, but due to the small number of animals in the population and handling restrictions, improvements in semen collection and cryopreservation rely on results from studies of domestic dogs. Semen collection from wolves requires anesthesia and electroejaculation, which introduce potentially important variables into species comparisons, as dog semen is typically collected manually from conscious animals. To investigate possible effects of collection method on semen quality, we compared semen collected by the traditional manual method and by electroejaculation (EE) in a group of dogs (n = 5) with semen collected by EE only in wolves (n = 7). Samples were divided into two aliquots: neat or diluted in Tris/egg yolk extender, with motility evaluated at intervals up to 24 h. There were no differences (P > 0.10) in sperm motility in either neat or extended samples at 24 h between EE dogs and wolves, although motility of the wolf neat samples declined more rapidly (P < 0.05). However, there were differences (P < 0.01) between EE and manually collected dog semen in motility at 24 h, in both the neat and extended samples. Therefore, general motility patterns of dog and wolf semen collected by EE were similar, especially when diluted with a Tris/egg yolk extender, but sperm collected from dogs by EE did not maintain motility as long as manually collected samples, perhaps because of the greater exposure of EE samples to prostate fluid. Copyright © 2011 Elsevier Inc. All rights reserved.
Comparison between Mean Forces and Swarms-of-Trajectories String Methods.
Maragliano, Luca; Roux, Benoît; Vanden-Eijnden, Eric
2014-02-11
The original formulation of the string method in collective variable space is compared with a recent variant called the string method with swarms of trajectories. The assumptions made in the original method are revisited, and the significance of the minimum free energy path (MFEP) is discussed in the context of reactive events. These assumptions are compared to those made in the string method with swarms of trajectories and shown to be equivalent in a certain regime; in particular, an expression for the path identified by the swarms-of-trajectories method is given and shown to be closely related to the MFEP. Finally, the algorithmic aspects of both methods are compared.
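A minimal sketch of the original (zero-temperature) string method on a toy two-dimensional potential: a gradient-descent step on each image, followed by reparameterization to equal arc length, which is the defining string-method operation. The potential, step size, and image count are illustrative.

    import numpy as np

    def grad_V(p):                           # toy 2D potential V = (x^2-1)^2 + 2y^2
        x, y = p
        return np.array([4.0 * x * (x * x - 1.0), 4.0 * y])

    n = 20
    t = np.linspace(0.0, 1.0, n)
    string = np.stack([2.0 * t - 1.0, 0.5 * np.sin(np.pi * t)], axis=1)  # initial guess

    for _ in range(2000):
        # 1. steepest-descent step on every image
        string = string - 0.01 * np.array([grad_V(p) for p in string])
        # 2. redistribute images to equal arc length along the path
        seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
        s = np.concatenate(([0.0], np.cumsum(seg))) / seg.sum()
        even = np.linspace(0.0, 1.0, n)
        string = np.stack([np.interp(even, s, string[:, d]) for d in range(2)], axis=1)
    # string now approximates the MFEP between the two wells at (-1, 0) and (1, 0)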
This standard operating procedure (SOP) describes a new, rapid, and relatively inexpensive way to remove a precise area of paint from the substrate of building structures in preparation for quantitative analysis. This method has been applied successfully in the laboratory, as we...
Research in Reading in English as a Second Language.
ERIC Educational Resources Information Center
Devine, Joanne, Ed.; And Others
This collection of essays, most followed by comments, reflects some aspect of the general theme: reading is a multifaceted, complex, interactive process that involves many subskills and many types of reader variables as well as text variables. Papers include: "The Eclectic Synergy of Methods of Reading Research" (Ulla Connor); "A View of…
Relationship between Self-Control and Facebook Use: Case of CEIT Students
ERIC Educational Resources Information Center
Firat, Mehmet
2017-01-01
This is an explanatory mixed-method study that analyzes the relationship between the variables of students' self-control and Facebook usage. TIME's online Facebook calculator and the Brief Self-Control Scale are used for data collection. The research participants are 60 students in a department of computer education and instructional technology…
Levels of Job Satisfaction of Coaches Providing Education to Mentally Retarded Children in Turkey
ERIC Educational Resources Information Center
Ilhan, Ekrem Levent
2012-01-01
The purpose of this research is to determine the levels of job satisfaction of sports coaches who provide education to mentally retarded children and to examine their job satisfaction according to different variables. The survey method was preferred as the data collection tool, and the "Minnesota Satisfaction…
Q-Type Factor Analysis of Healthy Aged Men.
ERIC Educational Resources Information Center
Kleban, Morton H.
Q-type factor analysis was used to re-analyze baseline data collected in 1957 on 47 men aged 65-91. Q-type analysis is the use of factor methods to study persons rather than tests. Although 550 variables were originally studied involving psychiatry, medicine, cerebral metabolism and chemistry, personality, audiometry, dichotic and diotic memory,…
Social Workers' Orientation toward the Evidence-Based Practice Process: A Dutch Survey
ERIC Educational Resources Information Center
van der Zwet, Renske J. M.; Kolmer, Deirdre M. Beneken genaamd; Schalk, René
2016-01-01
Objectives: This study assesses social workers' orientation toward the evidence-based practice (EBP) process and explores which specific variables (e.g. age) are associated. Methods: Data were collected from 341 Dutch social workers through an online survey which included a Dutch translation of the EBP Process Assessment Scale (EBPPAS), along with…
Culture- and PCR-based measurements of fecal pollution were determined and compared to hydrologic and land use indicators. Stream water samples (n = 235) were collected monthly over a two year period from ten streams draining headwatersheds with different land use intensities ra...
Farmers as Consumers of Agricultural Education Services: Willingness to Pay and Spend Time
ERIC Educational Resources Information Center
Charatsari, Chrysanthi; Papadaki-Klavdianou, Afroditi; Michailidis, Anastasios
2011-01-01
This study assessed farmers' willingness to pay for and spend time attending an Agricultural Educational Program (AEP). Primary data on the demographic and socio-economic variables of farmers were collected from 355 farmers selected randomly from Northern Greece. Descriptive statistics and multivariate analysis methods were used in order to meet…
Rainfall pattern variability as climate change impact in The Wallacea Region
NASA Astrophysics Data System (ADS)
Pujiastuti, I.; Nurjani, E.
2018-04-01
The objective of this study is to observe the variability of rainfall patterns in a city located in each of Indonesia's three rainfall regimes: local (Kendari), monsoonal (Manado), and equatorial (Palu). The results are compared to determine which regime shows the most significant change in precipitation attributable to climate change. Rainfall variability in Indonesia reflects precipitation variation, and the most important component is the variability of monthly rainfall. Monthly precipitation data for the period 1961-2010 were collected from the Indonesian Agency for Meteorology, Climatology, and Geophysics. These data were analyzed with a statistical normality test to characterize rainfall variability. The results show the trend and variability of rainfall in each city, each with characteristics specific to its rainfall type, and allow a comparison of changing rainfall patterns across the three rainfall types. This information is useful for climate change mitigation and adaptation strategies, especially in the management of water resources derived from precipitation and in anticipating meteorological disasters.
Oztekin, Asil; Delen, Dursun; Kong, Zhenyu James
2009-12-01
Predicting the survival of heart-lung transplant patients has the potential to play a critical role in understanding and improving the matching procedure between the recipient and graft. Although voluminous data related to transplantation procedures are being collected and stored, only a small subset of the predictive factors has been used in modeling heart-lung transplantation outcomes. Previous studies have mainly focused on applying statistical techniques to a small set of factors selected by domain experts in order to reveal simple linear relationships between the factors and survival. The collection of methods known as 'data mining' offers significant advantages over conventional statistical techniques in dealing with the latter's limitations, such as the normality assumption of observations, independence of observations from each other, and linearity of the relationship between the observations and the output measure(s). There are statistical methods that overcome these limitations, yet they are computationally more expensive and do not provide the fast and flexible solutions that data mining techniques offer in large datasets. The main objective of this study is to improve the prediction of outcomes following combined heart-lung transplantation by proposing an integrated data-mining methodology. A large and feature-rich dataset (16,604 cases with 283 variables) is used to (1) develop machine-learning-based predictive models and (2) extract the most important predictive factors. Then, using three different variable selection methods, namely (i) machine-learning-driven variables (decision trees, neural networks, logistic regression), (ii) literature-review-based, expert-defined variables, and (iii) common-sense-based interaction variables, a consolidated set of factors is generated and used to develop Cox regression models for heart-lung graft survival. The predictive models' performance, in terms of 10-fold cross-validation accuracy rates for two multi-imputed datasets, ranged from 79% to 86% for neural networks, from 78% to 86% for logistic regression, and from 71% to 79% for decision trees. The results indicate that the proposed integrated data mining methodology using Cox hazard models predicted graft survival better, and with different variables, than the conventional approaches commonly used in the literature. This result is validated by comparison of the corresponding gains charts for the proposed methodology and the literature-based Cox results, and by comparison of the Akaike information criterion (AIC) values obtained from each. The data mining-based methodology proposed in this study reveals that there are undiscovered relationships (i.e., interactions of the existing variables) among the survival-related variables, which helps better predict the survival of heart-lung transplants. It also brings a different set of variables into the scene to be evaluated by domain experts and considered prior to organ transplantation.
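A minimal sketch of fitting a Cox proportional hazards model, using the lifelines package; the follow-up times, events, and predictors below are hypothetical.

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical graft-survival data: follow-up time (months), event flag, predictors
    df = pd.DataFrame({
        "time": [5, 12, 30, 7, 22, 15, 40, 3, 18, 25, 9, 33],
        "event": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
        "recipient_age": [54, 61, 38, 47, 66, 52, 29, 70, 58, 44, 63, 35],
        "ischemic_time": [4.0, 6.5, 3.2, 5.1, 7.0, 4.8, 2.9, 6.1, 5.5, 3.8, 6.8, 3.0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()   # hazard ratios for the selected variables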
Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.
2015-01-01
Back-calculation of length-at-age from otoliths and spines is a common technique in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared the precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, articulating process, and otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1-3 for the techniques compared; otoliths and basal recesses also yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.
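The Fraser-Lee method scales annulus radii by the body length vs. structure radius regression; a minimal sketch with illustrative values:

    def fraser_lee(L_c, S_c, S_i, c):
        """Back-calculated length at age i (Fraser-Lee method).

        L_c : body length at capture
        S_c : structure (otolith/spine) radius at capture
        S_i : radius to the i-th annulus
        c   : intercept of the body length vs. structure radius regression
        """
        return c + (L_c - c) * (S_i / S_c)

    # e.g. a 600-mm fish, otolith radius 2.0 mm, annulus at 0.9 mm, c = 35 mm
    length_age_i = fraser_lee(600, 2.0, 0.9, 35)   # about 289 mm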
Water Collection from Air Humidity in Bahrain
NASA Astrophysics Data System (ADS)
Dahman, Nidal A.; Al Juboori, Khalil J.; BuKamal, Eman A.; Ali, Fatima M.; AlSharooqi, Khadija K.; Al-Banna, Shaima A.
2017-11-01
The Kingdom of Bahrain falls geographically within one of the driest regions in the world. Conventional fresh surface water bodies, such as rivers and lakes, are nonexistent, and for water consumption Bahrain relies predominantly on the desalination of sea water. This paper presents an ongoing project pursued by a group of students and their advising professors to investigate the viability of extracting water from air humidity. Dehumidifiers have been utilized as water extraction devices. Those devices have been distributed across six areas that were selected based on rigorous geospatial modeling of historical meteorological data. The areas fall in residential and industrial neighborhoods located on the main island and the island of Muharraq. Water samples have been collected three times every week since May of 2016, and the collection process will continue until May of 2017. The collected water samples have been analyzed against numerous variables, individually and in combination, including: amount of water collected per hour versus geographical location, amount of water collected per hour versus meteorological factors, suitability of the collected water for potable human consumption, detection of air pollution in the areas of collection, and the economy of this method of water collection in comparison to other nonconventional methods. An overview of the completed analysis results is presented in this paper.
A Lean Six Sigma approach to the improvement of the selenium analysis method.
Cloete, Bronwyn C; Bester, André
2012-11-02
Reliable results represent the pinnacle of quality for an analytical laboratory, and variability is therefore considered a critical quality problem associated with the selenium analysis method executed at the Western Cape Provincial Veterinary Laboratory (WCPVL). The elimination and control of variability is of significant importance because of the narrow margin of safety between toxic and deficient doses of the trace element for good animal health. A quality methodology known as Lean Six Sigma was believed to present the most feasible solution for overcoming the adverse effect of variation, through steps towards analytical process improvement. Lean Six Sigma is a form of scientific method: empirical, inductive, deductive and systematic, relying on data and facts. The Lean Six Sigma methodology comprises five macro-phases, namely Define, Measure, Analyse, Improve and Control (DMAIC). Both qualitative and quantitative laboratory data were collected in terms of these phases. Qualitative data were collected by using quality tools, namely an Ishikawa diagram, a Pareto chart, Kaizen analysis and a Failure Mode Effect analysis tool. Quantitative laboratory data, based on the analytical chemistry test method, were collected through a controlled experiment. The controlled experiment entailed 13 replicated runs of the selenium test method, whereby 11 samples were repetitively analysed, whilst Certified Reference Material (CRM) was also included in 6 of the runs. Laboratory results obtained from the controlled experiment were analysed by using statistical methods commonly associated with quality validation of chemistry procedures. Analysis of both sets of data yielded an improved selenium analysis method, believed to provide greater reliability of results, in addition to a greatly reduced cycle time and superior control features. Lean Six Sigma may therefore be regarded as a valuable tool in any laboratory, representing both a management discipline and a standardised approach to problem solving and process optimisation.
Filter forensics: microbiota recovery from residential HVAC filters.
Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A
2018-01-30
Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. More broadly, this study provides a better understanding of the microbial community variability attributable to sampling methodology and helps inform interpretation of data collected from other types of dust samples collected from indoor environments.
Family History Collection Practices: National Survey of Pediatric Primary Care Providers.
Tarini, Beth A; Gornick, Michele C; Zikmund-Fisher, Brian J; Saal, Howard M; Edmondson, Laurie; Uhlmann, Wendy R
2018-05-01
While family history (FH) collection is a core responsibility of pediatric primary care providers (PCPs), few details about this practice are known. We surveyed a random national sample of 1200 pediatricians and family medicine physicians about FH collection practices. A total of 86% of respondents (n = 289 pediatricians; n = 152 family medicine physicians) indicated that they collect a FH "always" or "most of the time" with 77% reporting collection at the first visit, regardless of whether it is a health maintenance or problem-focused visit. Less than half ask about relatives other than parents, siblings, or grandparents (36.3%). Among respondents, 42% routinely update the FH at every health maintenance visit while 6% updated FH at every visit. Pediatric PCPs use a variety of methods to collect a FH that is limited in scope and variably updated. Our results suggest that interventions are needed to help pediatric PCPs collect a systematic, efficient, and updated FH.
Rodman, Ashley R; Scott, J Thad
2017-07-01
Periphyton is an important component of stream bioassessment, yet methods for quantifying periphyton biomass can differ substantially. A case study within the Arkansas Ozarks is presented to demonstrate the potential for linking chlorophyll-a (chl-a) and ash-free dry mass (AFDM) data sets amassed using two frequently used periphyton sampling protocols. Method A involved collecting periphyton from a known area on the top surface of variably sized rocks gathered from relatively swift-velocity riffles without discerning canopy cover. Method B involved collecting periphyton from the entire top surface of cobbles systematically gathered from riffle-run habitat where canopy cover was intentionally avoided. Chl-a and AFDM measurements were not different between methods (p = 0.123 and p = 0.550, respectively), and there was no interaction between method and time in the repeated measures structure of the study. However, significantly different seasonal distinctions were observed for chl-a and AFDM from all streams when data from the methods were combined (p < 0.001 and p = 0.012, respectively), with greater mean biomass in the cooler sampling months. Seasonal trends were likely the indirect results of varying temperatures. Although the size and range of this study were small, results suggest data sets collected using different methods may effectively be used together with some minor considerations due to potential confounding factors. This study provides motivation for the continued investigation of combining data sets derived from multiple methods of data collection, which could be useful in stream bioassessment and particularly important for the development of regional stream nutrient criteria for the southern Ozarks.
Storytelling: A Qualitative Tool to Promote Health Among Vulnerable Populations.
Palacios, Janelle F; Salem, Benissa; Hodge, Felicia Schanche; Albarrán, Cyndi R; Anaebere, Ann; Hayes-Bautista, Teodocia Maria
2015-09-01
Storytelling is a basic cultural phenomenon that has recently been recognized as a valuable method for collecting research data and developing multidisciplinary interventions. The purpose of this article is to present a collection of nursing scholarship wherein the concept of storytelling, underpinned by cultural phenomena, is explored for data collection and intervention. A conceptual analysis of storytelling reveals key variables. Following a brief review of current research focused on storytelling used within health care, three case studies among three vulnerable populations (American Indian teen mothers, American Indian cancer survivors, and African American women at risk for HIV/AIDS) demonstrate the uses of storytelling for data collection and intervention. Implications for transcultural nursing regarding storytelling are discussed. © The Author(s) 2014.
Multiple-locus variable-number tandem repeat analysis for molecular typing of Aspergillus fumigatus
2010-01-01
Background Multiple-locus variable-number tandem repeat (VNTR) analysis (MLVA) is a prominent subtyping method for resolving closely related microbial isolates, providing information for establishing genetic patterns among isolates and for investigating disease outbreaks. The usefulness of MLVA was recently demonstrated for the major avian pathogen Chlamydophila psittaci. In the present study, we developed a similar method for another pathogen of birds: the filamentous fungus Aspergillus fumigatus. Results We selected 10 VNTR markers located on 4 different chromosomes (1, 5, 6 and 8) of A. fumigatus. These markers were tested with 57 unrelated isolates from different hosts or their environment (53 isolates from avian species in France, China or Morocco, 3 isolates from humans collected at CHU Henri Mondor hospital in France and the reference strain CBS 144.89). The Simpson index for individual markers ranged from 0.5771 to 0.8530. A combined loci index calculated with all the markers yielded an index of 0.9994. In a second step, the panel of 10 markers was used in different epidemiological situations and tested on 277 isolates, including 62 isolates from birds in Guangxi province in China, 95 isolates collected in two duck farms in France and 120 environmental isolates from a turkey hatchery in France. A database was created with the results of the present study http://minisatellites.u-psud.fr/MLVAnet/. Three major clusters of isolates were defined by using the graphing algorithm termed Minimum Spanning Tree (MST). The first cluster comprised most of the avian isolates collected in the two duck farms in France, the second cluster comprised most of the avian isolates collected in poultry farms in China and the third one comprised most of the isolates collected in the turkey hatchery in France. Conclusions MLVA displayed excellent discriminatory power. The method showed good reproducibility. MST analysis revealed an interesting clustering with a clear separation between isolates according to their geographic origin rather than their respective hosts. PMID:21143842
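Discriminatory indices of this kind are typically computed with the Hunter-Gaston formulation of Simpson's index of diversity; a minimal sketch, with hypothetical type counts:

    def simpson_diversity(type_counts):
        # Hunter-Gaston formulation of Simpson's index of diversity,
        # commonly used to score typing methods such as MLVA
        N = sum(type_counts)
        return 1 - sum(n * (n - 1) for n in type_counts) / (N * (N - 1))

    # e.g. 57 isolates resolved into types of these sizes (illustrative)
    counts = [12, 9, 8, 7, 6, 5, 4, 3, 2, 1]
    D = simpson_diversity(counts)   # about 0.88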
Cao, Ying; Rajan, Suja S; Wei, Peng
2016-12-01
A Mendelian randomization (MR) analysis is performed to analyze the causal effect of an exposure variable on a disease outcome in observational studies, by using genetic variants that affect the disease outcome only through the exposure variable. This method has recently gained popularity among epidemiologists given the success of genetic association studies. Many exposure variables of interest in epidemiological studies are time varying, for example, body mass index (BMI). Although longitudinal data have been collected in many cohort studies, current MR studies only use one measurement of a time-varying exposure variable, which cannot adequately capture the long-term time-varying information. We propose using the functional principal component analysis method to recover the underlying individual trajectory of the time-varying exposure from the sparsely and irregularly observed longitudinal data, and then conduct MR analysis using the recovered curves. We further propose two MR analysis methods. The first assumes a cumulative effect of the time-varying exposure variable on the disease risk, while the second assumes a time-varying genetic effect and employs functional regression models. We focus on statistical testing for a causal effect. Our simulation studies mimicking the real data show that the proposed functional data analysis based methods incorporating longitudinal data have substantial power gains compared to standard MR analysis using only one measurement. We used the Framingham Heart Study data to demonstrate the promising performance of the new methods as well as inconsistent results produced by the standard MR analysis that relies on a single measurement of the exposure at some arbitrary time point. © 2016 WILEY PERIODICALS, INC.
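For contrast, the standard single-measurement MR analysis that the authors improve upon reduces to two-stage least squares with the variant as the instrument. A synthetic-data sketch follows; effect sizes are arbitrary, and note that the naive second-stage standard errors from this shortcut are not corrected.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 5000
    g = rng.binomial(2, 0.3, n)                   # genetic instrument (allele count)
    u = rng.normal(size=n)                        # unobserved confounder
    bmi = 0.4 * g + u + rng.normal(size=n)        # exposure at one time point
    outcome = 0.25 * bmi + u + rng.normal(size=n)

    # Stage 1: exposure on instrument; Stage 2: outcome on fitted exposure
    stage1 = sm.OLS(bmi, sm.add_constant(g)).fit()
    stage2 = sm.OLS(outcome, sm.add_constant(stage1.fittedvalues)).fit()
    print(stage2.params[1])                       # near 0.25, the causal effect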
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
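A minimal sketch of one cyclic-loess pass for a pair of measurement blocks: transform to difference (M) vs. average (A) coordinates, fit a loess trend, and remove it symmetrically. Implemented here with statsmodels lowess on synthetic log intensities; the smoothing fraction is an assumption.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def cyclic_loess_pair(x, y):
        # One cyclic-loess pass for two log-intensity vectors x and y
        M, A = x - y, 0.5 * (x + y)
        fit = lowess(M, A, frac=0.3, return_sorted=False)
        return x - fit / 2, y + fit / 2   # remove the trend symmetrically

    rng = np.random.default_rng(5)
    block1 = rng.normal(10, 1, size=1000)   # log feature intensities (synthetic)
    block2 = block1 + 0.4 + 0.05 * block1 + rng.normal(0, 0.1, 1000)  # drifted block
    b1n, b2n = cyclic_loess_pair(block1, block2)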
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.
Role Variables VS. Contextual Variables in the Theory of Didactic Systems
NASA Astrophysics Data System (ADS)
Alberti, Monica; Cirina, Lucia; Paoli, Francesco
Partisans of the constructivist approach to mathematics education, such as Brousseau or Chevallard, developed an accurate theoretical framework in which didactical systems are viewed in a systemic perspective. What they somewhat fail to draw, however, is a sharp distinction between role variables, concerning the roles played in the didactical interaction by the individual elements of the system (Student-Teacher-Knowledge), and contextual variables, concerning the action on the learning process of the system as a whole. Our research in progress on 2nd graders' word problem solving strategies applies this dichotomy to class management strategies adopted by teachers. Partial evidence collected so far points to the tentative conclusion that, contextual variables being equal, differences in teaching styles and methods may deeply reshape the role component of didactical systems. If we take this distinction carefully into account, we can shed additional light on some hitherto unexplained phenomena observed in the literature.
Yoshioka, Craig; Pulokas, James; Fellmann, Denis; Potter, Clinton S.; Milligan, Ronald A.; Carragher, Bridget
2007-01-01
Visualization by electron microscopy has provided many insights into the composition, quaternary structure, and mechanism of macromolecular assemblies. By preserving samples in stain or vitreous ice it is possible to image them as discrete particles, and from these images generate three-dimensional structures. This ‘single-particle’ approach suffers from two major shortcomings: it requires an initial model to reconstitute 2D data into a 3D volume, and it often fails when faced with conformational variability. Random conical tilt (RCT) and orthogonal tilt reconstruction (OTR) are methods developed to overcome these problems, but the data collection required, particularly for vitreous ice specimens, is difficult and tedious. In this paper we present an automated approach to RCT/OTR data collection that removes the burden of manual collection and offers higher quality and throughput than is otherwise possible. We show example datasets collected under stain and cryo conditions and provide statistics related to the efficiency and robustness of the process. Furthermore, we describe the new algorithms that make this method possible, which include new calibrations, improved targeting, and feature-based tracking. PMID:17524663
Survey Field Methods for Expanded Biospecimen and Biomeasure Collection in NSHAP Wave 2
Jaszczak, Angela; Hoffmann, Joscelyn N.; You, Hannah M.; Kern, David W.; Pagel, Kristina; McPhillips, Jane; Schumm, L. Philip; Dale, William; Huang, Elbert S.; McClintock, Martha K.
2014-01-01
Objectives. The National Social Life, Health, and Aging Project is a nationally representative, longitudinal survey of older adults. A main component is the collection of biomeasures to objectively assess physiological status relevant to psychosocial variables, aging conditions, and disease. Wave 2 added novel biomeasures, refined those collected in Wave 1, and provides a reference for the collection protocols and strategy common to the biomeasures. The effects of aging, gender, and their interaction are presented in the specific biomeasure papers included in this Special Issue. Method. A transdisciplinary working group expanded the biomeasures collected to include physiological, genetic, anthropometric, functional, neuropsychological, and sensory measures, yielding 37 more than in Wave 1. All were designed for collection in respondents’ homes by nonmedically trained field interviewers. Results. Both repeated and novel biomeasures were successful. Those in Wave 1 were refined to improve quality, and ensure consistency for longitudinal analysis. Four new biospecimens yielded 27 novel measures. During the interview, 19 biomeasures were recorded covering anthropometric, functional, neuropsychological, and sensory measures and actigraphy provided data on activity and sleep. Discussion. Improved field methods included in-home collection, temperature control, establishment of a central survey biomeasure laboratory, and shipping, all of which were crucial for successful collection by the field interviewers and accurate laboratory assay of the biomeasures (92.1% average co-operation rate and 97.3% average assay success rate). Developed for home interviews, these biomeasures are readily applicable to other surveys. PMID:25360025
Polycyclic Aromatic Hydrocarbons in Residential Dust: Sources of Variability
Metayer, Catherine; Petreas, Myrto; Does, Monique; Buffler, Patricia A.; Rappaport, Stephen M.
2013-01-01
Background: There is interest in using residential dust to estimate human exposure to environmental contaminants. Objectives: We aimed to characterize the sources of variability for polycyclic aromatic hydrocarbons (PAHs) in residential dust and provide guidance for investigators who plan to use residential dust to assess exposure to PAHs. Methods: We collected repeat dust samples from 293 households in the Northern California Childhood Leukemia Study during two sampling rounds (from 2001 through 2007 and during 2010) using household vacuum cleaners, and measured 12 PAHs using gas chromatography–mass spectrometry. We used a random- and a mixed-effects model for each PAH to apportion observed variance into four components and to identify sources of variability. Results: Median concentrations for individual PAHs ranged from 10 to 190 ng/g of dust. For each PAH, total variance was apportioned into regional variability (1–9%), intraregional between-household variability (24–48%), within-household variability over time (41–57%), and within-sample analytical variability (2–33%). Regional differences in PAH dust levels were associated with estimated ambient air concentrations of PAH. Intraregional differences between households were associated with the residential construction date and the smoking habits of residents. For some PAHs, a decreasing time trend explained a modest fraction of the within-household variability; however, most of the within-household variability was unaccounted for by our mixed-effects models. Within-household differences between sampling rounds were largest when the interval between dust sample collections was at least 6 years in duration. Conclusions: Our findings indicate that it may be feasible to use residential dust for retrospective assessment of PAH exposures in studies of health effects. PMID:23461863
Development of an electronic database for Acute Pain Service outcomes
Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH
2012-01-01
BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, the relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning and quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related; initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364
Anna, Bluszcz
Methods of measuring and assessing the level of sustainable development at the international, national, and regional levels are currently an open research problem that requires multi-dimensional analysis. The aims of the studies conducted in this article were a relative assessment of the sustainability level of the European Union member states and a comparative analysis of the position of Poland relative to other countries. EU member states were treated as objects in a multi-dimensional space. The dimensions of the space were specified by ten diagnostic variables describing the sustainability level of EU countries in three dimensions: social, economic, and environmental. Because the compiled statistical data were expressed in different units of measure, taxonomic methods were used to build an aggregated measure for assessing the level of sustainable development of EU member states; normalisation of the variables enabled the comparative analysis between countries. The methodology consisted of eight stages, including: defining the data matrices; calculating the coefficient of variation for all variables and discarding those with a coefficient under 10 %; dividing the variables into stimulants and destimulants; selecting the method of variable normalisation; developing matrices of normalised data; selecting the formula for, and calculating, the aggregated indicator of the relative level of sustainable development of the EU countries; calculating partial development indicators for the three studied dimensions (social, economic, and environmental); and classifying the EU countries according to their relative level of sustainable development. Statistical data were collected from publications of the Polish Central Statistical Office.
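As an illustration of the aggregation step described above, the following minimal Python sketch min-max normalises stimulant and destimulant variables to a common scale and averages them into a single synthetic indicator per country. The equal weighting, the normalisation formula, and all data values are assumptions for demonstration, not the exact procedure of the study.

```python
import numpy as np

# Sketch of the aggregation: min-max normalise each diagnostic variable,
# flip destimulants so that "higher is better" for every column, then
# average into one synthetic indicator per country (weights assumed equal).
rng = np.random.default_rng(8)
data = rng.uniform(1.0, 100.0, size=(28, 10))   # 28 countries x 10 variables
is_stimulant = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)

lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)                  # min-max normalisation
norm[:, ~is_stimulant] = 1.0 - norm[:, ~is_stimulant]

indicator = norm.mean(axis=1)                   # aggregated measure
print("top-3 countries (row indices):", np.argsort(indicator)[::-1][:3])
```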
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for the assessment of marine eutrophication, prediction of harmful algal blooms, and environmental protection. Previous studies have developed many numerical modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have often been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment: hierarchical cluster analysis based on the Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate validity, we also cluster the same data with cluster analysis based on the Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment when many water quality variables are correlated. To our knowledge, this is the first attempt to apply the Mahalanobis distance to coastal water quality assessment.
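For readers who want to experiment with the idea, the sketch below clusters a synthetic water-quality table hierarchically using Mahalanobis distances via SciPy. The variables, the covariance matrix, and the choice of average linkage are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for a station-by-variable water-quality table with
# correlated columns (e.g., DIN, phosphate, chlorophyll-a, DO).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(
    mean=[10.0, 1.0, 5.0, 8.0],
    cov=[[4.0, 1.5, 1.0, -0.5],
         [1.5, 1.0, 0.6, -0.2],
         [1.0, 0.6, 2.0, -0.4],
         [-0.5, -0.2, -0.4, 1.0]],
    size=50)

# Mahalanobis distance whitens by the inverse covariance, so correlated
# variables are not double-counted the way Euclidean distance does.
VI = np.linalg.inv(np.cov(X, rowvar=False))
D = pdist(X, metric="mahalanobis", VI=VI)
Z = linkage(D, method="average")
print(fcluster(Z, t=3, criterion="maxclust")[:10])   # e.g., 3 quality classes
```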
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed to un-nest the robustness-based design from the analysis of non-design epistemic variables and thereby achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is available only as sparse point and/or interval data. Because collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
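A toy illustration of a robustness-based formulation is sketched below: a design variable is chosen to minimise a weighted sum of the mean and standard deviation of a performance function evaluated over sparse point data. The performance function, the weights, and the sample values are invented for demonstration; the paper's TSTO problem is far richer.

```python
import numpy as np
from scipy.optimize import minimize

# Robustness objective: mean + 2*std of performance over the sparse sample,
# so the optimum trades nominal performance against sensitivity to the
# uncertain input (all numbers here are illustrative assumptions).
X = np.array([0.9, 1.1, 1.05, 0.95, 1.2])     # sparse observations of input

def robust_objective(d):
    perf = (d[0] - X) ** 2 + 0.5 * d[0]       # performance at each sample
    return perf.mean() + 2.0 * perf.std()

res = minimize(robust_objective, x0=[0.0], method="Nelder-Mead")
print(f"robust design value: d = {res.x[0]:.3f}")
```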
Top quark mass measurement using the template method at CDF
Aaltonen, T
2011-06-03
We present a measurement of the top quark mass in the lepton+jets and dilepton channels of $t\bar{t}$ decays using the template method. The data sample corresponds to an integrated luminosity of 5.6 fb$^{-1}$ of $p\bar{p}$ collisions at the Tevatron with $\sqrt{s} = 1.96$ TeV, collected with the CDF II detector. The measurement is performed by constructing templates of three kinematic variables in the lepton+jets channel and two kinematic variables in the dilepton channel. The variables are two reconstructed top quark masses from different jets-to-quarks combinations and the invariant mass of two jets from the W decay in the lepton+jets channel, and a reconstructed top quark mass and $m_{T2}$, a variable related to the transverse mass in events with two missing particles, in the dilepton channel. The simultaneous fit of the templates from signal and background events in the lepton+jets and dilepton channels to the data yields a measured top quark mass of $M_{\mathrm{top}} = 172.1 \pm 1.1\,\mathrm{(stat)} \pm 0.9\,\mathrm{(syst)}$ GeV/$c^2$.
Using Replicates in Information Retrieval Evaluation.
Voorhees, Ellen M; Samarov, Daniel; Soboroff, Ian
2017-09-01
This article explores a method for more accurately estimating the main effect of the system in a typical test-collection-based evaluation of information retrieval systems, thus increasing the sensitivity of system comparisons. Randomly partitioning the test document collection allows for multiple tests of a given system and topic (replicates). Bootstrap ANOVA can use these replicates to extract system-topic interactions-something not possible without replicates-yielding a more precise value for the system effect and a narrower confidence interval around that value. Experiments using multiple TREC collections demonstrate that removing the topic-system interactions substantially reduces the confidence intervals around the system effect as well as increases the number of significant pairwise differences found. Further, the method is robust against small changes in the number of partitions used, against variability in the documents that constitute the partitions, and the measure of effectiveness used to quantify system effectiveness.
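The mechanics of the replicate idea can be sketched compactly. The article itself uses a bootstrap ANOVA; the snippet below substitutes a classical two-way ANOVA from statsmodels on synthetic replicate scores, purely to show how replicates make the system-topic interaction estimable. Real replicate scores would come from evaluating each run against the qrels restricted to each random document partition.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
systems, topics, n_parts = ["sysA", "sysB", "sysC"], range(20), 4

# One replicate score per (system, topic, partition) cell; the synthetic
# system offsets stand in for genuine effectiveness differences.
rows = [{"system": s, "topic": t, "rep": p,
         "score": 0.4 + 0.05 * i + 0.1 * rng.standard_normal()}
        for i, s in enumerate(systems) for t in topics for p in range(n_parts)]
df = pd.DataFrame(rows)

# With replicates the system:topic interaction is separable from residual
# noise, narrowing the confidence interval on the system effect.
model = ols("score ~ C(system) * C(topic)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```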
Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M
2016-09-01
Mobile health research on illicit drug use detection typically involves a two-stage study design in which the data used to train detectors are first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.
NASA Astrophysics Data System (ADS)
Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin
2017-12-01
Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibrating and verifying hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for the regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, using three model evaluation measures: the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.
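The copula entropy route to mutual information can be sketched in a few lines: transform each margin to scaled ranks (the empirical copula) and estimate the entropy of the transformed sample, since MI equals the negative of the copula entropy. The k-nearest-neighbour (Kozachenko-Leonenko) estimator and the parameters below are standard choices assumed for illustration, not necessarily those of the study.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gamma
from scipy.stats import rankdata

def copula_entropy(x, k=5):
    """Kozachenko-Leonenko entropy of the empirical copula of x (n x d);
    the mutual information among the columns is minus this value."""
    n, d = x.shape
    u = np.column_stack([rankdata(x[:, j]) / (n + 1.0) for j in range(d)])
    eps = cKDTree(u).query(u, k=k + 1)[0][:, -1]   # k-th neighbour distance
    log_vd = (d / 2.0) * np.log(np.pi) - np.log(gamma(d / 2.0 + 1.0))
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps + 1e-12))

rng = np.random.default_rng(2)
z = rng.standard_normal(2000)
x = np.column_stack([z + 0.3 * rng.standard_normal(2000),
                     z + 0.3 * rng.standard_normal(2000)])
print(f"estimated MI: {-copula_entropy(x):.2f} nats")  # near 0.9 for this pair
```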
Finding structure in data using multivariate tree boosting
Miller, Patrick J.; Lubke, Gitta H.; McArtor, Daniel B.; Bergeman, C. S.
2016-01-01
Technology and collaboration enable dramatic increases in the size of psychological and psychiatric data collections, but finding structure in these large data sets with many collected variables is challenging. Decision tree ensembles such as random forests (Strobl, Malley, & Tutz, 2009) are a useful tool for finding structure, but they are difficult to interpret with the multiple outcome variables that are often of interest in psychology. To find and interpret structure in data sets with multiple outcomes and many predictors (possibly exceeding the sample size), we introduce a multivariate extension to a decision tree ensemble method called gradient boosted regression trees (Friedman, 2001). Our extension, multivariate tree boosting, is a method for nonparametric regression that is useful for identifying important predictors, detecting predictors with nonlinear effects and interactions without specification of such effects, and identifying predictors that cause two or more outcome variables to covary. We provide the R package ‘mvtboost’ to estimate, tune, and interpret the resulting model, which extends the implementation of univariate boosting in the R package ‘gbm’ (Ridgeway et al., 2015) to continuous, multivariate outcomes. To illustrate the approach, we analyze predictors of psychological well-being (Ryff & Keyes, 1995). Simulations verify that our approach identifies predictors with nonlinear effects and achieves high prediction accuracy, exceeding or matching the performance of (penalized) multivariate multiple regression and multivariate decision trees over a wide range of conditions. PMID:27918183
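The authors' mvtboost package is in R. A rough Python analogue of the core idea, fitting one gradient-boosted ensemble per outcome and intersecting feature importances to flag predictors that drive several outcomes at once, is sketched below; the data and hyperparameters are invented, and this is not the package's exact covariance-explained decomposition.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 10))               # 10 predictors
shared = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1])   # nonlinear shared signal
Y = np.column_stack([shared + rng.standard_normal(500),
                     shared + 0.5 * X[:, 2] + rng.standard_normal(500)])

# One boosted ensemble per outcome; predictors that are important for
# *every* outcome are candidates for inducing covariation between them.
imp = []
for j in range(Y.shape[1]):
    gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                    learning_rate=0.05, random_state=0)
    imp.append(gbm.fit(X, Y[:, j]).feature_importances_)
joint = np.min(np.array(imp), axis=0)            # min importance across outcomes
print("jointly influential predictors:", np.argsort(joint)[::-1][:3])
```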
Specification and Verification of Medical Monitoring System Using Petri-nets.
Majma, Negar; Babamir, Seyed Morteza
2014-07-01
To monitor patient behavior, data are collected from the patient's body by a medical monitoring device, and embedded software calculates the output. Incorrect calculations may endanger the patient's life if the software fails to meet the patient's requirements. Accordingly, the veracity of the software's behavior is a matter of concern in medicine; moreover, the data collected from the patient's body are fuzzy. Some methods have already dealt with monitoring medical monitoring devices; however, model-based monitoring of the fuzzy computations of such devices has been addressed less. The present paper aims to synthesize a fuzzy Petri-net (FPN) model to verify the behavior of a sample medical monitoring device for continuous insulin (INS) infusion, because the Petri-net (PN) is one of the formal and visual methods used to verify software behavior. The device is worn by diabetic patients, and the software calculates the INS dose and makes a decision for injection. The input and output of the infusion INS software are not crisp in the real world; therefore, we represent them as fuzzy variables. Accordingly, we use an FPN instead of a classical PN to model the fuzzy variables. The paper follows three steps to synthesize an FPN for verification of the infusion INS device: (1) definition of fuzzy variables, (2) definition of fuzzy rules, and (3) design of the FPN model to verify the software behavior.
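A fuzzy Petri-net firing step can be stated very compactly: places carry truth degrees rather than discrete tokens, and a transition propagates min(inputs) times a certainty factor. The rule, threshold, and marking values below are hypothetical, loosely modelled on an insulin-dose decision, and are not taken from the paper.

```python
# Places carry truth degrees in [0, 1] instead of discrete tokens; a
# transition fires when every input degree reaches its threshold and
# writes min(inputs) * certainty_factor to its output places.
def fire(marking, transitions):
    new = dict(marking)
    for t in transitions:
        vals = [marking[p] for p in t["inputs"]]
        if min(vals) >= t["threshold"]:
            out = min(vals) * t["cf"]
            for p in t["outputs"]:
                new[p] = max(new[p], out)      # keep the strongest support
    return new

# Hypothetical rule: IF glucose_high AND trend_rising THEN increase_dose.
marking = {"glucose_high": 0.8, "trend_rising": 0.7, "increase_dose": 0.0}
rules = [{"inputs": ["glucose_high", "trend_rising"],
          "outputs": ["increase_dose"], "threshold": 0.5, "cf": 0.9}]
print(fire(marking, rules))                    # increase_dose -> 0.63
```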
Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.
Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H
2016-06-01
Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time-control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for the interpretation of study results.
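Of the analysis-phase options listed, instrumental variable estimation lends itself to a compact demonstration. The two-stage least squares sketch below recovers an effect that naive regression gets wrong because of a simulated unmeasured confounder; all data are synthetic and the coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
U = rng.standard_normal(n)                   # unmeasured confounder
Z = rng.standard_normal(n)                   # instrument: affects X, not Y|X
X = 0.8 * Z + 0.6 * U + rng.standard_normal(n)
Y = 1.5 * X + 0.9 * U + rng.standard_normal(n)   # true causal effect: 1.5

ones = np.ones(n)
naive = np.linalg.lstsq(np.column_stack([ones, X]), Y, rcond=None)[0][1]

# Stage 1: regress X on Z; Stage 2: regress Y on the fitted values X-hat.
b1 = np.linalg.lstsq(np.column_stack([ones, Z]), X, rcond=None)[0]
xhat = np.column_stack([ones, Z]) @ b1
iv = np.linalg.lstsq(np.column_stack([ones, xhat]), Y, rcond=None)[0][1]
print(f"naive OLS: {naive:.2f} (biased)   2SLS: {iv:.2f} (close to 1.5)")
```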
Microscopic theory of nuclear fission: a review
NASA Astrophysics Data System (ADS)
Schunck, N.; Robledo, L. M.
2016-11-01
This article reviews how nuclear fission is described within nuclear density functional theory. A distinction should be made between spontaneous fission, where half-lives are the main observables and quantum tunnelling the essential concept, and induced fission, where the focus is on fragment properties and explicitly time-dependent approaches are often invoked. Overall, the cornerstone of the density functional theory approach to fission is the energy density functional formalism. The basic tenets of this method, including some well-known tools such as the Hartree-Fock-Bogoliubov (HFB) theory, effective two-body nuclear potentials such as the Skyrme and Gogny force, finite-temperature extensions and beyond mean-field corrections, are presented succinctly. The energy density functional approach is often combined with the hypothesis that the time-scale of the large amplitude collective motion driving the system to fission is slow compared to typical time-scales of nucleons inside the nucleus. In practice, this hypothesis of adiabaticity is implemented by introducing (a few) collective variables and mapping out the many-body Schrödinger equation into a collective Schrödinger-like equation for the nuclear wave-packet. The region of the collective space where the system transitions from one nucleus to two (or more) fragments defines what are called the scission configurations. The inertia tensor that enters the kinetic energy term of the collective Schrödinger-like equation is one of the most essential ingredients of the theory, since it includes the response of the system to small changes in the collective variables. For this reason, the two main approximations used to compute this inertia tensor, the adiabatic time-dependent HFB and the generator coordinate method, are presented in detail, both in their general formulation and in their most common approximations. The collective inertia tensor enters also the Wentzel-Kramers-Brillouin (WKB) formula used to extract spontaneous fission half-lives from multi-dimensional quantum tunnelling probabilities (For the sake of completeness, other approaches to tunnelling based on functional integrals are also briefly discussed, although there are very few applications.) It is also an important component of some of the time-dependent methods that have been used in fission studies. Concerning the latter, both the semi-classical approaches to time-dependent nuclear dynamics and more microscopic theories involving explicit quantum-many-body methods are presented. One of the hallmarks of the microscopic theory of fission is the tremendous amount of computing needed for practical applications. In particular, the successful implementation of the theories presented in this article requires a very precise numerical resolution of the HFB equations for large values of the collective variables. This aspect is often overlooked, and several sections are devoted to discussing the resolution of the HFB equations, especially in the context of very deformed nuclear shapes. In particular, the numerical precision and iterative methods employed to obtain the HFB solution are documented in detail. Finally, a selection of the most recent and representative results obtained for both spontaneous and induced fission is presented, with the goal of emphasizing the coherence of the microscopic approaches employed. Although impressive progress has been achieved over the last two decades to understand fission microscopically, much work remains to be done. 
Several possible lines of research are outlined in the conclusion.
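To make the WKB step above concrete, here is a toy one-dimensional version: the action integral over the classically forbidden region of an invented barrier and inertia profile, plugged into the transmission formula P = 1/(1 + exp(2S)). Everything numerical here is a made-up illustration of the mechanics, not a result from the review.

```python
import numpy as np

q = np.linspace(0.0, 3.0, 600)                # collective variable (toy)
V = 8.0 * np.exp(-((q - 1.5) / 0.6) ** 2)     # fission barrier, MeV (toy)
M = 50.0 * (1.0 + 0.2 * q)                    # collective inertia (toy units)
E0 = 1.0                                      # collective ground-state energy

# Action over the classically forbidden region, in units of hbar.
inside = V > E0
S = np.trapz(np.sqrt(2.0 * M[inside] * (V[inside] - E0)), q[inside])
P = 1.0 / (1.0 + np.exp(2.0 * S))             # WKB transmission probability
print(f"action S = {S:.1f} hbar, tunnelling probability = {P:.1e}")
```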
Hybrid performance measurement of a business process outsourcing - A Malaysian company perspective
NASA Astrophysics Data System (ADS)
Oluyinka, Oludapo Samson; Tamyez, Puteri Fadzline; Kie, Cheng Jack; Freida, Ayodele Ozavize
2017-05-01
Customers' perceived value of products and services is now greatly influenced by their psychological and social advantages. To cope with increasing operational costs and rising demands on response time, quality, and innovative capability, many companies have turned their fixed operational costs into variable costs through outsourcing. The researchers therefore explored different underlying outsourcing theories and inferred that these theories are essential to performance improvement. In this study, the researchers evaluate the performance of a business process outsourcing company using a combination of lean and agile methods. To test the hypotheses, we analyze the different sources of variability that a business process company faces and how lean and agile methods have been used in other industries to address such variability, and discuss the results using a predictive multiple regression analysis on data collected from companies in Malaysia. The findings reveal that while each method has its own advantages, a business process outsourcing company could achieve up to an 87% increase in performance by developing a strategy that focuses on an appropriate mixture of lean and agile improvement methods. Secondly, this study shows that performance indicators could be better evaluated with the non-metric variables of the agile method. Thirdly, this study also shows that business process outsourcing companies could perform better when they concentrate on strengthening the internal process integration of employees.
Free energy and hidden barriers of the β-sheet structure of prion protein.
Paz, S Alexis; Abrams, Cameron F
2015-10-13
On-the-fly free-energy parametrization is a new collective variable biasing approach akin to metadynamics with one important distinction: rather than acquiring an accelerated distribution via a history-dependent bias potential, sampling on this distribution is achieved from the beginning of the simulation using temperature-accelerated molecular dynamics. In the present work, we compare the performance of both approaches to compute the free-energy profile along a scalar collective variable measuring the H-bond registry of the β-sheet structure of the mouse Prion protein. Both methods agree on the location of the free-energy minimum, but free-energy profiles from well-tempered metadynamics are subject to a much higher degree of statistical noise due to hidden barriers. The sensitivity of metadynamics to hidden barriers is shown to be a consequence of the history dependence of the bias potential, and we detail the nature of these barriers for the prion β-sheet. In contrast, on-the-fly parametrization is much less sensitive to these barriers and thus displays improved convergence behavior relative to that of metadynamics. While hidden barriers are a frequent and central issue in free-energy methods, on-the-fly free-energy parametrization appears to be a robust and preferable method to confront this issue.
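The history-dependent bias at issue can be seen in miniature. The sketch below runs overdamped Langevin dynamics on a one-dimensional double well while periodically depositing Gaussian hills, which is the essence of (non-well-tempered) metadynamics; parameters are arbitrary toy values, not the prion beta-sheet collective variable.

```python
import numpy as np

rng = np.random.default_rng(5)
dF = lambda s: 4.0 * s * (s**2 - 1.0)          # gradient of F(s) = (s^2 - 1)^2

centers, w, sigma = [], 0.05, 0.1               # deposited Gaussian hills
def bias_grad(s):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return float(np.sum(-w * (s - c) / sigma**2
                        * np.exp(-0.5 * ((s - c) / sigma) ** 2)))

s, dt, beta = -1.0, 1e-3, 4.0                   # overdamped Langevin on the CV
for step in range(50_000):
    s += -(dF(s) + bias_grad(s)) * dt \
         + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
    if step % 200 == 0:
        centers.append(s)                       # history-dependent deposition

# Once the wells are filled, the summed hills approximate -F(s) + const.
c = np.asarray(centers)
V = lambda x: float(np.sum(w * np.exp(-0.5 * ((x - c) / sigma) ** 2)))
print([round(V(x), 2) for x in (-1.0, 0.0, 1.0)])
```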
NASA Astrophysics Data System (ADS)
Khare, A.; Kilbourne, K. H.; Schijf, J.
2017-12-01
Standard methods of reconstructing past sea surface temperatures (SSTs) with coral skeletal Sr/Ca ratios assume the seawater Sr/Ca ratio is constant. However, there are few data to support this assumption, in part because analytical techniques capable of determining seawater Sr/Ca with sufficient accuracy and precision are expensive and time-consuming. We demonstrate a method to measure seawater Sr/Ca using inductively coupled plasma atomic emission spectrometry in which we employ an intensity-ratio calibration routine that reduces the self-matrix effects of calcium and cancels out the matrix effects common to both calcium and strontium. A seawater standard solution cross-calibrated with multiple instruments is used to correct for long-term instrument drift and any remnant matrix effects. The resulting method produces accurate seawater Sr/Ca determinations rapidly, inexpensively, and with a precision better than 0.2%. This method will make it easier for coral paleoclimatologists to quantify potentially problematic fluctuations in seawater Sr/Ca at their study locations. We apply our method to test for variability in surface seawater Sr/Ca along the Florida Keys Reef Tract. We are collecting winter and summer samples for two years in a grid with eleven nearshore-to-offshore transects across the reef, as well as continuous samples collected by osmotic pumps at four locations adjacent to our grid. Our initial analysis of the grid samples indicates a trend of decreasing Sr/Ca values offshore, potentially due to a decreasing groundwater influence. The values differ by as much as 0.05 mmol/mol, which could lead to an error of 1°C in mean SST reconstructions. Future work involves continued sampling in the Florida Keys to test for seasonal and interannual variability in seawater Sr/Ca, as well as collecting data from small reefs in the Virgin Islands to test the stability of seawater Sr/Ca under different geologic, hydrologic, and hydrographic environments.
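An intensity-ratio calibration of the general kind described can be sketched as a linear regression of standards' measured intensity ratios against their known molar ratios, plus a drift correction from a monitor solution. All numbers below are invented for illustration and are not the authors' calibration values.

```python
import numpy as np

# Regress known Sr/Ca molar ratios of standards on their measured
# intensity ratios, then correct an unknown with a drift monitor.
known_ratio = np.array([7.0, 8.0, 9.0, 10.0])        # mmol/mol in standards
meas_int = np.array([0.141, 0.160, 0.181, 0.200])    # Sr/Ca intensity ratio
slope, intercept = np.polyfit(meas_int, known_ratio, 1)

drift_ref, drift_now = 0.170, 0.173                  # monitor at t0 and now
unknown_int = 0.185 * (drift_ref / drift_now)        # drift-corrected signal
print(f"Sr/Ca = {slope * unknown_int + intercept:.2f} mmol/mol")
```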
Using GPS, GIS, and Accelerometer Data to Predict Transportation Modes.
Brondeel, Ruben; Pannier, Bruno; Chaix, Basile
2015-12-01
Active transportation is a substantial source of physical activity, which has a positive influence on many health outcomes. Surveying transportation modes for each trip is challenging, time-consuming, and requires substantial financial investment. This study proposes a passive collection method and the prediction of modes at the trip level using random forests. The RECORD GPS study collected real-life trip data from 236 participants over 7 days, including the transportation mode, global positioning system, geographical information system, and accelerometer data. A prediction model of transportation modes was constructed using the random forests method. Finally, we investigated the performance of models based on a limited number of participants/trips in predicting transportation modes for a large number of trips. The full model had a correct prediction rate of 90%. A simpler model of global positioning system explanatory variables combined with geographical information system variables performed nearly as well. Relatively good predictions could be made using a model based on the 991 trips of the first 30 participants. This study uses real-life data from a large sample to test a method for predicting transportation modes at the trip level, thereby providing a useful complement to time-unit-level prediction methods. By enabling predictions on the basis of a limited number of observations, this method may decrease the workload for participants/researchers and provide relevant trip-level data to investigate relations between transportation and health.
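A minimal version of the trip-level prediction step, using scikit-learn's random forest on synthetic GPS, GIS, and accelerometer features, is shown below; the feature definitions and labels are stand-ins rather than the RECORD study variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 991                                      # trips of the first 30 participants
X = np.column_stack([
    rng.gamma(2.0, 6.0, n),                  # GPS: mean speed (km/h)
    rng.gamma(2.0, 12.0, n),                 # GPS: 95th-percentile speed
    rng.beta(2.0, 5.0, n),                   # accelerometer: active fraction
    rng.integers(0, 2, n).astype(float),     # GIS: starts near a transit stop
])
modes = np.array(["walk", "bike", "car", "transit"])
y = modes[(X[:, 0] // 12).clip(0, 3).astype(int)]  # toy speed-based labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```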
Fast exploration of an optimal path on the multidimensional free energy surface
Chen, Changjun
2017-01-01
In a reaction, determining an optimal path with a high reaction rate (or a low free energy barrier) is important for studying the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by interpolation and then update the whole path constantly during optimization. However, such an interpolation method can be risky in high-dimensional spaces for large molecules: on the path, steric clashes between neighboring atoms can cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
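A sketch of the growing strategy on a toy two-dimensional surface is given below: each growth step mixes the downhill free-energy gradient with the unit vector toward the product. The 0.3/0.7 mixing weight, the surface, and the endpoints are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

# Toy surface F(x, y) = (x^2 - 1)^2 + 2 y^2 with minima at (-1, 0), (+1, 0).
def grad_F(p):
    return np.array([4.0 * p[0] * (p[0] ** 2 - 1.0), 4.0 * p[1]])

reactant, product = np.array([-1.0, 0.4]), np.array([1.0, 0.0])
path, step, alpha = [reactant.copy()], 0.05, 0.3
p = reactant.copy()
for _ in range(200):
    g = -grad_F(p)
    g = g / (np.linalg.norm(g) + 1e-12)           # unit downhill direction
    t = (product - p) / np.linalg.norm(product - p)  # unit vector to product
    d = alpha * g + (1.0 - alpha) * t             # mixed growth direction
    p = p + step * d / np.linalg.norm(d)
    path.append(p.copy())
    if np.linalg.norm(p - product) < step:
        break
print(f"path grown with {len(path)} snapshots")
```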
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, where risk is heightened by the increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods for municipal wastewater, and (ii) two concentration methods for sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, the filtration method (Method C) consistently recovered higher concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a recovery rate 1-2 orders of magnitude higher than Method E (flotation). Based on our results, it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices.
Using Student Health Data to Understand and Promote Academic Success in Higher Education Settings
ERIC Educational Resources Information Center
Larson, Mary; Orr, Megan; Warne, Donald
2016-01-01
The Problem: Institutions of higher education are interested in students' academic success as measured by GPA. Health is related to GPA and many institutions collect health data; however, these data are underutilized. This study used several health-related variables to examine relationships between health and GPA. Method: This study utilized a…
Constituent loads in small streams: the process and problems of estimating sediment flux
R. B. Thomas
1989-01-01
Constituent loads in small streams are often estimated poorly. This is especially true for discharge-related constituents like sediment, since their flux is highly variable and mainly occurs during infrequent high-flow events. One reason for low-quality estimates is that most prevailing data collection methods ignore sampling probabilities and only partly account for...
ERIC Educational Resources Information Center
Richards, K. Andrew R.; Gaudreault, Karen Lux; Woods, Amelia Mays
2018-01-01
Purpose: This study sought to develop a quantitative understanding of factors that reduce perceived isolation and marginalization among physical educators. A conceptual model for the relationships among study variables was developed. Method: Data were collected through an online survey completed by 419 inservice physical educators (210 females,…
ERIC Educational Resources Information Center
Piontak, Joy Rayanne; Schulman, Michael D.
2016-01-01
Background: Schools are important sites for interventions to prevent childhood obesity. This study examines how variables measuring the socioeconomic and racial composition of schools and counties affect the likelihood of obesity among third to fifth grade children. Methods: Body mass index data were collected from third to fifth grade public…
School Influences on the Physical Activity of African American, Latino, and White Girls
ERIC Educational Resources Information Center
Duncan, Susan C.; Strycker, Lisa A.; Chaumeton, Nigel R.
2015-01-01
Background: The purpose of this research was to examine the impact of school-related variables on the physical activity (PA) levels of early adolescent African American, Latino, and White girls. Methods: Data were collected from 353 African American (N?=?123), Latino (N?=?118), and White (N?=?112) girls. Physical activity levels included a PA…
ERIC Educational Resources Information Center
Lint, Anna H.
2013-01-01
This quantitative study evaluated and investigated the theoretical underpinnings of the Kember's (1995) student progress model that examines the direct or indirect effects of student persistence in online education by identifying the relationships between variables. The primary method of data collection in this study was a survey by exploring the…
Evaluating kriging as a tool to improve moderate resolution maps of forest biomass
Elizabeth A. Freeman; Gretchen G. Moisen
2007-01-01
The USDA Forest Service, Forest Inventory and Analysis program (FIA) recently produced a nationwide map of forest biomass by modeling biomass collected on forest inventory plots as nonparametric functions of moderate resolution satellite data and other environmental variables using Cubist software. Efforts are underway to develop methods to enhance this initial map. We...
Johnson, M.J.; Pupacko, Alex
1992-01-01
Micrometeorological data were collected at Ash Meadows and Corn Creek Springs, Nye and Clark Counties, Nevada, from October 1, 1986 through September 30, 1987. The data include accumulated measurements recorded hourly or every 30 minutes, at each site, for the following climatic variables: air temperature, wind speed, relative humidity, precipitation, solar radiation, net radiation, and soil-heat flux. Periodic sampling of sensible-heat flux and latent-heat flux was also recorded using 5-minute intervals of accumulated data. Evapotranspiration was calculated by both the eddy-correlation method and the Penman combination method. The data collected and the computer programs used to process the data are available separately on three magnetic diskettes in card-image format. (USGS)
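The Penman combination method referenced above combines a radiative term and an aerodynamic term. A textbook-style sketch is given below, using a Shuttleworth-type wind function; the coefficients are standard open-water values assumed for illustration, not necessarily those used in this data report.

```python
import numpy as np

def penman_mm_per_day(T, Rn, u2, rh):
    """T air temp (degC), Rn net radiation (MJ m-2 d-1), u2 wind (m/s), rh (0-1)."""
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))   # sat. vapour pressure, kPa
    ea = rh * es                                     # actual vapour pressure
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of es curve, kPa/degC
    gamma, lam = 0.066, 2.45                         # psychrometric const.; MJ/kg
    radiative = (delta / (delta + gamma)) * Rn / lam
    aerodynamic = (gamma / (delta + gamma)) \
        * 6.43 * (1.0 + 0.536 * u2) * (es - ea) / lam
    return radiative + aerodynamic                   # mm of water per day

# Hot, dry, windy conditions typical of a Nevada summer day (toy values).
print(f"{penman_mm_per_day(T=25.0, Rn=14.0, u2=2.0, rh=0.30):.1f} mm/day")
```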
Ridyard, Colin H; Hughes, Dyfrig A
2010-12-01
The UK Health Technology Assessment (HTA) program funds trials that address issues of clinical and cost-effectiveness to meet the needs of the National Health Service (NHS). The objective of this review was to systematically assess the methods of resource use data collection and costing, and to produce a best practice guide for data capture within economic analyses alongside clinical trials. All 100 HTA-funded primary research papers published to June 2009 were reviewed for the health economic methods employed. Data were extracted and summarized by: health technology assessed, costing perspective adopted, evidence of planning and piloting, data collection method, frequency of data collection, and sources of unit cost data. Ninety-five studies were identified as having conducted an economic analysis, of which 85 recorded patient-level resource use. The review identified important differences in how data are collected. These included: a priori evidence of analysts having identified important cost drivers; the piloting and validation of patient-completed resource use questionnaires; the choice of costing perspective; and the frequency of data collection. Areas of commonality included the extensive use of routine medical records, reliance on patient recall, and the use of standard sources of unit costs. Economic data collection is variable, even among a homogeneous selection of trials designed to meet the needs of a common organization (the NHS). Areas for improvement have been identified, and based on our findings and related reviews and guidelines, a checklist is proposed for good practice relating to economic data collection within clinical trials.
Establishing an efficient way to utilize the drought resistance germplasm population in wheat.
Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin
2013-01-01
Drought resistance breeding provides a promising way to improve the yield and quality of wheat in arid and semiarid regions. Constructing a core collection is an efficient way to evaluate and utilize drought-resistant germplasm resources in wheat. In the present research, 1,683 wheat varieties were divided into five germplasm groups (high resistant, HR; resistant, R; moderate resistant, MR; susceptible, S; and high susceptible, HS). The least distance stepwise sampling (LDSS) method was adopted to select core accessions. Six commonly used genetic distances (Euclidean distance, Euclid; standardized Euclidean distance, Seuclid; Mahalanobis distance, Mahal; Manhattan distance, Manhat; cosine distance, Cosine; and correlation distance, Correlation) were used to assess genetic distances among accessions. The unweighted pair-group average (UPGMA) method was used to perform hierarchical cluster analysis. The coincidence rate of range (CR) and the variable rate of the coefficient of variation (VR) were adopted to evaluate the representativeness of the core collection. A method for selecting the ideal construction strategy was suggested, and a wheat core collection for drought resistance breeding programs was constructed using the strategy selected in the present research. Principal component analysis showed that genetic diversity was well preserved in the resulting core collection.
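A simplified reading of least distance stepwise sampling, repeatedly dropping one member of the closest remaining pair of accessions until the target core size is reached, together with the CR and VR evaluation statistics, might look like the sketch below. The trait data are synthetic, and the real method operates on hierarchical cluster levels rather than this flat pairwise loop.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(7)
X = rng.gamma(5.0, 2.0, size=(200, 6))        # 200 accessions x 6 traits

# Repeatedly locate the closest remaining pair and discard one member,
# so the retained core keeps the spread of the whole collection.
D = squareform(pdist(X, metric="euclidean"))
np.fill_diagonal(D, np.inf)
keep = list(range(len(X)))
while len(keep) > 40:                         # target: a 20% core
    sub = D[np.ix_(keep, keep)]
    i, j = np.unravel_index(np.argmin(sub), sub.shape)
    keep.pop(max(i, j))

core = X[keep]
CR = 100.0 * np.mean((core.max(0) - core.min(0)) / (X.max(0) - X.min(0)))
VR = 100.0 * np.mean((core.std(0) / core.mean(0)) / (X.std(0) / X.mean(0)))
print(f"CR = {CR:.1f}%, VR = {VR:.1f}%")
```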
A summary of methods for the collection and analysis of basic hydrologic data for arid regions
Rantz, S.E.; Eakin, T.E.
1971-01-01
This report summarizes and discusses current methods of collecting and analyzing the data required for a study of the basic hydrology of arid regions. The fundamental principles behind these methods are no different from those that apply to studies of humid regions, but in arid regions the infrequent occurrence of precipitation, the great variability of the many hydrologic elements, and the inaccessibility of most basins usually make it economically infeasible to use conventional levels of instrumentation. Because of these economic considerations, hydrologic studies in arid regions have commonly been of the reconnaissance type; the more costly detailed studies are generally restricted to experimental basins and to basins that now have major economic significance. A thorough search of the literature and personal communication with workers in the field of arid-land hydrology provided the basis for this summary of methods used in both reconnaissance and detailed hydrologic studies. The conclusions reached from a consideration of previously reported methods are interspersed throughout this report where appropriate.
NASA Astrophysics Data System (ADS)
Danczyk, Jennifer; Wollocko, Arthur; Farry, Michael; Voshell, Martin
2016-05-01
Data collection processes supporting Intelligence, Surveillance, and Reconnaissance (ISR) missions have recently undergone a technological transition driven by investment in sensor platforms. Various agencies have made these investments to increase the resolution, duration, and quality of data collection and to provide more relevant and recent data to warfighters. However, while sensor improvements have increased the volume of high-resolution data, they often fail to improve situational awareness and actionable intelligence for the warfighter, because the enterprise lacks efficient Processing, Exploitation, and Dissemination (PED) and filtering methods for mission-relevant information needs. The volume of collected ISR data often overwhelms manual and automated processes in modern analysis enterprises, resulting in underexploited data and insufficient or missing answers to information requests. The outcome is a significant breakdown in the analytical workflow. To cope with this data overload, many intelligence organizations have sought to re-organize their general staffing requirements and workflows to enhance team communication and coordination, with the hope of exploiting as much high-value data as possible and understanding the value of actionable intelligence well before its relevance has passed. In this effort we have taken a scholarly approach to the problem by studying the evolution of Processing, Exploitation, and Dissemination, with a specific focus on the Army's most recent evolutions, using the Functional Resonance Analysis Method. This method investigates socio-technical processes by analyzing their intended functions and aspects to determine performance variabilities. Gaps are identified, and recommendations about force structure and future R&D priorities to increase the throughput of the intelligence enterprise are discussed.
Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Marinica, Mihai-Cosmin
2018-03-01
The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when possible, the requirement of descriptors for each mechanism under study prevents implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method is demonstrated on a complex transformation of C15 interstitial defects in iron and double kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120,000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.
NASA Astrophysics Data System (ADS)
Giorgino, Toni
2018-07-01
The proper choice of collective variables (CVs) is central to biased-sampling free energy reconstruction methods in molecular dynamics simulations. The PLUMED 2 library, for instance, provides several sophisticated CV choices, implemented in a C++ framework; however, developing new CVs is still time-consuming due to the need to provide code for the analytical derivatives of all functions with respect to atomic coordinates. We present two solutions to this problem, namely (a) symbolic differentiation and code generation, and (b) automatic code differentiation, in both cases leveraging open-source libraries (SymPy and Stan Math, respectively). The two approaches are demonstrated and discussed in detail by implementing a realistic example CV, the local radius of curvature of a polymer. Users may use the code as a template to streamline the implementation of their own CVs using high-level constructs and automatic gradient computation.
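As a flavor of approach (a), the toy snippet below (not code from the paper) uses SymPy to differentiate a deliberately simple CV, the squared distance between two atoms, with respect to each coordinate and to emit C expressions that could be pasted into a PLUMED-style implementation; the paper's actual worked example, the local radius of curvature, is more involved.

```python
import sympy as sp

# Toy CV: squared distance between two atoms.
x1, y1, z1, x2, y2, z2 = sp.symbols("x1 y1 z1 x2 y2 z2")
cv = (x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2

coords = [x1, y1, z1, x2, y2, z2]
grads = [sp.diff(cv, c) for c in coords]  # exact analytical derivatives

# Code generation: one C assignment per derivative component.
for c, g in zip(coords, grads):
    print(sp.ccode(g, assign_to=f"dcv_d{c}"))
```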
Zhao, Xiaoyan; Qureshi, Ferhan; Eastman, P Scott; Manning, William C; Alexander, Claire; Robinson, William H; Hesterberg, Lyndal K
2012-04-30
Variability in pre-analytical blood sampling and handling can significantly impact the results obtained in quantitative immunoassays. Understanding the impact of these variables is critical for accurate quantification and validation of biomarker measurements. In particular, in the design and execution of large clinical trials, even small differences in sample processing and handling can have dramatic effects on analytical reliability, results interpretation, trial management, and outcome. The effects of two common blood sampling methods (serum vs. plasma) and two widely used serum handling methods (on the clot with ambient-temperature shipping, "traditional", vs. centrifuged with cold-chain shipping, "protocol") on protein and autoantibody concentrations were examined. Matched serum and plasma samples were collected from 32 rheumatoid arthritis (RA) patients representing a wide range of disease activity status. Additionally, a set of matched serum samples covering the two sample handling methods was collected. One tube was processed per the manufacturer's instructions and shipped overnight on cold packs (protocol). The matched tube, without prior centrifugation, was simultaneously shipped overnight at ambient temperature (traditional); upon delivery, the traditional tube was centrifuged. All samples were subsequently aliquoted and frozen prior to analysis of protein and autoantibody biomarkers. The median correlation between paired serum and plasma across all autoantibody assays was 0.99 (0.98-1.00), with a median % difference of -3.3 (-7.5 to 6.0). In contrast, observed protein biomarker concentrations were significantly affected by sample type, with a median correlation of 0.99 (0.33-1.00) and a median % difference of -10 (-55 to 23). When the two serum collection/handling methods were compared, the median correlation between paired samples for autoantibodies was 0.99 (0.91-1.00) with a median difference of 4%. In contrast, significant increases were observed in the concentrations of certain protein biomarkers in samples processed with the traditional method. Autoantibody quantification therefore appears robust to both sample type (plasma vs. serum) and pre-analytical sample collection/handling methods (protocol vs. traditional). In contrast, for non-antibody protein biomarker concentrations, sample type had a significant impact; plasma samples generally exhibit decreased protein biomarker concentrations relative to serum. Similarly, sample handling significantly impacted the variability of protein biomarker concentrations. When biomarker concentrations are combined algorithmically into a single test score, such as a multi-biomarker disease activity (MBDA) test for rheumatoid arthritis, changes in protein biomarker concentrations may bias the score. These results illustrate the importance of characterizing pre-analytical methodology, sample type, and sample processing and handling procedures for clinical testing in order to ensure test accuracy.
Sapkota, Lok Mani; Shrestha, Rajendra Prasad; Jourdain, Damien; Shivakoti, Ganesh P
2015-01-01
The attributes of social ecological systems affect the management of commons. Strengthening and enhancing social capital and the enforcement of rules and sanctions aid in the collective action of communities in forest fire management. Using a set of variables drawn from previous studies on the management of commons, we conducted a study across 20 community forest user groups in Central Siwalik, Nepal, by dividing the groups into two categories based on the type and level of their forest fire management response. Our study shows that the collective action in forest fire management is consistent with the collective actions in other community development activities. However, the effectiveness of collective action is primarily dependent on the complex interaction of various variables. We found that strong social capital, strong enforcement of rules and sanctions, and users' participation in crafting the rules were the major variables that strengthen collective action in forest fire management. Conversely, users' dependency on a daily wage and a lack of transparency were the variables that weaken collective action. In fire-prone forests such as the Siwalik, our results indicate that strengthening social capital and forming and enforcing forest fire management rules are important variables that encourage people to engage in collective action in fire management.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, and random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, the auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate, and that substantially smaller samples could be used, with sampling strategies based on an auxiliary variable derived from a gene-flow model.
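As an illustration of the reweighting idea, the sketch below (assumed data and variable names, not the paper's code) implements a simple ratio estimator: the observed transgene rate at sampled locations is rescaled by the gene-flow model's field-wide mean of the auxiliary variable.

```python
import numpy as np

def ratio_estimate(y_sample, x_sample, x_field_mean):
    """Ratio reweighting: scale the sample ratio y/x by the auxiliary
    variable's mean over the whole field (from the gene-flow model)."""
    return (y_sample.mean() / x_sample.mean()) * x_field_mean

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 0.002, size=400)       # model-predicted rates, whole field
y = rng.poisson(5000 * x) / 5000          # "observed" rates, correlated with x
sample = rng.choice(400, size=50, replace=False)
print(ratio_estimate(y[sample], x[sample], x.mean()))  # vs. plain y[sample].mean()
```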
[Measurement of Water COD Based on UV-Vis Spectroscopy Technology].
Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei
2016-01-01
Ultraviolet/visible (UV/Vis) spectroscopy technology was used to measure water COD. A total of 135 water samples were collected from Zhejiang province. Three different pretreatment methods (Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and first derivatives) applied to the raw spectra were compared to determine the optimal pretreatment for analysis. Spectral variable selection is an important strategy in spectrum modeling analysis, because it leads to a parsimonious data representation and can yield multivariate models with better performance. To simplify the calibration models, the preprocessed spectra were then used to select sensitive wavelengths with the competitive adaptive reweighted sampling (CARS), Random Frog, and Genetic Algorithm (GA) methods. Different numbers of sensitive wavelengths were selected by the different variable selection methods with the SNV preprocessing method. Partial least squares (PLS) was used to build models with the full spectra, and an Extreme Learning Machine (ELM) was applied to build models with the selected wavelength variables. The overall results showed that the ELM model performed better than the PLS model, and the ELM model with the wavelengths selected by CARS obtained the best results, with a determination coefficient (R2) of 0.82, an RMSEP of 14.48, and an RPD of 2.34 for the prediction set. The results indicated that UV/Vis spectroscopy with the characteristic wavelengths obtained by the CARS variable selection method, combined with ELM calibration, is feasible for the rapid and accurate determination of COD in aquaculture water. Moreover, this study lays the foundation for further implementation of online analysis of aquaculture water and rapid determination of other water quality parameters.
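The modeling step can be sketched with scikit-learn; since ELM is not a standard scikit-learn estimator, the snippet below substitutes a ridge regression on a hypothetical set of selected wavelength indices, purely to contrast a full-spectrum PLS model with a reduced-variable model (synthetic data, illustration only).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(135, 600))                # 135 samples, 600 wavelengths
y = X[:, 120] * 3 + X[:, 350] * 2 + rng.normal(scale=0.5, size=135)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=8).fit(Xtr, ytr)   # full-spectrum model
selected = [120, 350, 410]                           # stand-in for CARS picks
slim = Ridge().fit(Xtr[:, selected], ytr)            # reduced-variable model

print(pls.score(Xte, yte), slim.score(Xte[:, selected], yte))
```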
Charting molecular free-energy landscapes with an atlas of collective variables
NASA Astrophysics Data System (ADS)
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2016-11-01
Collective variables (CVs) are a fundamental tool to understand molecular flexibility, to compute free energy landscapes, and to enhance sampling in molecular dynamics simulations. However, identifying suitable CVs is challenging, and is increasingly addressed with systematic data-driven manifold learning techniques. Here, we provide a flexible framework to model molecular systems in terms of a collection of locally valid and partially overlapping CVs: an atlas of CVs. The specific motivation for such a framework is to enhance the applicability and robustness of CVs based on manifold learning methods, which fail in the presence of periodicities in the underlying conformational manifold. More generally, using an atlas of CVs rather than a single chart may help us better describe different regions of conformational space. We develop the statistical mechanics foundation for our multi-chart description and propose an algorithmic implementation. The resulting atlas of data-based CVs is then used to enhance sampling and compute free energy surfaces in two model systems, alanine dipeptide and β-D-glucopyranose, whose conformational manifolds have toroidal and spherical topologies.
Berrozpe, Pablo; Lamattina, Daniela; Santini, María Soledad; Araujo, Analía Vanesa; Utgés, María Eugenia; Salomón, Oscar Daniel
2017-10-01
Visceral leishmaniasis (VL) is an endemic disease in northeastern Argentina, including Corrientes province, where the presence of the vector and canine cases of VL were confirmed in December 2008. The objective of this study was to model micro- and macro-habitat variables in order to evaluate urban environmental suitability for the spatial distribution of Lutzomyia longipalpis presence and abundance. Sampling of 45 sites distributed throughout Corrientes city (Argentina) was carried out using REDILA-BL minilight traps in December 2013. The sampled specimens were identified according to methods described by Galati (2003). Variables derived from the processing of satellite images (macro-habitat variables) and from the entomological sampling and surveys (micro-habitat variables) were analyzed using the statistical software R. Three generalised linear models composed of micro- and macro-habitat variables were constructed to explain the spatial distribution of the abundance of Lu. longipalpis, and one composed of micro-habitat variables to explain the occurrence of the vector. A total of 609 phlebotomines belonging to five species were collected, of which 56% were Lu. longipalpis. In addition, the presence of Nyssomyia neivai and Migonemyia migonei, which are vectors of tegumentary leishmaniasis, was also documented; these species represented 34.81% and 6.74% of the collections, respectively. The explanatory variable normalised difference vegetation index (NDVI) described the abundance distribution, whereas the presence of farmyard animals was important for explaining both the abundance and the occurrence of the vector. The results contribute to the identification of variables that can be used to establish priority areas for entomological surveillance and provide an efficient transfer tool for the control and prevention of vector-borne diseases.
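The modelling step can be illustrated with a Poisson GLM in statsmodels (synthetic data; the covariates below, NDVI and farmyard-animal presence, follow the abstract, but the coefficients and the study's actual model specifications are assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
ndvi = rng.uniform(0.1, 0.7, size=45)            # 45 trap sites
farmyard = rng.integers(0, 2, size=45)           # farmyard animals present (0/1)
counts = rng.poisson(np.exp(0.5 + 2.0 * ndvi + 1.0 * farmyard))  # trap counts

X = sm.add_constant(np.column_stack([ndvi, farmyard]))
model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.summary())
```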
Covell, Christine L; Sidani, Souraya; Ritchie, Judith A
2012-06-01
The sequence used for collecting quantitative and qualitative data in concurrent mixed-methods research may influence participants' responses. Empirical evidence is needed to determine whether the order of data collection in concurrent mixed-methods research biases participants' responses to closed- and open-ended questions. The aim was to examine the influence of the quantitative-qualitative sequence on responses to closed- and open-ended questions when assessing the same variables or aspects of a phenomenon simultaneously within the same study phase. A descriptive cross-sectional, concurrent mixed-methods design was used to collect quantitative (survey) and qualitative (interview) data. The setting was a large multi-site health care centre in Canada. A convenience sample of 50 registered nurses was selected and participated in the study. Participants were randomly assigned to one of two sequences for data collection: quantitative-qualitative or qualitative-quantitative. Independent t-tests were performed to compare the two groups' responses to the survey items. Directed content analysis was used to compare the participants' responses to the interview questions. The sequence of data collection did not greatly affect the participants' responses to the closed-ended questions (survey items) or the open-ended questions (interview questions). The sequencing of data collection, when using both surveys and semi-structured interviews, may therefore not bias participants' responses to closed- or open-ended questions. Additional research is required to confirm these findings.
Odong, T L; Jansen, J; van Eeuwijk, F A; van Hintum, T J L
2013-02-01
Definition of clear criteria for evaluating the quality of core collections is a prerequisite for selecting high-quality cores. However, a critical examination of the different methods used in the literature for evaluating the quality of core collections shows that there are no clear guidelines on the choice of quality evaluation criteria; as a result, inappropriate analyses are sometimes made, leading to false conclusions about the quality of core collections and the methods used to select them. The choice of criteria for evaluating core collections appears to be based mainly on the fact that those criteria have been used in earlier publications, rather than on the actual objectives of the core collection. In this study, we provide insight into the different criteria used for evaluating core collections. We also discuss different types of core collections and relate each type to its respective evaluation criteria. Two new criteria based on genetic distance are introduced. The consequences of the different evaluation criteria are illustrated using simulated and experimental data. We strongly recommend the use of the distance-based criteria, since they not only allow the simultaneous evaluation of all variables describing the accessions, but also provide intuitive and interpretable criteria, as compared with the univariate criteria generally used for the evaluation of core collections. Our findings will provide genebank curators and researchers with possibilities to make informed choices when creating, comparing, and using core collections.
Evaluation of a standardized micro-vacuum sampling method for collection of surface dust.
Ashley, Kevin; Applegate, Gregory T; Wise, Tamara J; Fernback, Joseph E; Goldcamp, Michael J
2007-03-01
A standardized procedure for collecting dust samples from surfaces using a micro-vacuum sampling technique was evaluated. Experiments were carried out to investigate the collection efficiency of the vacuum sampling method described in ASTM Standard D7144, "Standard Practice for Collection of Surface Dust by Micro-Vacuum Sampling for Subsequent Metals Determination." Weighed masses (approximately 5, 10, and 25 mg) of three NIST Standard Reference Materials (SRMs) were spiked onto surfaces of various substrates. The SRMs used were: (1) Powdered Lead-Based Paint; (2) Urban Particulate Matter; and (3) Trace Elements in Indoor Dust. Twelve different substrate materials were chosen to be representative of surfaces commonly encountered in occupational and/or indoor settings: (1) wood, (2) tile, (3) linoleum, (4) vinyl, (5) industrial carpet, (6) plush carpet, (7, 8) concrete block (painted and unpainted), (9) car seat material, (10) denim, (11) steel, and (12) glass. Samples of SRMs spiked onto these surfaces were collected using the standardized micro-vacuum sampling procedure. Gravimetric analysis of material collected within preweighed Accucap inserts (housed within the samplers) was used to measure SRM recoveries. Recoveries ranged from 21.6% (+/-10.4%, 95% confidence limit [CL]) for SRM 1579 from industrial carpet to 59.2% (+/-11.0%, 95% CL) for SRM 1579 from glass. For most SRM/substrate combinations, recoveries ranged from approximately 25% to approximately 50%; variabilities differed appreciably. In general, SRM recoveries were higher from smooth, hard surfaces and lower from rough, porous surfaces. Material captured within the collection nozzles attached to the sampler inlets was also weighed, and a significant fraction of the SRM originally spiked onto the substrate surfaces was captured there. Percentages of SRMs captured within collection nozzles ranged from approximately 13% (+/-4 to +/-5%, 95% CLs) for SRMs 1579 and 2583 from industrial carpet to approximately 45% (+/-7 to +/-26%, 95% CLs) for SRM 1648 from glass, tile, and steel. For some substrates, loose material from the substrate itself (i.e., substrate particles and fibers) was collected along with the SRM, both within the Accucap inserts and within the collection nozzles. Co-collection of substrate material can bias results and contribute to sampling variability. The results of this work provide performance data on the standardized micro-vacuum sampling procedure.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
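Element (4) is, at its core, the propagation of primitive-variable uncertainties to a failure probability; a generic Monte Carlo sketch (not NASA's probabilistic codes, and with assumed distributions for the primitive variables) looks like this:

```python
# Estimate a failure probability by propagating assumed uncertainties in
# primitive variables (load L, strength S) to the limit state g = S - L < 0.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
load = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=n)   # assumed load
strength = rng.normal(loc=160.0, scale=12.0, size=n)           # assumed strength

p_fail = np.mean(strength - load < 0.0)   # empirical CDF of g evaluated at 0
print(f"estimated failure probability: {p_fail:.2e}")
```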
Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.
2009-01-01
We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.
Analysis of decentralized variable structure control for collective search by mobile robots
NASA Astrophysics Data System (ADS)
Goldsmith, Steven Y.; Feddema, John T.; Robinett, Rush D., III
1998-10-01
This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advanced information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces which guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of the equilibrium point and the global stability of the alpha-beta algorithm based on Lyapunov's method.
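The role-selection rule can be made concrete with a toy sketch (one simple threshold variant, assumed here rather than taken from the paper): each robot compares its status, some function of its sensor readings, against its teammates' statuses and self-selects alpha or beta.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    status: float  # e.g., measured source intensity at current location

def assign_roles(team, quantile=0.5):
    """Robots at or above the team's status quantile act as alpha (explore);
    the rest act as beta (follow reported high-value regions)."""
    ranked = sorted(team, key=lambda r: r.status)
    cut = ranked[int(len(ranked) * quantile)].status
    return {r.name: ("alpha" if r.status >= cut else "beta") for r in team}

team = [Robot("r1", 0.9), Robot("r2", 0.2), Robot("r3", 0.6), Robot("r4", 0.4)]
print(assign_roles(team))  # roles are re-evaluated as statuses change
```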
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken on the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method, and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. They are therefore less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors still needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that, for genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR, and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
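Of the reviewed approaches, the random forests screen is the easiest to sketch; the snippet below plants a two-SNP interaction in synthetic genotypes and ranks variables by impurity-based importance (illustrative only; MDR and the set association approach require dedicated implementations).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(500, 200))            # 500 subjects, 200 SNPs (0/1/2)
y = ((X[:, 10] > 0) & (X[:, 42] > 1)).astype(int)  # planted epistatic interaction

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-ranked SNP indices:", top)              # 10 and 42 should rank high
```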
Zounemat-Kermani, Mohammad; Ramezani-Charmahineh, Abdollah; Adamowski, Jan; Kisi, Ozgur
2018-06-13
Chlorination, the basic treatment used for drinking water sources, is widely applied for water disinfection and pathogen elimination in water distribution networks. Proper prediction of chlorine consumption is therefore of great importance for water distribution network performance. In this respect, data mining techniques, which can discover the relationships between dependent and independent variables, can be considered as alternatives to conventional approaches (e.g., numerical methods). This study examines the applicability of three key methods, based on the data mining approach, for predicting chlorine levels in four water distribution networks. ANNs (artificial neural networks, including the multi-layer perceptron neural network, MLPNN, and the radial basis function neural network, RBFNN), SVM (support vector machine), and CART (classification and regression tree) methods were used to estimate the concentration of residual chlorine in the distribution networks of three villages in Kerman Province, Iran. Data on produced water (flow), chlorine consumption, and residual chlorine were collected daily for three years. An assessment of the studied models using several statistical criteria (NSC, RMSE, R2, and SEP) indicated that, in general, MLPNN has the greatest capability for predicting chlorine levels, followed by CART, SVM, and RBFNN. The weaker performance of the data-driven methods in some of the water distribution networks could be attributed to improper chlorination management rather than to the methods' capability.
Crowdsourcing Language Change with Smartphone Applications
Leemann, Adrian; Kolly, Marie-José; Purves, Ross; Britain, David; Glaser, Elvira
2016-01-01
Crowdsourcing linguistic phenomena with smartphone applications is relatively new. In linguistics, apps have predominantly been developed to create pronunciation dictionaries, to train acoustic models, and to archive endangered languages. This paper presents the first account of how apps can be used to collect data suitable for documenting language change: we created an app, Dialäkt Äpp (DÄ), which predicts users' dialects. For 16 linguistic variables, users select a dialectal variant from a drop-down menu. DÄ then geographically locates the user's dialect by suggesting a list of communes where dialect variants most similar to their choices are used. Underlying this prediction are 16 maps from the historical Linguistic Atlas of German-speaking Switzerland, which documents the linguistic situation around 1950. Where users disagree with the prediction, they can indicate what they consider to be their dialect's location. With this information, the 16 variables can be assessed for language change. Thanks to the playfulness of its functionality, DÄ has reached many users; our linguistic analyses are based on data from nearly 60,000 speakers. Results reveal relative stability for phonetic variables, while lexical and morphological variables seem more prone to change. Crowdsourcing large amounts of dialect data with smartphone apps has the potential to complement existing data collection techniques and to provide evidence that traditional methods cannot, with normal resources, hope to gather. Nonetheless, it is important to emphasize a range of methodological caveats, including sparse knowledge of users' linguistic backgrounds (users indicate only age and sex) and users' self-declaration of their dialect. These are discussed and evaluated in detail here. The findings remain intriguing nevertheless: as a means of quality control, we report that traditional dialectological methods have revealed trends similar to those found by the app. This underlines the validity of the crowdsourcing method. We are presently extending the DÄ architecture to other languages.
The Impact of Soil Sampling Errors on Variable Rate Fertilization
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Hoskinson; R C. Rope; L G. Blackwood
2004-07-01
Variable rate fertilization of an agricultural field is done taking into account spatial variability in the soil's characteristics. Most often, spatial variability in the soil's fertility is the primary characteristic used to determine the differences in fertilizers applied from one point to the next. For several years the Idaho National Engineering and Environmental Laboratory (INEEL) has been developing a Decision Support System for Agriculture (DSS4Ag) to determine the economically optimum recipe of various fertilizers to apply at each site in a field, based on existing soil fertility at the site, the predicted yield of the crop that would result (and a predicted harvest-time market price), and the current costs and compositions of the fertilizers to be applied. Typically, soil is sampled at selected points within a field, the soil samples are analyzed in a lab, and the lab-measured soil fertility of the point samples is used for spatial interpolation, in some statistical manner, to determine the soil fertility at all other points in the field. Then a decision tool determines the fertilizers to apply at each point. Our research was conducted to measure the impact on the variable rate fertilization recipe caused by variability in the measurement of the soil's fertility at the sampling points. The variability could be laboratory analytical errors or errors from variation in the sample collection method. The results show that for many of the fertility parameters, laboratory measurement error variance exceeds the estimated variability of the fertility measure across grid locations. These errors resulted in DSS4Ag-recommended fertilizer application rates that differed by up to 138 pounds of urea per acre, with half the field differing by more than 57 pounds of urea per acre. For potash the difference in application rate was up to 895 pounds per acre, and over half the field differed by more than 242 pounds of potash per acre. Urea and potash differences accounted for almost 87% of the cost difference. The sum of these differences could result in a $34 per acre cost difference for the fertilization. Because of these differences, better analysis or better sampling methods may be needed, or more samples collected, to ensure that the soil measurements are truly representative of the field's spatial variability.
Huibers, Linda; Christensen, Bo; Christensen, Morten Bondo
2018-01-01
Background Paper questionnaires have traditionally been the first choice for data collection in research. However, declining response rates over the past decade have increased the risk of selection bias in cross-sectional studies. The growing use of the Internet offers new ways of collecting data, but trials using Web-based questionnaires have so far seen mixed results. A secure, online digital mailbox (e-Boks) linked to a civil registration number became mandatory for all Danish citizens in 2014 (exemption granted only in extraordinary cases). Approximately 89% of the Danish population have a digital mailbox, which is used for correspondence with public authorities. Objective We aimed to compare response rates, completeness of data, and financial costs for different invitation methods: traditional surface mail and digital mail. Methods We designed a cross-sectional comparative study. An invitation to participate in a survey on help-seeking behavior in out-of-hours care was sent to two groups of randomly selected citizens from age groups 30-39 and 50-59 years and parents to those aged 0-4 years, using either traditional surface mail (paper group) or digital mail sent to a secure online mailbox (digital group). Costs per respondent were measured by adding up all costs for handling, dispatch, printing, and work salary and then dividing the total figure by the number of respondents. Data completeness was assessed by comparing the number of missing values between the two methods. Socioeconomic variables (age, gender, family income, education duration, immigrant status, and job status) were compared both between respondents and nonrespondents and within these groups to evaluate the degree of selection bias. Results A total of 3600 citizens were invited in each group; 1303 (36.29%) responded to the digital invitation and 1653 (45.99%) to the paper invitation (difference 9.66%, 95% CI 7.40-11.92). The costs were €1.51 per respondent for the digital group and €15.67 for paper group respondents. Paper questionnaires generally had more missing values; this was significant in five of 17 variables (P<.05). Substantial differences were found in the socioeconomic variables between respondents and nonrespondents, whereas only minor differences were seen within the groups of respondents and nonrespondents. Conclusions Although we found lower response rates for Web-based invitations, this solution was more cost-effective (by a factor of 10) and had slightly lower numbers of missing values than questionnaires sent with paper invitations. Analyses of socioeconomic variables showed almost no difference between nonrespondents in both groups, which could imply that the lower response rate in the digital group does not necessarily increase the level of selection bias. Invitations to questionnaire studies via digital mail may be an excellent option for collecting research data in the future. This study may serve as the foundational pillar of digital data collection in health care research in Scandinavia and other countries considering implementing similar systems.
NASA Astrophysics Data System (ADS)
Harvey, J.; Fisher, J. L.; Johnson, S.; Morgan, S.; Peterson, W. T.; Satterthwaite, E. V.; Vrijenhoek, R. C.
2016-02-01
Our ability to accurately characterize the diversity of planktonic organisms is affected by both the methods we use to collect water samples and our approaches to assessing sample contents. Plankton nets collect organisms from high volumes of water, but integrate sample contents along the net's path. In contrast, plankton pumps collect water from discrete depths. Autonomous underwater vehicles (AUVs) can collect water samples with pinpoint accuracy from physical features such as upwelling fronts or biological features such as phytoplankton blooms, but sample volumes are necessarily much smaller than those possible with nets. Characterization of plankton diversity and abundances in water samples may also vary with the assessment method we apply. Morphological taxonomy provides visual identification and enumeration of organisms via microscopy, but is labor intensive. Next generation DNA sequencing (NGS) shows great promise for assessing plankton diversity in water samples, but accurate assessment of relative abundances may not be possible in all cases. Comparison of morphological taxonomy to molecular approaches is necessary to identify areas of overlap and areas of disagreement between these methods. We have compared morphological taxonomic assessments to mitochondrial COI and nuclear 28S ribosomal RNA NGS results for plankton net samples collected in Monterey Bay, California. We have made a similar comparison for plankton pump samples, and have also applied our NGS methods to targeted, small-volume water samples collected by an AUV. Our goal is to communicate current results and lessons learned regarding the application of traditional taxonomy and novel molecular approaches to the study of plankton diversity in spatially and temporally variable coastal marine environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steger, J.L.; Bursey, J.T.; Merrill, R.G.
1999-03-01
This report presents the results of laboratory studies to develop and evaluate a method for the sampling and analysis of phosgene from stationary sources of air emissions, using diethylamine (DEA) in toluene as the collection medium. The method extracts stack gas from emission sources and stabilizes the reactive gas for subsequent analysis. DEA was evaluated both in a benchtop study and in a laboratory train spiking study, and this report includes results for both. Benchtop studies to evaluate the suitability of DEA for collecting and analyzing phosgene investigated five variables: storage time, DEA concentration, moisture/pH, phosgene concentration, and sample storage temperature. Prototype sampling train studies were performed to determine whether the benchtop chemical studies were transferable to a Modified Method 5 sampling train collecting phosgene in the presence of clean air mixed with typical stack gas components. Four conditions, which varied the moisture and phosgene spike levels, were evaluated in triplicate. In addition to the research results, the report includes a detailed draft method for the sampling and analysis of phosgene from stationary source emissions.
ERIC Educational Resources Information Center
Hartono, Edy; Wahyudi, Sugeng; Harahap, Pahlawansjah; Yuniawan, Ahyar
2017-01-01
This study aims to analyze the relationship between lecturers' performance and their teaching competence, measured by the antecedent variables of organizational learning and need for achievement. It used the Structural Equation Model as the data analysis technique and the random sampling method to collect data from 207 lecturers of private universities in…
Efficient Energy-Storage Concept
NASA Technical Reports Server (NTRS)
Brantley, L. W. J.; Rupp, C.
1982-01-01
A space-platform energy-storage and attitude-stabilization system utilizes the variable moment of inertia of two masses attached to the ends of a retractable cable. The system would be brought to its initial operating speed by gravity-gradient pumping. When fully developed, the concept could be part of an orbiting solar-energy collection system: energy would be temporarily stored in the system and then transmitted to Earth by microwaves or another method.
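The storage principle can be made concrete with the standard spinning-mass relation (a textbook argument, not taken from the report): at fixed angular momentum, reeling the tip masses inward lowers the moment of inertia and raises the stored rotational energy, with the difference supplied by winch work.

```latex
% Two masses m at radius r on a tether: I = 2 m r^2.
% At constant angular momentum L = I\omega, the stored energy is
\[
  E_{\mathrm{rot}} = \tfrac{1}{2} I \omega^{2} = \frac{L^{2}}{2I},
  \qquad I = 2 m r^{2},
\]
% so halving r quarters I and quadruples E_rot: energy is banked by
% retracting the cable and recovered by paying it back out.
```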
The Examining Reading Motivation of Primary Students in the Terms of Some Variables
ERIC Educational Resources Information Center
Biyik, Merve Atas; Erdogan, Tolga; Yildiz, Mustafa
2017-01-01
The purpose of this research is to examine the reading motivation of primary school students in grades 2, 3, and 4 in terms of gender, grade, and socioeconomic status. The research is structured according to the descriptive survey model, and a mixed method was used in the collection, analysis, and interpretation of the data. The sample consists of…
Variability of Hormonal Stress Markers Collected from a Managed Dolphin Population
2011-09-30
…physiological indicators of stress in wild marine mammals and the interrelationships between different stress markers can be used to estimate the impact of… Samples will be processed for adrenocorticosteroids (ACTH, cortisol, aldosterone), catecholamines (epinephrine, norepinephrine), and thyroid hormones (T3 and T4) via radioimmunoassay (RIA). Radioimmunoassay methods have previously been validated for cortisol and aldosterone in this species (Houser…
Eric D. Boyda
2013-01-01
The high costs associated with physically harvesting plant biomass may prevent sufficient data collection, which is necessary to account for the natural variability of vegetation at a landscape scale. A biomass estimation technique was previously developed using representative samples or "reference units", which eliminated the need to harvest biomass from all...
The Analysis on Sport Attitudes of Students at High School Education in Turkey
ERIC Educational Resources Information Center
Atalay, Ahmet
2016-01-01
The research objective is to determine the effect of different variables on the sport attitudes of 1st-, 2nd-, 3rd-, and 4th-grade high school students throughout Turkey. Data were collected using a face-to-face survey method with students studying in 21 provinces within seven different geographical regions of Turkey. 5,862 randomly selected students throughout…
Mahdavi, A; Aghaei, M; Besharat, M A; Khaki Seddigh, F; Akbari, S H; Hamidifar, Z
2015-01-01
This research was conducted to compare the depression level and the identity styles of students at Allameh University and the Islamic Seminary in Tehran. The research method was of the ex post facto (causal-comparative) kind. All students of Allameh University and the Islamic Seminary were taken as the research population, and a convenience sample of 100 male students was chosen from it (50 from each institution). The Identity Styles Inventory (ISI-6G) and the Beck Depression Inventory (21 items) were then employed to collect the data, which were analyzed using ANOVA and systematic regression. The findings indicated that the average values of the normative component (p-value = 0.03) and the depression level (p-value = 0.000) were higher for the seminary students than for the Allameh students. Among the various identity styles, the commitment style predicted 16% of the changes in the depression variable for the Allameh students, while the information and normative styles together predicted 19% of the changes in the depression variable for the seminary students.
Shahpouri, Samira; Namdari, Kourosh; Abedi, Ahmad
2016-05-01
One of the latest models of work engagement is the detailed model put forward by Bakker and Demerouti (2007). The present study investigates the effect of job resources and personal resources on turnover intention, with work engagement in the mediating role, among female nurses at Isfahan Alzahra Hospital. Job and personal resources were considered as the predictors of turnover intention, and work engagement was considered as the mediator variable between the predictive and criterion variables. The data were collected from 208 female nurses who were selected by systematic random sampling. For the analysis of the collected data, a structural equation model, the normal distribution method, and the bootstrap method in the Preacher and Hayes (2004) macro program were deployed. The findings showed that personal resources affect turnover intention both directly and indirectly (through work engagement), whereas job resources are associated with turnover intention only through the mediating role of work engagement. The results have important implications for managers seeking to improve work engagement in their organizations.
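The mediation test can be sketched with a plain bootstrap of the indirect effect a*b, in the spirit of the Preacher and Hayes macro (synthetic data and simplified single-mediator paths; the study's actual model has more variables):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 208
X = rng.normal(size=n)              # stand-in: personal resources
M = 0.5 * X + rng.normal(size=n)    # stand-in: work engagement
Y = -0.4 * M + rng.normal(size=n)   # stand-in: turnover intention

def indirect_effect(X, M, Y):
    a = np.polyfit(X, M, 1)[0]                    # X -> M path
    Z = np.column_stack([np.ones_like(X), M, X])  # Y ~ 1 + M + X
    b = np.linalg.lstsq(Z, Y, rcond=None)[0][1]   # M -> Y path, X held fixed
    return a * b

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                     # resample with replacement
    boot.append(indirect_effect(X[i], M[i], Y[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```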
Lohmander, Anette; Olsson, Maria
2004-01-01
This review of 88 articles in three international journals was undertaken to investigate the methodology used for perceptual speech assessment in patients with cleft palate. The articles were published between 1980 and 2000 in the Cleft Palate-Craniofacial Journal, the International Journal of Language and Communication Disorders, and Folia Phoniatrica et Logopaedica. The majority of the articles (76) were published in the Cleft Palate-Craniofacial Journal, with an increase in articles during the 1990s and 2000. Information about measures or variables was clearly given in all articles. However, the review raises several major concerns regarding the methods for data collection, documentation, and measurement. The most distressing findings were the use of a cross-sectional design in studies of few patients with large age ranges and different cleft types, the use of highly variable speech samples, and the lack of information about listeners and reliability. It is hoped that ongoing national and international collaborative efforts to standardize procedures for the collection and analysis of perceptual data will help to eliminate such concerns and thus make comparison of published results possible in the future.
Handheld computers for self-administered sensitive data collection: A comparative study in Peru
Bernabe-Ortiz, Antonio; Curioso, Walter H; Gonzales, Marco A; Evangelista, Wilfredo; Castagnetto, Jesus M; Carcamo, Cesar P; Hughes, James P; Garcia, Patricia J; Garnett, Geoffrey P; Holmes, King K
2008-01-01
Background Low-cost handheld computers (PDAs) potentially represent an efficient tool for collecting sensitive data in surveys. The goal of this study was to evaluate the quality of sexual behavior data collected with handheld computers in comparison with paper-based questionnaires. Methods A PDA-based program for data collection was developed using open-source tools. In two cross-sectional studies, we compared data concerning sexual behavior collected with paper forms to data collected with PDA-based forms in Ancon (Lima). Results The first study enrolled 200 participants (18–29 years). General agreement between data collected in paper format and with handheld computers was 86%. Agreement for categorical variables was between 70.5% and 98.5% (Kappa: 0.43–0.86), while agreement for numeric variables was between 57.1% and 79.8% (Spearman: 0.76–0.95). Agreement and correlation were higher in those who had completed at least high school than in those with less education. The second study enrolled 198 participants. Rates of response to sensitive questions were similar between the two kinds of questionnaires. However, the numbers of inconsistencies (p = 0.0001) and missing values (p = 0.001) were significantly higher in the paper questionnaires. Conclusion This study showed the value of handheld computers for collecting sensitive data, since a high level of agreement between paper and PDA responses was reached. In addition, fewer inconsistencies and missing values were found with the PDA-based system. This study has demonstrated that it is feasible to develop a low-cost application for handheld computers and that PDAs are a feasible alternative for collecting field data in a developing country.
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and of other methods, including conventional LASSO, Bolasso, stepwise, and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data on hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance, and selected a more stringent set of factors. According to the proposed procedures, individual history of hepatitis B vaccination and family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents. Conclusions The newly proposed procedures improve the identification of significant variables and provide new insight into epidemiological association analysis.
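The bootstrap ranking idea is straightforward to sketch: refit an L1-penalized model on bootstrap resamples and rank variables by how often they receive nonzero coefficients (synthetic data below; the paper's procedures add further stages beyond this core step).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, p = 300, 40
X = rng.normal(size=(n, p))
logit = 1.2 * X[:, 3] - 1.0 * X[:, 17]            # two true signals
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

hits = np.zeros(p)
B = 100
for _ in range(B):
    i = rng.integers(0, n, n)                     # bootstrap resample
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
    m.fit(X[i], y[i])
    hits += (m.coef_.ravel() != 0)

print("selection frequency of true signals:", hits[[3, 17]] / B)
```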
Entropy as a collective variable
NASA Astrophysics Data System (ADS)
Parrinello, Michele
Sampling complex free energy surfaces that exhibit long-lived metastable states separated by kinetic bottlenecks is one of the most pressing issues in the atomistic simulation of matter. Not surprisingly, many solutions to this problem have been suggested. Many of them are based on the identification of appropriate collective variables that span the manifold of the slowly varying modes of the system. While much effort has been put into devising, and even constructing on the fly, appropriate collective variables, there is still a cogent need for simple, generic, physically transparent, and yet effective collective variables. Motivated by the physical observation that in many cases transitions between one metastable state and another result from a trade-off between enthalpy and entropy, we introduce collective variables that are able to represent these two physical properties in a simple way. We use these variables in the context of the recently introduced variationally enhanced sampling and apply them with success to the simulation of crystallization from the liquid and to conformational transitions in proteins.
A Bayesian network approach for modeling local failure in lung cancer
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-03-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret the relationships among the different variables in the models. We also demonstrate the potential of combining heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance than the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
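A Bayesian network over such variables can be illustrated with a toy two-parent model (invented structure and probabilities, not the paper's model): dose level and a biomarker state jointly determine the probability of local failure, and queries are answered by enumeration.

```python
P_D = {"high": 0.5, "low": 0.5}                    # prior over dose level
P_B = {"elev": 0.3, "norm": 0.7}                   # prior over biomarker state
P_F = {                                            # P(F=1 | D, B), assumed CPT
    ("high", "elev"): 0.15, ("high", "norm"): 0.08,
    ("low",  "elev"): 0.35, ("low",  "norm"): 0.20,
}

def p_failure(dose=None, biomarker=None):
    """Marginal/conditional P(F=1), summing over unobserved parents."""
    num = den = 0.0
    for d, pd in P_D.items():
        if dose and d != dose:
            continue
        for b, pb in P_B.items():
            if biomarker and b != biomarker:
                continue
            num += pd * pb * P_F[(d, b)]
            den += pd * pb
    return num / den

print(p_failure())                                 # overall failure risk
print(p_failure(dose="high", biomarker="elev"))    # conditioned on evidence
```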
Nasci, R S
1982-03-01
Adult treehole mosquitoes were collected by vacuum-sweeping of vegetation in urban, suburban, and rural woodlots in northern Indiana. The sibling species Aedes triseriatus and Ae. hendersoni were identified by polyacrylamide gel electrophoresis. Blood meals were identified by the modified precipitin method. Ae. triseriatus fed predominantly on chipmunks and deer, and Ae. hendersoni fed mainly on tree squirrels and raccoons. The relative rates of feeding on the major hosts varied depending on the location of collection, and probably reflected differences in host density. No blood-feeding on humans was detected.
An international survey of cerebral palsy registers and surveillance systems
Goldsmith, Shona; McIntyre, Sarah; Smithers-Sheedy, Hayley; Blair, Eve; Cans, Christine; Watson, Linda; Yeargin-Allsopp, Marshalyn
2016-01-01
AIM To describe cerebral palsy (CP) surveillance programmes and identify similarities and differences in governance and funding, aims and scope, definition, inclusion/exclusion criteria, ascertainment and data collection, to enhance the potential for research collaboration. METHOD Representatives from 38 CP surveillance programmes were invited to participate in an online survey and submit their data collection forms. Descriptive statistics were used to summarize the information submitted. RESULTS Twenty-seven surveillance programmes participated (25 functioning registers, two closed owing to lack of funding). Their aims spanned five domains: resource for CP research, surveillance, aetiology/prevention, service planning, and information provision (in descending order of frequency). Published definitions guided decision making for the definition of CP and case eligibility for most programmes. Consent, case identification, and data collection methods varied widely. Ten key data items were collected by all programmes and a further seven by at least 80% of programmes. All programmes reported an interest in research collaboration. INTERPRETATION Despite variability in methodologies, similarities exist across programmes in terms of their aims, definitions, and data collected. These findings will facilitate harmonization of data and collaborative research efforts, which are necessary given the heterogeneity and relatively low prevalence of CP. PMID:26781543
A continuous quality improvement project to improve the quality of cervical Papanicolaou smears.
Burkman, R T; Ward, R; Balchandani, K; Kini, S
1994-09-01
To improve the quality of cervical Papanicolaou smears by continuous quality improvement techniques. The study used a Papanicolaou smear database of over 200,000 specimens collected between June 1988 and December 1992. A team approach employing techniques such as process flow-charting, cause-and-effect diagrams, run charts, and a randomized trial of collection methods was used to evaluate potential causes of Papanicolaou smear reports with the notation "inadequate" or "less than optimal" due to too few or absent endocervical cells. Once a key process variable (method of collection) was identified, the proportion of Papanicolaou smears with inadequate or absent endocervical cells was determined before and after employment of a collection technique using a spatula and Cytobrush. We measured the rate of less than optimal Papanicolaou smears due to too few or absent endocervical cells. Before the new collection technique was fully implemented in June 1990, the overall rate of less than optimal cervical Papanicolaou smears ranged from 20-25%; by December 1993, it had stabilized at about 10%. Continuous quality improvement can be used successfully to study a clinical process and implement change that will lead to improvement.
2013-01-01
Background Scanning electron microscopy (SEM) has been used for high-resolution imaging of plant cell surfaces for many decades. Most SEM imaging employs the secondary electron detector under high vacuum to provide pseudo-3D images of plant organs and especially of surface structures such as trichomes and stomatal guard cells; these samples generally have to be metal-coated to avoid charging artefacts. Variable pressure-SEM allows examination of uncoated tissues, and provides a flexible range of options for imaging, either with a secondary electron detector or backscattered electron detector. In one application, we used the backscattered electron detector under low vacuum conditions to collect images of uncoated barley leaf tissue followed by simple quantification of cell areas. Results Here, we outline methods for backscattered electron imaging of a variety of plant tissues with particular focus on collecting images for quantification of cell size and shape. We demonstrate the advantages of this technique over other methods to obtain high contrast cell outlines, and define a set of parameters for imaging Arabidopsis thaliana leaf epidermal cells together with a simple image analysis protocol. We also show how to vary parameters such as accelerating voltage and chamber pressure to optimise imaging in a range of other plant tissues. Conclusions Backscattered electron imaging of uncoated plant tissue allows acquisition of images showing details of plant morphology together with images of high contrast cell outlines suitable for semi-automated image analysis. The method is easily adaptable to many types of tissue and suitable for any laboratory with standard SEM preparation equipment and a variable-pressure-SEM or tabletop SEM. PMID:24135233
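As a rough sketch of the kind of semi-automated quantification described above, the following uses scikit-image to segment cell interiors from a high-contrast outline image and measure cell areas; the filename and parameter choices are illustrative, not from the paper:

```python
# Minimal sketch: segment high-contrast cell outlines and measure cell areas.
# Assumes a grayscale backscattered-electron image in "cells.png" (hypothetical).
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("cells.png", as_gray=True)

# Threshold: bright cell outlines vs. darker cell interiors.
t = filters.threshold_otsu(img)
interiors = img < t                      # cell interiors as foreground
interiors = morphology.remove_small_objects(interiors, min_size=50)

# Label connected regions (one per cell) and extract size/shape descriptors.
labels = measure.label(interiors)
for region in measure.regionprops(labels):
    print(region.label, region.area, region.eccentricity)
```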
Collignon, Bertrand
2016-01-01
Recent studies show differences in individual motion and shoaling tendency between strains of the same species. Here, we analyse collective motion and response to visual stimuli in two morphologically different strains (TL and AB) of zebrafish. For both strains, we observed 10 groups of 5 and 10 zebrafish swimming freely in a large experimental tank with two identical landmarks (cylinders or discs) for 1 h. We tracked the positions of the fish by an automated tracking method and computed several metrics at the group level. First, the probability of presence shows that both strains avoid free space and are more likely to swim in the vicinity of the walls of the tank and the landmarks. Second, the analysis of landmark occupancy shows that AB zebrafish are more present in their vicinity than TL ones and that both strains regularly transit from one landmark to the other with no long-term preference. Finally, TL zebrafish show a higher cohesion than AB zebrafish. Thus, environmental heterogeneity and trial duration reveal individual and collective behavioural variability among different strains of zebrafish. These results provide new insight into the need to take into account the individual variability of zebrafish strains when studying collective behaviour. PMID:27853558
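A minimal sketch of one of the group-level metrics mentioned above, the probability of presence, computed as a normalized 2D occupancy histogram of tracked positions (the coordinates below are random placeholders standing in for tracker output):

```python
# Presence-probability map from tracked positions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=10_000)   # cm, placeholder trajectories
y = rng.uniform(0, 100, size=10_000)

counts, xedges, yedges = np.histogram2d(x, y, bins=20,
                                        range=[[0, 100], [0, 100]])
p_presence = counts / counts.sum()     # probability of presence per cell
print(p_presence.max(), p_presence.sum())
```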
Beukelman, Timothy; Anink, Janneke; Berntson, Lillemor; Duffy, Ciaran; Ellis, Justine A; Glerup, Mia; Guzman, Jaime; Horneff, Gerd; Kearsley-Fleet, Lianne; Klein, Ariane; Klotsche, Jens; Magnusson, Bo; Minden, Kirsten; Munro, Jane E; Niewerth, Martina; Nordal, Ellen; Ruperto, Nicolino; Santos, Maria Jose; Schanberg, Laura E; Thomson, Wendy; van Suijlekom-Smit, Lisette; Wulffraat, Nico; Hyrich, Kimme
2017-04-19
To characterize the existing national and multi-national registries and cohort studies in juvenile idiopathic arthritis (JIA) and identify differences as well as areas of potential future collaboration. We surveyed investigators from North America, Europe, and Australia about existing JIA cohort studies and registries. We excluded cross-sectional studies. We captured information about study design, duration, location, inclusion criteria, data elements and collection methods. We received survey results from 18 studies, including 11 national and 7 multi-national studies representing 37 countries in total. Study designs included inception cohorts, prevalent disease cohorts, and new treatment cohorts (several of which contribute to pharmacosurveillance activities). Despite numerous differences, the data elements collected across the studies were quite similar, with most studies collecting at least 5 of the 6 American College of Rheumatology core set variables and the data needed to calculate the 3-variable clinical juvenile disease activity score. Most studies were collecting medication initiation and discontinuation dates and were attempting to capture serious adverse events. There is a wide range of large, ongoing JIA registries and cohort studies around the world. Our survey results indicate significant potential for future collaborative work using data from different studies and both combined and comparative analyses.
Electrogeochemical sampling with NEOCHIM - results of tests over buried gold deposits
Leinz, R.W.; Hoover, D.B.; Fey, D.L.; Smith, D.B.; Patterson, T.
1998-01-01
Electrogeochemical extraction methods are based on the migration of ions in an electric field. Ions present in soil moisture are transported by an applied current into fluids contained in special electrodes placed on the soil. The fluids are then collected and analyzed. Extractions are governed by Faraday's and Ohm's laws and are modeled by the operation of a simple Hittorf transference apparatus. Calculations show that the volume of soil sampled in an ideal electrogeochemical extraction can be orders of magnitude greater than the volumes used in more popular geochemical extraction methods, although this has not been verified experimentally. CHIM is a method of in-situ electrogeochemical extraction that was developed in the former Soviet Union and has been tested and applied internationally to exploration for buried mineral deposits. Tests carried out at the US Geological Survey (USGS) indicated that there were problems inherent in the use of CHIM technology. The cause of the problems was determined to be the diffusion of acid from the conventional electrode into the soil. The NEOCHIM electrode incorporates two compartments and a salt bridge in a design that inhibits diffusion of acid and enables the collection of anions or cations. Tests over a gold-enriched vein in Colorado and over buried, Carlin-type, disseminated gold deposits in northern Nevada show that there are similarities and differences between NEOCHIM results and those of partial extractions of soils, which include simple extractions with water, dilute acids and solutions of salts used as collector fluids in the electrodes. Results of both differ from the results obtained by total chemical digestion. The results indicate that NEOCHIM responds to mineralized faults associated with disseminated gold deposits whereas partial and total chemical extraction methods do not. This suggests that faults are favored channels for the upward migration of metals and that NEOCHIM may be more effective in exploration for the deposits. It defines anomalies that are often narrow and intense, an observation previously made by CHIM researchers. The field tests show that NEOCHIM is less affected by surface contamination. A test over the Mike disseminated gold deposit indicates that the method may not be effective for locating deposits with impermeable cover. Faradaic extraction efficiencies of 20-30%, or more, are frequently achieved with NEOCHIM and the method generally shows good reproducibility, especially in extraction of major cations. However, ions of other metals that are useful in exploration, including Au and As, may be collected in low and temporally variable concentrations. The reason for this variability is unclear and requires further investigation.
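As a back-of-the-envelope illustration of the Faradaic efficiency figure quoted above, Faraday's law bounds the amount of an ion a given current can transport; all numbers below are invented placeholders:

```python
# Faradaic extraction-efficiency estimate for an electrogeochemical method:
# Faraday's law gives the maximum charge-equivalent amount of an ion the
# applied current could move; efficiency is the collected amount relative
# to that ceiling.
F = 96485.0          # Faraday constant [C/mol]
I = 0.5              # applied current [A]
t = 24 * 3600.0      # extraction time [s]
z = 2                # ion charge (e.g., Cu2+)

moles_max = I * t / (z * F)          # Faraday's-law ceiling [mol]
moles_collected = 0.25 * moles_max   # hypothetical analysis of electrode fluid
efficiency = moles_collected / moles_max
print(f"max transportable: {moles_max:.4f} mol, efficiency: {efficiency:.0%}")
```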
A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.
Brusco, Michael J; Shireman, Emilie; Steinley, Douglas
2017-09-01
The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data.
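Since K-median clustering performed best on average, here is a minimal NumPy sketch of the algorithm for dichotomous data (L1 assignment, coordinate-wise median update); the data are random placeholders:

```python
# K-median clustering for dichotomous (0/1) data.
import numpy as np

def k_median(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign each object to the nearest center in L1 (Manhattan) distance.
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update each center to the coordinate-wise median of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return labels, centers

rng = np.random.default_rng(1)
X = (rng.random((200, 10)) < 0.5).astype(int)  # toy dichotomous data
labels, centers = k_median(X, k=3)
print(np.bincount(labels))
```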
Assessment of published models and prognostic variables in epithelial ovarian cancer at Mayo Clinic
Hendrickson, Andrea Wahner; Hawthorne, Kieran M.; Goode, Ellen L.; Kalli, Kimberly R.; Goergen, Krista M.; Bakkum-Gamez, Jamie N.; Cliby, William A.; Keeney, Gary L.; Visscher, Dan W.; Tarabishy, Yaman; Oberg, Ann L.; Hartmann, Lynn C.; Maurer, Matthew J.
2015-01-01
Objectives Epithelial ovarian cancer (EOC) is an aggressive disease in which first-line therapy consists of a surgical staging/debulking procedure and platinum-based chemotherapy. There is significant interest in clinically applicable, easy-to-use prognostic tools to estimate risk of recurrence and overall survival. In this study we used a large prospectively collected cohort of women with EOC to validate currently published models and assess prognostic variables. Methods Women with invasive ovarian, peritoneal, or fallopian tube cancer diagnosed between 2000-2011 and prospectively enrolled into the Mayo Clinic Ovarian Cancer registry were identified. Demographics and known prognostic markers as well as epidemiologic exposure variables were abstracted from the medical record and collected via questionnaire. Six previously published models of overall and recurrence-free survival were assessed for external validity. In addition, predictors of outcome were assessed in our dataset. Results Previously published models validated with c-statistics ranging from 0.587 to 0.827, though application of models containing variables that are not part of routine practice was somewhat limited by missing data; utilization of all applicable models and comparison of results is suggested. Examination of prognostic variables identified only the presence of ascites and ASA score as independent predictors of prognosis in our dataset, albeit with marginal gain in prognostic information, after accounting for stage and debulking. Conclusions Existing prognostic models for newly diagnosed EOC showed acceptable calibration in our cohort for clinical application. However, modeling of prospective variables in our dataset reiterates that stage and debulking remain the most important predictors of prognosis in this setting. PMID:25620544
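A minimal sketch of the c-statistic computation used for validation above, here via the lifelines package (an assumption, not necessarily the authors' software) on simulated survival data:

```python
# External validation of a prognostic model by its c-statistic: the
# probability that the model ranks a random comparable pair concordantly.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 300
risk_score = rng.normal(size=n)                    # stand-in linear predictor
time = rng.exponential(scale=np.exp(-risk_score))  # higher risk -> earlier event
event = rng.random(n) < 0.7                        # ~30% censoring

# concordance_index expects scores where higher = longer survival,
# hence the sign flip on the risk score.
c = concordance_index(time, -risk_score, event)
print(f"c-statistic: {c:.3f}")
```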
Harmon, Brook E.; Nigg, Claudio R.; Long, Camonia; Amato, Katie; Anwar, Mahabub-Ul; Kutchman, Eve; Anthamatten, Peter; Browning, Raymond C.; Brink, Lois; Hill, James O.
2014-01-01
Objectives Social Cognitive Theory (SCT) has often been used as a guide to predict and modify physical activity (PA) behavior. We assessed the ability of commonly investigated SCT variables and perceived school environment variables to predict PA among elementary students. We also examined differences in influences between Hispanic and non-Hispanic students. Design This analysis used baseline data collected from eight schools that participated in a four-year study of a combined school-day curriculum and environmental intervention. Methods Data were collected from 393 students. A 3-step linear regression was used to measure associations between PA level, SCT variables (self-efficacy, social support, enjoyment), and perceived environment variables (schoolyard structures, condition, equipment/supervision). Logistic regression assessed associations between variables and whether students met PA recommendations. Results School and sex explained 6% of the variation in the moderate-to-vigorous PA models. SCT variables explained an additional 15% of the models' variation, with much of the models' predictive ability coming from self-efficacy and social support. Sex was more strongly associated with PA level among Hispanic students, while self-efficacy was more strongly associated among non-Hispanic students. Perceived environment variables contributed little to the models. Conclusions Our findings add to the literature on the influences on PA among elementary-aged students. The differences seen in the influence of sex and self-efficacy among non-Hispanic and Hispanic students suggest these are areas where PA interventions could be tailored to improve efficacy. Additional research is needed to understand whether different measures of perceived environment, or perceptions at different ages, may better predict PA. PMID:24772004
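A minimal sketch of the 3-step hierarchical regression described above, entering demographic, SCT, and perceived-environment blocks in turn and tracking the R² increment; all variable names and data are simulated stand-ins:

```python
# 3-step (hierarchical) OLS: enter predictor blocks sequentially and
# report the R^2 gained at each step.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 393
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "self_efficacy": rng.normal(size=n),
    "social_support": rng.normal(size=n),
    "enjoyment": rng.normal(size=n),
    "schoolyard": rng.normal(size=n),
})
df["mvpa"] = (0.5 * df.self_efficacy + 0.4 * df.social_support
              + 0.2 * df.sex + rng.normal(size=n))

blocks = [["sex"],
          ["self_efficacy", "social_support", "enjoyment"],
          ["schoolyard"]]
cols, r2_prev = [], 0.0
for i, block in enumerate(blocks, 1):
    cols += block
    fit = sm.OLS(df["mvpa"], sm.add_constant(df[cols])).fit()
    print(f"step {i}: R^2 = {fit.rsquared:.3f} "
          f"(increment = {fit.rsquared - r2_prev:.3f})")
    r2_prev = fit.rsquared
```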
NASA Astrophysics Data System (ADS)
Carisi, Francesca; Domeneghetti, Alessio; Kreibich, Heidi; Schröter, Kai; Castellarin, Attilio
2017-04-01
Flood risk is a function of flood hazard and vulnerability; therefore, its accurate assessment depends on a reliable quantification of both factors. The scientific literature proposes a number of objective and reliable methods for assessing flood hazard, yet it highlights a limited understanding of the fundamental damage processes. Loss modelling is associated with large uncertainty which is, among other factors, due to a lack of standard procedures; for instance, flood losses are often estimated based on damage models derived in completely different contexts (i.e. different countries or geographical regions) without checking their applicability, or by considering only one explanatory variable (typically water depth). We consider the Secchia river flood event of January 2014, when a sudden levee breach caused the inundation of nearly 200 km² in Northern Italy. In the aftermath of this event, local authorities collected flood loss data, together with additional information on affected private households and industrial activities (e.g. building surface area and economic value, number of a company's employees, and others). Based on these data we implemented and compared a quadratic-regression damage function, with water depth as the only explanatory variable, and a multi-variable model that combines multiple regression trees and considers several explanatory variables (i.e. bagging decision trees). Our results show the importance of data collection, revealing that (1) a simple quadratic regression damage function based on empirical data from the study area can be significantly more accurate than literature damage models derived for a different context, and (2) multi-variable modelling may outperform the uni-variable approach, yet it is more difficult to develop and apply due to a much higher demand for detailed data.
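A minimal sketch contrasting the two model families compared above, a quadratic regression on water depth alone versus bagged regression trees on several explanatory variables, on simulated placeholder data:

```python
# Uni-variable quadratic damage function vs. multi-variable bagging trees.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(0, 3, n)               # water depth [m]
area = rng.uniform(50, 300, n)             # building surface [m^2]
value = rng.uniform(1e5, 5e5, n)           # building value [EUR]
loss = 0.1 * value * np.tanh(depth) * (area / 300) + rng.normal(0, 2e3, n)

# Uni-variable model: quadratic in depth.
coef = np.polyfit(depth, loss, deg=2)
pred_quad = np.polyval(coef, depth)

# Multi-variable model: bagging decision trees on all explanatory variables.
X = np.column_stack([depth, area, value])
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                       random_state=0).fit(X, loss)
pred_bag = bag.predict(X)

for name, pred in [("quadratic", pred_quad), ("bagging", pred_bag)]:
    rmse = np.sqrt(np.mean((loss - pred) ** 2))
    print(f"{name}: RMSE = {rmse:,.0f}")
```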
Geostatistics for spatial genetic structures: study of wild populations of perennial ryegrass.
Monestiez, P; Goulard, M; Charmet, G
1994-04-01
Methods based on geostatistics were applied to quantitative traits of agricultural interest measured on a collection of 547 wild populations of perennial ryegrass in France. The mathematical background of these methods, which resembles spatial autocorrelation analysis, is briefly described. When a single variable is studied, the spatial structure analysis is similar to spatial autocorrelation analysis, and a spatial prediction method, called "kriging", gives a filtered map of the spatial pattern over all the sampled area. When complex interactions of agronomic traits with different evaluation sites define a multivariate structure for the spatial analysis, geostatistical methods allow the spatial variations to be broken down into two main spatial structures with ranges of 120 km and 300 km, respectively. The predicted maps that corresponded to each range were interpreted as a result of the isolation-by-distance model and as a consequence of selection by environmental factors. Practical collecting methodology for breeders may be derived from such spatial structures.
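A minimal ordinary-kriging sketch in the spirit of the analysis above; it uses the pykrige package (an assumption, the original work predates it) to fit a variogram model and produce a filtered map of a trait from scattered sample sites:

```python
# Ordinary kriging: fit a spherical variogram to scattered trait values and
# predict a smooth map on a regular grid. Coordinates and values are toy.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x = rng.uniform(0, 600, 100)     # site coordinates [km], placeholders
y = rng.uniform(0, 600, 100)
trait = np.sin(x / 120) + 0.1 * rng.normal(size=100)  # toy trait values

ok = OrdinaryKriging(x, y, trait, variogram_model="spherical")
gridx = np.linspace(0, 600, 60)
gridy = np.linspace(0, 600, 60)
zhat, zvar = ok.execute("grid", gridx, gridy)   # kriged map + variance
print(zhat.shape, zvar.mean())
```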
The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.
O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard
2010-08-01
The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.
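A minimal sketch of the reliability computations reported above, between-day RMSE plus an intraclass correlation, here via the pingouin package (a stand-in for whatever software the authors used) on simulated repeated measures:

```python
# Test-retest reliability over 3 days for 10 subjects on one foot measure.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
true = rng.normal(250, 10, size=10)                                  # [mm]
days = np.array([true + rng.normal(0, 1.5, 10) for _ in range(3)])   # 3 days

# RMSE between repeated days (day 1 vs day 2 shown).
rmse = np.sqrt(np.mean((days[0] - days[1]) ** 2))
print(f"RMSE day1 vs day2: {rmse:.2f} mm")

# ICC from a long-format table of (subject, day, value).
long = pd.DataFrame({
    "subject": np.tile(np.arange(10), 3),
    "day": np.repeat([1, 2, 3], 10),
    "value": days.ravel(),
})
icc = pg.intraclass_corr(data=long, targets="subject", raters="day",
                         ratings="value")
print(icc[["Type", "ICC"]])
```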
Lin, Steve; Morrison, Laurie J; Brooks, Steven C
2011-04-01
The widely accepted Utstein style has standardized data collection and analysis in resuscitation and post-resuscitation research. However, collection of many of these variables poses significant practical challenges. In addition, several important variables in post-resuscitation research are missing. Our aim was to develop a comprehensive data dictionary and web-based data collection tool as part of the Strategies for Post Arrest Resuscitation Care (SPARC) Network project, which implemented a knowledge translation program for post-cardiac arrest therapeutic hypothermia in 37 Ontario hospitals. A list of data variables was generated based on the current Utstein style, previous studies and expert opinion within our group of investigators. We developed a data dictionary by creating clear definitions and establishing abstraction instructions for each variable. The data dictionary was integrated into a web-based collection form allowing for interactive data entry. Two blinded investigators piloted the data collection tool by performing a retrospective chart review. A total of 454 variables were included, of which 400 were Utstein, 2 were adapted from existing studies and 52 were added to address missing elements. Kappa statistics for two outcome variables, survival to discharge and induction of therapeutic hypothermia, were 0.86 and 0.64, respectively. This is the first attempt in the literature to develop a data dictionary as part of a standardized, pragmatic data collection tool for post-cardiac arrest research. In addition, our dataset defined important variables that were previously missing. This data collection tool can serve as a reference for future trials in post-cardiac arrest care.
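A minimal sketch of the agreement check reported above: Cohen's kappa for a binary chart-abstraction variable scored by two blinded reviewers (the labels below are invented):

```python
# Chance-corrected inter-abstractor agreement for a binary variable.
from sklearn.metrics import cohen_kappa_score

# 1 = therapeutic hypothermia induced, 0 = not, per reviewed chart
abstractor_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
abstractor_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(abstractor_a, abstractor_b)
print(f"kappa = {kappa:.2f}")
```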
Background concentrations of metals in soils from selected regions in the State of Washington
Ames, K.C.; Prych, E.A.
1995-01-01
Soil samples from 60 sites in the State of Washington were collected and analyzed to determine the magnitude and variability of background concentrations of metals in soils of the State. Samples were collected in areas that were relatively undisturbed by human activity from the most predominant soils in 12 different regions that are representative of large areas of Washington State. Concentrations of metals were determined by five different laboratory methods. Concentrations of mercury and nickel determined by both the total and total-recoverable methods displayed the greatest variability, followed by chromium and copper determined by the total-recoverable method. Concentrations of other metals, such as aluminum and barium determined by the total method, varied less. Most metals concentrations were found to be more nearly log-normally than normally distributed. Total metals concentrations were not significantly different among the different regions. However, total-recoverable metals concentrations were not as similar among different regions. Cluster analysis revealed that sampling sites in three regions encompassing the Puget Sound could be regrouped to form two new regions, and sites in three regions in south-central and southeastern Washington State could also be regrouped into two new regions. Concentrations for 7 of 11 total-recoverable metals correlated with total metals concentrations. Concentrations of six total metals also correlated positively with organic carbon. Total-recoverable metals concentrations did not correlate with either organic carbon or particle size. Concentrations of metals determined by the leaching methods did not correlate with total or total-recoverable metals concentrations, nor did they correlate with organic carbon or particle size.
Mapping the Riverscape of the Middle Fork John Day River with Structure-from-Motion
NASA Astrophysics Data System (ADS)
Dietrich, J. T.
2014-12-01
Aerial photography has proven an efficient method to collect a wide range of continuous variables for large sections of rivers. These data include variables such as the planimetric shape, low-flow and bank-full widths, bathymetry, and sediment sizes. Mapping these variables in a continuous manner allows us to explore the heterogeneity of the river and build a more complete picture of the holistic riverscape. To explore a low-cost option for aerial photography and riverscape mapping, I used the combination of a piloted helicopter and an off-the-shelf digital SLR camera to collect aerial imagery for a 32 km segment of the Middle Fork John Day River in eastern Oregon. This imagery was processed with Structure-from-Motion (SfM) photogrammetry to produce high-resolution 10 cm orthophotos and digital surface models that were used to extract riverscape variables. The Middle Fork John Day River is an important spawning river for anadromous Chinook and steelhead and has been the focus of widespread restoration and conservation activities in response to the legacies of extensive grazing and mining activity. By mapping the riverscape of the Middle Fork John Day, I explored downstream relationships between several geomorphic variables with hyperscale analysis. These riverscape data also provided an opportunity to make a continuous map of habitat suitability for migrating adult Chinook. Both the geomorphic and habitat suitability analyses provide an important assessment of the natural variation in the river and the impact of human modification, both positive and negative.
Kapelner, Adam; Krieger, Abba; Blanford, William J
2016-10-14
When measuring Henry's law constants (k_H) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of k_H of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes in a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates k_H with minimum error and provides multiple means of building confidence intervals for such estimates. We illustrate the performance improvements of our design with an example measuring the k_H of naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear region of the phase ratio versus inverse GC response (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN.
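A minimal sketch of the PRV calculation described above: regress the inverse GC response on the gas/liquid volume ratio and take k_H as slope/intercept; the numbers are placeholders, not measurements from the study:

```python
# Phase ratio variation (PRV): k_H = slope / intercept of the regression of
# inverse GC peak area on the gas/liquid volume ratio.
import numpy as np
from scipy import stats

ratio = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # V_gas / V_liquid
inv_area = 1e-6 * np.array([1.2, 1.5, 2.1, 3.3, 5.6])  # 1 / (GC peak area)

fit = stats.linregress(ratio, inv_area)
k_H = fit.slope / fit.intercept    # dimensionless Henry's law constant
print(f"k_H = {k_H:.3f} (R^2 = {fit.rvalue**2:.4f})")
```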
Surface-Water Techniques: On Demand Training Opportunities
,
2007-01-01
The U.S. Geological Survey (USGS) has been collecting streamflow information since 1889 using nationally consistent methods. The need for such information was envisioned by John Wesley Powell as a key component for settlement of the arid western United States. Because of Powell's vision the nation now has a rich streamflow data base that can be analyzed with confidence in both space and time. This means that data collected at a stream gaging station in Maine in 1903 can be compared to data collected in 2007 at the same gage in Maine or at a different gage in California. Such comparisons are becoming increasingly important as we work to assess climate variability and anthropogenic effects on streamflow. Training employees in proper and consistent techniques to collect and analyze streamflow data forms a cornerstone for maintaining the integrity of this rich data base.
Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W
1988-08-01
The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 ± 2.7 years) and 9 healthy children (mean age = 8.9 ± 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.
Hormonal contraception and female pain, orgasm and sexual pleasure.
Smith, Nicole K; Jozkowski, Kristen N; Sanders, Stephanie A
2014-02-01
Almost half of all pregnancies in the United States are unintentional, unplanned, or mistimed. Most unplanned pregnancies result from inconsistent, incorrect, or nonuse of a contraceptive method. Diminished sexual function and pleasure may be a barrier to using hormonal contraception. This study explores sexual function and behaviors of women in relation to the use of hormonal vs. nonhormonal methods of contraception. Data were collected as part of an online health and sexuality study of women. Main outcome variables assessed frequencies in two domains: (i) sexual function (proportion of sexual events with experiences of pain or discomfort, arousal, contentment and satisfaction, pleasure and enjoyment, lubrication difficulty, and orgasm) and (ii) sexual behavior (number of times engaged in sexual activity, proportion of sexual events initiated by the woman, and proportion of sexual events for which a lubricant was used). Sociodemographic variables and contraceptive use were used as sample descriptors and correlates. The recall period was the past 4 weeks. The sample included 1,101 women, approximately half (n = 535) using a hormonal contraceptive method exclusively or a combination of a hormonal and nonhormonal method, and about half (n = 566) using a nonhormonal method of contraception exclusively. Hierarchical regression analyses were conducted to examine the relation of hormonal contraceptive use to each of the dependent variables. Women using a hormonal contraceptive method experienced less frequent sexual activity, arousal, pleasure, and orgasm and more difficulty with lubrication even when controlling for sociodemographic variables. This study adds to the literature on the potential negative sexual side effects experienced by many women using hormonal contraception. Prospective research with diverse women is needed to enhance the understanding of potential negative sexual side effects of hormonal contraceptives, their prevalence, and possible mechanisms. Clinical and counseling implications are discussed.
Olson, Nathan D.; Lund, Steven P.; Zook, Justin M.; Rojas-Cornejo, Fabiola; Beck, Brian; Foy, Carole; Huggett, Jim; Whale, Alexandra S.; Sui, Zhiwei; Baoutina, Anna; Dobeson, Michael; Partis, Lina; Morrow, Jayne B.
2015-01-01
This study presents the results from an interlaboratory sequencing study for which we developed a novel high-resolution method for comparing data from different sequencing platforms for a multi-copy, paralogous gene. The combination of PCR amplification and 16S ribosomal RNA gene (16S rRNA) sequencing has revolutionized bacteriology by enabling rapid identification, frequently without the need for culture. To assess variability between laboratories in sequencing 16S rRNA, six laboratories sequenced the gene encoding the 16S rRNA from Escherichia coli O157:H7 strain EDL933 and Listeria monocytogenes serovar 4b strain NCTC11994. Participants performed sequencing methods and protocols available in their laboratories: Sanger sequencing, Roche 454 pyrosequencing®, or Ion Torrent PGM®. The sequencing data were evaluated on three levels: (1) identity of biologically conserved position, (2) ratio of 16S rRNA gene copies featuring identified variants, and (3) the collection of variant combinations in a set of 16S rRNA gene copies. The same set of biologically conserved positions was identified for each sequencing method. Analytical methods using Bayesian and maximum likelihood statistics were developed to estimate variant copy ratios, which describe the ratio of nucleotides at each identified biologically variable position, as well as the likely set of variant combinations present in 16S rRNA gene copies. Our results indicate that estimated variant copy ratios at biologically variable positions were only reproducible for high throughput sequencing methods. Furthermore, the likely variant combination set was only reproducible with increased sequencing depth and longer read lengths. We also demonstrate novel methods for evaluating variable positions when comparing multi-copy gene sequence data from multiple laboratories generated using multiple sequencing technologies. PMID:27077030
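A minimal sketch of a maximum-likelihood variant copy ratio of the kind described above: with a fixed number of gene copies, the variant fraction at a position must be k/n_copies, and the k that best explains the read counts is selected (all counts below are invented):

```python
# ML estimate of how many of n_copies paralogous 16S copies carry a variant,
# from read counts at one biologically variable position.
import numpy as np
from scipy.stats import binom

n_copies = 7           # e.g., rRNA operon copies in E. coli
n_reads = 500          # reads covering the position
n_variant = 290        # reads showing the variant base

ratios = np.arange(n_copies + 1) / n_copies          # 0/7 ... 7/7
loglik = binom.logpmf(n_variant, n_reads,
                      ratios.clip(1e-9, 1 - 1e-9))   # clip avoids log(0)
k_hat = int(loglik.argmax())
print(f"ML estimate: {k_hat}/{n_copies} copies carry the variant")
```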
Molloy Elreda, Lauren; Coatsworth, J Douglas; Gest, Scott D; Ram, Nilam; Bamberger, Katharine
2016-11-01
Although the majority of evidence-based programs are designed for group delivery, group process and its role in participant outcomes have received little empirical attention. Data were collected from 20 groups of participants (94 early adolescents, 120 parents) enrolled in an efficacy trial of a mindfulness-based adaptation of the Strengthening Families Program (MSFP). Following each weekly session, participants reported on their relations to group members. Social network analysis and methods sensitive to intraindividual variability were integrated to examine weekly covariation between group process and participant progress, and to predict post-intervention outcomes from levels and changes in group process. Results demonstrate hypothesized links between network indices of group process and intervention outcomes and highlight the value of this unique analytic approach to studying intervention group process.
Kumar, Shivendra; Ambreen, Heena; Variath, Murali T.; Rao, Atmakuri R.; Agarwal, Manu; Kumar, Amar; Goel, Shailendra; Jagannath, Arun
2016-01-01
Safflower (Carthamus tinctorius L.) is a dryland oilseed crop yielding high quality edible oil. Previous studies have described significant phenotypic variability in the crop and used geographical distribution and phenotypic trait values to develop core collections. However, the molecular diversity component was lacking in the earlier collections thereby limiting their utility in breeding programs. The present study evaluated the phenotypic variability for 12 agronomically important traits during two growing seasons (2011–12 and 2012–13) in a global reference collection of 531 safflower accessions, assessed earlier by our group for genetic diversity and population structure using AFLP markers. Significant phenotypic variation was observed for all the agronomic traits in the representative collection. Cluster analysis of phenotypic data grouped the accessions into five major clusters. Accessions from the Indian Subcontinent and America harbored maximal phenotypic variability with unique characters for a few traits. MANOVA analysis indicated significant interaction between genotypes and environment for both the seasons. Initially, six independent core collections (CC1–CC6) were developed using molecular marker and phenotypic data for two seasons through POWERCORE and MSTRAT. These collections captured the entire range of trait variability but failed to include complete genetic diversity represented in 19 clusters reported earlier through Bayesian analysis of population structure (BAPS). Therefore, we merged the three POWERCORE core collections (CC1–CC3) to generate a composite core collection, CartC1 and three MSTRAT core collections (CC4–CC6) to generate another composite core collection, CartC2. The mean difference percentage, variance difference percentage, variable rate of coefficient of variance percentage, coincidence rate of range percentage, Shannon's diversity index, and Nei's gene diversity for CartC1 were 11.2, 43.7, 132.4, 93.4, 0.47, and 0.306, respectively while the corresponding values for CartC2 were 9.3, 58.8, 124.6, 95.8, 0.46, and 0.301. Each composite core collection represented the complete range of phenotypic and genetic variability of the crop including 19 BAPS clusters. This is the first report describing development of core collections in safflower using molecular marker data with phenotypic values and geographical distribution. These core collections will facilitate identification of genetic determinants of trait variability and effective utilization of the prevalent diversity in crop improvement programs. PMID:27807441
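A minimal sketch of two of the diversity measures quoted above, Shannon's index and Nei's gene diversity, computed from per-locus allele frequencies and averaged over loci (all frequencies below are invented):

```python
# Shannon's index H = -sum(p ln p) and Nei's gene diversity 1 - sum(p^2),
# averaged across loci.
import numpy as np

loci = [np.array([0.5, 0.3, 0.2]),        # allele frequencies, locus 1
        np.array([0.7, 0.3]),             # locus 2
        np.array([0.4, 0.4, 0.1, 0.1])]   # locus 3

shannon = np.mean([-(p * np.log(p)).sum() for p in loci])
nei = np.mean([1 - (p ** 2).sum() for p in loci])
print(f"Shannon's index = {shannon:.3f}, Nei's gene diversity = {nei:.3f}")
```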
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
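A minimal sketch of the two performance metrics quoted above for presence/absence predictions: AUC from continuous suitability scores, and TSS (sensitivity + specificity - 1) from the binarized map; the data are simulated:

```python
# AUC and true skill statistic for a toy species distribution model.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
presence = rng.integers(0, 2, 200)                 # observed occurrences
score = 0.3 * presence + rng.random(200) * 0.7     # toy model suitability

auc = roc_auc_score(presence, score)
tn, fp, fn, tp = confusion_matrix(presence, score > 0.5).ravel()
tss = tp / (tp + fn) + tn / (tn + fp) - 1          # sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, TSS = {tss:.2f}")
```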
NASA Astrophysics Data System (ADS)
Braun, Jean; Gemignani, Lorenzo; van der Beek, Peter
2018-03-01
One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo-Siang-Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo-Siang-Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of any given catchment to the river distribution. The method additionally allows comparing modern erosion rates to long-term exhumation rates. We provide a simple implementation of the method in Python code within a Jupyter Notebook that includes the data used in this paper for illustration purposes.
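A minimal sketch of the inversion idea (not the authors' published notebook): the river's detrital age distribution is modeled as a mixture of sub-catchment age distributions, and non-negative least squares recovers the relative contributions, from which relative erosion rates follow; all inputs are synthetic:

```python
# Unmix a river detrital age distribution into sub-catchment contributions.
import numpy as np
from scipy.optimize import nnls

ages = np.linspace(0, 20, 81)                 # Ma, histogram bin centers

def gauss(mu, sig):
    p = np.exp(-0.5 * ((ages - mu) / sig) ** 2)
    return p / p.sum()

# Predicted age distributions for three sub-catchments.
A = np.column_stack([gauss(3, 1), gauss(8, 2), gauss(15, 2)])

true_w = np.array([0.6, 0.3, 0.1])            # true relative contributions
river = A @ true_w + 0.001 * np.random.default_rng(0).normal(size=len(ages))

w, _ = nnls(A, river)                         # non-negative least squares
w /= w.sum()
print("recovered contributions:", np.round(w, 3))
# Dividing each weight by catchment area (and mineral fertility) would give
# relative erosion rates.
```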
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
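A minimal sketch of an M-strategy-style greedy selection like those compared above: repeatedly add the accession contributing the most not-yet-covered alleles until 80% of the allelic richness is retained; the genotype matrix is a random stand-in:

```python
# Greedy core-collection construction maximizing allele coverage.
import numpy as np

rng = np.random.default_rng(0)
G = (rng.random((200, 120)) < 0.15).astype(bool)  # accessions x alleles (0/1)

target = 0.8 * G.any(axis=0).sum()   # retain >= 80% of alleles in the pool
covered = np.zeros(G.shape[1], dtype=bool)
core = []
while covered.sum() < target:
    gains = (G & ~covered).sum(axis=1)        # new alleles per accession
    best = int(gains.argmax())
    core.append(best)
    covered |= G[best]

print(f"core size: {len(core)} accessions "
      f"({covered.sum()}/{G.any(axis=0).sum()} alleles retained)")
```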
ERIC Educational Resources Information Center
Leenders, Nicole Y. J. M.; Silver, Lorraine Wallace; White, Susan L.; Buckworth, Janet; Sherman, W. Michael
2002-01-01
Used a street-based survey to assess college students' physical activity level, exercise self-efficacy, and stages of change for exercise behavior. A large proportion of respondents were not regularly active. Exercise self-efficacy was an important variable in exercise behavior. The low cost, ease of data collection, and short turnaround for…
Gretchen G. Moisen; Elizabeth A. Freeman; Jock A. Blackard; Tracey S. Frescino; Niklaus E. Zimmermann; Thomas C. Edwards
2006-01-01
Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in Rulequest's© See5 and Cubist (for binary and continuous responses,...
ERIC Educational Resources Information Center
Salthouse, Timothy A.
2011-01-01
The commentaries on my article contain a number of points with which I disagree but also several with which I agree. For example, I continue to believe that the existence of many cases in which between-person variability does not increase with age indicates that greater variance with increased age is not inevitable among healthy individuals up to…
Variability of Hormonal Stress Markers Collected from a Managed Dolphin Population
2015-09-30
email: Nick.Kellar@noaa.gov Award Number: N000141110436 http://www.nmmf.org/ LONG-TERM GOALS: Quantifying physiological ...methods. Metabolites of cortisol, aldosterone and thyroid hormone will be extracted from fecal samples and measured via RIA using established...have been analyzed, except for serum aldosterone (to be processed under the extension grant described in RELATED PROJECTS).
Variability of Hormonal Stress Markers Collected from a Managed Dolphin Population
2013-09-30
physiological indicators of stress in wild marine mammals and the interrelationships between different stress markers can be used to estimate the impact... Radioimmunoassay methods have previously been validated for cortisol and aldosterone in this species (Houser et al., 2011). Parallel processing of...for these hormones. Metabolites of cortisol, aldosterone and thyroid hormone will be extracted from fecal samples and measured via RIA using
Laurence R. Schimleck; Justin A. Tyson; David Jones; Gary F. Peter; Richard F. Daniels; Alexander III Clark
2007-01-01
Near infrared (NIR) spectroscopy provides a rapid, non-destructive method for the estimation of several wood properties of increment cores. NIR spectra are collected from adjacent sections of the same core; however, not all spectra are required for calibration purposes, as spectra from the same core are autocorrelated. Previously, we showed that wood property...
Various Treatment Techniques on Signs and Symptoms of Delayed Onset Muscle Soreness
Gulick, Dawn T.; Kimura, Iris F.; Sitler, Michael; Paolone, Albert; Kelly, John D.
1996-01-01
Eccentric activities are an important component of physical conditioning and everyday activities. Delayed onset muscle soreness (DOMS) can result from strenuous eccentric tasks and can be a limiting factor in motor performance for several days after exercise. An efficacious method of treatment for DOMS would enhance athletic performance and hasten the return to activities of daily living. The purpose of this study was to identify a treatment method which could assist in the recovery from DOMS. In the selection of treatment methods, emphasis was directed toward treatments that could be rendered independently by an individual, therefore making the treatment valuable to an athletic trainer in a team setting. DOMS was induced in 70 untrained volunteers via 15 sets of 15 eccentric contractions of the forearm extensor muscles on a Lido isokinetic dynamometer. All subjects performed a pilot exercise bout a minimum of 9 weeks before data collection to assure that DOMS would be produced. Data were collected on 15 dependent variables: active and passive wrist flexion and extension, forearm girth, limb volume, visual analogue pain scale, muscle soreness index, isometric strength, concentric and eccentric wrist total work, and concentric and eccentric angle of peak torque. Data were collected on six occasions: pre- and post-induced DOMS, 20 minutes after treatment, and 24, 48, and 72 hours after treatment. Subjects were randomly assigned to 1 of 7 groups (6 treatment and 1 control). Treatments included a nonsteroidal anti-inflammatory drug, high velocity concentric muscle contractions on an upper extremity ergometer, ice massage, 10-minute static stretching, topical Arnica montana ointment, and sublingual A. montana pellets. A 7 × 6 ANOVA with repeated measures on time was performed on the delta values of each of the 15 dependent variables. Significant main effects (p < .05) were found for all of the dependent variables on time only. There were no significant differences between treatments. Therefore, we conclude that none of the treatments was effective in abating the signs and symptoms of DOMS. In fact, the NSAID and A. montana treatments appeared to impede recovery of muscle function. PMID:16558388
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenblatt, Jeffery B.; Yang, Hung-Chia; Desroches, Louis-Benoit
2013-04-01
We present two post-stratification weighting methods to validate survey data collected using Amazon Mechanical Turk (AMT). Two surveys focused on appliance and consumer electronics devices were administered in the spring and summer of 2012, each to approximately 3,000 U.S. households. Specifically, the surveys asked questions about residential refrigeration products, televisions (TVs) and set-top boxes (STBs). Filtered data were assigned weights using each of two weighting methods, termed “sequential” and “simultaneous,” by examining up to eight demographic variables (income, education, gender, race, Hispanic origin, number of occupants, ages of occupants, and geographic region) in comparison to reference U.S. demographic data from the 2009 Residential Energy Consumption Survey (RECS). Five key questions from the surveys (number of refrigerators, number of freezers, number of TVs, number of STBs and primary service provider) were evaluated with a set of statistical tests to determine whether either method improved the agreement of AMT with reference data, and if so, which method was better. The statistical tests used were: differences in proportions, distributions of proportions (using Pearson's chi-squared test), and differences in average numbers of devices as functions of all demographic variables. The results indicated that both methods generally improved the agreement between AMT and reference data, sometimes greatly, but that the simultaneous method was usually superior to the sequential method. Some differences in sample populations were found between the AMT and reference data. Differences in the proportion of STBs reflected large changes in the STB market since the time our reference data were acquired in 2009. Differences in the proportions of some primary service providers suggested real sample bias, with the possible explanation that AMT users are more likely to subscribe to providers who also provide home internet service. Differences in other variables, while statistically significant in some cases, were nonetheless considered to be minor. Depending on the intended purpose of the data collected using AMT, these biases may or may not be important; to correct them, additional questions and/or further post-survey adjustments could be employed. In general, based on the analysis methods and the sample datasets used in this study, AMT surveys appeared to provide useful data on appliance and consumer electronics devices.
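A minimal sketch of post-stratification weighting on a single demographic variable (the paper's "simultaneous" method crosses several); all shares below are invented:

```python
# Post-stratification: weight each respondent by the reference population
# share of their demographic cell divided by the sample share of that cell.
import pandas as pd

sample = pd.DataFrame({"income": ["low", "low", "mid", "high", "mid", "low"]})

pop_share = {"low": 0.30, "mid": 0.45, "high": 0.25}   # e.g., from RECS
samp_share = sample["income"].value_counts(normalize=True)

sample["weight"] = sample["income"].map(
    lambda g: pop_share[g] / samp_share[g])
print(sample)
# After weighting, the weighted group shares match the population shares.
print("weighted shares:",
      sample.groupby("income")["weight"].sum() / sample["weight"].sum())
```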
Emerging Methods and Systems for Observing Life in the Sea
NASA Astrophysics Data System (ADS)
Chavez, F.; Pearlman, J.; Simmons, S. E.
2016-12-01
There is a growing need for observations of life in the sea at time and space scales consistent with those made for physical and chemical parameters. International programs such as the Global Ocean Observing System (GOOS) and Marine Biodiversity Observation Networks (MBON) are making the case for expanded biological observations and working diligently to prioritize essential variables. Here we review past, present and emerging systems and methods for observing life in the sea from the perspective of maintaining continuous observations over long time periods. Methods that rely on ships with instrumentation and over-the-side sample collections will need to be supplemented and eventually replaced with those based from autonomous platforms. Ship-based optical and acoustic instruments are being reduced in size and power for deployment on moorings and autonomous vehicles. In parallel a new generation of low power, improved resolution sensors are being developed. Animal bio-logging is evolving with new, smaller and more sophisticated tags being developed. New genomic methods, capable of assessing multiple trophic levels from a single water sample, are emerging. Autonomous devices for genomic sample collection are being miniaturized and adapted to autonomous vehicles. The required processing schemes and methods for these emerging data collections are being developed in parallel with the instrumentation. An evolving challenge will be the integration of information from these disparate methods given that each provides their own unique view of life in the sea.
Well-Tempered Metadynamics: A Smoothly Converging and Tunable Free-Energy Method
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Bussi, Giovanni; Parrinello, Michele
2008-01-01
We present a method for determining the free-energy dependence on a selected number of collective variables using an adaptive bias. The formalism provides a unified description which has metadynamics and canonical sampling as limiting cases. Convergence and errors can be rigorously and easily controlled. The parameters of the simulation can be tuned so as to focus the computational effort only on the physically relevant regions of the order parameter space. The algorithm is tested on the reconstruction of an alanine dipeptide free-energy landscape.
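For reference, the adaptive-bias rule that produces these limiting cases can be written in the standard well-tempered form (notation assumed here, not quoted from the paper): the height of each deposited Gaussian is scaled down by the bias already accumulated at the current point of collective-variable space,

$$\omega(t) = \omega_0 \, e^{-V(s,t)/(k_B \Delta T)}, \qquad V(s, t\to\infty) = -\frac{\Delta T}{T + \Delta T}\, F(s),$$

so that ΔT → 0 recovers unbiased canonical sampling, ΔT → ∞ recovers standard metadynamics, and intermediate ΔT focuses the computational effort on regions where F(s) lies within roughly k_B(T + ΔT) of the minimum.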
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, M; Jung, Y
2014-06-01
Purpose: Arterial spin labeling (ASL) is an MRI perfusion imaging method from which quantitative cerebral blood flow (CBF) maps can be calculated. Acquisition with variable post-labeling delays (PLD) and variable TRs allows for arterial transit time (ATT) mapping and leads to more accurate CBF quantification with a scan time saving of 48%. In addition, T1 and M0 maps can be obtained without a separate scan. In order to accurately estimate ATT and T1 of brain tissue from the ASL data, variable labeling durations were introduced, termed variable-bolus ASL. Methods: All images were collected on a healthy subject with a 3T Siemens Skyra scanner. Variable-bolus pseudo-continuous ASL (PCASL) images were collected with 7 TI times ranging from 100 to 4300 ms in increments of 700 ms, with TR ranging from 1000 to 5200 ms. All boluses were 1600 ms when the TI allowed; otherwise the bolus duration was 100 ms shorter than the TI. All TI times were interleaved to reduce sensitivity to motion. Voxel-wise T1 and M0 maps were estimated using a linear least squares fitting routine from the average signal at each TI time. Then pairwise subtraction of each label/control pair and averaging for each TI time was performed. CBF and ATT maps were created using the standard model by Buxton et al. with a nonlinear fitting routine using the T1 tissue map. Results: CBF maps insensitive to ATT were produced along with ATT maps. Both maps show patterns and averages consistent with the literature. The T1 map also shows typical T1 contrast. Conclusion: It has been demonstrated that variable-bolus ASL produces CBF maps free from the errors due to ATT and tissue T1 variations and provides M0, T1, and ATT maps which have potential utility. This is accomplished in a single scan with a feasible scan time (under 6 minutes) and low sensitivity to motion.
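The "standard model by Buxton et al." has a well-known single-compartment closed form; the following is a simplified sketch of the nonlinear CBF/ATT fit under that model, neglecting label dispersion. All parameter values and names are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

T1B, LAMBDA, ALPHA = 1.65, 0.9, 0.85   # blood T1 (s), partition coeff., labeling efficiency

def buxton_pcasl(t, cbf, att, t1_tis=1.3, m0=1.0, tau=1.6):
    """Single-compartment Buxton kinetic model for the (p)CASL difference signal.
    t in seconds; cbf in mL/g/s; att = arterial transit time (s); tau = bolus duration (s)."""
    t1p = 1.0 / (1.0 / t1_tis + cbf / LAMBDA)           # apparent tissue T1
    k = 2.0 * m0 * ALPHA * cbf * t1p * np.exp(-att / T1B)
    dm = np.zeros_like(t)
    rise = (t >= att) & (t < att + tau)                 # bolus still arriving
    decay = t >= att + tau                              # bolus fully arrived
    dm[rise] = k * (1.0 - np.exp(-(t[rise] - att) / t1p))
    dm[decay] = k * (1.0 - np.exp(-tau / t1p)) * np.exp(-(t[decay] - att - tau) / t1p)
    return dm

# toy fit at the TIs quoted in the abstract (0.1-4.3 s in 0.7 s steps)
rng = np.random.default_rng(0)
ti = np.arange(0.1, 4.4, 0.7)
signal = buxton_pcasl(ti, cbf=0.01, att=1.2) + rng.normal(0, 1e-4, ti.size)
(cbf_fit, att_fit), _ = curve_fit(lambda t, cbf, att: buxton_pcasl(t, cbf, att),
                                  ti, signal, p0=[0.008, 1.0], bounds=([0, 0], [0.05, 3.0]))
print(cbf_fit, att_fit)
```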
Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan
2016-09-01
To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation, both pre- and post-surgically. For the proposed method, an observer places landmarks in a single 3D volume, which are then automatically propagated to the other volumes in a time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods, but with the distinct advantage of supporting continuous motions during the image acquisition.
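As an illustration of how such measures follow from propagated landmarks, the femorotibial angle can be computed from two axis vectors; a minimal sketch, with landmark choice and coordinates purely hypothetical:

```python
import numpy as np

def axis_angle(p_prox, p_dist, q_prox, q_dist):
    """Angle (degrees) between the femoral and tibial shaft axes,
    each defined by a proximal and a distal 3D landmark."""
    u = np.asarray(p_dist) - np.asarray(p_prox)
    v = np.asarray(q_dist) - np.asarray(q_prox)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# toy landmarks (mm) in scanner coordinates
print(axis_angle([0, 0, 100], [0, 0, 0], [0, 0, 0], [0, 20, -100]))
```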
Bon, C; Toutain, P L; Concordet, D; Gehring, R; Martin-Jimenez, T; Smith, J; Pelligand, L; Martinez, M; Whittem, T; Riviere, J E; Mochel, J P
2018-04-01
A common feature of human and veterinary pharmacokinetics is the importance of identifying and quantifying the key determinants of between-patient variability in drug disposition and effects. Some of these attributes are already well known to the field of human pharmacology, such as bodyweight, age, or sex, while others are more specific to veterinary medicine, such as species, breed, and social behavior. Identification of these attributes has the potential to allow a better and more tailored use of therapeutic drugs both in companion and food-producing animals. Nonlinear mixed effects (NLME) models have been purposely designed to characterize the sources of variability in drug disposition and response. The NLME approach can be used to explore the impact of population-associated variables on the relationship between drug administration, systemic exposure, and the levels of drug residues in tissues. The latter, while different from the method used by the US Food and Drug Administration for setting official withdrawal times (WT), can also be beneficial for estimating WT of approved animal drug products when used in an extralabel manner. Finally, NLME can also prove useful to optimize dosing schedules, or to analyze sparse data collected in situations where intensive blood collection is technically challenging, as in small animal species presenting limited blood volume such as poultry and fish. © 2017 John Wiley & Sons Ltd.
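In generic notation (standard NLME form, assumed here rather than quoted from the paper), the model class referred to can be summarized as

$$y_{ij} = f\!\left(t_{ij}, \theta_i\right) + \varepsilon_{ij}, \qquad \theta_i = \theta \, e^{\eta_i}, \quad \eta_i \sim \mathcal{N}(0, \Omega), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2),$$

where y_ij is the j-th observation in animal i, θ the typical population parameters, and η_i the between-animal random effects that covariates such as species, breed, or bodyweight help explain.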
NASA Astrophysics Data System (ADS)
Widodo, Edy; Kariyam
2017-03-01
Response Surface Methodology (RSM) is used to determine the input variable settings that achieve the optimal compromise in the response variables. There are three primary steps in an RSM problem: data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the collected data are correct. The response surface model parameters are usually estimated by OLS; however, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated models, which may be biased and lead to errors in locating the optimal point, so that the main purpose of RSM is not achieved. Moreover, in practice the collected data often contain several response variables and a set of independent variables. Treating each response separately and applying single-response procedures can result in wrong interpretations, so a model for the multi-response case is needed: a multivariate response surface model that is resistant to outliers. As an alternative, this study discusses M-estimation as a parameter estimator in multivariate response surface models containing outliers. As an illustration, a case study is presented on experiments for the enhancement of the surface layer of an aluminium alloy by shot peening.
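As a loose illustration of the idea (not the authors' implementation), a second-order response surface for a single response can be fit robustly with an M-estimator, e.g. statsmodels' RLM with a Huber loss; the multivariate case applies the same principle across responses. All data below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
y = 5 + 2*x1 - 3*x2 + 1.5*x1*x2 - 2*x1**2 + rng.normal(0, 0.3, 40)
y[::10] += 8                                   # inject outliers

# second-order response surface design matrix
X = sm.add_constant(np.column_stack([x1, x2, x1*x2, x1**2, x2**2]))

ols = sm.OLS(y, X).fit()                              # outlier-sensitive
rob = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # M-estimation
print(ols.params.round(2))
print(rob.params.round(2))   # closer to the true coefficients despite outliers
```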
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
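A loose sketch of the spirit of such a filter follows; it is not the published SGSF statistic, only an illustration of scoring gene sets by their association with leading sample PCs and using the resulting p-value to filter before testing. All data and names are synthetic.

```python
import numpy as np
from scipy import stats

def pc_filter_pvals(expr, gene_sets, n_pc=5):
    """expr: genes x samples array; gene_sets: {name: [row indices]}.
    Rough filter p-value per gene set: association of the set-average
    expression profile with the leading sample principal components."""
    x = expr - expr.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(x, full_matrices=False)   # rows of vt = sample PCs
    pvals = {}
    for name, idx in gene_sets.items():
        profile = expr[idx].mean(axis=0)
        ps = [stats.pearsonr(profile, vt[k])[1] for k in range(n_pc)]
        pvals[name] = min(1.0, min(ps) * n_pc)         # Bonferroni over PCs
    return pvals

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 30))
expr[:10] += np.linspace(-1, 1, 30)    # give one set a shared sample gradient
print(pc_filter_pvals(expr, {"setA": list(range(10)), "setB": list(range(50, 60))}))
```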
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
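One machine-learning approach commonly used for this task is Q-learning with regression at each stage; a minimal two-stage sketch on synthetic SMART-like data (variable names and the generating model are illustrative, not the paper's examples):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                      # baseline tailoring variable
a1 = rng.choice([-1, 1], n)                  # stage-1 randomized treatment
x2 = 0.5*x1 + 0.3*a1 + rng.normal(size=n)    # intermediate outcome
a2 = rng.choice([-1, 1], n)                  # stage-2 randomized treatment
y = x2 + a2*(0.8*x2 - 0.2) + a1*(0.5 - x1) + rng.normal(size=n)

# Stage 2: regress the final outcome on history, treatment, and interactions
h2 = np.column_stack([x1, a1, x2, a2, a2*x2])
q2 = LinearRegression().fit(h2, y)

# Pseudo-outcome: predicted outcome under the best stage-2 action
def q2_pred(a):
    return q2.predict(np.column_stack([x1, a1, x2, a, a*x2]))

y_tilde = np.maximum(q2_pred(np.ones(n)), q2_pred(-np.ones(n)))

# Stage 1: regress the pseudo-outcome on baseline history and treatment
h1 = np.column_stack([x1, a1, a1*x1])
q1 = LinearRegression().fit(h1, y_tilde)
print(q1.coef_)   # signs of the a1 terms encode the stage-1 decision rule
```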
Long-term lightcurves from combined unified very high energy γ-ray data
NASA Astrophysics Data System (ADS)
Tluczykont, M.; Bernardini, E.; Satalecka, K.; Clavero, R.; Shayduk, M.; Kalekin, O.
2010-12-01
Context. Very high-energy (VHE, E > 100 GeV) γ-ray data are a valuable input for multi-wavelength and multi-messenger (e.g. combination with neutrino data) studies. Aims: We aim at the conservation and homogenization of historical, current, and future VHE γ-ray data on active galactic nuclei (AGN). Methods: We have collected lightcurve data taken by major VHE experiments since 1991 and combined them into long-term lightcurves for several AGN, and now provide our collected datasets for further use. Due to the lack of common data formats in VHE γ-ray astronomy, we have defined relevant datafields to be stored in standard data formats. The time variability of the combined VHE lightcurve data was investigated, and the correlation with archival X-ray data collected by RXTE/ASM was tested. Results: The combination of data on the prominent blazar Mrk 421 from different experiments yields a lightcurve spanning more than a decade. From this combined dataset we derive an integral baseline flux from Mrk 421 that must be lower than 33% of the Crab Nebula flux above 1 TeV. The analysis of the time variability yields log-normal flux variations in the VHE data on Mrk 421. Conclusions: Existing VHE data contain valuable information concerning the variability of AGN and can be an important ingredient for multi-wavelength or multi-messenger studies. In the future, upcoming and planned experiments will provide more data from many transient objects, and the interaction of VHE astronomy with classical astronomy will intensify. In this context a unified and exchangeable data format will become increasingly important. Our data collection is available at the URL: http://nuastro-zeuthen.desy.de/magic_experiment/projects/light_curve_archive/index_eng.html
Prodhan, M D H; Papadakis, Emmanouil-N; Papadopoulou-Mourkidou, Euphemia
2016-09-01
To estimate the variability of pesticide residue levels present in cauliflower units, a total of 142 samples were collected from a field trial of a cooperative farmer, and 120 samples were collected from different market places in Thessaloniki, Greece. The collected samples were extracted using the quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction technique, and the residues were determined by liquid chromatography-tandem mass spectrometry. The developed method was validated by evaluating the accuracy, precision, linearity, limit of detection (LOD), and limit of quantification (LOQ). The average recoveries for all the analytes, derived from the data of control samples fortified at 0.01, 0.05, 0.1, and 0.2 mg/kg, ranged from 74 to 110% with a relative standard deviation of ≤8%. The correlation coefficient (R²) was ≥0.997 for all the analytes using matrix-matched calibration standards. The LOD values ranged from 0.001 to 0.003 mg/kg, and the LOQ was determined at 0.01 mg/kg for all the sought analytes. The matrix effect was found to be at a considerable level, especially for cypermethrin and deltamethrin, amounting to +90% and +145%, respectively. For the field samples, the unit-to-unit variability factors (VFs) calculated for cypermethrin and deltamethrin were 2.38 and 2.32, respectively, while the average VF for the market basket samples was 5.11. In the market basket samples, residues of cypermethrin, deltamethrin, chlorpyrifos, and indoxacarb were found at levels ≥LOQ and their respective VFs were 7.12, 5.67, 5.28, and 2.40.
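The unit-to-unit variability factor is commonly computed (e.g. in JMPR practice) as the 97.5th percentile of single-unit residues divided by the mean; assuming that convention, the calculation reduces to a few lines (data values are invented):

```python
import numpy as np

def variability_factor(unit_residues):
    """VF = 97.5th percentile / mean of single-unit residue levels (mg/kg)."""
    r = np.asarray(unit_residues, dtype=float)
    return np.percentile(r, 97.5) / r.mean()

print(variability_factor([0.02, 0.05, 0.04, 0.10, 0.31, 0.06]))
```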
National audit of continence care: laying the foundation.
Mian, Sarah; Wagg, Adrian; Irwin, Penny; Lowe, Derek; Potter, Jonathan; Pearson, Michael
2005-12-01
National audit provides a basis for establishing performance against national standards, benchmarking against other service providers and improving standards of care. For effective audit, clinical indicators are required that are valid, feasible to apply and reliable. This study describes the methods used to develop clinical indicators of continence care in preparation for a national audit. To describe the methods used to develop and test clinical indicators of continence care with regard to validity, feasibility and reliability. A multidisciplinary working group developed clinical indicators that measured the structure, process and outcome of care as well as case-mix variables. Literature searching, consensus workshops and a Delphi process were used to develop the indicators. The indicators were tested in 15 secondary care sites, 15 primary care sites and 15 long-term care settings. The process of development produced indicators that received a high degree of consensus within the Delphi process. Testing of the indicators demonstrated an internal reliability of 0.7 and an external reliability of 0.6. Data collection required significant investment in terms of staff time and training. The method used produced indicators that achieved a high degree of acceptance from health care professionals. The reliability of data collection was high for this audit and was similar to the level seen in other successful national audits. Data collection for the indicators was feasible to collect, however, issues of time and staffing were identified as limitations to such data collection. The study has described a systematic method for developing clinical indicators for national audit. The indicators proved robust and reliable in primary and secondary care as well as long-term care settings.
Ahmed, Rana; Robinson, Ryan; Elsony, Asma; Thomson, Rachael; Squire, S. Bertel; Malmborg, Rasmus; Burney, Peter
2018-01-01
Introduction: Data collection using paper-based questionnaires can be time consuming, and return errors affect data accuracy, completeness, and information quality in health surveys. We compared smartphone and paper-based data collection systems in the Burden of Obstructive Lung Disease (BOLD) study in rural Sudan. Methods: This exploratory pilot study was designed to run in parallel with the cross-sectional household survey. The Open Data Kit was used to programme questionnaires in Arabic into smartphones. We included 100 study participants (83% women; median age = 41.5 ± 16.4 years) from the BOLD study from 3 rural villages in East-Gezira and Kamleen localities of Gezira state, Sudan. Questionnaire data were collected using smartphone and paper-based technologies simultaneously. We used Kappa statistics and the inter-rater class coefficient to test agreement between the two methods. Results: Symptoms reported included cough (24%), phlegm (15%), wheezing (17%), and shortness of breath (18%). One in five were or had been cigarette smokers. The two data collection methods ranged from perfect to slight agreement across the 204 variables evaluated (Kappa varied between 1.00 and 0.02 and the inter-rater coefficient between 1.00 and -0.12). Errors were most commonly seen with paper-administered questionnaires (83% of errors) vs smartphone-administered questionnaires (17%), with questions involving complex skip-patterns being a major source of errors in paper questionnaires. Automated checks and validations in smartphone-administered questionnaires avoided skip-pattern related errors. Incomplete and inconsistent records were more likely seen on paper questionnaires. Conclusion: Compared to paper-based data collection, smartphone technology worked well for data collection in the study, which was conducted in a challenging rural environment in Sudan. This approach provided timely, quality data with fewer errors and inconsistencies compared to paper-based data collection. We recommend this method for future BOLD studies and other population-based studies in similar settings. PMID:29518132
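Agreement between the paper and smartphone entries for a single categorical variable can be checked as described, e.g. with Cohen's kappa; a sketch with invented responses:

```python
from sklearn.metrics import cohen_kappa_score

paper = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
phone = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
print(cohen_kappa_score(paper, phone))   # 1.0 = perfect, ~0 = chance agreement
```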
RAPD analysis of the genetic diversity of mango (Mangifera indica) germplasm in Brazil.
Souza, I G B; Valente, S E S; Britto, F B; de Souza, V A B; Lima, P S C
2011-12-14
We evaluated genetic variability of mango (Mangifera indica) accessions maintained in the Active Germplasm Bank of Embrapa Meio-Norte in Teresina, Piauí, Brazil, using RAPDs. Among these accessions, 35 originated from plantings in Brazil, six from the USA and one from India. Genomic DNA, extracted from leaf material using a commercial purification kit, was subjected to PCR with the primers A01, A09, G03, G10, N05, and M16. Fifty-five polymorphic loci were identified, with a mean of 9.16 ± 3.31 bands per primer and 100% polymorphism. Application of unweighted pair group method with arithmetic average (UPGMA) cluster analysis revealed five genotypic groups among the accessions examined. The genotypes Rosa 41, Rosa 48 and Rosa 49 were highly similar (94% similarity), whereas genotypes Sensation and Rosa 18 were the most divergent (only 7% similarity). The mango accessions were found to have considerable genetic variability, demonstrating the importance of analyzing each genotype in a collection in order to efficiently maintain the germplasm collection.
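UPGMA clustering of RAPD banding profiles corresponds to average-linkage hierarchical clustering on a band-based dissimilarity; a sketch on a toy presence/absence matrix (the distance metric and data are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = accessions, columns = RAPD band presence/absence
bands = np.array([[1, 0, 1, 1, 0],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 1],
                  [0, 1, 1, 1, 1]])
d = pdist(bands, metric="jaccard")       # banding dissimilarity
tree = linkage(d, method="average")      # UPGMA = average linkage
print(fcluster(tree, t=2, criterion="maxclust"))
```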
Precise time series photometry for the Kepler-2.0 mission
NASA Astrophysics Data System (ADS)
Aigrain, S.; Hodgkin, S. T.; Irwin, M. J.; Lewis, J. R.; Roberts, S. J.
2015-03-01
The recently approved NASA K2 mission has the potential to multiply by an order of magnitude the number of short-period transiting planets found by Kepler around bright and low-mass stars, and to revolutionize our understanding of stellar variability in open clusters. However, the data processing is made more challenging by the reduced pointing accuracy of the satellite, which has only two functioning reaction wheels. We present a new method to extract precise light curves from K2 data, combining list-driven, soft-edged aperture photometry with a star-by-star correction of systematic effects associated with the drift in the roll angle of the satellite about its boresight. The systematics are modelled simultaneously with the stars' intrinsic variability using a semiparametric Gaussian process model. We test this method on a week of data collected during an engineering test in 2014 January, perform checks to verify that our method does not alter intrinsic variability signals, and compute the precision as a function of magnitude on long-cadence (30 min) and planetary transit (2.5 h) time-scales. In both cases, we reach photometric precisions close to the precision reached during the nominal Kepler mission for stars fainter than 12th magnitude, and between 40 and 80 parts per million for brighter stars. These results confirm the bright prospects for planet detection and characterization, asteroseismology and stellar variability studies with K2. Finally, we perform a basic transit search on the light curves, detecting two bona fide transit-like events, seven detached eclipsing binaries and 13 classical variables.
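A heavily simplified sketch of the idea of modelling roll-angle systematics jointly with intrinsic variability using a Gaussian process follows; the paper uses its own semiparametric GP model, so sklearn's generic GP regressor is only a stand-in, and the signals below are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
t = np.linspace(0, 7, 300)                                    # time (days)
roll = np.sin(2*np.pi*t/0.25) + 0.05*rng.normal(size=t.size)  # proxy for roll angle
flux = 0.5*np.sin(2*np.pi*t/3.0) + 0.3*roll + 0.02*rng.normal(size=t.size)

# anisotropic RBF over (time, roll angle) plus a white-noise term
X = np.column_stack([t, roll])
kernel = RBF(length_scale=[2.0, 0.5]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, flux)

# crude detrend: predict with time frozen at its mean so only the
# roll-angle dependence varies, then subtract that component
systematics = gp.predict(np.column_stack([np.full_like(t, t.mean()), roll]))
print((flux - systematics).std())
```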
Dewhirst, Oliver P; Roskilly, Kyle; Hubel, Tatjana Y; Jordan, Neil R; Golabek, Krystyna A; McNutt, J Weldon; Wilson, Alan M
2017-02-01
Changes in stride frequency and length with speed are key parameters in animal locomotion research. They are commonly measured in a laboratory on a treadmill or by filming trained captive animals. Here, we show that a clustering approach can be used to extract these variables from data collected by a tracking collar containing a GPS module and tri-axis accelerometers and gyroscopes. The method enables stride parameters to be measured during free-ranging locomotion in natural habitats. As it does not require labelled data, it is particularly suitable for use with difficult to observe animals. The method was tested on large data sets collected from collars on free-ranging lions and African wild dogs and validated using a domestic dog. © 2017. Published by The Company of Biologists Ltd.
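Stride frequency, for instance, can be recovered from a collar accelerometer trace as the dominant spectral peak within a plausible gait band; a sketch (the published method uses clustering rather than a simple FFT, and all parameters here are assumptions):

```python
import numpy as np

def stride_frequency(accel_z, fs, fmin=0.5, fmax=5.0):
    """Dominant frequency (Hz) of a vertical acceleration trace within a gait band.
    accel_z: 1-D acceleration signal; fs: sampling rate in Hz."""
    sig = accel_z - np.mean(accel_z)
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0/fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spec[band])]

fs = 100.0
t = np.arange(0, 10, 1/fs)
print(stride_frequency(np.sin(2*np.pi*2.3*t), fs))   # ~2.3 Hz
```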
Genome-wide regression and prediction with the BGLR statistical package.
Pérez, Paulino; de los Campos, Gustavo
2014-10-01
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.
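The class of large-p-with-small-n regressions the package targets can be written generically (standard notation, assumed here) as

$$y_i = \mu + \sum_{j=1}^{p} x_{ij}\,\beta_j + \varepsilon_i, \qquad p \gg n,$$

with the marker effects β_j assigned shrinkage or variable-selection priors (e.g., Gaussian priors for ridge-type shrinkage, double-exponential priors for the Bayesian LASSO, or mixture priors for variable selection) and the Gibbs sampler updating each β_j in turn.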
Review and discussion of homogenisation methods for climate data
NASA Astrophysics Data System (ADS)
Ribeiro, S.; Caineta, J.; Costa, A. C.
2016-08-01
The quality of climate data is of extreme relevance, since these data are used in many different contexts. However, few climate time series are free from non-natural irregularities. These inhomogeneities are related to the process of collecting, digitising, processing, transferring, storing and transmitting climate data series. For instance, they can be caused by changes of measuring instrumentation, observing practices or relocation of weather stations. In order to avoid errors and bias in the results of analyses that use those data, it is particularly important to detect and remove those non-natural irregularities prior to their use. Moreover, due to the increase of storage capacity, the recent gathering of massive amounts of weather data also implies a laborious effort to guarantee their quality. The process of detection and correction of irregularities is named homogenisation. A comprehensive summary and description of the available homogenisation methods is critical for climatologists and other experts who are looking for the most suitable homogenisation method, since no single method can be considered the best overall. The effectiveness of homogenisation methods depends on the type, temporal resolution and spatial variability of the climatic variable. Several comparison studies have been published so far. However, due to the absence of time series where irregularities are known, only a few of those comparisons indicate the level of success of the homogenisation methods. This article reviews the characteristics of the most important procedures used in the homogenisation of climatic variables, based on a thorough literature search. It also summarises many applications of these methods in order to illustrate their applicability, which may help climatologists and other experts to identify adequate method(s) for their particular needs. This review study also describes comparison studies which evaluated the efficiency of homogenisation methods, and provides a summary of conclusions and lessons learned regarding good practices for the use of homogenisation methods.
Accounting for Heaping in Retrospectively Reported Event Data – A Mixture-Model Approach
Bar, Haim Y.; Lillard, Dean R.
2012-01-01
When event data are retrospectively reported, more temporally distal events tend to get “heaped” on even multiples of reporting units. Heaping may introduce a type of attenuation bias because it causes researchers to mismatch time-varying right-hand side variables. We develop a model-based approach to estimate the extent of heaping in the data, and how it affects regression parameter estimates. We use smoking cessation data as a motivating example, but our method is general. It facilitates the use of retrospective data from the multitude of cross-sectional and longitudinal studies worldwide that collect and potentially could collect event data. PMID:22733577
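The heaping phenomenon itself is easy to reproduce: a sketch in which temporally distal event times are rounded to multiples of 12 months with increasing probability, the kind of reporting mechanism a mixture model would try to estimate (all parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_months = rng.integers(1, 240, size=1000)     # months since event

# probability of heaped reporting grows with recall distance
p_heap = np.clip(true_months / 240.0, 0.0, 0.9)
heaped = rng.random(1000) < p_heap
reported = np.where(heaped, np.round(true_months / 12.0) * 12, true_months)

# heaping signature: excess mass on multiples of 12
print(np.mean(reported % 12 == 0), np.mean(true_months % 12 == 0))
```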
Fu, Haiyan; Fan, Yao; Zhang, Xu; Lan, Hanyue; Yang, Tianming; Shao, Mei; Li, Sihan
2015-01-01
As an effective method, the fingerprint technique, which emphasizes the whole composition of samples, has already been used in various fields, especially in identifying and assessing the quality of herbal medicines. High-performance liquid chromatography (HPLC) and near-infrared (NIR) spectroscopy, with their unique characteristics of reliability, versatility, precision, and simple measurement, play an important role among all the fingerprint techniques. In this paper, a supervised pattern recognition method based on the PLSDA algorithm using HPLC and NIR has been established to identify the information of Hibiscus mutabilis L. and Berberidis radix, two common kinds of herbal medicines. By comparing principal component analysis (PCA), linear discriminant analysis (LDA), and particularly partial least squares discriminant analysis (PLSDA) with different fingerprint preprocessing of NIR spectral variables, the PLSDA model showed perfect performance on the analysis of samples as well as chromatograms. Most importantly, this pattern recognition method by HPLC and NIR can be used to identify different collection parts, collection times, and different origins or various species belonging to the same genera of herbal medicines, which proves to be a promising approach for the identification of the complex information of herbal medicines. PMID:26345990
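PLS-DA is typically implemented as PLS regression onto one-hot class labels followed by an argmax over the predicted columns; a sketch with scikit-learn, where the random feature matrix stands in for HPLC or NIR fingerprints:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 200))            # 60 samples x 200 spectral variables
y = np.repeat(["origin_A", "origin_B", "origin_C"], 20)
X[:20] += 0.5                             # give class A a spectral offset

lb = LabelBinarizer()
Y = lb.fit_transform(y)                   # one-hot encode the classes
pls = PLSRegression(n_components=5).fit(X, Y)
pred = lb.classes_[np.argmax(pls.predict(X), axis=1)]
print((pred == y).mean())                 # training accuracy of the PLSDA model
```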
Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina
2018-01-01
Collecting anthropometric data for real-life applications demands a high degree of precision and reliability. It is important to test new equipment that will be used for data collection. OBJECTIVE: Compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner. They were evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
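The Technical Error of Measurement used here has a standard form for two repeated measurement series, TEM = sqrt(Σd²/2N), with the relative TEM expressed as a percentage of the grand mean; a sketch (measurement values are invented):

```python
import numpy as np

def tem(m1, m2):
    """Technical Error of Measurement for two repeated measurement series."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d**2) / (2 * d.size))

def relative_tem(m1, m2):
    """Relative TEM (%) = TEM / grand mean * 100."""
    grand_mean = (np.mean(m1) + np.mean(m2)) / 2
    return 100 * tem(m1, m2) / grand_mean

waist_tape = [72.1, 80.4, 65.3, 90.2]     # cm, manual method
waist_scan = [72.9, 79.8, 66.1, 91.0]     # cm, 3D scanner
print(tem(waist_tape, waist_scan), relative_tem(waist_tape, waist_scan))
```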
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
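A sketch of the CART side of such a comparison with scikit-learn, on synthetic accident counts (variable names and the generating model are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 800
adt = rng.uniform(10_000, 120_000, n)       # average daily traffic
precip = rng.uniform(0, 3000, n)            # annual precipitation (mm)
curvature = rng.uniform(0, 1, n)            # geometric factor

lam = 0.5 + adt/60_000 + precip/2000 + 0.5*curvature
counts = rng.poisson(lam)                    # accident frequency per segment

X = np.column_stack([adt, precip, curvature])
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=30).fit(X, counts)
print(dict(zip(["ADT", "precip", "curvature"], tree.feature_importances_)))
```

No functional form between predictors and accident frequency is pre-specified, which is the property the abstract highlights.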
Linear Modeling and Evaluation of Controls on Flow Response in Western Post-Fire Watersheds
NASA Astrophysics Data System (ADS)
Saxe, S.; Hogue, T. S.; Hay, L.
2015-12-01
This research investigates the impact of wildfires on watershed flow regimes throughout the western United States, specifically focusing on evaluation of fire events within specified subregions and determination of the impact of climate and geophysical variables in post-fire flow response. Fire events were collected through federal and state-level databases and streamflow data were collected from U.S. Geological Survey stream gages. 263 watersheds were identified with at least 10 years of continuous pre-fire daily streamflow records and 5 years of continuous post-fire daily flow records. For each watershed, percent changes in runoff ratio (RO), annual seven day low-flows (7Q2) and annual seven day high-flows (7Q10) were calculated from pre- to post-fire. Numerous independent variables were identified for each watershed and fire event, including topographic, land cover, climate, burn severity, and soils data. The national watersheds were divided into five regions through K-clustering and a lasso linear regression model, applying the Leave-One-Out calibration method, was calculated for each region. Nash-Sutcliffe Efficiency (NSE) was used to determine the accuracy of the resulting models. The regions encompassing the United States along and west of the Rocky Mountains, excluding the coastal watersheds, produced the most accurate linear models. The Pacific coast region models produced poor and inconsistent results, indicating that the regions need to be further subdivided. Presently, RO and HF response variables appear to be more easily modeled than LF. Results of linear regression modeling showed varying importance of watershed and fire event variables, with conflicting correlation between land cover types and soil types by region. The addition of further independent variables and constriction of current variables based on correlation indicators is ongoing and should allow for more accurate linear regression modeling.
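A sketch of a lasso model with leave-one-out calibration, as described, using scikit-learn; the watershed predictors and coefficients below are invented stand-ins:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n, p = 60, 12                              # watersheds x candidate variables
X = rng.normal(size=(n, p))                # burn severity, slope, climate, ...
beta = np.zeros(p)
beta[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(0, 0.5, n)       # % change in runoff ratio

Xs = StandardScaler().fit_transform(X)
model = LassoCV(cv=LeaveOneOut()).fit(Xs, y)   # leave-one-out calibration
print(model.alpha_, np.nonzero(model.coef_)[0])
```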
Smith, Kristen W.; Braun, Joe M.; Williams, Paige L.; Ehrlich, Shelley; Correia, Katharine F.; Calafat, Antonia M.; Ye, Xiaoyun; Ford, Jennifer; Keller, Myra; Meeker, John D.
2012-01-01
Background: Parabens are suspected endocrine disruptors and ubiquitous preservatives used in personal care products, pharmaceuticals, and foods. No studies have assessed the variability of parabens in women, including during pregnancy. Objective: We evaluated predictors and variability of urinary paraben concentrations. Methods: We measured urinary concentrations of methyl (MP), propyl (PP), and butyl paraben (BP) among couples from a fertility center. Mixed-effects regression models were fit to examine demographic predictors of paraben concentrations and to calculate intraclass correlation coefficients (ICCs). Results: Between 2005 and 2010, we collected 2,721 spot urine samples from 245 men and 408 women. The median concentrations were 112 µg/L (MP), 24.2 µg/L (PP), and 0.70 µg/L (BP). Urinary MP and PP concentrations were 4.6 and 7.8 times higher in women than men, respectively, and concentrations of both MP and PP were 3.8 times higher in African Americans than Caucasians. MP and PP concentrations were slightly more variable in women (ICC = 0.42, 0.43) than men (ICC = 0.54, 0.51), and were weakly correlated between partners (r = 0.27–0.32). Among 129 pregnant women, urinary paraben concentrations were 25–45% lower during pregnancy than before pregnancy, and MP and PP concentrations were more variable (ICCs of 0.38 and 0.36 compared with 0.46 and 0.44, respectively). Conclusions: Urinary paraben concentrations were more variable in women compared with men, and during pregnancy compared with before pregnancy. However, results for this study population suggest that a single urine sample may reasonably represent an individual’s exposure over several months, and that a single sample collected during pregnancy may reasonably classify gestational exposure. PMID:22721761
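The ICC reported here is the between-subject share of total variance; assuming the simple one-way random-effects form (the study used mixed-effects models, so this is only a sketch), it can be computed from repeated spot samples as follows, with invented log-transformed concentrations:

```python
import numpy as np

def icc_oneway(samples_by_subject):
    """One-way random-effects ICC from equal-size repeated measures.
    samples_by_subject: 2-D array, rows = subjects, cols = repeats."""
    x = np.asarray(samples_by_subject, float)
    n, k = x.shape
    grand = x.mean()
    msb = k * np.sum((x.mean(axis=1) - grand)**2) / (n - 1)                  # between subjects
    msw = np.sum((x - x.mean(axis=1, keepdims=True))**2) / (n * (k - 1))     # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# toy: 5 subjects x 3 spot samples of log(MP) concentration
logs = np.log([[100, 130, 90], [20, 25, 30], [400, 380, 500],
               [60, 45, 70], [150, 90, 200]])
print(icc_oneway(logs))
```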
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reboul, S. H.
Two surface samples (HTF-10-17-30 and HTF-10-17-31) and two variable depth samples (HTF-10-17-32 and HTF-10-17-33) were collected from SRS Tank 10 during March 2017 and submitted to SRNL for characterization. At SRNL, the two surface samples were combined in one container, the two variable depth samples (VDSs) were combined in another container, and then the two composite samples were each characterized by a series of physical, ionic, radiological, and elemental analysis methods. The surface sample composite was characterized primarily for Tank Farm corrosion control purposes, while the VDS composite was characterized primarily for Tank Closure Cesium Removal (TCCR) purposes.
Consumer Perception and Buying Decisions(The Pasta Study)
NASA Astrophysics Data System (ADS)
Kazmi, Syeda Quratulain
2012-11-01
The project "Consumer perception and buying behavior (the pasta study)" basically measures the development of perception through different variables and identifies those factors which stimulate the buying decision of the consumer. Among the various variables which affect consumer buying patterns, I chose AWARENESS and AVAILABILITY of the product as the two main variables which have a strong effect on the popularity and sale of pasta products. As my research is entirely based on a qualitative method, I chose the quota sampling technique and collected data by interviewing housewives residing in different areas of Karachi. The reason for choosing only housewives as respondents is that housewives can give true insight into the factors which hinder the popularity of pasta products in Pakistan. Focus group discussions were conducted to extract findings. 30 housewives were interviewed and their responses analyzed.
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás.
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions made by right of experience and intuition alone, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically defensible way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of multiple collection points and multi-level collection has not been taken into consideration. The possible areas of use of energy grass are very wide (energetic use, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities; (2) capacity constraints are taken into account; (3) the cost function of transportation is non-linear; (4) the drivers' conditions are ignored. The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost; it trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than the requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total cost of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm.
The optimal routes are calculated with the aid of the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support from the European Union and co-funding from the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
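A compact sketch of the outer genetic-algorithm layer described above, assigning transportation demands to resources with profit as the fitness, follows. The cost and revenue models are toys, and the capacity/timing constraints and the inner ant-colony routing subroutine are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
n_demands, n_resources = 12, 4
revenue = rng.uniform(80, 120, n_demands)              # income per served demand
cost = rng.uniform(10, 60, (n_demands, n_resources))   # demand-to-resource cost

def fitness(assign):
    # profit = revenue - cost; each demand is served by exactly one resource
    return revenue.sum() - cost[np.arange(n_demands), assign].sum()

def evolve(pop_size=40, n_gen=200, p_mut=0.1):
    pop = rng.integers(0, n_resources, (pop_size, n_demands))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        # tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # one-point crossover between each parent and a shuffled partner
        cut = rng.integers(1, n_demands, pop_size)
        mask = np.arange(n_demands) < cut[:, None]
        children = np.where(mask, parents, parents[::-1])
        # mutation: reassign a demand to a random resource
        mut = rng.random(children.shape) < p_mut
        children[mut] = rng.integers(0, n_resources, mut.sum())
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

print(evolve())
```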
Developing a framework for evaluating tallgrass prairie reconstruction methods and management
Larson, Diane L.; Ahlering, Marissa; Drobney, Pauline; Esser, Rebecca; Larson, Jennifer L.; Viste-Sparkman, Karen
2018-01-01
The thousands of hectares of prairie reconstructed each year in the tallgrass prairie biome can provide a valuable resource for evaluation of seed mixes, planting methods, and post-planting management if methods used and resulting characteristics of the prairies are recorded and compiled in a publicly accessible database. The objective of this study was to evaluate the use of such data to understand the outcomes of reconstructions over a 10-year period at two U.S. Fish and Wildlife Service refuges. Variables included number of species planted, seed source (combine-harvest or combine-harvest plus hand-collected), fire history, and planting method and season. In 2015 we surveyed vegetation on 81 reconstructions and calculated proportion of planted species observed; introduced species richness; native species richness, evenness and diversity; and mean coefficient of conservatism. We conducted exploratory analyses to learn how implied communities based on seed mix compared with observed vegetation; which seeding or management variables were influential in the outcome of the reconstructions; and consistency of responses between the two refuges. Insights from this analysis include: 1) proportion of planted species observed in 2015 declined as planted richness increased, but lack of data on seeding rate per species limited conclusions about value of added species; 2) differing responses to seeding and management between the two refuges suggest the importance of geographic variability that could be addressed using a public database; and 3) variables such as fire history are difficult to quantify consistently and should be carefully evaluated in the context of a public data repository.
Macaluso, P J
2011-02-01
Digital photogrammetric methods were used to collect diameter, area, and perimeter data of the acetabulum for a twentieth-century skeletal sample from France (Georges Olivier Collection, Musée de l'Homme, Paris) consisting of 46 males and 36 females. The measurements were then subjected to both discriminant function and logistic regression analyses in order to develop osteometric standards for sex assessment. Univariate discriminant functions and logistic regression equations yielded overall correct classification accuracy rates for both the left and the right acetabula ranging from 84.1% to 89.6%. The multivariate models developed in this study did not provide increased accuracy over those using only a single variable. Classification sex bias ratios ranged between 1.1% and 7.3% for the majority of models. The results of this study, therefore, demonstrate that metric analysis of acetabular size provides a highly accurate, and easily replicable, method of discriminating sex in this documented skeletal collection. The results further suggest that the addition of area and perimeter data derived from digital images may provide a more effective method of sex assessment than that offered by traditional linear measurements alone. Copyright © 2010 Elsevier GmbH. All rights reserved.
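A sketch of the univariate logistic-regression side of such an analysis, on synthetic acetabular diameters (the published coefficients and measurements are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
# toy acetabular diameters (mm): male acetabula tend to be larger
diam_m = rng.normal(58, 2.5, 46)
diam_f = rng.normal(52, 2.5, 36)
X = np.concatenate([diam_m, diam_f]).reshape(-1, 1)
y = np.array([1]*46 + [0]*36)          # 1 = male, 0 = female

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))                  # overall correct classification rate
print(clf.predict_proba([[55.0]]))      # sex probabilities for a new specimen
```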
Blanchin, Myriam; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Blanchard, Claire; Mirallié, Eric; Sébille, Véronique
2011-04-15
Health sciences frequently deal with Patient Reported Outcomes (PRO) data for the evaluation of concepts, in particular health-related quality of life, which cannot be directly measured and are often called latent variables. Two approaches are commonly used for the analysis of such data: Classical Test Theory (CTT) and Item Response Theory (IRT). Longitudinal data are often collected to analyze the evolution of an outcome over time. The most adequate strategy to analyze longitudinal latent variables, which can be either based on CTT or IRT models, remains to be identified. This strategy must take into account the latent characteristic of what PROs are intended to measure as well as the specificity of longitudinal designs. A simple and widely used IRT model is the Rasch model. The purpose of our study was to compare CTT and Rasch-based approaches to analyze longitudinal PRO data regarding type I error, power, and time effect estimation bias. Four methods were compared: the Score and Mixed models (SM) method based on the CTT approach, the Rasch and Mixed models (RM), the Plausible Values (PV), and the Longitudinal Rasch model (LRM) methods all based on the Rasch model. All methods have shown comparable results in terms of type I error, all close to 5 per cent. LRM and SM methods presented comparable power and unbiased time effect estimations, whereas RM and PV methods showed low power and biased time effect estimations. This suggests that RM and PV methods should be avoided to analyze longitudinal latent variables. Copyright © 2010 John Wiley & Sons, Ltd.
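For reference, the Rasch model underlying the RM, PV, and LRM methods gives the probability that subject i with latent trait θ_i endorses item j (standard notation):

$$P\left(X_{ij} = 1 \mid \theta_i\right) = \frac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}},$$

where b_j is the item difficulty; longitudinal extensions let θ_i evolve over time, with a mixed model capturing the time effect.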
Rand E. Eads; Mark R. Boolootian; Steven C. Hankin [Inventors]
1987-01-01
A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harwood, Caroline S
The goal of this project is to identify gene networks that are critical for efficient biohydrogen production by leveraging variation in gene content and gene expression in independently isolated Rhodopseudomonas palustris strains. Coexpression methods were applied to large data sets that we have collected to define probabilistic causal gene networks. To our knowledge, this is the first systems-level approach that takes advantage of strain-to-strain variability to computationally define networks critical for a particular bacterial phenotypic trait.
Marinelli, Fabrizio
2013-01-01
In this work a new method for the automatic exploration and calculation of multidimensional free energy landscapes is proposed. Inspired by metadynamics, it uses several collective variables that are relevant for the investigated process and a bias potential that discourages the sampling of already visited configurations. The latter potential allows escaping a local free energy minimum following the direction of slow motions. This is different from metadynamics in which there is no specific direction of the biasing force and the computational effort increases significantly with the number of collective variables. The method is tested on the Ace-Ala3-Nme peptide, and then it is applied to investigate the Trp-cage folding mechanism. For this protein, within a few hundreds of nanoseconds, a broad range of conformations is explored, including nearly native ones, initiating the simulation from a completely unfolded conformation. Finally, several folding/unfolding trajectories give a systematic description of the Trp-cage folding pathways, leading to a unified view for the folding mechanisms of this protein. The proposed mechanism is consistent with NMR chemical shift data at increasing temperature and recent experimental observations pointing to a pivotal role of secondary structure elements in directing the folding process toward the native state. PMID:24010667
Doidge, James C
2018-02-01
Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
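A minimal sketch of the weighting idea with responsiveness as an auxiliary predictor of response follows; the variable names and generating model are illustrative, not the paper's estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
responsiveness = rng.uniform(0, 1, n)          # e.g. fraction of prior waves completed
y = rng.normal(2.0 + responsiveness, 1.0, n)   # outcome related to responsiveness
observed = rng.random(n) < 0.2 + 0.7*responsiveness   # response depends on responsiveness

# model the probability of response, then weight observed cases by its inverse
ps = LogisticRegression().fit(responsiveness.reshape(-1, 1), observed)
p_resp = ps.predict_proba(responsiveness.reshape(-1, 1))[:, 1]
w = 1.0 / p_resp[observed]

print(y.mean())                                # full-sample truth
print(y[observed].mean())                      # biased complete-case mean
print(np.average(y[observed], weights=w))      # IPW-corrected mean
```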
Martini, Daniela; Angelino, Donato; Cortelazzi, Chiara; Zavaroni, Ivana; Bedogni, Giorgio; Musci, Marilena; Pruneti, Carlo; Passeri, Giovanni; Ventura, Marco; Galli, Daniela; Mirandola, Prisco; Vitale, Marco; Dei Cas, Alessandra; Bonadonna, Riccardo C; Di Nuzzo, Sergio; De Felici, Maria Beatrice; Del Rio, Daniele
2017-12-22
Evidence suggests a protective role for several nutrients and foods in the maintenance of skin function. Nevertheless, all the requests for authorization to use health claims under Article 13(5) in the framework of maintenance of skin function presented to the European Food Safety Authority (EFSA) have received a negative opinion. Reasons for such failures are mainly due to an insufficient substantiation of the claimed effects, including the choice of inappropriate outcome variables (OVs) and methods of measurement (MMs). The present paper reports the results of an investigation aimed at collecting, collating and critically analyzing the information with relation to claimed effects (CEs), OVs and MMs related to skin health, in compliance with Regulation 1924/2006. CEs, OVs and MMs were collected from both the EFSA Guidance document and from the authorization requests of health claims under Article 13(5). The critical analysis of OVs and MMs was based on a literature review, and was aimed at defining their appropriateness (alone or in combination with others) in the context of a specific CE. The results highlight the importance of an adequate choice of OVs and MMs for an effective substantiation of the claims.
Martini, Daniela; Innocenti, Augusto; Cosentino, Chiara; Bedogni, Giorgio; Angelino, Donato; Biasini, Beatrice; Zavaroni, Ivana; Ventura, Marco; Galli, Daniela; Mirandola, Prisco; Vitale, Marco; Dei Cas, Alessandra; Bonadonna, Riccardo C; Passeri, Giovanni; Pruneti, Carlo; Del Rio, Daniele
2018-02-14
Adequate visual function has a strong impact on the quality of life of people. Several foods and food components have been hypothesized to play a role in the maintenance of normal visual function and in the prevention of eye diseases. Some of these foods/food components have been the object of a request of authorization for use of health claims under Articles 13(5) or 14 of the Regulation (EC) 1924/2006. Most of these requests have received a negative opinion from the European Food Safety Authority (EFSA) due to the choice of inappropriate outcome variables (OVs) and/or methods of measurement (MMs) applied in the studies used to substantiate the claims. This manuscript refers to the collection, collation and critical analysis of OVs and MMs related to vision. Guidance document and requests for authorization of health claims were used to collect OVs and MMs related to vision. A literature review was performed to critically analyse OVs and MMs, with the aim of defining their appropriateness in the context of a specific claimed effect related to vision. The results highlight the importance of adequate choices of OVs and MMs for an effective substantiation of claims related to visual function.
Stagg, Camille L.; Sharp, Leigh A.; McGinnis, Thomas E.; Snedden, Gregg A.
2013-01-01
Since its implementation in 2003, the Coastwide Reference Monitoring System (CRMS) in Louisiana has facilitated the creation of a comprehensive dataset that includes, but is not limited to, vegetation, hydrologic, and soil metrics on a coastwide scale. The primary impetus for this data collection is to assess land management activities, including restoration efforts, across the coast. The aim of the CRMS analytical team is to provide a method to synthesize these data to enable multiscale evaluations of activities in Louisiana’s coastal wetlands. Several indices have been developed to facilitate data synthesis and interpretation, including a Floristic Quality Index, a Hydrologic Index, and a Landscape Index. This document details the development of the Submergence Vulnerability Index, which incorporates sediment-elevation data as well as hydrologic data to determine the vulnerability of a wetland based on its ability to keep pace with sea-level rise. The objective of this document is to provide Federal and State sponsors, project managers, planners, landowners, data users, and the rest of the coastal restoration community with the following: (1) data collection and model development methods for the sediment-elevation response variables, and (2) a description of how these response variables will be used to evaluate CWPPRA project and program effectiveness.
Friend, M.
1999-01-01
The previous chapters provide information about some of the chemical toxins that have lethal effects on wild birds. The material presented in Section 7, Chemical Toxins, is far from comprehensive because wild birds are poisoned by a wide variety of toxic substances. Also, monitoring of wild bird mortality is not yet organized so that diagnostic findings can be extended to reflect the relative impacts among the types of toxins, within populations, or among species, geographic areas, and time. The data that are available are not collectively based on random sampling, nor do specimen collection and submission follow systematic assessment protocols. Instead, most data simply document individual bird poisoning events. The inherent biases in this information include the species of birds observed dead (large birds in open areas are more likely to be observed dead than small forest birds); the species of birds likely to be submitted for analysis (bald eagles are more likely to be submitted than house sparrows); collection sites (agricultural fields are more likely to be observed than urban environments); geographic area of the country; season; reasons for submissions; and other variables. Nevertheless, findings from individual events reflect the causes of mortality associated with those events and collectively identify chemical toxins that repeatedly cause bird mortalities resulting in carcass collection and submission.
Weinzierl, Michael S.; Reich, Christopher D.; Hickey, T. Donald; Bartlett, Lucy A.; Kuffner, Ilsa B.
2016-11-29
Cores from living coral colonies were collected from Dry Tortugas National Park, Florida, U.S.A., to obtain skeletal records of past coral growth and allow geochemical reconstruction of environmental variables during the corals’ centuries-long lifespans. The samples were collected as part of the U.S. Geological Survey Coral Reef Ecosystems Studies project (http://coastal.er.usgs.gov/crest) that provides science to assist resource managers tasked with the stewardship of coral reef resources. Three colonies each of the coral species Orbicella faveolata and Siderastrea siderea were collected in May 2012 using the methods described herein and as approved under National Park Service scientific collecting permit number DRTO-2012-SCI-0001 and are cataloged under accession number DRTO-353. These coral samples can be used to retrospectively reconstruct environmental parameters, including sea-surface temperature, by measuring the elemental composition of the coral skeleton. The cores described here, and others (see http://olga.er.usgs.gov/coreviewer/), can be requested, on loan, for scientific study. Photographic images for each coral in its ocean environment, the coral cores as curated and slabbed, and the X-rays of the slabs can be found in an associated U.S. Geological Survey Data Release.
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B; D'Astous, Jacques L
2013-01-01
Several multisegment foot models have been proposed and some have been used to study foot pathologies. These models have been tested and validated on typically developed populations; however, application of such models to feet with significant deformities presents an additional set of challenges. For the first time, in this study, a multisegment foot model is tested for repeatability in a population of children with symptomatic abnormal feet. The results from this population are compared to the same metrics collected from an age-matched (8-14 years) typically developing population. The modified Shriners Hospitals for Children, Greenville (mSHCG) foot model was applied to ten typically developing children and eleven children with planovalgus feet by two clinicians. Five subjects in each group were retested by both clinicians after 4-6 weeks. Both intra-clinician and inter-clinician repeatability were evaluated using static and dynamic measures. A plaster mold method was used to quantify variability arising from marker placement error. Dynamic variability was measured by examining trial differences from the same subjects when multiple clinicians carried out the data collection multiple times. For hindfoot and forefoot angles, static and dynamic variability in both groups was found to be less than 4° and 6°, respectively. The mSHCG model strategy of minimal reliance on anatomical markers for dynamic measures and inherent flexibility enabled by separate anatomical and technical coordinate systems resulted in a model equally repeatable in typically developing and planovalgus populations. Copyright © 2012 Elsevier B.V. All rights reserved.
Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes
2016-12-01
The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically in 1987, 2002, 2008 and 2012, the time series on emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions appear constant due to the application of periodic average emission factors. This does not reflect inter-annual variability in the time series, which would be expected because the drivers of carbon stock change fluctuate between years. Therefore additional data, which are available on an annual basis, should be introduced into the calculation of the emission inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodical approach to calculating emission factors with the given data and thus smoothing the trajectory of time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally incorporates growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
Farnham, David J; Gibson, Rebecca A; Hsueh, Diana Y; McGillis, Wade R; Culligan, Patricia J; Zain, Nina; Buchanan, Rob
2017-02-15
To protect recreational water users from waterborne pathogen exposure, it is crucial that waterways are monitored for the presence of harmful bacteria. In NYC, a citizen science campaign is monitoring waterways impacted by inputs of storm water and untreated sewage during periods of rainfall. However, the spatial and temporal scales over which the monitoring program can sample are constrained by cost and time, thus hindering the construction of databases that benefit both scientists and citizens. In this study, we first illustrate the scientific value of a citizen scientist monitoring campaign by using the data collected through the campaign to characterize the seasonal variability of sampled bacterial concentration as well as its response to antecedent rainfall. Second, we examine the efficacy of the HyServe Compact Dry ETC method, a lower-cost and time-efficient alternative to the EPA-approved IDEXX Enterolert method for fecal indicator monitoring, through a paired sample comparison of IDEXX and HyServe (total of 424 paired samples). The HyServe and IDEXX methods return the same result for over 80% of the samples with regard to whether a water sample is above or below the EPA's recreational water quality criteria for a single sample of 110 enterococci per 100 mL. The HyServe method classified as unsafe 90% of the 119 water samples that were classified as having unsafe enterococci concentrations by the more established IDEXX method. This study seeks to encourage other scientists to engage with citizen scientist communities and to also pursue the development of cost- and time-efficient methodologies to sample environmental variables that are not easily collected or analyzed in an automated manner. Copyright © 2016 Elsevier B.V. All rights reserved.
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B
2012-04-01
Several multi-segment foot models to measure the motion of intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to a lack of rigorous validation, including inter-clinician and inter-laboratory variability measures. A model with thoroughly quantified variability may significantly improve the confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is determined during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model is comprised of the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: ten subjects by two clinicians; Site 2: five subjects by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparisons of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability were less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability were less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. Results indicate the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models. Copyright © 2011 Elsevier B.V. All rights reserved.
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English-language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using an SDV auditing method. In total, 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given of what constitutes poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined as "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently, airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR-based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
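One way to make "coverage of the variability" concrete is a greedy max-min (k-center) selection over LiDAR feature space. This is a hedged illustration of the kind of criterion compared in the study, not its exact algorithm; the feature matrix is simulated.

```python
import numpy as np

def maxmin_select(features: np.ndarray, n_plots: int, seed: int = 0) -> list[int]:
    """Greedily pick plots so each new plot is farthest (in feature space)
    from all plots already chosen, spreading coverage over the feature space."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(features)))]
    dists = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < n_plots:
        nxt = int(np.argmax(dists))              # farthest remaining candidate
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

# Hypothetical plot-level LiDAR features, e.g. height percentiles and density.
lidar_features = np.random.default_rng(1).normal(size=(500, 4))
print(maxmin_select(lidar_features, n_plots=30)[:10])
```

A spatial-representativity criterion could be obtained the same way by running the selection on plot coordinates instead of (or jointly with) the feature columns.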
2012-01-01
Background Documentation of posture measurement costs is rare and cost models that do exist are generally naïve. This paper provides a comprehensive cost model for biomechanical exposure assessment in occupational studies, documents the monetary costs of three exposure assessment methods for different stakeholders in data collection, and uses simulations to evaluate the relative importance of cost components. Methods Trunk and shoulder posture variables were assessed for 27 aircraft baggage handlers for 3 full shifts each using three methods typical of ergonomic studies: self-report via questionnaire, observation via video film, and full-shift inclinometer registration. The cost model accounted for expenses related to meetings to plan the study, administration, recruitment, equipment, training of data collectors, travel, and onsite data collection. Sensitivity analyses were conducted using simulated study parameters and cost components to investigate the impact on total study cost. Results Inclinometry was the most expensive method (with a total study cost of € 66,657), followed by observation (€ 55,369) and then self-report (€ 36,865). The majority of costs (90%) were borne by researchers. Study design parameters such as sample size, measurement scheduling and spacing, concurrent measurements, location and travel, and equipment acquisition were shown to have wide-ranging impacts on costs. Conclusions This study provided a general cost modeling approach that can facilitate decision making and planning of data collection in future studies, as well as investigation into cost efficiency and cost-efficient study design. Empirical cost data from a large field study demonstrated the usefulness of the proposed models. PMID:22738341
Nie, Xiaolu; Zhang, Ying; Wu, Zehao; Jia, Lulu; Wang, Xiaoling; Langan, Sinéad M; Benchimol, Eric I; Peng, Xiaoxia
2018-06-01
To appraise the reporting quality of studies of linezolid-related thrombocytopenia with reference to the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement. Medline, Embase, the Cochrane Library and clinicaltrials.gov were searched for observational studies concerning linezolid-related thrombocytopenia using routinely collected health data from 2000 to 2017. Two reviewers screened potentially eligible articles and extracted data independently. Finally, reporting quality assessment was performed by two senior researchers using the RECORD statement. Of 25 included studies, 11 (44.0%) mentioned the type of data in the title and/or abstract. Of the 38 items derived from the RECORD statement, the median number of items reported in the included studies was 22 (interquartile range (IQR) 18 to 27). Inadequate reporting was discovered in the following aspects: validation studies of the codes or algorithms; study size estimation; quantitative variables; subgroup statistical methods; missing data; follow-up, matching or sampling strategy; sensitivity analysis and cleaning methods; funding and the role of funders; and accessibility of the protocol and raw data. This study provides evidence that the reporting quality of post-marketing safety evaluation studies conducted using routinely collected health data is often insufficient. Future stakeholders are encouraged to endorse the RECORD guidelines in pharmacovigilance.
Akmatov, Manas K; Koch, Nadine; Vital, Marius; Ahrens, Wolfgang; Flesch-Janys, Dieter; Fricke, Julia; Gatzemeier, Anja; Greiser, Halina; Günther, Kathrin; Illig, Thomas; Kaaks, Rudolf; Krone, Bastian; Kühn, Andrea; Linseisen, Jakob; Meisinger, Christine; Michels, Karin; Moebus, Susanne; Nieters, Alexandra; Obi, Nadia; Schultze, Anja; Six-Merker, Julia; Pieper, Dietmar H; Pessler, Frank
2017-05-12
We examined acceptability, preference and feasibility of collecting nasal and oropharyngeal swabs, followed by microbiome analysis, in a population-based study with 524 participants. Anterior nasal and oropharyngeal swabs were collected by certified personnel. In addition, participants self-collected nasal swabs at home four weeks later. Four swab types were compared regarding (1) participants' satisfaction and acceptance and (2) detection of microbial community structures based on deep sequencing of the 16S rRNA gene V1-V2 variable regions. All swabbing methods were highly accepted. Microbial community structure analysis revealed 846 phylotypes, 46 of which were unique to the oropharynx and 164 unique to the nares. The calcium alginate tipped swab was found unsuitable for microbiome determinations. Among the remaining three swab types, there were no differences in oropharyngeal microbiomes detected and only marginal differences in nasal microbiomes. Microbial community structures did not differ between staff-collected and self-collected nasal swabs. These results suggest (1) that nasal and oropharyngeal swabbing are highly feasible methods for human population-based studies that include the characterization of microbial community structures in these important ecological niches, and (2) that self-collection of nasal swabs at home can be used to reduce cost and resources needed, particularly when serial measurements are to be taken.
Field, Christopher R.; Lubrano, Adam; Woytowitz, Morgan; Giordano, Braden C.; Rose-Pehrsson, Susan L.
2014-01-01
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples. PMID:25145416
Ebert, Jonas Fynboe; Huibers, Linda; Christensen, Bo; Christensen, Morten Bondo
2018-01-23
Paper questionnaires have traditionally been the first choice for data collection in research. However, declining response rates over the past decade have increased the risk of selection bias in cross-sectional studies. The growing use of the Internet offers new ways of collecting data, but trials using Web-based questionnaires have so far seen mixed results. A secure, online digital mailbox (e-Boks) linked to a civil registration number became mandatory for all Danish citizens in 2014 (exemption granted only in extraordinary cases). Approximately 89% of the Danish population have a digital mailbox, which is used for correspondence with public authorities. We aimed to compare response rates, completeness of data, and financial costs for different invitation methods: traditional surface mail and digital mail. We designed a cross-sectional comparative study. An invitation to participate in a survey on help-seeking behavior in out-of-hours care was sent to two groups of randomly selected citizens from age groups 30-39 and 50-59 years and parents to those aged 0-4 years using either traditional surface mail (paper group) or digital mail sent to a secure online mailbox (digital group). Costs per respondent were measured by adding up all costs for handling, dispatch, printing, and work salary and then dividing the total figure by the number of respondents. Data completeness was assessed by comparing the number of missing values between the two methods. Socioeconomic variables (age, gender, family income, education duration, immigrant status, and job status) were compared both between respondents and nonrespondents and within these groups to evaluate the degree of selection bias. A total of 3600 citizens were invited in each group; 1303 (36.29%) responded to the digital invitation and 1653 (45.99%) to the paper invitation (difference 9.66%, 95% CI 7.40-11.92). The costs were €1.51 per respondent for the digital group and €15.67 per respondent for the paper group. Paper questionnaires generally had more missing values; this was significant in five of 17 variables (P<.05). Substantial differences were found in the socioeconomic variables between respondents and nonrespondents, whereas only minor differences were seen within the groups of respondents and nonrespondents. Although we found lower response rates for Web-based invitations, this solution was more cost-effective (by a factor of 10) and had slightly lower numbers of missing values than questionnaires sent with paper invitations. Analyses of socioeconomic variables showed almost no difference between nonrespondents in both groups, which could imply that the lower response rate in the digital group does not necessarily increase the level of selection bias. Invitations to questionnaire studies via digital mail may be an excellent option for collecting research data in the future. This study may serve as the foundational pillar of digital data collection in health care research in Scandinavia and other countries considering implementing similar systems. ©Jonas Fynboe Ebert, Linda Huibers, Bo Christensen, Morten Bondo Christensen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 23.01.2018.
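The headline comparison can be checked directly from the reported counts. The standard Wald interval below (not taken from the paper) nearly reproduces the published 9.66% (95% CI 7.40-11.92); the small discrepancy suggests slightly different denominators in the original analysis.

```python
import math

n1, x1 = 3600, 1653   # paper invitations, respondents
n2, x2 = 3600, 1303   # digital invitations, respondents
p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
# -> difference = 9.72%, 95% CI (7.46%, 11.98%)
```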
Measuring the implementation of early childhood development programs.
Aboud, Frances E; Prado, Elizabeth L
2018-05-01
In this paper we describe ways to measure variables of interest when evaluating the implementation of a program to improve early childhood development (ECD). The variables apply to programs delivered to parents in group sessions and home or clinic visits, as well as in early group care for children. Measurements for four categories of variables are included: training and assessment of delivery agents and supervisors; program features such as quality of delivery, reach, and dosage; recipients' acceptance and enactment; and stakeholders' engagement. Quantitative and qualitative methods are described, along with when measures might be taken throughout the processes of planning, preparing, and implementing. A few standard measures are available, along with others that researchers can select and modify according to their goals. Descriptions of measures include who might collect the information, from whom, and when, along with how information might be analyzed and findings used. By converging on a set of common methods to measure implementation variables, investigators can work toward improving programs, identifying gaps that impede the scalability and sustainability of programs, and, over time, ascertain program features that lead to successful outcomes. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
Tacholess order-tracking approach for wind turbine gearbox fault detection
NASA Astrophysics Data System (ADS)
Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang
2017-09-01
Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the industrial use of spectral analysis for wind turbine diagnosis. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
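A minimal sketch of the pipeline the abstract outlines, under the hypothetical assumption that a single order-3 component (e.g., a gear harmonic) dominates the signal: estimate the instantaneous phase with the Hilbert transform, resample at equiangular increments, and read the order spectrum. Real implementations band-pass filter around the tracked component first; this is an illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import hilbert

fs = 2000.0
t = np.arange(0, 10, 1 / fs)
speed_hz = 10 + 2 * np.sin(0.5 * t)                    # variable shaft speed (rev/s)
shaft_phase_true = 2 * np.pi * np.cumsum(speed_hz) / fs
vib = np.cos(3 * shaft_phase_true)                     # order-3 vibration component
vib += 0.05 * np.random.default_rng(0).normal(size=t.size)

# 1) Instantaneous phase of the tracked component via the analytic signal.
comp_phase = np.unwrap(np.angle(hilbert(vib)))
shaft_phase = comp_phase / 3.0                          # divide by the tracked order

# 2) Resample the signal at equiangular increments of the estimated shaft phase.
n_per_rev = 64
grid = np.arange(shaft_phase[0], shaft_phase[-1], 2 * np.pi / n_per_rev)
angular = np.interp(grid, shaft_phase, vib)

# 3) Order spectrum: with spacing 1/n_per_rev revolutions, FFT bins are shaft orders.
orders = np.fft.rfftfreq(angular.size, d=1.0 / n_per_rev)
spectrum = np.abs(np.fft.rfft(angular))
print("dominant order:", orders[1 + np.argmax(spectrum[1:])])   # ~3.0
```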
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard
2016-02-28
Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
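The key reduction the method relies on, that the discrete-survival likelihood equals a binary-regression likelihood, can be made concrete by expanding each subject into person-period records. The sketch below (hypothetical column names, not the authors' code) then fits an ordinary classification tree whose leaf frequencies estimate the discrete hazard h(t | x).

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

surv = pd.DataFrame({
    "id": [1, 2, 3],
    "time": [2, 3, 1],     # discrete failure/censoring period
    "event": [1, 0, 1],    # 1 = failure observed, 0 = censored
    "x": [0.5, 1.2, -0.3],
})

# One row per subject per period at risk; y = 1 only in the failure period.
rows = []
for _, s in surv.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({"period": t, "x": s["x"],
                     "y": int(s["event"] == 1 and t == s["time"])})
pp = pd.DataFrame(rows)

# The tree partitions (period, covariates); predicted probabilities of y = 1
# within each leaf are discrete hazard estimates.
tree = DecisionTreeClassifier(max_depth=2).fit(pp[["period", "x"]], pp["y"])
print(tree.predict_proba(pp[["period", "x"]])[:, 1])
```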
ACCESS 3. Approximation concepts code for efficient structural synthesis: User's guide
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
A user's guide is presented for ACCESS-3, a research oriented program which combines dual methods and a collection of approximation concepts to achieve excellent efficiency in structural synthesis. The finite element method is used for structural analysis and dual algorithms of mathematical programming are applied in the design optimization procedure. This program retains all of the ACCESS-2 capabilities and the data preparation formats are fully compatible. Four distinct optimizer options were added: interior point penalty function method (NEWSUMT); second order primal projection method (PRIMAL2); second order Newton-type dual method (DUAL2); and first order gradient projection-type dual method (DUAL1). A pure discrete and mixed continuous-discrete design variable capability, and zero order approximation of the stress constraints are also included.
Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco
2005-01-01
A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
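A toy sketch of the bootstrap error-model idea, not the authors' code: resample the replicate pilot arrays to estimate how log-scale measurement error varies across intensity strata, yielding per-stratum error estimates that later unreplicated arrays could reuse. The pilot data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
pilot = rng.lognormal(mean=6, sigma=1, size=(4, 1000))    # 4 replicate pilot arrays
pilot *= rng.normal(1.0, 0.15, size=pilot.shape)          # multiplicative noise

log_int = np.log2(np.abs(pilot))
mean_int = log_int.mean(axis=0)
edges = np.quantile(mean_int, np.linspace(0, 1, 11))      # 10 intensity strata

stratum_sd = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (mean_int >= lo) & (mean_int <= hi)
    draws = []
    for _ in range(200):                                   # bootstrap over arrays
        resampled = log_int[rng.integers(0, 4, size=4)][:, sel]
        draws.append(resampled.std(axis=0, ddof=1).mean())
    stratum_sd.append(np.mean(draws))

# Intensity-dependent error estimates, usable as a null model for calling
# differential expression in unreplicated follow-up arrays.
print(np.round(stratum_sd, 3))
```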
Influences of Social and Style Variables on Adult Usage of African American English Features
Craig, Holly K.; Grogger, Jeffrey T.
2013-01-01
Purpose In this study, the authors examined the influences of selected social (gender, employment status, educational achievement level) and style variables (race of examiner, interview topic) on the production of African American English (AAE) by adults. Method Participants were 50 African American men and women, ages 20–30 years. The authors used Rapid and Anonymous Survey (RAS) methods to collect responses to questions on informal situational and formal message-oriented topics in a short interview with an unacquainted interlocutor. Results Results revealed strong systematic effects for academic achievement, but not gender or employment status. Most features were used less frequently by participants with higher educational levels, but sharp declines in the usage of 5 specific features distinguished the participants differing in educational achievement. Strong systematic style effects were found for the 2 types of questions, but not race of addressee. The features that were most commonly used across participants—copula absence, variable subject–verb agreement, and appositive pronouns—were also the features that showed the greatest style shifting. Conclusions The findings lay a foundation with mature speakers for rate-based and feature inventory methods recently shown to be informative for the study of child AAE and demonstrate the benefits of the RAS. PMID:22361105
Ouyang, Pamela; Wenger, Nanette K; Taylor, Doris; Rich-Edwards, Janet W; Steiner, Meir; Shaw, Leslee J; Berga, Sarah L; Miller, Virginia M; Merz, Noel Bairey
2016-01-01
In 2001, the Institute of Medicine's (IOM) report, "Exploring the Biological Contributions to Human Health: Does Sex Matter?" advocated for better understanding of the differences in human diseases between the sexes, with translation of these differences into clinical practice. Sex differences are well documented in the prevalence of cardiovascular (CV) risk factors, the clinical manifestation and incidence of cardiovascular disease (CVD), and the impact of risk factors on outcomes. There are also physiologic and psychosocial factors unique to women that may affect CVD risk, such as issues related to reproduction. The Society for Women's Health Research (SWHR) CV Network compiled an inventory of sex-specific strategies and methods for the study of women and CV health and disease across the lifespan. References for methods and strategy details are provided to gather and evaluate this information. Some items comprise robust measures; others are in development. To address female-specific CV health and disease in population, physiology, and clinical trial research, data should be collected on reproductive history, psychosocial variables, and other factors that disproportionately affect CVD in women. Variables related to reproductive health include the following: age of menarche, menstrual cycle regularity, hormone levels, oral contraceptive use, pregnancy history/complications, polycystic ovary syndrome (PCOS) components, menopause age, and use and type of menopausal hormone therapy. Other factors that differentially affect women's CV risk include diabetes mellitus, autoimmune inflammatory disease, and autonomic vasomotor control. Sex differences in aging as well as psychosocial variables such as depression and stress should also be considered. Women are frequently not included/enrolled in mixed-sex CVD studies; when they are included, information on these variables is generally not collected. These omissions limit the ability to determine the role of sex-specific contributors to CV health and disease. Lack of sex-specific knowledge contributes to the CVD health disparities that women face. The purpose of this review is to encourage investigators to consider ways to increase the usefulness of physiological and psychosocial data obtained from clinical populations, in an effort to improve the understanding of sex differences in clinical CVD research and health-care delivery for women and men.
Testing of technology readiness index model based on exploratory factor analysis approach
NASA Astrophysics Data System (ADS)
Ariani, AF; Napitupulu, D.; Jati, RK; Kadar, JA; Syafrullah, M.
2018-04-01
SMEs' readiness to use ICT will determine their adoption of ICT in the future. This study aims to evaluate a model of technology readiness for applying technology in SMEs. The model is tested to determine whether the TRI model is relevant for measuring ICT adoption, especially among SMEs in Indonesia. The research method used in this paper is a survey of a group of SMEs in South Tangerang. The survey measures readiness to adopt ICT based on four variables: Optimism, Innovativeness, Discomfort, and Insecurity. Each variable contains several indicators to ensure that the variable is measured thoroughly. The data collected through the survey are analysed using the factor analysis method with the help of SPSS software. The results of this study show that the factor analysis regroups some indicators and variables of the TRI model. This result may arise because SME owners' knowledge is not homogeneous, either about the technology they use or about the type of their business.
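For illustration only, the analysis step described above might look as follows, with scikit-learn's FactorAnalysis standing in for the SPSS procedure used in the paper; the sixteen survey items are simulated, and the four latent dimensions mirror the TRI variables.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 4))               # optimism, innovativeness, discomfort, insecurity
loadings = rng.normal(scale=0.8, size=(4, 16))   # 16 hypothetical survey indicators
items = latent @ loadings + rng.normal(scale=0.5, size=(120, 16))

fa = FactorAnalysis(n_components=4, rotation="varimax").fit(items)
# Indicators that load weakly on all four factors, or on an unexpected factor,
# are the kind of regroupings the abstract refers to.
print(np.round(fa.components_, 2))
```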
Organisational injustice and impaired cardiovascular regulation among female employees
Elovainio, M; Kivimäki, M; Puttonen, S; Lindholm, H; Pohjonen, T; Sinervo, T
2006-01-01
Objectives To examine the relation between perceived organisational justice and cardiovascular reactivity in women. Methods The participants were 57 women working in long term care homes. Heart rate variability and systolic arterial pressure variability were used as markers of autonomic function. Organisational justice was measured using the scale of Moorman. Data on other risk factors were also collected. Results Results from logistic regression models showed that the risk for increased low frequency band systolic arterial pressure variability was 3.8–5.8 times higher in employees with low justice than in employees with high justice. Low perceived justice was also related to an 80% excess risk of reduced high frequency heart rate variability compared to high perceived justice, but this association was not statistically significant. Conclusions These findings are consistent with the hypothesis that cardiac dysregulation is one stress mechanism through which a low perceived justice of decision making procedures and interpersonal treatment increases the risk of health problems in personnel. PMID:16421394
Zhu, Y Q; Long, Q; Xiao, Q F; Zhang, M; Wei, Y L; Jiang, H; Tang, B
2018-03-13
Objective: To investigate the association of blood pressure variability and sleep stability in essential hypertensive patients with sleep disorder by cardiopulmonary coupling. Methods: In accordance with strict inclusion and exclusion criteria, 88 new cases of essential hypertension from the international department and the cardiology department of the China-Japan Friendship Hospital were enrolled. Sleep stability and 24 h ambulatory blood pressure data were collected by a portable sleep monitor based on the cardiopulmonary coupling technique and a 24 h ambulatory blood pressure monitor. The correlation between blood pressure variability and sleep stability was analysed. Results: In the nighttime, systolic blood pressure standard deviation, systolic blood pressure variation coefficient, the ratio of the systolic blood pressure minimum to the maximum, diastolic blood pressure standard deviation, and diastolic blood pressure variation coefficient were positively correlated with unstable sleep duration (r = 0.185, 0.24, 0.237, 0.43, 0.276; P < 0.05). Conclusions: Blood pressure variability is associated with sleep stability; at night in particular, the longer the unstable sleep duration, the greater the blood pressure variability.
Zhang, Houxi; Zhuang, Shunyao; Qian, Haiyan; Wang, Feng; Ji, Haibao
2015-01-01
Understanding the spatial variability of soil organic carbon (SOC) must be enhanced to improve sampling design and to develop soil management strategies in terrestrial ecosystems. Moso bamboo (Phyllostachys pubescens Mazel ex Houz.) forests have a high SOC storage potential; however, they also vary significantly spatially. This study investigated the spatial variability of SOC (0-20 cm) in association with other soil properties and with spatial variables in the Moso bamboo forests of Jian’ou City, a typical bamboo-producing area in China. In total, 209 soil samples were collected from Moso bamboo stands and then analyzed for SOC, bulk density (BD), pH, cation exchange capacity (CEC), and gravel content (GC) based on spatial distribution. The spatial variability of SOC was then examined using geostatistics. A Kriging map was produced through ordinary Kriging interpolation, and the required sample numbers were calculated by classical and Kriging methods. An aggregated boosted tree (ABT) analysis was also conducted. A semivariogram analysis indicated that ln(SOC) was best fitted with an exponential model and that it exhibited moderate spatial dependence, with a nugget/sill ratio of 0.462. SOC was significantly and linearly correlated with BD (r = −0.373**), pH (r = −0.429**), GC (r = −0.163*), CEC (r = 0.263**), and elevation (r = 0.192**). Moreover, the Kriging method requires fewer samples than the classical method for a given expected standard error level, according to a variance analysis. ABT analysis indicated that the physicochemical variables of soil affected SOC variation more significantly than spatial variables did, thus suggesting that the SOC in Moso bamboo forests can be strongly influenced by management practices. Thus, this study provides valuable information in relation to sampling strategy and insight into the potential of adjustments in agronomic measures, such as fertilization, for Moso bamboo production. PMID:25789615
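A compact sketch of the geostatistical core of the analysis (standard methodology, not the authors' code): an empirical semivariogram binned by lag distance, an exponential model fitted by least squares, and the nugget/sill ratio read off the fit. Coordinates and ln(SOC) values here are placeholders, so the fitted ratio is not meaningful.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(209, 2))          # sampling coordinates (m), hypothetical
ln_soc = rng.normal(3.0, 0.4, size=209)           # placeholder ln(SOC) values

h = pdist(xy)                                      # pairwise separations
gamma_pairs = 0.5 * pdist(ln_soc[:, None], metric="sqeuclidean")  # semivariance per pair

# Bin pairs by lag distance (empty bins give NaN and are dropped below).
edges = np.linspace(0, h.max() / 2, 16)
lags = 0.5 * (edges[:-1] + edges[1:])
gamma = np.array([gamma_pairs[(h >= lo) & (h < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])

def exp_model(h, nugget, psill, a):
    """Exponential semivariogram: nugget + partial sill * (1 - exp(-h/a))."""
    return nugget + psill * (1 - np.exp(-h / a))

ok = np.isfinite(gamma)
(nugget, psill, a), _ = curve_fit(exp_model, lags[ok], gamma[ok], p0=[0.05, 0.15, 200])
print(f"nugget/sill ratio: {nugget / (nugget + psill):.3f}")   # cf. 0.462 in the study
```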
Toward the identification of molecular cogs.
Dziubiński, Maciej; Lesyng, Bogdan
2016-04-05
Computer simulations of molecular systems allow determination of microscopic interactions between individual atoms or groups of atoms, as well as studies of intramolecular motions. Nevertheless, description of structural transformations at the mesoscopic level and identification of causal relations associated with these transformations are very difficult. Structural and functional properties are related to free energy changes. Therefore, to better understand structural and functional properties of molecular systems, it is necessary to deepen our knowledge of free energy contributions arising from molecular subsystems in the course of structural transformations. The method presented in this work quantifies the energetic contribution of each pair of atoms to the total free energy change along a given collective variable. Next, with the help of a genetic clustering algorithm, the method proposes a division of the system into two groups of atoms referred to as molecular cogs. Atoms which cooperate to push the system forward along a collective variable are referred to as forward cogs, and those which work in the opposite direction as reverse cogs. The procedure was tested on several small molecules for which the genetic clustering algorithm successfully found optimal partitionings into molecular cogs. The primary result of the method is a plot depicting the energetic contributions of the identified molecular cogs to the total Potential of Mean Force (PMF) change. Case studies presented in this work should help to better understand the implications of our approach, and were intended to pave the way to a future, publicly available implementation. © 2015 Wiley Periodicals, Inc.
Nie, Zhi; Yang, Tao; Liu, Yashu; Li, Qingyang; Narayan, Vaibhav A; Wittenberg, Gayle; Ye, Jieping
2015-01-01
Recent studies have revealed that melancholic depression, one major subtype of depression, is closely associated with the concentration of some metabolites and biological functions of certain genes and pathways. Meanwhile, recent advances in biotechnologies have allowed us to collect a large amount of genomic data, e.g., metabolites and microarray gene expression. With such a huge amount of information available, one approach that can give us new insights into the understanding of the fundamental biology underlying melancholic depression is to build disease status prediction models using classification or regression methods. However, the existence of strong empirical correlations, e.g., those exhibited by genes sharing the same biological pathway in microarray profiles, tremendously limits the performance of these methods. Furthermore, the occurrence of missing values, which are ubiquitous in biomedical applications, further complicates the problem. In this paper, we hypothesize that the problem of missing values might in some way benefit from the correlation between the variables and propose a method to learn a compressed set of representative features through an adapted version of sparse coding which is capable of identifying correlated variables and addressing the issue of missing values simultaneously. An efficient algorithm is also developed to solve the proposed formulation. We apply the proposed method to metabolic and microarray profiles collected from a group of subjects consisting of both patients with melancholic depression and healthy controls. Results show that the proposed method can not only produce meaningful clusters of variables but also generate a set of representative features that achieve superior classification performance over those generated by traditional clustering and data imputation techniques. In particular, on both datasets, we found that in comparison with the competing algorithms, the representative features learned by the proposed method give rise to significantly improved sensitivity scores, suggesting that the learned features allow prediction with high accuracy of disease status in those who are diagnosed with melancholic depression. To the best of our knowledge, this is the first work that applies sparse coding to deal with high feature correlations and missing values, which are common challenges in many biomedical applications. The proposed method can be readily adapted to other biomedical applications involving incomplete and high-dimensional data.
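A toy sketch of the core idea as described, not the authors' algorithm: alternating proximal-gradient updates of a dictionary and L1-penalized codes, with the reconstruction loss masked to the observed entries, so correlated variables are compressed and missing values handled jointly. The data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))                  # subjects x variables (e.g., metabolites)
X[rng.uniform(size=X.shape) < 0.2] = np.nan     # 20% of values missing
M = ~np.isnan(X)                                # observation mask
Xf = np.nan_to_num(X)

n_atoms, lam, lr = 8, 0.1, 0.01
D = rng.normal(scale=0.1, size=(n_atoms, 40))   # dictionary rows = representative features
C = np.zeros((100, n_atoms))                    # sparse codes per subject

for _ in range(1000):
    R = M * (C @ D - Xf)                        # residual on observed entries only
    C = C - lr * (R @ D.T)                      # gradient step on codes
    C = np.sign(C) * np.maximum(np.abs(C) - lr * lam, 0.0)  # L1 soft-threshold
    R = M * (C @ D - Xf)                        # refresh residual after code update
    D = D - lr * (C.T @ R)                      # gradient step on dictionary

rmse = np.sqrt(((M * (C @ D - Xf)) ** 2).sum() / M.sum())
print(f"masked reconstruction RMSE: {rmse:.3f}")
```

The codes C can then serve as the compressed features for downstream classification, which is the role the representative features play in the paper.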
Can Observation Skills of Citizen Scientists Be Estimated Using Species Accumulation Curves?
Kelling, Steve; Johnston, Alison; Hochachka, Wesley M; Iliff, Marshall; Fink, Daniel; Gerbracht, Jeff; Lagoze, Carl; La Sorte, Frank A; Moore, Travis; Wiggins, Andrea; Wong, Weng-Keen; Wood, Chris; Yu, Jun
2015-01-01
Volunteers are increasingly being recruited into citizen science projects to collect observations for scientific studies. An additional goal of these projects is to engage and educate these volunteers. Thus, there are few barriers to participation, resulting in volunteer observers with varying ability to complete the project's tasks. To improve the quality of a citizen science project's outcomes it would be useful to account for inter-observer variation, and to assess the rarely tested presumption that participating in a citizen science project results in volunteers becoming better observers. Here we present a method for indexing observer variability based on the data routinely submitted by observers participating in the citizen science project eBird, a broad-scale monitoring project in which observers collect and submit lists of the bird species observed while birding. Our method for indexing observer variability uses species accumulation curves, lines that describe how the total number of species reported increases with increasing time spent in collecting observations. We find that species accumulation curves differ among observers, with some observers achieving higher rates of species accumulation, particularly for harder-to-identify species, and that accumulation rates increase with continued participation. We suggest that these properties of our analysis provide a measure of observer skill, and that the potential to derive post-hoc data-derived measurements of participant ability should be more widely explored by analysts of data from citizen science projects. We see the potential for inferential results from analyses of citizen science data to be improved by accounting for observer skill.
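As a hedged illustration of the measurement, one can fit a simple species accumulation curve per observer and compare the fitted parameters. The Michaelis-Menten form below is a common choice but not necessarily the one used by the project, and the checklists are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def accumulation(minutes, s_max, half_time):
    """Expected species count after `minutes` of observation effort."""
    return s_max * minutes / (half_time + minutes)

# Hypothetical checklists per observer: (duration in minutes, species reported).
observers = {
    "novice": ([10, 30, 60, 120, 240], [3, 6, 9, 12, 14]),
    "expert": ([10, 30, 60, 120, 240], [8, 15, 22, 28, 33]),
}
for name, (t, s) in observers.items():
    (s_max, half), _ = curve_fit(accumulation, t, s, p0=[20, 60])
    print(f"{name}: asymptotic richness ~{s_max:.0f}, half-saturation ~{half:.0f} min")
```

Differences in the fitted curves (steeper initial accumulation, higher asymptote) are the kind of signal the abstract proposes as an index of observer skill.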
Seroprevalence of HBV, HCV & HIV Co-Infection and Risk Factors Analysis in Tripoli-Libya
Daw, Mohamed A.; Shabash, Amira; El-Bouzedi, Abdallah; Dau, Aghnya A.
2014-01-01
Background In 1998 Libya experienced a major outbreak of multiple blood borne viral hepatitis and HIV infections. Since then, no studies have been done on the epidemic features and risk factors of HBV, HCV, HIV and co-infection among the general population. Methods A prospective study was carried out using a multi-centre clustering method to collect samples from the general population. The participants were interviewed, and relevant information was collected, including socio-demographic, ethnic, and geographic variables. This information was correlated with the risk factors involved in the transmission of HBV, HCV and HIV. Blood samples were collected and the sera were tested for HBsAg, anti-HCV and anti-HIV using enzyme immunoassay. Results A total of 9,170 participants from the nine districts of Tripoli were enrolled. The average prevalence of HBsAg was 3.7%, anti-HCV 0.9%, anti-HIV 0.15% and co-infection 0.02%. The prevalence varied from one district to another. HBV was more prevalent among those aged over 50 years and was associated with family history. Anti-HCV and anti-HIV were more prevalent among those aged 20–40 years. Intravenous drug use and blood transfusion were the main risk factors for HCV and HIV infection. Conclusion HBV, HCV, HIV and co-infection are relatively common in Libya. High prevalence was associated with geographic, ethnic and socioeconomic variability within the community. HCV and HIV infections among the younger age groups are becoming an alarming issue. Regulations and health care education need to be implemented and longer term follow-up should be planned. PMID:24936655
FARAJI, Hossein; MOHAMMADI, Ali Akbar; AKBARI-ADERGANI, Behrouz; VAKILI SAATLOO, Naimeh; LASHKARBOLOKI, Gholamreza; MAHVI, Amir Hossein
2014-01-01
Background: Fluoride is an essential element for human health. However, excess fluoride in drinking water may cause dental and/or skeletal fluorosis. Drinking water is the main route of fluoride intake. The aim of the present study was to measure fluoride levels in human breast milk collected from two regions of Golestan Province, northern Iran (the cities of Bandar Gaz and Nokande), whose drinking water differs in fluoride concentration, and to correlate them with the fluoride concentrations in the drinking water used by mothers living in these two areas. Methods: Twenty water samples were collected from seven drinking water wells in Bandar Gaz and Nokande during 2012. Fluoride concentration of the water samples was measured using the SPADNS method. Sixty breast milk samples were collected from lactating mothers in Bandar Gaz and Nokande. Fluoride content in breast milk was determined using a standard fluoride ion-selective electrode. Spearman’s rho correlation analysis was used to assess any possible relationship between fluoride levels in breast milk and in drinking water. Results: The means and standard deviations for fluoride concentration in breast milk and drinking water were 0.002188±0.00026224 ppm and 0.5850±0.22542 ppm, respectively. Analysis of the data showed that the variables were not normally distributed, so the Spearman correlation coefficient between the two variables was calculated (ρS = 0.65) and was significant (P=0.002). Conclusion: Fluoride concentration in water directly influences its concentration in breast milk. We speculate that modifying the fluoride concentration in water can affect the accessibility of fluoride for infants. PMID:26171359
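The correlation step reduces to a single call; the paired values below are placeholders, while the study itself reports ρS = 0.65 with P = 0.002.

```python
from scipy.stats import spearmanr

water_f = [0.31, 0.42, 0.48, 0.55, 0.61, 0.70, 0.82, 0.90]              # ppm, hypothetical
milk_f = [0.0018, 0.0019, 0.0021, 0.0020, 0.0023, 0.0024, 0.0026, 0.0025]
rho, p = spearmanr(water_f, milk_f)
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")
```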
Vera, L.; Pérez-Beteta, J.; Molina, D.; Borrás, J. M.; Benavides, M.; Barcia, J. A.; Velásquez, C.; Albillo, D.; Lara, P.; Pérez-García, V. M.
2017-01-01
Abstract Introduction: Machine learning methods are integrated in clinical research studies due to their strong capability to discover parameters having a high information content and their combined predictive potential. Several studies have been developed using glioblastoma patients' imaging data. Many of them have focused on including large numbers of variables, mostly two-dimensional textural features and/or genomic data, regardless of their meaning or potential clinical relevance. Materials and methods: 193 glioblastoma patients were included in the study. Preoperative 3D magnetic resonance images were collected and semi-automatically segmented using in-house software. After segmentation, a database of 90 parameters including geometrical and textural image-based measures together with patients' clinical data (including age, survival, type of treatment, etc.) was constructed. The criterion for including variables in the study was that they had either shown individual impact on survival in single or multivariate analyses or had a precise clinical or geometrical meaning. These variables were used to perform several machine learning experiments. In a first set of computational cross-validation experiments based on regression trees, those attributes showing the highest information measures were extracted. In the second phase, more sophisticated learning methods were employed to validate the potential of the previous variables for predicting survival. Concretely, support vector machines, neural networks, and sparse grid methods were used. Results: Variables showing a high information measure in the first phase provided the best prediction results in the second phase. Specifically, patient age, Stupp regimen, and a geometrical measure related to the irregularity of contrast-enhancing areas were the variables showing the highest information measure in the first stage. For the second phase, the combination of patient age and Stupp regimen together with one tumor geometrical measure and one tumor heterogeneity feature achieved the best prediction quality. Conclusions: Advanced machine learning methods identified the parameters with the highest information measure and survival predictive potential. The uninformed machine learning methods identified a novel feature measure with direct impact on survival. Used in combination with other previously known variables, multi-indexes can be defined that can help in tumor characterization and prognosis prediction. Recent advances in the definition of those multi-indexes will be reported in the conference. Funding: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
Newgard, Craig D.; Zive, Dana; Jui, Jonathan; Weathers, Cody; Daya, Mohamud
2011-01-01
Objectives To compare case ascertainment, agreement, validity, and missing values for clinical research data obtained, processed, and linked electronically from electronic health records (EHR), compared to "manual" data processing and record abstraction, in a cohort of out-of-hospital trauma patients. Methods This was a secondary analysis of two sets of data collected for a prospective, population-based, out-of-hospital trauma cohort evaluated by 10 emergency medical services (EMS) agencies transporting to 16 hospitals, from January 1, 2006 through October 2, 2007. Eighteen clinical, operational, procedural, and outcome variables were collected and processed separately and independently using two parallel data processing strategies, by personnel blinded to patients in the other group. The electronic approach included electronic health record data exports from EMS agencies, reformatting, and probabilistic linkage to outcomes from local trauma registries and state discharge databases. The manual data processing approach included chart matching, data abstraction, and data entry by a trained abstractor. Descriptive statistics, measures of agreement, and validity were used to compare the two approaches to data processing. Results During the 21-month period, 418 patients underwent both data processing methods and formed the primary cohort. Agreement was good to excellent (kappa 0.76 to 0.97; intraclass correlation coefficient 0.49 to 0.97), with exact agreement in 67% to 99% of cases, and a median difference of zero for all continuous and ordinal variables. The proportions of missing out-of-hospital values were similar between the two approaches, although electronic processing generated more missing outcomes (87 out of 418, 21%, 95% CI = 17% to 25%) than the manual approach (11 out of 418, 3%, 95% CI = 1% to 5%). Case ascertainment of eligible injured patients was greater using electronic methods (n = 3,008) compared to manual methods (n = 629). Conclusions In this sample of out-of-hospital trauma patients, an all-electronic data processing strategy identified more patients and generated values with good agreement and validity compared to traditional data collection and processing methods. PMID:22320373
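The agreement statistics reported (kappa, exact agreement) are straightforward to reproduce for any one categorical variable captured by both processing strategies. A minimal sketch with hypothetical labels, using scikit-learn's kappa implementation:

from sklearn.metrics import cohen_kappa_score

electronic = ["transported", "transported", "refused", "transported", "refused"]
manual     = ["transported", "transported", "refused", "refused",     "refused"]

kappa = cohen_kappa_score(electronic, manual)
exact = sum(e == m for e, m in zip(electronic, manual)) / len(manual)
print(f"kappa = {kappa:.2f}, exact agreement = {exact:.0%}")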
Inic-Kanada, Aleksandra; Nussbaumer, Andrea; Montanaro, Jacqueline; Belij, Sandra; Schlacher, Simone; Stein, Elisabeth; Bintner, Nora; Merio, Margarethe; Zlabinger, Gerhard J.
2012-01-01
Purpose Evaluating cytokine profiles in tears could shed light on the pathogenesis of various ocular surface diseases. When collecting tears with the methods currently available, it is often not possible to avoid the tear reflex, which may give a different cytokine profile compared to basal tears. More importantly, tear collection with glass capillaries, the most widely used method for taking samples and the best method for avoiding tear reflex, is impractical for remote area field studies because it is tedious and time-consuming for health workers, who cannot collect tears from a large number of patients with this method in one day. Furthermore, this method is uncomfortable for anxious patients and children. Thus, tears are frequently collected using ophthalmic sponges. These sponges have the advantage that they are well tolerated by the patient, especially children, and enable standardization of the tear collection volume. The aim of this study was to compare various ophthalmic sponges and extraction buffers to optimize the tear collection method for field studies for subsequent quantification of cytokines in tears using the Luminex technology. Methods Three ophthalmic sponges, Merocel, Pro-ophta, and Weck-Cel, were tested. Sponges were presoaked with 25 cytokines/chemokines of known concentrations and eluted with seven different extraction buffers (EX1–EX7). To assess possible interference in the assay from the sponges, two standard curves were prepared in parallel: 1) cytokines of known concentrations with the extraction buffers and 2) cytokines of known concentrations loaded onto the sponges with the extraction buffers. Subsequently, a clinical assessment of the chosen sponge-buffer combination was performed with tears collected from four healthy subjects using 1) aspiration and 2) sponges. To quantify cytokine/chemokine recovery and the concentration in the tears, a 25-plex Cytokine Panel and the Luminex xMap were used. This platform enables simultaneous measurement of proinflammatory cytokines, Th1/Th2 distinguishing cytokines, nonspecific acting cytokines, and chemokines. Results We demonstrated the following: (i) 25 cytokines/chemokines showed highly variable interactions with buffers and matrices. Several buffers enabled recovery of similar cytokine values (regulated and normal T cell expressed and secreted [RANTES], interleukin [IL]-13, IL-6, IL-8, IL-2R, and granulocyte-macrophage colony-stimulating factor [GM-CSF]); others were highly variable (monocyte chemotactic protein-1 [MCP-1], monokine induced by interferon-gamma [MIG], IL-1β, IL-4, IL-7, and eotaxin). (ii) Various extraction buffers displayed significantly different recovery rates on the same sponge for the same cytokine/chemokine. (iii) The highest recovery rates were obtained with the Merocel ophthalmic sponge, except for tumor necrosis factor-α: the Weck-Cel ophthalmic sponge showed the best results, either with cytokine standards loaded onto sponges or with tears collected from the inner canthus of the eye using the sponge. (iv) IL-5, IL-10, and interferon-α were not detected in any tear sample from four normal human subjects. Twenty-two cytokines/chemokines that we detected were extracted from the Merocel sponge at a satisfactory recovery percentage. The recovery of IL-7 was significantly lower in the extracted Merocel sponge compared to the diluted tear samples. Cytokine/chemokine extraction from tears showed the same pattern that we observed when extracting the standards.
Conclusions Simultaneous measurement of multiple cytokines using ophthalmic sponges yielded diverse results, as the level of extraction differs noticeably for certain cytokines. A second set of controls (standard curves "with sponges") should be used to delineate the extent of extraction for each cytokine to be analyzed. Many cytokines/chemokines were detected in tear samples collected with the Merocel sponge, including many that have been implicated in ocular surface disease. Luminex detection of cytokine/chemokine profiles of tears collected with Merocel sponges and extracted with buffer EX1 may be useful in clinical studies, for example, to assess cytokine profiles in ocular surface diseases. PMID:23233782
NASA Astrophysics Data System (ADS)
Breves, E. A.; Lepore, K.; Dyar, M. D.; Bender, S. C.; Tokar, R. L.; Boucher, T.
2017-11-01
Laser-induced breakdown spectroscopy (LIBS) has become a popular tool for rapid elemental analysis of geological materials. However, quantitative applications of LIBS are plagued by variability in collected spectra that cannot be attributed to differences in geochemical composition. Even under ideal laboratory conditions, variability in LIBS spectra creates a host of difficulties for quantitative analysis. This is only exacerbated during field work, when both the laser-sample distance and the angle of ablation/collection are constantly changing. A primary goal of this study is to use empirical evidence to provide a more accurate assessment of uncertainty in LIBS-derived element predictions. We hope to provide practical guidance regarding the angles of ablation and collection that can be tolerated without substantially increasing prediction uncertainty beyond that which already exists under ideal laboratory conditions. Spectra were collected from ten geochemically diverse samples at angles of ablation and collection ranging from 0° to ± 60°. Ablation and collection angles were changed independently and simultaneously in order to isolate spectral changes caused by differences in ablation angle from those due to differences in collection angle. Most of the variability in atomic and continuum spectra is attributed to changes in the ablation angle, rather than the collection angle. At higher angles, the irradiance of the laser beam is lower and produces smaller, possibly less dense plasmas. Simultaneous changes in the collection angle do not appear to affect the collected spectra, possibly because smaller plasmas are still within the viewing area of the collection optics, even though this area is reduced at higher collection angles. A key observation is that changes in the magnitude of atomic and total emission are < 5% and 10%, respectively, in spectra collected with the configuration that most closely resembles field measurements (VV) at angles < 20°. In addition, variability in atomic and continuum emission is strongly dependent upon sample composition. Denser, more Fe/Mg-rich rocks exhibited much less variability with changes in ablation and collection angles than Si-rich felsic rocks. Elemental compositions of our variable angle data that were predicted using a much larger but conventionally-collected calibration suite show that accuracy generally suffers when the incidence and collection angles are high. Prediction accuracy (for measurements acquired with varying collection and ablation angles) varies from ± 1.28-1.86 wt% for Al2O3, ± 1.25-1.66 wt% for CaO, ± 1.90-2.21 wt% for Fe2O3T, ± 0.76-0.94 wt% for K2O, ± 2.85-3.61 wt% for MgO, ± 0.15-0.17 wt% for MnO, ± 0.68-0.78 wt% for Na2O, ± 0.33-0.42 wt% for TiO2, and ± 2.94-4.34 wt% for SiO2. The ChemCam team is using lab data acquired under normal incidence and collection angles to predict the compositions of Mars targets at varying angles. Thus, the increased errors noted in this study for high incidence angle measurements are likely similar to additional, unacknowledged errors on ChemCam results for non-normal targets analyzed on Mars. Optimal quantitative analysis of LIBS spectra must include some knowledge of the angle of ablation and collection so the approximate increase in uncertainty introduced by a departure from normal angles can be accurately reported.
Estimating population sizes for elusive animals: the forest elephants of Kakum National Park, Ghana.
Eggert, L S; Eggert, J A; Woodruff, D S
2003-06-01
African forest elephants are difficult to observe in dense vegetation, and previous studies have relied upon indirect methods to estimate population sizes. Using multilocus genotyping of noninvasively collected samples, we performed a genetic survey of the forest elephant population at Kakum National Park, Ghana. We estimated population size, sex ratio and genetic variability from our data, then combined this information with field observations to divide the population into age groups. Our population size estimate was very close to that obtained using dung counts, the most commonly used indirect method of estimating forest elephant population sizes. As their habitat is fragmented by expanding human populations, management will be increasingly important to the persistence of forest elephant populations. The data that can be obtained from noninvasively collected samples will help managers plan for the conservation of this keystone species.
Fienen, Michael N.; Selbig, William R.
2012-01-01
A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
Nonlinear vs. linear biasing in Trp-cage folding simulations
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-01
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
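Either class of collective variable can be obtained by embedding trajectory frames in a low-dimensional space. A minimal sketch, with a random array standing in for an aligned atomic-coordinate trajectory (n_frames x 3*n_atoms), using scikit-learn for both the linear (PCA) and nonlinear (Isomap) embeddings:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 3 * 20))   # hypothetical 20-atom trajectory

# Either embedding could then serve as the biased collective variable.
cv_linear = PCA(n_components=2).fit_transform(frames)
cv_nonlinear = Isomap(n_neighbors=10, n_components=2).fit_transform(frames)
print(cv_linear.shape, cv_nonlinear.shape)   # (500, 2) each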
Daniels, Sarah I; Sillé, Fenna C M; Goldbaum, Audrey; Yee, Brenda; Key, Ellen F; Zhang, Luoping; Smith, Martyn T; Thomas, Reuben
2014-12-01
Blood miRNAs are a promising new area of disease research, but variability in miRNA measurements may limit detection of true-positive findings. Here, we measured sources of miRNA variability and determined whether repeated measures can improve power to detect fold-change differences between comparison groups. Blood from healthy volunteers (N = 12) was collected at three time points. The miRNAs were extracted by a method predetermined to give the highest miRNA yield. Nine different miRNAs were quantified using different qPCR assays and analyzed using mixed models to identify sources of variability. A larger number of miRNAs from a publicly available blood miRNA microarray dataset with repeated measures were used for a bootstrapping procedure to investigate the effects of repeated measures on power to detect fold changes in miRNA expression for a theoretical case-control study. Technical variability in qPCR replicates was identified as a significant source of variability (P < 0.05) for all nine miRNAs tested. Variability was larger in the TaqMan qPCR assays (SD = 0.15-0.61) versus the qScript qPCR assays (SD = 0.08-0.14). Inter- and intraindividual and extraction variability also contributed significantly for two miRNAs. The bootstrapping procedure demonstrated that repeated measures (20%-50% of N) increased detection of a 2-fold change for approximately 10% to 45% more miRNAs. Statistical power to detect small fold changes in blood miRNAs can be improved by accounting for sources of variability using repeated measures and choosing appropriate methods to minimize variability in miRNA quantification. This study demonstrates the importance of including repeated measures in experimental designs for blood miRNA research. See all the articles in this CEBP Focus section, "Biomarkers, Biospecimens, and New Technologies in Molecular Epidemiology." ©2014 American Association for Cancer Research.
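The power calculation rests on bootstrapping subjects, with replicate measurements averaged to suppress technical variability. A minimal sketch of that idea (not the authors' exact procedure), with synthetic log2-scale expression values:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
controls = rng.normal(5.0, 0.5, size=(12, 3))    # 12 subjects x 3 replicates
per_subject = controls.mean(axis=1)               # averaging reduces variance

detected, n_boot = 0, 2000
for _ in range(n_boot):
    a = rng.choice(per_subject, size=12, replace=True)
    b = rng.choice(per_subject, size=12, replace=True) + 1.0  # +1 on log2 scale = 2-fold
    if stats.ttest_ind(a, b).pvalue < 0.05:
        detected += 1
print(f"estimated power ~ {detected / n_boot:.2f}")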
2013-09-30
productivity. Advanced variational methods for the assimilation of satellite and in situ observations to achieve improved state estimation and subsequent... South China Sea (SCS) using the Regional Ocean Modeling System (ROMS) with Incremental Strong Constraint 4-Dimensional Variational (IS4DVAR) data
The Effect of Psychosocial Factors on Acute and Persistent Pain Following Childbirth
2015-10-14
longitudinal study. Methods: Baseline measures of psychosocial variables were obtained during the last 8 weeks of pregnancy. Delivery and acute pain... low income by the US Census Bureau's definition. Routine assessment of depression during pregnancy may identify those at risk of developing... Delivery and acute pain data were collected from the electronic medical record. Follow-up data were
2017-03-10
electromagnetic radiation that propagates through a planetary atmosphere. These codes vary in the extent of their scope, incorporated models, and derived... emissive properties of the atmosphere. The propagation of electromagnetic radiation is affected by scattering and absorption by both air molecules... Mie theory is the collection of the Mie solutions and methods for Maxwell's equations, which describe how electromagnetic waves are scattered by
Software Development for Asteroid and Variable Star Research
NASA Astrophysics Data System (ADS)
Sweckard, Teaghen; Clason, Timothy; Kenney, Jessica; Wuerker, Wolfgang; Palser, Sage; Giles, Tucker; Linder, Tyler; Sanchez, Richard
2018-01-01
The process of collecting and analyzing light curves from variable stars and asteroids is almost identical. In 2016 a collaboration was created to develop a simple, fundamental way to study both asteroids and variable stars using methods that could be repeated by middle school and high school students. Using robotic telescopes at Cerro Tololo (Chile), Yerkes Observatory (US), and Stone Edge Observatory (US), data were collected on RV Del and three asteroids. It was discovered that the only available software program which could be easily installed on lab computers was MPO Canopus. However, after six months it was determined that MPO Canopus was not an acceptable option because of its steep learning curve and lack of documentation and technical support. Therefore, the project decided that the best option was to design our own Python-based software. Using Python and Python libraries, we developed code that can be used for photometry and can be easily adapted to the user's needs. We accomplished this by meeting with our mentor astronomer, Tyler Linder, and in the beginning wrote two different programs, one for asteroids and one for variable stars. In the end, though, we chose to combine the codes so that the program would be capable of performing photometry for both moving and static objects. The software performs differential photometry by comparing the magnitude of known reference stars to the object being studied. For asteroids, the image timestamps are used to obtain the ephemeris of the asteroid automatically from JPL Horizons.
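The differential-photometry core described here (anchoring a target's instrumental magnitude to reference stars of known catalog magnitude) reduces to a few lines. A minimal sketch with hypothetical fluxes and catalog magnitudes:

import math

def instrumental_mag(flux_counts):
    # Instrumental magnitude from measured flux (arbitrary zero point).
    return -2.5 * math.log10(flux_counts)

target_flux = 15200.0
refs = [(48100.0, 11.25), (30500.0, 11.74)]   # (flux, catalog magnitude)

# Zero point: offset between catalog and instrumental magnitudes,
# averaged over the reference stars.
zp = sum(cat - instrumental_mag(f) for f, cat in refs) / len(refs)
print(f"target magnitude ~ {instrumental_mag(target_flux) + zp:.2f}")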
Alabama Coronary Artery Bypass Grafting Project
Holman, William L.; Sansom, Monique; Kiefe, Catarina I.; Peterson, Eric D.; Hubbard, Steve G.; Delong, James F.; Allman, Richard M.
2004-01-01
Objective/Background: This report describes the first round of results for Phase II of the Alabama CABG Project, a regional quality improvement initiative. Methods: Charts submitted by all hospitals in Alabama performing CABG (ICD-9 codes 36.10–36.20) were reviewed by a Clinical Data Abstraction Center (CDAC) (preintervention 1999–2000; postintervention 2000–2001). Variables that described quality in Phase I were abstracted for Phase II and data describing the new variables of β-blocker use and lipid management were collected. Data samples collected onsite by participating hospitals were used for rapid cycle improvement in Phase II. Results: CDAC data (n = 1927 cases in 1999; n = 2001 cases in 2000) showed that improvements from Phase I in aspirin prescription, internal mammary artery use, and duration of intubation persisted in Phase II. During Phase II, use of β-blockers before, during, or after CABG increased from 65% to 76% of patients (P < 0.05). Appropriate lipid management, an aggregate variable, occurred in 91% of patients before and 91% after the educational intervention. However, there were improvements in 3 of 5 subcategories for lipid management (documenting a lipid disorder [52%–57%], initiating drug therapy [45%–53%], and dietary counseling [74%–91%]; P < 0.05). Conclusions: In Phase II, this statewide process-oriented quality improvement program added two new measures of quality. Achievements of quality improvement from Phase I persisted in Phase II, and improvements were seen in the new variables of lipid management and perioperative use of β-blockers. PMID:14685107
Song, SuJin; Song, Won O
2014-01-01
Asian regions have been suffering from a growing double burden of nutritional health problems, such as undernutrition and chronic diseases. National nutrition surveys play an essential role in helping to improve both national and global health and to reduce health disparities. The aim of this review was to compile and present information on the national nutrition surveys currently conducted in Asian countries and to raise relevant issues in their implementation. Fifteen countries in Asia have conducted national nutrition surveys to collect data on the nutrition and health status of their populations. The information on each country's national nutrition survey was obtained from government documents, international organizations, governmental agencies' survey websites, and publications, including journal articles, books, reports, and brochures. Each country's national nutrition survey has different variables and procedures. Survey variables include sociodemographic and lifestyle variables; food and beverage intake, dietary habits, and food security of individuals or households; and health indicators, such as anthropometric and biochemical variables. The surveys have focused on collecting data about nutritional health status in children under five years of age and women of reproductive age, and on nutrition intake adequacy and the prevalence of obesity and chronic diseases for all individuals. To measure the nutrition and health status of Asian populations accurately, improvement of current dietary assessment methods with various diet evaluation tools is necessary. The information organized in this review is important for researchers, policy makers, public health program developers, educators, and consumers in improving national and global health.
Mishell, Daniel R; Guillebaud, John; Westhoff, Carolyn; Nelson, Anita L; Kaunitz, Andrew M; Trussell, James; Davis, Ann Jeanette
2007-01-01
Initially approved for use in the United States nearly 50 years ago, oral hormonal contraceptives containing both estrogen and progestin have undergone steady improvements in safety and convenience. Concurrent with improvements in safety associated with decreasing doses of both steroids, there has been an increased incidence of unscheduled bleeding and spotting. No standards exist for the techniques and methods of collecting, reporting, and analyzing data on bleeding and spotting events during combined hormonal contraceptive (CHC) trials. For the regulatory review of hormonal contraceptives, data regarding the incidence of bleeding and spotting events are not included in either of the traditional categories of efficacy and safety. Standardization of methods for collecting and analyzing data about cycle control in all clinical trials of CHCs is long overdue. Until such standards are developed and implemented, clinicians need to familiarize themselves with the techniques used in each study in order to provide correct information to their patients about the frequency of bleeding and spotting associated with different formulations and delivery systems.
Effects of Sarin on the Operant Behavior of Guinea Pigs
2005-07-19
after behavioral sessions had ended. The first collection time modified autoshaping procedure (concurrent variable-time was after the final saline...
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Byrne, Deidre A.
2010-01-01
Seafloor pressure records, collected at 11 stations aligned along a single ground track of the Topex/Poseidon and Jason satellites, are analyzed for their tidal content. With very low background noise levels and approximately 27 months of high-quality records, tidal constituents can be estimated with unusually high precision. This includes many high-frequency lines up through the seventh-diurnal band. The station deployment provides a unique opportunity to compare with tides estimated from satellite altimetry, point by point along the satellite track, in a region of moderately high mesoscale variability. That variability can significantly corrupt altimeter-based tide estimates, even with 17 years of data. A method to improve the along-track altimeter estimates by correcting the data for nontidal variability is found to yield much better agreement with the bottom-pressure data. The technique should prove useful in certain demanding applications, such as altimetric studies of internal tides.
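Estimating tidal constituents from a pressure record is, at its simplest, a least-squares fit of sine/cosine pairs at known constituent frequencies. A minimal sketch with a synthetic series and only the M2 and S2 constituents (the analysis described above resolves many more lines):

import numpy as np

t = np.arange(0, 24 * 60, 0.5)                      # hours; 60 days, half-hourly
freqs = {"M2": 1 / 12.4206, "S2": 1 / 12.0}          # cycles per hour

rng = np.random.default_rng(2)
series = 0.8 * np.cos(2 * np.pi * freqs["M2"] * t - 1.0) + rng.normal(0, 0.05, t.size)

# Design matrix: one cosine and one sine column per constituent.
cols = []
for f in freqs.values():
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, series, rcond=None)

for i, name in enumerate(freqs):
    amp = np.hypot(coef[2 * i], coef[2 * i + 1])     # constituent amplitude
    print(f"{name}: amplitude ~ {amp:.3f}")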
Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia
2015-09-01
We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.
Applications of MIDAS regression in analysing trends in water quality
NASA Astrophysics Data System (ADS)
Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.
2014-04-01
We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets with different classes of variables, including water quality, hydrological, and meteorological variables. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency of the data collection: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of MIDAS regression is its flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-sampling nature of the data.
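The defining feature of MIDAS regression is that high-frequency covariates (daily rainfall) enter a low-frequency response model (fortnightly water quality) through a lag-weight function controlled by a small number of parameters. A minimal sketch using exponential Almon weights, one common MIDAS weighting scheme assumed here for illustration; all data are synthetic:

import numpy as np
from scipy.optimize import minimize

def almon_weights(theta1, theta2, n_lags):
    # Exponential Almon lag: many lags, only two shape parameters.
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

rng = np.random.default_rng(3)
daily_rain = rng.gamma(2.0, 3.0, size=14 * 50)       # 50 fortnights of daily data
rain_lags = daily_rain.reshape(50, 14)               # 14 daily lags per observation
true_w = almon_weights(0.1, -0.05, 14)
wq = 2.0 + 1.5 * rain_lags @ true_w + rng.normal(0, 0.2, 50)   # fortnightly response

def sse(params):
    b0, b1, t1, t2 = params
    fit = b0 + b1 * rain_lags @ almon_weights(t1, t2, 14)
    return ((wq - fit) ** 2).sum()

res = minimize(sse, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x)   # recovered intercept, slope, and lag-shape parameters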
NASA Astrophysics Data System (ADS)
Tang, Ronglin; Li, Zhao-Liang; Sun, Xiaomin; Bi, Yuyun
2017-01-01
Surface evapotranspiration (ET) is an important component of the water and energy exchange between land and atmospheric systems. This paper investigated whether using variable surface resistances in the reference ET estimates from the full-form Penman-Monteith (PM) equation could improve the upscaled daily ET estimates of the constant reference evaporative fraction (EFr, the ratio of actual to reference grass/alfalfa ET) method on clear-sky days, using ground-based measurements. Half-hourly near-surface meteorological variables and eddy covariance (EC) system-measured latent heat flux data on clear-sky days were collected at two sites with different climatic conditions, namely the subhumid Yucheng station in northern China and the arid Yingke site in northwestern China, and were used as the model input and ground truth, respectively. The results showed that using the Food and Agriculture Organization (FAO)-PM equation, the American Society of Civil Engineers-PM equation, and the full-form PM equation to estimate the reference ET in the constant EFr method produced progressively smaller upscaled daily ET at a given time from midmorning to midafternoon. All three PM equations produced the best results at noon at both sites, regardless of whether the energy imbalance of the EC measurements was closed. When the EC measurements were not corrected for energy imbalance, using variable surface resistance in the full-form PM equation could improve the ET upscaling in the midafternoon, but worse results may occur from midmorning to noon. Site-to-site and time-to-time variations were found in the performance of a given PM equation (with fixed or variable surface resistances) before and after the energy imbalance was closed.
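The constant-EFr upscaling step itself is simple arithmetic once instantaneous and daily reference ET are in hand. A minimal sketch with hypothetical values; in practice the reference ET terms would come from one of the PM equations compared above:

def upscale_daily_et(et_inst, et_ref_inst, et_ref_daily):
    # Reference evaporative fraction at the observation time...
    ef_r = et_inst / et_ref_inst
    # ...assumed constant through the day, scales daily reference ET.
    return ef_r * et_ref_daily

et_daily = upscale_daily_et(et_inst=0.42, et_ref_inst=0.55, et_ref_daily=5.1)
print(f"daily ET ~ {et_daily:.2f} mm/day")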
Var C: Long-term photometric and spectral variability of a luminous blue variable in M 33
NASA Astrophysics Data System (ADS)
Burggraf, B.; Weis, K.; Bomans, D. J.; Henze, M.; Meusinger, H.; Sholukhova, O.; Zharova, A.; Pellerin, A.; Becker, A.
2015-09-01
Aims: The highly unstable phase of luminous blue variables (LBVs) is still not well understood. It remains uncertain why, and which, massive stars enter this phase. Investigating the variability by looking for possible regular or even (semi-)periodic behaviour could hint at the underlying mechanism for these variations and might answer the question of where they originate. Learning more about the LBV phase also means better understanding massive stars in general, which have a strong and significant influence on the ISM (e.g. by enriching it with heavy elements and providing ionising radiation and kinetic energy) and hence on their host galaxy. Methods: Photometric and spectroscopic data were taken for the LBV Var C in M 33 to investigate its recent status. In addition, scanned historic plates, archival data, and data from the literature were gathered to trace Var C's behaviour in the past. Its long-term variability and periodicity were investigated. Results: Our investigation of the variability indicates possible (semi-)periodic behaviour with a period of 42.3 years for Var C. Because Var C's light curve covers a time span of more than 100 years, more than two full periods of the cycle are visible. The critical historic maximum around 1905 is less strong but discernible even in the sparse historic data. The semi-periodic and secular structure of the light curve is similar to that of LMC R71. Both light curves hint at a new aspect in the evolution of LBVs. Based on observations collected at the Thüringer Landessternwarte (TLS) Tautenburg. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). Tables 2-4 and 6 are available in electronic form at http://www.aanda.org
Evaluating Corn (Zea Mays L.) N Variability Via Remote Sensed Data
NASA Technical Reports Server (NTRS)
Sullivan, D. G.; Shaw, J. N.; Mask, P. L.; Rickman, D.; Luvall, J.; Wersinger, J. M.
2003-01-01
Transformations and losses of nitrogen (N) throughout the growing season can be costly. Methods currently in place to improve N management and facilitate split N applications during the growing season can be time consuming and logistically difficult. Remote sensing (RS) may be a method to rapidly assess temporal changes in crop N status and promote more efficient N management. This study was designed to evaluate the ability of three different RS platforms to predict N variability in corn (Zea mays L.) leaves during vegetative and early reproductive growth stages. Plots (15 x 15 m) were established in the Coastal Plain (CP) and Appalachian Plateau (AP) physiographic regions each spring from 2000 to 2002 in a completely randomized design. Treatments consisted of four N rates (0, 56, 112, and 168 kg N/ha) applied as ammonium nitrate (NH4NO3), replicated four times. Spectral measurements were acquired via spectroradiometer (lambda = 350 - 1050 nm), Airborne Terrestrial Applications Sensor (ATLAS) (lambda = 400 - 12,500 nm), and the IKONOS satellite (lambda = 450 - 900 nm). Spectroradiometer data were collected on a biweekly basis from V4 through R1. Due to the nature of satellite and aircraft acquisitions, these data were acquired as available. Chlorophyll meter (SPAD) readings and tissue N were collected as ancillary data along with each RS acquisition. Results showed that vegetation indices derived from hand-held spectroradiometer measurements as early as V6-V8 were linearly related to yield and tissue N content. ATLAS data were correlated with tissue N at the AP site during the V6 stage (r2 = 0.66), but no significant relationships were observed at the CP site. No significant relationships were observed between plant N and IKONOS imagery. Using a combination of the green normalized difference vegetation index (GNDVI) and the normalized difference vegetation index (NDVI), RS data acquired via ATLAS and the spectroradiometer could be used to evaluate tissue N variability and estimate corn yield variability under ideal growing conditions.
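Both indices used in the study are simple band ratios. A minimal sketch with hypothetical reflectances for a single plot:

def ndvi(nir, red):
    # Normalized difference vegetation index.
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    # Green NDVI: substitutes the green band for red.
    return (nir - green) / (nir + green)

nir, red, green = 0.45, 0.08, 0.10
print(f"NDVI = {ndvi(nir, red):.2f}, GNDVI = {gndvi(nir, green):.2f}")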
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that our algorithm proves flexible, robust, and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
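A minimal sketch of a linesearch proximal-gradient iteration for min f(x) + g(x), here with the Euclidean metric, g = lambda*||x||_1 (so the prox is soft-thresholding), and simple backtracking on a sufficient-decrease condition; this is a simplified stand-in for the variable-metric rule analyzed in the paper, not the authors' exact scheme:

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad(f, grad_f, lam, x0, step=1.0, beta=0.5, iters=200):
    x = x0
    for _ in range(iters):
        t = step
        g = grad_f(x)
        while True:
            z = soft_threshold(x - t * g, t * lam)
            d = z - x
            # Sufficient decrease of the smooth part along the proximal step.
            if f(z) <= f(x) + g @ d + d @ d / (2 * t):
                break
            t *= beta   # backtrack
        x = z
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ (A @ x) - b @ x   # smooth test function
grad_f = lambda x: A @ x - b
print(prox_grad(f, grad_f, lam=0.1, x0=np.zeros(2)))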
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vuichard, N.; Papale, D.
In this study, exchanges of carbon, water and energy between the land surface and the atmosphere are monitored by the eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) at a high temporal resolution, spanning from 1989 to today. These data are, however, not measured at site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36 %, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76 % of the standard deviation of the in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.
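The correction step amounts to fitting a site-specific mapping from the reanalysis variable to the tower measurement on overlapping timestamps, then judging the residual mismatch by RMSE. A minimal sketch with synthetic data and a plain linear correction (the paper's actual downscaling may differ):

import numpy as np

rng = np.random.default_rng(4)
era = rng.normal(15.0, 8.0, 1000)                    # reanalysis temperature
site = 0.9 * era - 2.0 + rng.normal(0, 1.5, 1000)    # biased, noisier "truth"

slope, intercept = np.polyfit(era, site, 1)           # bias-correction fit
corrected = slope * era + intercept                   # usable where site data have gaps

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE raw ERA vs site:       {rmse(era, site):.2f}")
print(f"RMSE corrected ERA vs site: {rmse(corrected, site):.2f}")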
[Study on Application of NIR Spectral Information Screening in Identification of Maca Origin].
Wang, Yuan-zhong; Zhao, Yan-li; Zhang, Ji; Jin, Hang
2016-02-01
Medicinal and edible plant Maca is rich in various nutrients and has great medicinal value. Based on near-infrared diffuse reflectance spectra, 139 Maca samples collected from Peru and Yunnan were used to identify their geographical origins. Multiplication signal correction (MSC) coupled with second derivative (SD) and Norris derivative filter (ND) was employed in spectral pretreatment. The spectrum range (7,500-4,061 cm⁻¹) was chosen by spectrum standard deviation. Combined with principal component analysis-Mahalanobis distance (PCA-MD), the appropriate number of principal components was selected as 5. Based on the spectrum range and the number of principal components selected, two abnormal samples were eliminated by the modular group iterative singular sample diagnosis method. Then, four methods were used to filter spectral variable information: competitive adaptive reweighted sampling (CARS), Monte Carlo-uninformative variable elimination (MC-UVE), genetic algorithm (GA), and subwindow permutation analysis (SPA). The spectral variable information filtered was evaluated by model population analysis (MPA). The results showed that RMSECV(SPA) > RMSECV(CARS) > RMSECV(MC-UVE) > RMSECV(GA), at 2.14, 2.05, 2.02, and 1.98, respectively, and the numbers of spectral variables were 250, 240, 250 and 70, respectively. According to the spectral variables filtered, partial least squares discriminant analysis (PLS-DA) was used to build the model, with random selection of 97 samples as the training set and the other 40 samples as the validation set. The results showed that, R²: GA > MC-UVE > CARS > SPA, RMSEC and RMSEP: GA < MC-UVE < CARS
Jezewski, Janusz; Wrobel, Janusz; Matonia, Adam; Horoba, Krzysztof; Martinek, Radek; Kupka, Tomasz; Jezewski, Michal
2017-01-01
Great expectations are connected with the application of indirect fetal electrocardiography (FECG), especially for home telemonitoring of pregnancy. Evaluation of fetal heart rate (FHR) variability, when determined from FECG, uses the same criteria as for FHR signals acquired classically through the ultrasound Doppler method (US). Therefore, the equivalence of those two methods has to be confirmed, both in terms of recognizing classical FHR patterns (baseline, accelerations/decelerations (A/D), and long-term variability (LTV)) and in evaluating FHR variability with beat-to-beat accuracy (short-term variability, STV). The research material consisted of recordings collected from 60 patients in physiological and complicated pregnancies. The FHR signals of at least 30 min duration were acquired dually, using two systems for fetal and maternal monitoring based on the US and FECG methods. Recordings were retrospectively divided into normal (41) and abnormal (19) fetal outcome groups. The complex process of data synchronization and validation was performed. The low level of signal loss obtained (4.5% for the US and 1.8% for the FECG method) enabled both direct comparison of the FHR signals and indirect comparison using clinically relevant parameters. The direct comparison showed that there is no measurement bias between the acquisition methods, whereas the mean absolute difference, important for both visual and computer-aided signal analysis, was equal to 1.2 bpm. Such low differences do not affect the visual assessment of the FHR signal. However, in the indirect comparison, inconsistencies of several percent were noted. This mainly affects the acceleration (7.8%) and particularly the deceleration (54%) patterns. In the signals acquired using electrocardiography, the obtained STV and LTV indices showed significant overestimation, by 10% and 50%, respectively. It also turned out that the ability of the clinical parameters to distinguish between the normal and abnormal groups does not depend on the acquisition method. The obtained results prove that abdominal FECG, considered as an alternative to the ultrasound approach, does not change the interpretation of the FHR signal, which was confirmed during both visual assessment and automated analysis. PMID:28559852
Working Up a Good Sweat – The Challenges of Standardising Sweat Collection for Metabolomics Analysis
Hussain, Joy N; Mantri, Nitin; Cohen, Marc M
2017-01-01
Introduction Human sweat is a complex biofluid of interest to diverse scientific fields. Metabolomics analysis of sweat promises to improve screening, diagnosis and self-monitoring of numerous conditions through new applications and greater personalisation of medical interventions. Before these applications can be fully developed, existing methods for the collection, handling, processing and storage of human sweat need to be revised. This review presents a cross-disciplinary overview of the origins, composition, physical characteristics and functional roles of human sweat, and explores the factors involved in standardising sweat collection for metabolomics analysis. Methods A literature review of human sweat analysis over the past 10 years (2006–2016) was performed to identify studies with metabolomics or similarly applicable ‘omics’ analysis. These studies were reviewed with attention to sweat induction and sampling techniques, timing of sweat collection, sweat storage conditions, laboratory derivation, processing and analytical platforms. Results Comparative analysis of 20 studies revealed numerous factors that can significantly impact the validity, reliability and reproducibility of sweat analysis including: anatomical site of sweat sampling, skin integrity and preparation; temperature and humidity at the sweat collection sites; timing and nature of sweat collection; metabolic quenching; transport and storage; qualitative and quantitative measurements of the skin microbiota at sweat collection sites; and individual variables such as diet, emotional state, metabolic conditions, pharmaceutical, recreational drug and supplement use. Conclusion Further development of standard operating protocols for human sweat collection can open the way for sweat metabolomics to significantly add to our understanding of human physiology in health and disease. PMID:28798503
Barregard, Lars; Møller, Peter; Henriksen, Trine; Mistry, Vilas; Koppen, Gudrun; Rossner, Pavel; Sram, Radim J; Weimann, Allan; Poulsen, Henrik E; Nataf, Robert; Andreoli, Roberta; Manini, Paola; Marczylo, Tim; Lam, Patricia; Evans, Mark D; Kasai, Hiroshi; Kawai, Kazuaki; Li, Yun-Shan; Sakai, Kazuo; Singh, Rajinder; Teichert, Friederike; Farmer, Peter B; Rozalski, Rafal; Gackowski, Daniel; Siomek, Agnieszka; Saez, Guillermo T; Cerda, Concha; Broberg, Karin; Lindh, Christian; Hossain, Mohammad Bakhtiar; Haghdoost, Siamak; Hu, Chiung-Wen; Chao, Mu-Rong; Wu, Kuen-Yuh; Orhan, Hilmi; Senduran, Nilufer; Smith, Raymond J; Santella, Regina M; Su, Yali; Cortez, Czarina; Yeh, Susan; Olinski, Ryszard; Loft, Steffen; Cooke, Marcus S
2013-06-20
Urinary 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxodG) is a widely used biomarker of oxidative stress. However, variability between chromatographic and ELISA methods hampers interpretation of data, and this variability may increase should urine composition differ between individuals, leading to assay interference. Furthermore, optimal urine sampling conditions are not well defined. We performed inter-laboratory comparisons of 8-oxodG measurement between mass spectrometric-, electrochemical- and ELISA-based methods, using common within-technique calibrants to analyze 8-oxodG-spiked phosphate-buffered saline and urine samples. We also investigated human subject- and sample collection-related variables, as potential sources of variability. Chromatographic assays showed high agreement across urines from different subjects, whereas ELISAs showed far more inter-laboratory variation and generally overestimated levels, compared to the chromatographic assays. Excretion rates in timed 'spot' samples showed strong correlations with 24 h excretion (the 'gold' standard) of urinary 8-oxodG (rp 0.67-0.90), although the associations were weaker for 8-oxodG adjusted for creatinine or specific gravity (SG). The within-individual excretion of 8-oxodG varied only moderately between days (CV 17% for 24 h excretion and 20% for first void, creatinine-corrected samples). This is the first comprehensive study of both human and methodological factors influencing 8-oxodG measurement, providing key information for future studies with this important biomarker. ELISA variability is greater than chromatographic assay variability, and cannot determine absolute levels of 8-oxodG. Use of standardized calibrants greatly improves intra-technique agreement and, for the chromatographic assays, importantly allows integration of results for pooled analyses. If 24 h samples are not feasible, creatinine- or SG-adjusted first morning samples are recommended.
Borchers, M R; Chang, Y M; Proudfoot, K L; Wadsworth, B A; Stone, A E; Bewley, J M
2017-07-01
The objective of this study was to use automated activity, lying, and rumination monitors to characterize prepartum behavior and predict calving in dairy cattle. Data were collected from 20 primiparous and 33 multiparous Holstein dairy cattle from September 2011 to May 2013 at the University of Kentucky Coldstream Dairy. The HR Tag (SCR Engineers Ltd., Netanya, Israel) automatically collected neck activity and rumination data in 2-h increments. The IceQube (IceRobotics Ltd., South Queensferry, United Kingdom) automatically collected number of steps, lying time, standing time, number of transitions from standing to lying (lying bouts), and total motion, summed in 15-min increments. IceQube data were summed in 2-h increments to match HR Tag data. All behavioral data were collected for 14 d before the predicted calving date. Retrospective data analysis was performed using mixed linear models to examine behavioral changes by day in the 14 d before calving. Bihourly behavioral differences from baseline values over the 14 d before calving were also evaluated using mixed linear models. Changes in daily rumination time, total motion, lying time, and lying bouts occurred in the 14 d before calving. In the bihourly analysis, extreme values for all behaviors occurred in the final 24 h, indicating that the monitored behaviors may be useful in calving prediction. To determine whether technologies were useful at predicting calving, random forest, linear discriminant analysis, and neural network machine-learning techniques were constructed and implemented using R version 3.1.0 (R Foundation for Statistical Computing, Vienna, Austria). These methods were used on variables from each technology and all combined variables from both technologies. A neural network analysis that combined variables from both technologies at the daily level yielded 100.0% sensitivity and 86.8% specificity. A neural network analysis that combined variables from both technologies in bihourly increments was used to identify 2-h periods in the 8 h before calving with 82.8% sensitivity and 80.4% specificity. Changes in behavior and machine-learning alerts indicate that commercially marketed behavioral monitors may have calving prediction potential. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
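The final classification step (a neural network on combined behavioral features, scored by sensitivity and specificity) can be sketched with scikit-learn; the study itself used R, and the features below are synthetic stand-ins for the rumination, activity, and lying variables:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 6))   # e.g. rumination, steps, lying time, lying bouts...
y = (X[:, 0] - X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = calving window

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(Xtr, ytr)

tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")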
Marineau, Mathieu D.; Minear, J. Toby; Wright, Scott A.
2015-01-01
Collecting physical bedload measurements is an expensive and time-consuming endeavor that rarely captures the spatial and temporal variability of sediment transport. Technological advances can improve monitoring of sediment transport by filling in temporal gaps between physical sampling periods. We have developed a low-cost hydrophone recording system designed to record the sediment-generated noise (SGN) resulting from collisions of coarse particles (generally larger than 4 mm) in gravel-bedded rivers. The sound level of the signal recorded by the hydrophone is assumed to be proportional to the magnitude of bedload transport as long as the acoustic frequency of the SGN is known, the grain-size distribution of the bedload is assumed constant, and the frequency band of the ambient noise is known and can be excluded from the analysis. Each system has two hydrophone heads and samples at half-hour intervals. Ten systems were deployed on the San Joaquin River, California, and its tributaries for ten months during water year 2014, and two systems were deployed during a flood event on the Gunnison River, Colorado in 2014. A mobile hydrophone system was also tested at both locations to collect longitudinal profiles of SGN. Physical samples of bedload were not collected in this study. In lieu of physical measurements, several audio recordings from each site were aurally reviewed to confirm the presence or absence of SGN, and hydraulic data were compared to historical measurements of bedload transport or transport capacity estimates to verify if hydraulic conditions during the study would likely produce bedload transport. At one site on the San Joaquin River, the threshold of movement was estimated to have occurred around 30 m³/s based on SGN data. During the Gunnison River flood event, continuous data showed clockwise hysteresis, indicating that bedload transport was generally less at any given streamflow discharge during the recession limb of the hydrograph. Spatial variability in transport was also detected in the longitudinal profiles audibly and using signal processing algorithms. These experiments demonstrate the ability of hydrophone technology to capture the temporal and spatial variability of sediment transport, which may be missed when samples are collected using conventional methods.
The Pediatric Risk of Mortality Score: Update 2015
Pollack, Murray M.; Holubkov, Richard; Funai, Tomohiko; Dean, J. Michael; Berger, John T.; Wessel, David L.; Meert, Kathleen; Berg, Robert A.; Newth, Christopher J. L.; Harrison, Rick E.; Carcillo, Joseph; Dalton, Heidi; Shanley, Thomas; Jenkins, Tammara L.; Tamburro, Robert
2016-01-01
Objectives: Severity of illness measures have long been used in pediatric critical care. The Pediatric Risk of Mortality is a physiologically based score used to quantify physiologic status, and when combined with other independent variables, it can compute expected mortality risk and expected morbidity risk. Although the physiologic ranges for the Pediatric Risk of Mortality variables have not changed, recent Pediatric Risk of Mortality data collection improvements have been made to adapt to new practice patterns, minimize bias, and reduce potential sources of error. These include changing the outcome to hospital survival/death for the first PICU admission only, shortening the data collection period and altering the Pediatric Risk of Mortality data collection period for patients admitted for “optimizing” care before cardiac surgery or interventional catheterization. This analysis incorporates those changes, assesses the potential for Pediatric Risk of Mortality physiologic variable subcategories to improve score performance, and recalibrates the Pediatric Risk of Mortality score, placing the algorithms (Pediatric Risk of Mortality IV) in the public domain. Design: Prospective cohort study from December 4, 2011, to April 7, 2013. Measurements and Main Results: Among 10,078 admissions, the unadjusted mortality rate was 2.7% (site range, 1.3–5.0%). Data were divided into derivation (75%) and validation (25%) sets. The new Pediatric Risk of Mortality prediction algorithm (Pediatric Risk of Mortality IV) includes the same Pediatric Risk of Mortality physiologic variable ranges with the subcategories of neurologic and nonneurologic Pediatric Risk of Mortality scores, age, admission source, cardiopulmonary arrest within 24 hours before admission, cancer, and low-risk systems of primary dysfunction. The area under the receiver operating characteristic curve for the development and validation sets was 0.88 ± 0.013 and 0.90 ± 0.018, respectively. The Hosmer-Lemeshow goodness of fit statistics indicated adequate model fit for both the development (p = 0.39) and validation (p = 0.50) sets. Conclusions: The new Pediatric Risk of Mortality data collection methods include significant improvements that minimize the potential for bias and errors, and the new Pediatric Risk of Mortality IV algorithm for survival and death has excellent prediction performance. PMID:26492059
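A sketch of the Hosmer-Lemeshow goodness-of-fit check reported above (risk groups of predicted probability; chi-square with g − 2 degrees of freedom). This is the generic test, not the PRISM IV algorithm itself, and the data are synthetic.

```python
# Hosmer-Lemeshow statistic over g groups sorted by predicted risk.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, p_pred, groups=10):
    order = np.argsort(p_pred)
    y, p = np.asarray(y_true)[order], np.asarray(p_pred)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, groups - 2)   # p > 0.05 suggests adequate fit

rng = np.random.default_rng(1)
p = rng.uniform(0.001, 0.2, 2000)            # synthetic predicted risks
y = rng.binomial(1, p)                       # outcomes drawn from those risks
print(hosmer_lemeshow(y, p))
```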
In, Haejin; Simon, Cassie A; Phillips, Jerri Linn; Posner, Mitchell C; Ko, Clifford Y; Winchester, David P
2015-05-01
Cancer recurrence is a critical outcome in cancer care. However, population-level recurrence information is currently unavailable. Tumor registries provide an opportunity to generate this information, but require major reform. Our objectives were to (1) determine causes for variability in collection of recurrence, and (2) identify targets for intervention. On-site interviews and observations of tumor registry follow-up procedures were conducted at Commission on Cancer (CoC) accredited hospitals. Information regarding registry resources (caseload, staffing, chart availability), follow-up methods and perceived causes for difficulty in obtaining recurrence information was obtained. Seven NCI/academic, 5 comprehensive community and 2 community centers agreed to participate. Hospitals were inconsistent in their investigation of cancer recurrence, resulting in underreporting of rates of recurrence. Hospital characteristics, registry staffing, staff qualifications and medical chart access influenced follow-up practices. Coding standards and definitions for recurrence were suboptimal, resulting in hospital variability of recurrence reporting. Finally, inability to identify cases lost to follow-up in collected data prevents accurate analysis of recurrence rates. Tumor registries collect varying degrees of recurrence information and provide the underpinnings to capture population-level cancer recurrence data. Targets for intervention are listed, and provide a roadmap to obtain this critical information in cancer care. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Hopkins, Martha H. (Marty)
1997-01-01
Describes a mathematical investigation centered on collecting and analyzing data in a scientific setting. The first experiment teaches the basics of data collection--students should be able to identify variables that might have affected experimental results. The second activity includes designing an experiment to control for these variables. The…
A Call to Standardize Preanalytic Data Elements for Biospecimens, Part II.
Robb, James A; Bry, Lynn; Sluss, Patrick M; Wagar, Elizabeth A; Kennedy, Mary F
2015-09-01
Biospecimens must have appropriate clinical annotation (data) to ensure optimal quality for both patient care and research. Additional clinical preanalytic variables are the focus of this continuing study. To complete the identification of the essential preanalytic variables (data fields) that can, and in some instances should, be attached to every collected biospecimen by adding the additional specific variables for clinical chemistry and microbiology to our original 170 variables. The College of American Pathologists Diagnostic Intelligence and Health Information Technology Committee sponsored a second Biorepository Working Group to complete the list of preanalytic variables for annotating biospecimens. Members of the second Biorepository Working Group are experts in clinical pathology and microbiology. Additional preanalytic area-specific variables were identified and ranked along with definitions and potential negative impacts if the variable is not attached to the biospecimen. The draft manuscript was reviewed by additional national and international stakeholders. Four additional required preanalytic variables were identified specifically for clinical chemistry and microbiology biospecimens that can be used as a guide for site-specific implementation into patient care and research biorepository processes. In our collective experience, selecting which of the many preanalytic variables to attach to any specific set of biospecimens used for patient care and/or research is often difficult. The additional ranked list should be of practical benefit when selecting preanalytic variables for a given biospecimen collection.
Maswadeh, Waleed M; Snyder, A Peter
2015-05-30
Variable responses are fundamental to all experiments, and they can consist of information-rich, redundant, and low-intensity signals. A dataset can consist of a collection of variable responses over multiple classes or groups. Variables that contain very little information are usually removed from a dataset; sometimes all the variables are used in the data analysis phase. It is common practice to discriminate between two distributions of data; however, there is no formal algorithm to arrive at a degree of separation (DS) between two distributions of data. The DS is defined herein as the average of the sum of the areas from the probability density functions (PDFs) of A and B that contain at least a given percentage of A and/or B. Thus, DS90 is the average of the sum of the PDF areas of A and B that contain ≥90% of A and/or B. To arrive at a DS value, two synthesized PDFs or very large experimental datasets are required. Experimentally, it is common practice to generate relatively small datasets. Therefore, the challenge was to find a statistical parameter that can be used on small datasets to estimate, and highly correlate with, the DS90 parameter. Established statistical methods include the overlap area of the two data distribution profiles, Welch's t-test, the Kolmogorov-Smirnov (K-S) test, the Mann-Whitney-Wilcoxon test, and the area under the receiver operating characteristic (ROC) curve (AUC). The area between the ROC curve and the diagonal (ACD) and the length of the ROC curve (LROC) are introduced. The established methods, ACD, and LROC were correlated with the DS90 when applied to many pairs of synthesized PDFs. The LROC method provided the best linear correlation with, and estimation of, the DS90. The estimated DS90 from the LROC (DS90-LROC) is applied, as an example, to a database of three Italian wines consisting of thirteen variable responses, for variable-ranking consideration. An important highlight of the DS90-LROC method is utilizing the LROC curve methodology to test all variables one at a time with all pairs of classes in a dataset. Copyright © 2015 Elsevier B.V. All rights reserved.
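A minimal sketch of the LROC quantity described above: the length of the ROC curve, summed over consecutive ROC points. A theoretical non-discriminating ROC (the diagonal) has length √2 ≈ 1.414 and a perfect one has length 2; empirical staircase curves exaggerate length, so smoothing or binning may be needed in practice. The correlation to DS90 reported in the abstract is not reproduced here.

```python
# ROC curve length (LROC) from class labels and scores.
import numpy as np
from sklearn.metrics import roc_curve

def roc_length(labels, scores) -> float:
    fpr, tpr, _ = roc_curve(labels, scores)
    return float(np.hypot(np.diff(fpr), np.diff(tpr)).sum())

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 100)        # class A responses (synthetic)
b = rng.normal(1.5, 1.0, 100)        # class B responses (synthetic)
scores = np.concatenate([a, b])
labels = np.array([0] * 100 + [1] * 100)
print(roc_length(labels, scores))
```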
Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.
2015-01-01
Background: Given their ability to process highly dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods: Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results: In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver operating characteristic curve (AUROC) of 0.704. Next, the gradient-boosted decision tree had an AUROC of 0.665, and the k-nearest neighbor algorithm had an AUROC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUROC of 0.727. Logistic regression had a lower AUROC of 0.5 for predicting pain outcomes on PODs 1 and 3. Conclusions: Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
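A hedged sketch of the forecasting setup described above: an L1-penalized (LASSO) logistic model scored by area under the ROC curve. Synthetic data stand in for the 796 clinical variables; the penalty strength C is an illustrative choice.

```python
# LASSO-style logistic regression with AUROC evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 100))               # stand-in clinical variables
logit = X[:, :5].sum(axis=1) - 1.0             # only 5 variables truly matter
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # moderate-to-severe pain flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1]))
print("variables retained:", int((lasso.coef_ != 0).sum()))
```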
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.
1994-01-01
A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg²⁺, Na⁺, K⁺, and NH₄⁺ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits was greater than for analytes that did not typically approach detection limits. The median relative difference for SO₄²⁻ and NO₃⁻ concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H⁺ concentration and deposition ranged from less than 10% at sites with typically high levels of H⁺ concentration to greater than 30% at sites with low H⁺ concentration. Median difference for analyte concentration and deposition was typically 1.5-2 times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
Wang, Pei; Zhang, Hui; Yang, Hailong; Nie, Lei; Zang, Hengchang
2015-02-25
Near-infrared (NIR) spectroscopy has been developed into an indispensable tool for both academic research and industrial quality control in a wide field of applications. The feasibility of NIR spectroscopy to monitor the concentrations of puerarin, daidzin, daidzein, and total isoflavonoid (TIF) during the extraction process of kudzu (Pueraria lobata) was verified in this work. NIR spectra were collected in transmission mode and pretreated with smoothing and derivative methods. Partial least squares regression (PLSR) was used to establish calibration models. Three different variable selection methods, the correlation coefficient method, interval partial least squares (iPLS), and the successive projections algorithm (SPA), were performed and compared with models based on all of the variables. The results showed that the approach was very efficient and environmentally friendly for rapid determination of the four quality indices (QIs) in the kudzu extraction process. The method established here may have the potential to be used as a process analytical technology (PAT) tool in the future. Copyright © 2014 Elsevier B.V. All rights reserved.
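A sketch of a PLSR calibration combined with a simple correlation-coefficient variable filter, one of the three selection strategies named above (iPLS and SPA are not shown). Spectra and reference values are simulated; the number of retained wavelengths and latent variables are illustrative.

```python
# PLSR with correlation-based wavelength selection.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 500))            # 80 NIR spectra, 500 wavelengths
y = X[:, 100:110].mean(axis=1) + 0.1 * rng.normal(size=80)  # e.g. puerarin

# keep the wavelengths most correlated with the reference concentration
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(np.abs(r))[-50:]

pls = PLSRegression(n_components=5).fit(X[:, keep], y)
print("R^2 on selected variables:", pls.score(X[:, keep], y))
```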
Nonparametric Hierarchical Bayesian Model for Functional Brain Parcellation
Lashkari, Danial; Sridharan, Ramesh; Vul, Edward; Hsieh, Po-Jang; Kanwisher, Nancy; Golland, Polina
2011-01-01
We develop a method for unsupervised analysis of functional brain images that learns group-level patterns of functional response. Our algorithm is based on a generative model that comprises two main layers. At the lower level, we express the functional brain response to each stimulus as a binary activation variable. At the next level, we define a prior over the sets of activation variables in all subjects. We use a Hierarchical Dirichlet Process as the prior in order to simultaneously learn the patterns of response that are shared across the group, and to estimate the number of these patterns supported by data. Inference based on this model enables automatic discovery and characterization of salient and consistent patterns in functional signals. We apply our method to data from a study that explores the response of the visual cortex to a collection of images. The discovered profiles of activation correspond to selectivity to a number of image categories such as faces, bodies, and scenes. More generally, our results appear superior to the results of alternative data-driven methods in capturing the category structure in the space of stimuli. PMID:21841977
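A loose, single-level analogue of the model described above: a Dirichlet process mixture that infers the number of response patterns from data. scikit-learn offers a DP mixture, not the full Hierarchical Dirichlet Process, so this is a stand-in, and the response matrix is synthetic.

```python
# DP mixture as a stand-in for nonparametric pattern discovery.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
# rows = voxels/units, columns = activation estimates for each stimulus
responses = np.vstack([rng.normal(m, 0.3, size=(100, 8)) for m in (0, 1, 2)])

dpmm = BayesianGaussianMixture(
    n_components=10,                               # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(responses)
print("effective clusters:", np.sum(dpmm.weights_ > 0.01))
```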
Data standardization: The key to effective management
Wagner, C. Russell
1991-01-01
Effective management of the nation's water resources is dependent upon accurate and consistent hydrologic information. Before the emergence of environmental concerns in the 1960's, most hydrologic information was collected by the U.S. Geological Survey and other Federal agencies that used fairly consistent methods and equipment. In the past quarter century, however, increased environmental awareness has resulted in an expansion of hydrologic data collection not only by Federal agencies, but also by state and municipal governments, university investigators, and private consulting firms. The acceptance and use of standard methods of collecting and processing hydrologic data would contribute to cost savings and to greater credibility of flow information vital to responsible assessment and management of the nation's water resources. This paper traces the evolution of the requirements and uses of open-channel flow information in the U.S., and the sequence of efforts to standardize the methods used to obtain this information in the future. The variable nature of naturally flowing rivers results in continually changing hydraulic properties of their channels. Those persons responsible for measurement of water flowing in open channels (streamflow) must use a large amount of judgement in the selection of appropriate equipment and technique to obtain accurate flow information. Standardization of the methods used in the measurement of streamflow is essential to assure consistency of data, but must also allow considerable latitude for individual judgement to meet constantly changing field conditions.
Bucher, Denis; Pierce, Levi C T; McCammon, J Andrew; Markwick, Phineus R L
2011-04-12
We have implemented the accelerated molecular dynamics approach (Hamelberg, D.; Mongan, J.; McCammon, J. A. J. Chem. Phys. 2004, 120 (24), 11919) in the framework of ab initio MD (AIMD). Using three simple examples, we demonstrate that accelerated AIMD (A-AIMD) can be used to accelerate solvent relaxation in AIMD simulations and facilitate the detection of reaction coordinates: (i) We show, for one cyclohexane molecule in the gas phase, that the method can be used to accelerate the rate of the chair-to-chair interconversion by a factor of ∼1 × 10⁵, while allowing for the reconstruction of the correct canonical distribution of low-energy states; (ii) We then show, for a water box of 64 H₂O molecules, that A-AIMD can also be used in the condensed phase to accelerate the sampling of water conformations, without affecting the structural properties of the solvent; and (iii) The method is then used to compute the potential of mean force (PMF) for the dissociation of Na-Cl in water, accelerating the convergence by a factor of ∼3-4 compared to conventional AIMD simulations. These results suggest that A-AIMD is a useful addition to existing methods for enhanced conformational and phase-space sampling in solution. While the method does not make the use of collective variables superfluous, it also does not require the user to define a set of collective variables that can capture all the low-energy minima on the potential energy surface. This property may prove very useful when dealing with highly complex multidimensional systems that require a quantum mechanical treatment.
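The accelerated-MD bias of the cited Hamelberg, Mongan, and McCammon paper raises the potential wherever it falls below a threshold E: V*(r) = V(r) + (E − V(r))² / (α + E − V(r)) when V(r) < E. A scalar sketch of that boost follows; units and parameter values are illustrative.

```python
# Boost potential from the accelerated-MD scheme cited above.
def boosted_potential(v: float, e: float, alpha: float) -> float:
    if v >= e:
        return v                       # dynamics unmodified above threshold
    dv = (e - v) ** 2 / (alpha + e - v)
    return v + dv                      # wells raised, barriers flattened

for v in (-5.0, -2.0, 0.0, 2.0):
    print(v, "->", round(boosted_potential(v, e=1.0, alpha=2.0), 3))
```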
Yang, Mingjun; Huang, Jing; MacKerell, Alexander D
2015-06-09
Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing-potential space, or to use multiple biasing potentials to improve sampling efficiency. However, the increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem and its interactions with the environment to enhance overall conformational transitions, and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently, allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to the conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for conformational sampling in a wide range of macromolecular systems.
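The generic Metropolis exchange criterion used by replica exchange schemes such as the one described above: swap configurations x_i, x_j between replicas with Hamiltonians H_i, H_j with probability min(1, exp(−βΔ)). A schematic sketch, not the HREST-BP implementation; the toy Hamiltonians merely mimic solute scaling.

```python
# Pairwise replica-exchange acceptance test.
import math, random

def accept_exchange(h_i, h_j, x_i, x_j, beta: float) -> bool:
    delta = (h_i(x_j) + h_j(x_i)) - (h_i(x_i) + h_j(x_j))
    if delta <= 0.0:
        return True                      # downhill swaps always accepted
    return random.random() < math.exp(-beta * delta)

# toy replicas: harmonic wells with different scaling (a crude stand-in
# for Hamiltonian solute scaling)
h_full = lambda x: 0.5 * x * x
h_scaled = lambda x: 0.5 * 0.2 * x * x
print(accept_exchange(h_full, h_scaled, x_i=0.3, x_j=2.0, beta=1.0))
```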
International spinal cord injury endocrine and metabolic extended data set.
Bauman, W A; Wecht, J M; Biering-Sørensen, F
2017-05-01
The objective of this study was to develop the International Spinal Cord Injury (SCI) Endocrine and Metabolic Extended Data Set (ISCIEMEDS) within the framework of the International SCI Data Sets that would facilitate consistent collection and reporting of endocrine and metabolic findings in the SCI population. This study was conducted in an international setting. The ISCIEMEDS was developed by a working group. The initial ISCIEMEDS was revised based on suggestions from members of the International SCI Data Sets Committee, the International Spinal Cord Society (ISCoS) Executive and Scientific Committees, American Spinal Injury Association (ASIA) Board, other interested organizations, societies and individual reviewers. The data set was posted for two months on ISCoS and ASIA websites for comments. Variable names were standardized, and a suggested database structure for the ISCIEMEDS was provided by the Common Data Elements (CDEs) project at the National Institute on Neurological Disorders and Stroke (NINDS) of the US National Institute of Health (NIH), and are available at https://commondataelements.ninds.nih.gov/SCI.aspx#tab=Data_Standards. The final ISCIEMEDS contains questions on the endocrine and metabolic conditions related to SCI. Because the information may be collected at any time, the date of data collection is important to determine the time after SCI. ISCIEMEDS includes information on carbohydrate metabolism (6 variables), calcium and bone metabolism (12 variables), thyroid function (9 variables), adrenal function (2 variables), gonadal function (7 variables), pituitary function (6 variables), sympathetic nervous system function (1 variable) and renin-aldosterone axis function (2 variables). The complete instructions for data collection and the data sheet itself are freely available on the website of ISCoS (http://www.iscos.org.uk/international-sci-data-sets).
Hoogerheide, E S S; Azevedo Filho, J A; Vencovsky, R; Zucchi, M I; Zago, B W; Pinheiro, J B
2017-05-31
The cultivated garlic (Allium sativum L.) displays wide phenotypic diversity, which is derived from natural mutations and phenotypic plasticity due to dependence on soil type, moisture, latitude, altitude, and cultural practices, leading to a large number of cultivars. This study aimed to evaluate the genetic variability of 63 garlic accessions belonging to the Instituto Agronômico de Campinas and the Escola Superior de Agricultura "Luiz de Queiroz" germplasm collections. We evaluated ten quantitative characters in experimental trials conducted at two localities in the State of São Paulo, Monte Alegre do Sul and Piracicaba, during the 2007 agricultural year, in a randomized block design with five replications. The Mahalanobis distance was used to measure genetic dissimilarities. The UPGMA method and Tocher's method were used as clustering procedures. Results indicated significant variation among accessions (P < 0.01) for all evaluated characters except the percentage of secondary bulb growth at Monte Alegre do Sul, indicating the existence of genetic variation for bulb production. Germplasm evaluation considering different environments is more reliable for characterizing the genotypic variability among garlic accessions, since it diminishes environmental effects in the clustering of genotypes.
Communication and Organization in Software Development: An Empirical Study
NASA Technical Reports Server (NTRS)
Seaman, Carolyn B.; Basili, Victor R.
1996-01-01
The empirical study described in this paper addresses the issue of communication among members of a software development organization. The independent variables are various attributes of organizational structure. The dependent variable is the effort spent on sharing information which is required by the software development process in use. The research questions upon which the study is based ask whether or not these attributes of organizational structure have an effect on the amount of communication effort expended. In addition, there are a number of blocking variables which have been identified. These are used to account for factors other than organizational structure which may have an effect on communication effort. The study uses both quantitative and qualitative methods for data collection and analysis. These methods include participant observation, structured interviews, and graphical data presentation. The results of this study indicate that several attributes of organizational structure do affect communication effort, but not in a simple, straightforward way. In particular, the distances between communicators in the reporting structure of the organization, as well as in the physical layout of offices, affects how quickly they can share needed information, especially during meetings. These results provide a better understanding of how organizational structure helps or hinders communication in software development.
NASA Astrophysics Data System (ADS)
Garces, E. L.; Garces, M. A.; Christe, A.
2017-12-01
The RedVox infrasound recorder app uses microphones and barometers in smartphones to record infrasound, low-frequency sound below the threshold of human hearing. We study a device's metadata, which include position, latency time, the differences between the device's internal times and the server times, and the machine time, searching for patterns and possible errors or discontinuities in these scaled parameters. We highlight metadata variability through scaled multivariate displays (histograms, distribution curves, scatter plots), all created and organized through software development in Python. This project is helpful in ascertaining variability and honing the accuracy of smartphones, aiding the emergence of portable devices as viable geophysical data-collection instruments. It can also improve the app and cloud service by increasing efficiency and accuracy, allowing us to better document and foresee drastic natural movements like tsunamis, earthquakes, volcanic eruptions, storms, rocket launches, and meteor impacts; recorded data can later be used for studies and analysis by a variety of professions. We expect our final results to produce insight into how to counteract problematic issues in data mining and improve accuracy in smartphone data collection. By eliminating lurking variables and minimizing the effect of confounding variables, we hope to discover efficient processes to reduce superfluous precision, unnecessary errors, and data artifacts. These methods should conceivably be transferable to other areas of software development, data analytics, and statistics-based experiments, contributing a precedent of smartphone metadata studies from geophysical rather than societal data. The results should facilitate the rise of civilian-accessible, hand-held, data-gathering mobile sensor networks and yield more straightforward data mining techniques.
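A sketch of the kind of exploratory displays described above: a histogram of device latency and a latency-versus-machine-time scatter plot, in Python as the abstract indicates. The field names and distributions are illustrative; the actual RedVox metadata schema is not reproduced here.

```python
# Exploratory plots for smartphone metadata variability.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
machine_time = np.sort(rng.uniform(0, 3600, 500))        # s since start
latency = 0.05 + 0.01 * rng.standard_gamma(2.0, 500)     # s, skewed (assumed)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.hist(latency, bins=40)
ax1.set(xlabel="latency (s)", ylabel="count", title="latency distribution")
ax2.scatter(machine_time, latency, s=4)
ax2.set(xlabel="machine time (s)", ylabel="latency (s)", title="drift check")
fig.tight_layout()
plt.show()
```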
Islas-Granillo, H; Borges-Yañez, S A; Medina-Solís, C E; Galan-Vidal, C A; Navarrete-Hernández, J J; Escoffié-Ramirez, M; Maupomé, G
2014-12-01
To compare a limited array of chewing-stimulated saliva features (salivary flow, pH and buffer capacity) in a sample of elderly Mexicans with clinical, sociodemographic and socio-economic variables. A cross-sectional study was carried out in 139 adults, 60 years old and older, from two retirement homes and a senior day care centre in the city of Pachuca, Mexico. Sociodemographic, socio-economic and behavioural variables were collected through a questionnaire. A trained and standardized examiner obtained the oral clinical variables. Chewing-stimulated saliva (paraffin method) was collected and the salivary flow rate, pH and buffer capacity were measured. The analysis was performed using non-parametric tests in Stata 9.0. Mean age was 79.1 ± 9.8 years. Most of the subjects included were women (69.1%). Mean chewing-stimulated salivary flow was 0.75 ± 0.80 mL/minute, and the pH and buffer capacity were 7.88 ± 0.83 and 4.20 ± 1.24, respectively. Mean chewing-stimulated salivary flow varied (p < 0.05) across type of retirement home, tooth brushing frequency, number of missing teeth and use of dental prostheses. pH varied across the type of retirement home (p < 0.05) and marginally by age (p = 0.087); buffer capacity (p < 0.05) varied across type of retirement home, tobacco consumption and the number of missing teeth. These exploratory data add to the body of knowledge with regard to chewing-stimulated salivary features (salivary flow rate, pH and buffer capacity) and outline the variability of those features across selected sociodemographic, socio-economic and behavioural variables in a group of Mexican elders.
Systematic on-site monitoring of compliance dust samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grayson, R.L.; Gandy, J.R.
1996-12-31
Maintaining compliance with U.S. respirable coal mine dust standards can be difficult on high-productivity longwall panels. Comprehensive and systematic analysis of compliance dust sample data, coupled with access to the U.S. Bureau of Mines (USBM) DUSTPRO, can yield important information for use in maintaining compliance. The objective of this study was to develop and apply customized software for the collection, storage, modification, and analysis of respirable dust data while providing for flexible export of data and linking with the USBM's expert advisory system on dust control. Executable, IBM-compatible software was created and customized for use by the person in charge of collecting, submitting, analyzing, and monitoring respirable dust compliance samples. Both descriptive statistics and multiple regression analysis were incorporated. The software allows ASCII files to be exported and links directly with DUSTPRO. After development and validation of the software, longwall compliance data from two different mines were analyzed to evaluate the value of the software. Data included variables on respirable dust concentration, tons produced, the existence of roof/floor rock (dummy variable), and the sampling cycle (dummy variables). Because of confidentiality, specific data will not be presented, only the equations and ANOVA tables. The final regression models explained 83.8% and 61.1% of the variation in the data for the two panels. Important correlations among variables within sampling cycles showed the value of using dummy variables for sampling cycles. The software proved flexible and fast for its intended use. The insights obtained from use improved the systematic monitoring of respirable dust compliance data, especially for pinpointing the most effective dust control methods during specific sampling cycles.
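A sketch of the regression structure described above: dust concentration against production, a roof/floor-rock dummy, and sampling-cycle dummies. The data are simulated and the variable names are illustrative, not the mines' confidential data.

```python
# Multiple regression with dummy variables for sampling cycles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "dust_mg_m3": rng.normal(1.5, 0.3, n),
    "tons": rng.normal(5000, 800, n),
    "rock": rng.integers(0, 2, n),            # roof/floor rock present
    "cycle": rng.integers(1, 7, n),           # sampling cycle 1-6
})
df["dust_mg_m3"] += 0.0001 * df["tons"] + 0.2 * df["rock"]

model = smf.ols("dust_mg_m3 ~ tons + rock + C(cycle)", data=df).fit()
print(model.rsquared, model.params["tons"])
```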
NASA Astrophysics Data System (ADS)
Wright, W. J.; Shahan, T.; Sharp, N.; Comas, X.
2015-12-01
Peat soils are known to release globally significant amounts of methane (CH₄) and carbon dioxide (CO₂) to the atmosphere. However, uncertainties still remain regarding the spatio-temporal distribution of gas accumulations and the triggering mechanisms of gas-release events. Furthermore, most research on peatland gas dynamics has traditionally focused on high-latitude peatlands. Therefore, understanding gas dynamics in low-latitude peatlands (e.g., the Florida Everglades) is key to global climate research. Recent studies in the Everglades have demonstrated that biogenic gas flux values may vary when considering different temporal and spatial scales of measurements. The work presented here targets spatial variability in gas production and release at the plot scale in an approximately 85 m² area, and targets temporal variability with data collected during the spring months of two different years. This study is located in the Loxahatchee Impoundment Landscape Assessment (LILA), a hydrologically controlled, landscape-scale (30 ha) model of the Florida Everglades. Ground penetrating radar (GPR) has been used in the past to investigate biogenic gas dynamics in peat soils, and is used in this study to monitor changes in in situ gas storage. Each year, a grid of GPR profiles was collected to image changes in gas distribution in 2-D on a weekly basis, and several flux chambers outfitted with time-lapse cameras captured high-resolution (hourly) gas flux measurements inside the GPR grid. Combining these methods allows us to use a mass balance approach to estimate spatial variability in gas production rates and capture temporal variability in gas flux rates.
NASA Astrophysics Data System (ADS)
Pisek, Jan; Chen, Jing; Kobayashi, Hideki; Rautiainen, Miina; Schaepman, Michael; Karnieli, Arnon; Sprintsin, Michael; Ryu, Youngryel; Nikopensius, Maris; Raabe, Kairi
2016-04-01
Ground vegetation (understory) provides an essential contribution to the whole-stand reflectance signal in many boreal, sub-boreal, and temperate forests. Accurate knowledge about forest understory reflectance is urgently needed in various forest reflectance modelling efforts. However, systematic collections of understory reflectance data covering different sites and ecosystems are almost entirely missing. Measurement of understory reflectance is a real challenge because of the extremely high variability of irradiance at the forest floor, the weak signal in some parts of the spectrum, spectral separability issues between over- and understory, and the variable nature of the understory itself. Understory can consist of several sub-layers (regenerated trees, shrubs, grasses or dwarf shrubs, mosses, lichens, litter, bare soil), and it has spatially and temporally variable species composition and ground coverage. Additional challenges are introduced by the patchiness of ground vegetation, ground surface roughness, and understory-overstory relations. Due to this variability, remote sensing might be the only means to provide consistent data at spatially relevant scales. In this presentation, we report on retrieving seasonal courses of understory Normalized Difference Vegetation Index (NDVI) from multi-angular MODIS BRDF/Albedo data. We compared satellite-based seasonal courses of understory NDVI against an extended collection of different types of forest sites with available in-situ understory reflectance measurements. These sites are distributed along a wide latitudinal gradient in the Northern hemisphere: sparse and dense black spruce forests in Alaska and Canada, a northern European boreal forest in Finland, hemiboreal needleleaf and deciduous stands in Estonia, a mixed temperate forest in Switzerland, a cool temperate deciduous broadleaf forest in Korea, and a semi-arid pine plantation in Israel. Our results indicated that the retrieval method performs well, particularly over open forests of different types. We also demonstrated the limitations of the method for closed canopies, where the understory signal is much attenuated. The retrieved understory signal can be used, e.g., to improve estimates of leaf area index (LAI) and fAPAR in sparsely vegetated areas, and also to study the phenology of the understory layer. Our results are particularly useful for producing Northern hemisphere maps of seasonal dynamics of forests, allowing understory variability, a main contributor to uncertainty in spring emergence and fall senescence, to be retrieved separately. The inclusion of understory variability in ecological models will ultimately improve prediction and forecast horizons of vegetation dynamics.
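The index whose understory seasonal course is retrieved above is NDVI = (NIR − red) / (NIR + red), computed per pixel or observation. A minimal sketch; the reflectance values below are assumed, illustrative numbers, not measured understory spectra.

```python
# NDVI from red and near-infrared reflectances.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

# illustrative seasonal course of understory reflectances (assumed values)
red = np.array([0.08, 0.06, 0.05, 0.04, 0.05, 0.07])
nir = np.array([0.20, 0.30, 0.42, 0.48, 0.40, 0.25])
print(np.round(ndvi(nir, red), 2))   # rises toward mid-season, then falls
```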
Uncertainty assessment method for the Cs-137 fallout inventory and penetration depth.
Papadakos, G N; Karangelos, D J; Petropoulos, N P; Anagnostakis, M J; Hinis, E P; Simopoulos, S E
2017-05-01
Within the presented study, soil samples were collected in 2007 at 20 different locations in the Greek terrain, both from the surface and from depths down to 26 cm. Sampling locations were selected primarily from areas where high levels of ¹³⁷Cs deposition after the Chernobyl accident had already been identified by the Nuclear Engineering Laboratory of the National Technical University of Athens during and after 1986. At one location of relatively higher deposition, soil core samples were collected following a 60 m by 60 m Cartesian grid with a 20 m node-to-node distance. Single or paired core samples were also collected from the remaining 19 locations. Sample measurements and analysis were used to estimate the ¹³⁷Cs inventory and the corresponding depth migration, twenty years after the deposition on Greek terrain. Based on these data, the uncertainty components of the whole sampling-to-results procedure were investigated. A cause-and-effect assessment process was used to apply the law of error propagation and demonstrate that the dominant significant component of the combined uncertainty is that due to the spatial variability of the contemporary (2007) ¹³⁷Cs inventory. A secondary, yet also significant, component was identified to be the activity measurement process itself. Other, less significant uncertainty parameters were the sampling methods, the variation of soil field density with depth, and the preparation of samples for measurement. The sampling grid experiment allowed for the quantitative evaluation of the uncertainty due to spatial variability, with the assistance of semivariance analysis. A denser, optimized grid could return more accurate values for this component, but with a significantly elevated laboratory cost in terms of both human and material resources. Using the hereby collected data, and for the case of a single-core soil sampling using a well-defined sampling methodology with quality assurance, the uncertainty component due to spatial variability was evaluated to be about 19% for the ¹³⁷Cs inventory and up to 34% for the ¹³⁷Cs penetration depth. Based on the presented results and on related literature, it is argued that such high uncertainties should be anticipated for single-core samplings conducted using similar methodology and employed as ¹³⁷Cs inventory and penetration depth estimators. Copyright © 2017 Elsevier Ltd. All rights reserved.
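A sketch of combining independent relative uncertainty components in quadrature, as the law of propagation of uncertainty applied in the cause-and-effect assessment above would. The 19% spatial-variability figure echoes the abstract; the other component values are assumed for illustration only.

```python
# Combined relative uncertainty from independent components (quadrature).
import math

components = {
    "spatial variability": 0.19,   # dominant term for the inventory (abstract)
    "activity measurement": 0.05,  # assumed illustrative value
    "sampling/preparation": 0.03,  # assumed illustrative value
}
combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative uncertainty: {combined:.1%}")
```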
Variability of yellow tulp (Moraea pallida Bak.) toxicity.
Snyman, L D; Schultz, R A; van den Berg, H
2011-06-01
Yellow tulp (Moraea pallida Bak.), collected predominantly during the flowering stage from a number of sites in South Africa, showed large variation in digoxin equivalent values, indicating variability in yellow tulp toxicity. Very low values were recorded for tulp collected from certain sites in the Northern Cape.
Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin
2015-01-01
Machine learning algorithms play an important role in computer science research. Recent advancements in sensor data collection in the clinical sciences have led to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work comparing the performance of different machine learning methods, to help select an appropriate method for the specific characteristics of a data set. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks, and support vector machines, to diagnose the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. By contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.
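Case-based reasoning classifies a new case from its most similar stored cases; a k-nearest-neighbour classifier is a common minimal stand-in for that retrieve-and-reuse step, and is used here only as an analogy, not as the paper's CBR system. The feature values are synthetic.

```python
# kNN as a minimal stand-in for case retrieval in case-based reasoning.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)
# columns: finger temperature (deg C), heart-rate-variability index (assumed)
calm = np.column_stack([rng.normal(33, 1, 60), rng.normal(60, 8, 60)])
stressed = np.column_stack([rng.normal(29, 1, 60), rng.normal(35, 8, 60)])
X = np.vstack([calm, stressed])
y = np.array([0] * 60 + [1] * 60)

cbr_like = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(cbr_like.predict([[30.0, 40.0]]))      # 1 -> classified as stressed
```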
Novel characterization of the aerosol and gas-phase composition of aerosolized jet fuel.
Tremblay, Raphael T; Martin, Sheppard A; Fisher, Jeffrey W
2010-04-01
Few robust methods are available to characterize the composition of aerosolized complex hydrocarbon mixtures. Separating the droplets from their surrounding vapors while preserving their content is challenging, more so with fuels, which contain hydrocarbons ranging from very low to very high volatility. Presented here is a novel method that uses commercially available absorbent tubes to measure a series of hydrocarbons in the vapor and droplets from aerosolized jet fuels. Aerosol composition and concentrations were calculated from the differential between the measured total (aerosol and gas-phase) and the measured gas-phase concentrations. Total samples were collected directly, whereas gas-phase-only samples were collected behind a glass fiber filter to remove droplets. All samples were collected for 1 min at 400 ml min⁻¹ and quantified using thermal desorption-gas chromatography-mass spectrometry. This method was validated for the quantification of the vapor and droplet content from 4-h aerosolized jet fuel exposures to JP-8 and S-8 at total concentrations ranging from 200 to 1000 mg/m³. Paired samples (gas-phase only and total) were collected approximately every 40 min. Calibrations were performed with neat fuel to calculate total concentration and also with a series of authentic standards to calculate specific compound concentrations. Accuracy was good when compared to an online GC-FID (gas chromatography-flame ionization detection) technique. Variability was 15% or less for total concentrations, for the sum of all gas-phase compounds, and for most specific compound concentrations in both phases. Although validated for jet fuels, this method can be adapted to other hydrocarbon-based mixtures.
Effectiveness of sampling methods employed for Acanthamoeba keratitis diagnosis by culture.
Muiño, Laura; Rodrigo, Donoso; Villegas, Rodrigo; Romero, Pablo; Peredo, Daniel E; Vargas, Rafael A; Liempi, Daniela; Osuna, Antonio; Jercic, María Isabel
2018-06-18
This retrospective, observational study was designed to evaluate the effectiveness of the sampling methods commonly used for the collection of corneal scrapes for the diagnosis of Acanthamoeba keratitis (AK) by culture, in terms of their ability to provide a positive result. A total of 553 samples from 380 patients with suspected AK received at the Parasitology Section of the Public Health Institute of Chile, between January 2005 and December 2015, were evaluated. A logistic regression model was used to determine the correlation between the culture outcome (positive or negative) and the method for sample collection. The year of sample collection was also included in the analysis as a confounding variable. Three hundred and sixty-five samples (27%) from 122 patients (32.1%) were positive by culture. The distribution of sample types was as follows: 142 corneal scrapes collected using a modified bezel needle (a novel method developed by a team of Chilean corneologists), 176 corneal scrapes obtained using a scalpel, 50 corneal biopsies, 30 corneal swabs, and 155 non-biological materials including contact lens and its paraphernalia. Biopsy provided the highest likelihood ratio for a positive result by culture (1.89), followed by non-biological materials (1.10) and corneal scrapes obtained using a modified needle (1.00). The lowest likelihood ratio was estimated for corneal scrapes obtained using a scalpel (0.88) and cotton swabs (0.78). Apart from biopsy, optimum corneal samples for the improved diagnosis of AK can be obtained using a modified bezel needle instead of a scalpel, while cotton swabs are not recommended.
Trask, Catherine; Mathiassen, Svend Erik; Wahlström, Jens; Heiden, Marina; Rezagholi, Mahmoud
2012-06-27
Documentation of posture measurement costs is rare, and the cost models that do exist are generally naïve. This paper provides a comprehensive cost model for biomechanical exposure assessment in occupational studies, documents the monetary costs of three exposure assessment methods for different stakeholders in data collection, and uses simulations to evaluate the relative importance of cost components. Trunk and shoulder posture variables were assessed for 27 aircraft baggage handlers for 3 full shifts each using three methods typical of ergonomic studies: self-report via questionnaire, observation via video film, and full-shift inclinometer registration. The cost model accounted for expenses related to meetings to plan the study, administration, recruitment, equipment, training of data collectors, travel, and onsite data collection. Sensitivity analyses were conducted using simulated study parameters and cost components to investigate the impact on total study cost. Inclinometry was the most expensive method (with a total study cost of € 66,657), followed by observation (€ 55,369) and then self-report (€ 36,865). The majority of costs (90%) were borne by researchers. Study design parameters such as sample size, measurement scheduling and spacing, concurrent measurements, location and travel, and equipment acquisition were shown to have wide-ranging impacts on costs. This study provides a general cost-modeling approach that can facilitate decision making and planning of data collection in future studies, as well as investigation into cost efficiency and cost-efficient study design. Empirical cost data from a large field study demonstrated the usefulness of the proposed models.
Asadi-Lari, Mohsen; Hassanzadeh, Jafar; Torabinia, Mansour; Vaez-Mahdavi, Mohammad Reza; Montazeri, Ali; Ghaem, Haleh; Menati, Rostam; Niazi, Mohsen; Kassani, Aziz
2016-01-01
Background: Social capital has been defined as norms, networks, and social links that facilitate collective actions. Social capital is related to a number of main social and public health variables. Therefore, the present study aimed to determine the factors associated with social capital among the residents of Tehran, Iran. Methods: In this large cross-sectional population-based study, 31531 residents aged 20 years and above were selected through multi-stage sampling method from 22 districts of Tehran in 2011. The social capital questionnaire, 28-item General Health Questionnaire (GHQ-28), and Short-Form Health Survey (SF-12) were used. Hypothetical causal models were designed to identify the pathways through which different variables influenced the components of social capital. Then, path analysis was conducted for identifying the determinants of social capital. Results: The most influential variables in 'individual trust' were job status (β=0.37, p=0.02), marital status (β=0.32, p=0.01), Physical Component Summary (PCS) (β=0.37, p=0.02), and age (β=0.34, p=0.03). On the other hand, education level (β=0.34, p=0.01), age (β=0.33, p=0.02), marital status (β=0.33, p=0.01), and job status (β=0.32, p=0.01) were effective in 'cohesion and social support'. Additionally, age (β=0.18, p=0.02), PCS (β=0.36, p=0.01), house ownership (β=0.23, p=0.03), and mental health (β=0.26, p=0.01) were influential in 'social trust/collective relations'. Conclusion: Social capital can be improved in communities by planning to improve education and occupation status, paying more attention to strengthening family bonds, and provision of local facilities and neighborhood bonds to reduce migration within the city.
Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-01-01
Background: Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods: We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results: The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions: Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
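A sketch of a two-way LQAS decision rule of the kind simulated above: with sample size n and decision threshold d, classify a site "high" if more than d of n sampled new cases are multidrug resistant. Operating characteristics follow from the binomial distribution; the n and d below are illustrative choices, not the surveys' design values.

```python
# Operating characteristics of a two-way LQAS rule via the binomial CDF.
from scipy.stats import binom

n, d = 50, 3                 # sample size, decision threshold (assumed)
p_low, p_high = 0.02, 0.10   # low / high MDR TB prevalence thresholds

# P(classify high | truly low): false alarm at a 2% site
print("false high:", 1 - binom.cdf(d, n, p_low))
# P(classify low | truly high): missed detection at a 10% site
print("false low:", binom.cdf(d, n, p_high))
```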
Evaluation of direct-to-consumer low-volume lab tests in healthy adults
Kidd, Brian A.; Hoffman, Gabriel; Zimmerman, Noah; Li, Li; Morgan, Joseph W.; Glowe, Patricia K.; Botwin, Gregory J.; Parekh, Samir; Babic, Nikolina; Doust, Matthew W.; Stock, Gregory B.; Schadt, Eric E.; Dudley, Joel T.
2016-01-01
BACKGROUND. Clinical laboratory tests are now being prescribed and made directly available to consumers through retail outlets in the USA. Concerns have been raised regarding the uncertainty of the testing methods used in these venues and the lack of open, scientific validation of the technical accuracy and clinical equivalency of results obtained through these services. METHODS. We conducted a cohort study of 60 healthy adults to compare the uncertainty and accuracy of 22 common clinical lab tests between one company offering blood tests obtained from finger prick (Theranos) and 2 major clinical testing services that require standard venipuncture draws (Quest and LabCorp). Samples were collected in Phoenix, Arizona, at an ambulatory clinic and at retail outlets with point-of-care services. RESULTS. Theranos flagged tests outside their normal range 1.6× more often than the other testing services (P < 0.0001). Of the 22 lab measurements evaluated, 15 (68%) showed significant interservice variability (P < 0.002). We found nonequivalent lipid panel test results between Theranos and the other clinical services. Variability in testing services, sample collection times, and subjects markedly influenced lab results. CONCLUSION. While laboratory practice standards exist to control this variability, the disparities between testing services we observed could potentially alter clinical interpretation and health care utilization. Greater transparency and evaluation of testing technologies would increase their utility in personalized health management. FUNDING. This work was supported by the Icahn Institute for Genomics and Multiscale Biology, a gift from the Harris Family Charitable Foundation (to J.T. Dudley), and grants from the NIH (R01 DK098242 and U54 CA189201, to J.T. Dudley, and R01 AG046170 and U01 AI111598, to E.E. Schadt). PMID:27018593
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a 3×3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed among the three zone delineation techniques. Results suggest directed sampling using the zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
de Boer, Hans H; Maat, George J R; Kadarmo, D Aji; Widodo, Putut T; Kloosterman, Ate D; Kal, Arnoud J
2018-06-04
In disaster victim identification (DVI), DNA profiling is considered one of the most reliable and efficient means to identify bodies or separated body parts. This requires a post-mortem DNA sample and an ante-mortem DNA sample of the presumed victim or their biological relative(s). Usually the collection of an adequate ante-mortem sample is technically simple, but the acquisition of a good-quality post-mortem sample under unfavourable DVI circumstances is complicated by the variable degree of preservation of the human remains and the high risk of DNA (cross-)contamination. This paper provides the community with an efficient method to collect post-mortem DNA samples from muscle, bone, bone marrow, and teeth, with a minimal risk of contamination. Our method was applied in a recent, challenging DVI operation (i.e., the identification of the 298 victims of the MH17 airplane crash in 2014), in which 98.2% of the collected post-mortem samples provided the DVI team with highly informative DNA genotyping results without the risk of contamination and consequent mistyping of the victim's DNA. Moreover, the method is easy, cheap, and quick. This paper provides the DVI community with step-wise instructions and recommendations for the type of tissue to be sampled and the site of excision (preferably the upper leg). Although initially designed for DVI purposes, the method is also suited to the identification of individual victims. Copyright © 2018 Elsevier B.V. All rights reserved.
Xu, Yan; Zhu, Quing
2015-01-01
A new two-step estimation and imaging method is developed for a two-layer breast tissue structure consisting of a breast tissue layer and the chest wall underneath. First, a smaller probe with shorter-distance source-detector pairs was used to collect the reflected light mainly from the breast tissue layer. Then, a larger probe with 9×14 source-detector pairs and a centrally located ultrasound transducer was used to collect reflected light from the two-layer tissue structure. The data collected from the smaller probe were used to estimate breast tissue optical properties. With a more accurate estimation of the average breast tissue properties, the second-layer properties can be assessed from the data obtained from the larger probe. Using this approach, the unknown variables have been reduced from four to two, and the estimated bulk tissue optical properties are more accurate and robust. In addition, a two-step reconstruction using a genetic algorithm and the conjugate gradient method is implemented to simultaneously reconstruct the absorption and reduced scattering maps of targets inside a two-layer tissue structure. Simulations and phantom experiments have been performed to validate the new reconstruction method, and a clinical example is given to demonstrate the feasibility of this approach. PMID:26046722
Alfa, Michelle J; Singh, Harminder; Nugent, Zoann; Duerksen, Donald; Schultz, Gale; Reidy, Carol; DeGagne, Patricia; Olson, Nancy
2017-01-01
A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey-Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. RO flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.
Seed reserves diluted during surface soil reclamation in eastern Mojave Desert
Scoles-Sciulla, S. J.; DeFalco, L.A.
2009-01-01
Surface soil reclamation is used to increase the re-establishment of native vegetation following disturbance through preservation and eventual replacement of the indigenous seed reserves. Employed widely in the mining industry, soil reclamation has had variable success in re-establishing native vegetation in arid and semi-arid regions. We tested whether variable success could be due in part to a decrease of seed reserves during the reclamation process by measuring the change in abundance of germinable seed when surface soil was mechanically collected, stored in a soil pile for 4 months, and reapplied upon completion of a roadway. Overall seed reserve declines amounted to 86% of the original germinable seed in the soil. The greatest decrease in seed reserves occurred during soil collection (79% of original reserves), compared to the storage and reapplication stages. At nearby sites where stored surface soil had been reapplied, no perennial plant cover occurred from 0.5 to 5 years after application and <1% cover after 7 years compared to 5% cover in nearby undisturbed areas. The reduction in abundance of germinable seed during reclamation was primarily due to dilution of seed reserves when deeper soil fractions without seed were mixed with the surface soil during collection. Unless more precise techniques of surface soil collection are utilized, soil reclamation alone as a means for preserving native seed reserves is a method ill-suited for revegetating disturbed soils with a shallow seed bank, such as those found in the Mojave Desert. Copyright © Taylor & Francis Group, LLC.
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2016-05-01
Background: Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r process to fuel cycle optimization for nuclear energy. The need for a predictive theory applicable where no data are available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. Purpose: In this work, we calculate the pre-neutron emission charge and mass distributions of the fission fragments formed in the neutron-induced fission of 239Pu using a microscopic method based on nuclear density functional theory (DFT). Methods: Our theoretical framework is the nuclear energy density functional (EDF) method, where large-amplitude collective motion is treated adiabatically by using the time-dependent generator coordinate method (TDGCM) under the Gaussian overlap approximation (GOA). In practice, the TDGCM is implemented in two steps. First, a series of constrained EDF calculations map the configuration and potential-energy landscape of the fissioning system for a small set of collective variables (in this work, the axial quadrupole and octupole moments of the nucleus). Then, nuclear dynamics is modeled by propagating a collective wave packet on the potential-energy surface. Fission fragment distributions are extracted from the flux of the collective wave packet through the scission line. Results: We find that the main characteristics of the fission charge and mass distributions can be well reproduced by existing energy functionals even in two-dimensional collective spaces. Theory and experiment agree typically within two mass units for the position of the asymmetric peak. As expected, calculations are sensitive to the structure of the initial state and the prescription for the collective inertia. We emphasize that results are also sensitive to the continuity of the collective landscape near scission. Conclusions: Our analysis confirms that the adiabatic approximation provides an effective scheme to compute fission fragment yields. It also suggests that, at least in the framework of nuclear DFT, three-dimensional collective spaces may be a prerequisite to reach 10% accuracy in predicting pre-neutron emission fission fragment yields.
Synthetic ALSPAC longitudinal datasets for the Big Data VR project.
Avraam, Demetris; Wilson, Rebecca C; Burton, Paul
2017-01-01
Three synthetic datasets - of 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSPAC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (covariance matrices, as well as the mean and variance values of the variables) without including the original data itself or disclosing participant information. In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have a broader use in method and software development projects where sensitive data cannot be freely shared.
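A minimal sketch of one way such property-preserving synthetic data can be generated; the abstract does not state the simulation model, so the multivariate normal draw, the variable set, and the summary statistics below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical summary statistics standing in for those estimated from the source study
means = np.array([165.0, 62.0, 72.0])        # e.g., height (cm), weight (kg), heart rate (bpm)
cov = np.array([[55.0, 30.0,  5.0],
                [30.0, 90.0,  8.0],
                [ 5.0,  8.0, 95.0]])

# Draw synthetic participants; no original records are reused or disclosed
synthetic = rng.multivariate_normal(means, cov, size=15_000)

print(synthetic.mean(axis=0))            # approximately reproduces `means`
print(np.cov(synthetic, rowvar=False))   # approximately reproduces `cov`
```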
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10⁻³ bit per pulse at a distance of 120 km for reasonable physical parameters.
Policy to implementation: evidence-based practice in community mental health – study protocol
2013-01-01
Background Evidence-based treatments (EBTs) are not widely available in community mental health settings. In response to the call for implementation of evidence-based treatments in the United States, states and counties have mandated behavioral health reform through policies and other initiatives. Evaluations of the impact of these policies on implementation are rare. A systems transformation about to occur in Philadelphia, Pennsylvania, offers an important opportunity to prospectively study implementation in response to a policy mandate. Methods/design Using a prospective sequential mixed-methods design, with observations at multiple points in time, we will investigate the responses of staff from 30 community mental health clinics to a policy from the Department of Behavioral Health encouraging and incentivizing providers to implement evidence-based treatments to treat youth with mental health problems. Study participants will be 30 executive directors, 30 clinical directors, and 240 therapists. Data will be collected prior to the policy implementation, and then at two and four years following policy implementation. Quantitative data will include measures of intervention implementation and potential moderators of implementation (i.e., organizational- and leader-level variables) and will be collected from executive directors, clinical directors, and therapists. Measures include self-reported therapist fidelity to evidence-based treatment techniques as measured by the Therapist Procedures Checklist-Revised, organizational variables as measured by the Organizational Social Context Measurement System and the Implementation Climate Assessment, leader variables as measured by the Multifactor Leadership Questionnaire, attitudes towards EBTs as measured by the Evidence-Based Practice Attitude Scale, and knowledge of EBTs as measured by the Knowledge of Evidence-Based Services Questionnaire. Qualitative data will include semi-structured interviews with a subset of the sample to assess the implementation experience of high-, average-, and low-performing agencies. Mixed methods will be integrated through comparing and contrasting results from the two methods for each of the primary hypotheses in this study. Discussion Findings from the proposed research will inform both future policy mandates around implementation and the support required for the success of these policies, with the ultimate goal of improving the quality of treatment provided to youth in the public sector. PMID:23522556
Olsen, Jaran S; Aarskaug, Tone; Skogan, Gunnar; Fykse, Else Marie; Ellingsen, Anette Bauer; Blatny, Janet M
2009-09-01
Vibrio cholerae is the etiological agent of cholera and may be used in bioterror actions due to the ease of its dissemination and the public fear of acquiring the cholera disease. A simple and highly discriminating method for connecting clinical and environmental isolates of V. cholerae is needed in microbial forensics. Twelve different loci containing variable numbers of tandem repeats (VNTRs) were evaluated, of which six loci were polymorphic. Two multiplex reactions containing PCR primers targeting these six VNTRs resulted in successful DNA amplification of 142 various environmental and clinical V. cholerae isolates. The genetic distribution within the V. cholerae strain collection was used to evaluate the discriminating power (Simpson's Diversity Index = 0.99) of this new MLVA analysis, showing that the assay has the potential to differentiate between various strains, but also to identify those isolates which are collected from a common V. cholerae outbreak. This work has established a rapid and highly discriminating MLVA assay useful for track-back analyses and/or forensic studies of V. cholerae infections.
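For readers unfamiliar with the discriminating-power statistic quoted above, the sketch below computes Simpson's (Hunter-Gaston) diversity index from per-isolate type assignments; the MLVA type labels are hypothetical.

```python
from collections import Counter

# One MLVA type label per isolate (hypothetical)
mlva_types = ["T1", "T1", "T2", "T3", "T3", "T3", "T4", "T5"]

counts = Counter(mlva_types).values()
n = sum(counts)

# Hunter-Gaston formulation of Simpson's diversity index
diversity = 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
print(f"Simpson's Diversity Index = {diversity:.2f}")  # 1.0 would mean every isolate has a unique type
```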
Sun, Guibo; Webster, Chris; Ni, Michael Y; Zhang, Xiaohu
2018-05-07
Uncertainty with respect to built environment (BE) data collection, measure conceptualization and spatial scales is evident in urban health research, but most findings are from relatively low-density contexts. We selected Hong Kong, an iconic high-density city, as the study area, as limited research has been conducted on uncertainty in such areas. We used geocoded home addresses (n=5732) from a large population-based cohort in Hong Kong to extract BE measures for the participants' place of residence based on an internationally recognized BE framework. Variability of the measures was mapped and Spearman's rank correlation calculated to assess how well the relationships among indicators are preserved across variables and spatial scales. We found extreme variations and uncertainties for the 180 measures collected using comprehensive data and advanced geographic information systems modelling techniques. We highlight the implications of methodological selection and the spatial scales of the measures. The results suggest that more robust information for urban health research in high-density cities would emerge if greater consideration were given to BE data, design methods and the spatial scales of BE measures.
Environmental characteristics and benthic invertebrate assemblages in Colorado mountain lakes
LaFrancois, B.M.; Carlisle, D.M.; Nydick, K.R.; Johnson, B.M.; Baron, Jill S.
2003-01-01
Twenty-two high-elevation lakes (>3000 m) in Rocky Mountain National Park and Indian Peaks Wilderness Area, Colorado, were surveyed during summer 1998 to explore relationships among benthic invertebrates, water chemistry (particularly nitrate concentrations), and other environmental variables. Water samples were collected from the deepest portion of each lake and analyzed for ions and other water chemistry parameters. Benthic invertebrates were collected from the littoral zone using both a sweep net and Hess sampler. Physical and geographical measurements were derived from maps. Relationships among benthic invertebrate assemblages and environmental variables were examined using canonical correspondence analysis, and the importance of sampling methodology and taxonomic resolution on these relationships was evaluated. Choice of sampling methodology strongly influenced the outcome of statistical analyses, whereas taxonomic resolution did not. Presence/absence of benthic invertebrate taxa among the study lakes was best explained by elevation and presence of fish. Relative abundance and density of benthic invertebrate taxa were more strongly influenced by sampling date and water chemistry. Nitrate (NO₃⁻) concentration, potentially on the rise due to regional nitrogen deposition, was unrelated to benthic invertebrate distribution regardless of sampling method or taxonomic resolution.
Building Community Around Hydrologic Data Models Within CUAHSI
NASA Astrophysics Data System (ADS)
Maidment, D.
2007-12-01
The Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has a Hydrologic Information Systems project which aims to provide better data access and capacity for data synthesis for the nation's water information, both that collected by academic investigators and that collected by water agencies. These data include observations of streamflow, water quality, groundwater levels, weather and climate and aquatic biology. Each water agency or research investigator has a unique method of formatting their data (syntactic heterogeneity) and describing their variables (semantic heterogeneity). The result is a large agglomeration of data in many formats and descriptions whose full content is hard to interpret and analyze. CUAHSI is helping to resolve syntactic heterogeneity through the development of WaterML, a standard XML markup language for communicating water observations data through web services, and a standard relational database structure for archiving data called the Observations Data Model. Variables in these data archiving and communicating systems are indexed against a controlled vocabulary of descriptive terms to provide the capacity to synthesize common data types from disparate data sources.
A study protocol to evaluate the relationship between outdoor air pollution and pregnancy outcomes
2010-01-01
Background The present study protocol is designed to assess the relationship between outdoor air pollution and low birth weight and preterm birth outcomes by performing a semi-ecological analysis. Semi-ecological design studies are widely used to assess effects of air pollution in humans. In this type of analysis, health outcomes and covariates are measured in individuals and exposure assignments are usually based on air quality monitoring stations. Therefore, estimating individual exposures is one of the major challenges when investigating these relationships with a semi-ecologic design. Methods/Design A semi-ecologic study consisting of a retrospective cohort study with ecologic assignment of exposure is applied. Health outcomes and covariates are collected at the Primary Health Care Center. Data from the pregnancy registry, clinical records, and a specific questionnaire administered orally to the mothers of children born in the period 2007-2010 in the Portuguese Alentejo Litoral region are collected by the research team. Outdoor air pollution data are collected with a lichen diversity biomonitoring program, and individual pregnancy exposures are assessed with spatial geostatistical simulation, which provides the basis for uncertainty analysis of individual exposures. Awareness of outdoor air pollution uncertainty will improve the validity of individual exposure assignments for further statistical analysis with multivariate regression models. Discussion Exposure misclassification is an issue of concern in semi-ecological designs. In this study, personal exposures are assigned to each pregnant woman using geocoded address data. A stochastic simulation method is applied to lichen diversity index values measured at biomonitoring survey locations, in order to assess the spatial uncertainty of the lichen diversity index at each geocoded address. These methods assume a model for spatial autocorrelation of exposure and provide a distribution of exposures at each study location. We believe that variability of simulated exposure values at geocoded addresses will improve knowledge of the variability of exposures, therefore improving the validity of the individual exposures input into subsequent statistical analysis. PMID:20950449
Fascione, Jeanna M; Crews, Ryan T; Wrobel, James S
2012-01-01
Identifying the variability of footprint measurement collection techniques and the reliability of footprint measurements would assist with appropriate clinical foot posture appraisal. We sought to identify relationships between these measures in a healthy population. On 30 healthy participants, midgait dynamic footprint measurements were collected using an ink mat, paper pedography, and electronic pedography. The footprints were then digitized, and the following footprint indices were calculated with photo digital planimetry software: footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index. Differences between techniques were identified with repeated-measures analysis of variance with post hoc Scheffé tests. In addition, to assess practical similarities between the different methods, intraclass correlation coefficients (ICCs) were calculated. To assess intrarater reliability, footprint indices were calculated twice on 10 randomly selected ink mat footprint measurements, and the ICC was calculated. Dynamic footprint measurements collected with an ink mat significantly differed (P = .00) from those collected with paper pedography (ICC, 0.85-0.96) and electronic pedography (ICC, 0.29-0.79), regardless of the practical similarities noted in the ICC values. Intrarater reliability for dynamic ink mat footprint measurements was high for the footprint index, arch index, truncated arch index, Chippaux-Smirak Index, and Staheli Index (ICC, 0.74-0.99). Footprint measurements collected with various techniques demonstrate differences. Interchangeable use of exact values without adjustment is not advised. Intrarater reliability of a single method (ink mat) was found to be high.
[Differentiation by geometric morphometrics among 11 Anopheles (Nyssorhynchus) in Colombia].
Calle, David Alonso; Quiñones, Martha Lucía; Erazo, Holmes Francisco; Jaramillo, Nicolás
2008-09-01
The correct identification of the Anopheles species of the subgenus Nyssorhynchus is important because this subgenus includes the main malaria vectors in Colombia. This information is necessary for focusing a malaria control program. Geometric morphometrics were used to evaluate morphometric variation of 11 species of the subgenus Nyssorhynchus present in Colombia and to distinguish females of each species. Materials and methods. The specimens were obtained from series and family broods from females collected with protected human hosts as attractants. The field-collected specimens and their progeny were identified at each of the associated stages by conventional keys. For some species, wild females were used. Landmarks were selected on wings from digital pictures of 336 individuals and digitized with coordinates. The coordinate matrix was processed by generalized Procrustes analysis, which generated size and shape variables free of non-biological variation. Size and shape variables were analyzed by univariate and multivariate statistics. The subdivision of the subgenus Nyssorhynchus into sections is not correlated with wing shape. Discriminant analyses correctly classified 97% of females in the section Albimanus and 86% in the section Argyritarsis. In addition, these methodologies allowed the correct identification of 3 sympatric species from Putumayo which have been difficult to identify in the adult female stage. Geometric morphometrics was demonstrated to be a very useful tool as an adjunct to the taxonomy of females; the use of this method is recommended in studies of the subgenus Nyssorhynchus in Colombia.
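A minimal sketch of the Procrustes superimposition step: SciPy's procrustes performs the ordinary (pairwise) alignment, whereas the study used generalized Procrustes analysis, which iterates this alignment against an evolving mean shape; the landmark coordinates below are simulated.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
reference = rng.random((10, 2))                       # 10 wing landmarks (x, y)
specimen = reference + rng.normal(0, 0.02, (10, 2))   # hypothetical second wing

# Removes position, scale, and rotation; leaves shape differences
ref_std, spec_aligned, disparity = procrustes(reference, specimen)
shape_variables = spec_aligned.flatten()              # could feed a discriminant analysis
print(f"Procrustes distance (disparity): {disparity:.4f}")
```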
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J
2016-01-15
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. Published by Elsevier B.V.
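One plausible way a method detection limit like the sub-1-ppm figure above can be derived is the EPA-style replicate-spike calculation sketched below; the replicate values, and the assumption that this was the procedure used, are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate analyses of a low-level diesel spike, in ppm
replicates_ppm = np.array([0.71, 0.84, 0.62, 0.77, 0.69, 0.80, 0.74])

s = replicates_ppm.std(ddof=1)
t99 = stats.t.ppf(0.99, df=len(replicates_ppm) - 1)  # one-sided 99% Student's t

mdl = t99 * s  # EPA-style MDL from replicate spike variability
print(f"MDL = {mdl:.2f} ppm")
```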
Secondary Aluminum Processing Waste: Salt Cake ...
Thirty-nine salt cake samples were collected from 10 SAP facilities across the U.S. The facilities were identified by the Aluminum Association to cover a wide range of processes. Results suggest that while the percent metal leached from the salt cake was relatively low, the leachable metal content may still pose a contamination concern and potential human and ecological exposure if uncontrollably released to the environment. As a result, salt cake should always be managed at facilities that utilize synthetic liner systems with leachate collection (the salt content of the leachate will increase the hydraulic conductivity of clay liners within a few years of installation). The mineral phase analysis showed that various species of aluminum are present in the salt cake samples with a large degree of variability. The relative abundance of various aluminum species was evaluated, but it is noted that the method used is semi-quantitative and as a result there is a limitation on the use of the data. The analysis showed only a few aluminum species present in salt cake, which does not exclude the presence of other crystalline species, especially in light of the variability observed in the samples. Results presented in this document are of particular importance when trying to understand concerns associated with the disposal of salt cake in MSW landfills. From the end-of-life management perspective, the data presented here suggest that salt cake should not be size reduced.
Variability of 137Cs inventory at a reference site in west-central Iran.
Bazshoushtari, Nasim; Ayoubi, Shamsollah; Abdi, Mohammad Reza; Mohammadi, Mohammad
2016-12-01
The 137Cs technique has been widely used to evaluate rates and patterns of soil erosion and deposition. This technique requires an accurate estimate of the 137Cs inventory at the reference site. This study was conducted to evaluate the variability of the 137Cs inventory with regard to the sampling program, including sample size, distance, and sampling method, at a reference site located in the vicinity of Fereydan district in Isfahan province, west-central Iran. Two 3 × 8 grids were established, comprising a large grid (35 m length and 8 m width) and a small grid (24 m length and 6 m width). At each grid intersection two soil samples were collected from 0-15 cm and 15-30 cm depths, for a total of 96 soil samples from 48 sampling points. The coefficient of variation for the 137Cs inventory in the soil samples was relatively low (CV = 15%), and the sampling distance and methods used did not significantly affect the 137Cs inventories across the studied reference site. To obtain a satisfactory estimate of the mean 137Cs activity at reference sites, particularly those located in semiarid regions, it is recommended to collect at least four samples in a grid pattern 3 m apart. Copyright © 2016 Elsevier Ltd. All rights reserved.
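A minimal sketch of the reference-site sampling arithmetic: the coefficient of variation of the measured inventories, and the standard n = (t·CV/E)² estimate of the sample number needed for a given allowable error E. The inventory values below are hypothetical.

```python
import numpy as np
from scipy import stats

inventories = np.array([2450., 2610., 2050., 2380., 3050., 2520.])  # Bq m^-2, hypothetical
cv = inventories.std(ddof=1) / inventories.mean()

allowable_error = 0.10  # estimate the mean to within +/-10%
t = stats.t.ppf(0.975, df=len(inventories) - 1)
n_required = (t * cv / allowable_error) ** 2

print(f"CV = {100 * cv:.0f}%, samples required ~ {int(np.ceil(n_required))}")
```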
Reeves, Mathew J; Mullard, Andrew J; Wehner, Susan
2008-01-01
Background The Paul Coverdell National Acute Stroke Registry (PCNASR) is a U.S.-based national registry designed to monitor and improve the quality of acute stroke care delivered by hospitals. The registry monitors care through specific performance measures, the accuracy of which depends in part on the reliability of the individual data elements used to construct them. This study describes the inter-rater reliability of data elements collected in Michigan's state-based prototype of the PCNASR. Methods Over a 6-month period, 15 hospitals participating in the Michigan PCNASR prototype submitted data on 2566 acute stroke admissions. Trained hospital staff prospectively identified acute stroke admissions, abstracted chart information, and submitted data to the registry. At each hospital 8 randomly selected cases were re-abstracted by an experienced research nurse. Inter-rater reliability was estimated by the kappa statistic for nominal variables, and intraclass correlation coefficient (ICC) for ordinal and continuous variables. Factors that can negatively impact the kappa statistic (i.e., trait prevalence and rater bias) were also evaluated. Results A total of 104 charts were available for re-abstraction. Excellent reliability (kappa or ICC > 0.75) was observed for many registry variables including age, gender, black race, hemorrhagic stroke, discharge medications, and modified Rankin Score. Agreement was at least moderate (i.e., 0.75 > kappa ≥ 0.40) for ischemic stroke, TIA, white race, non-ambulance arrival, hospital transfer and direct admit. However, several variables had poor reliability (kappa < 0.40) including stroke onset time, stroke team consultation, time of initial brain imaging, and discharge destination. There were marked systematic differences between hospital abstractors and the audit abstractor (i.e., rater bias) for many of the data elements recorded in the emergency department. Conclusion The excellent reliability of many of the data elements supports the use of the PCNASR to monitor and improve care. However, the poor reliability for several variables, particularly time-related events in the emergency department, indicates the need for concerted efforts to improve the quality of data collection. Specific recommendations include improvements to data definitions, abstractor training, and the development of ED-based real-time data collection systems. PMID:18547421
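A minimal sketch of the agreement statistic used above, computing Cohen's kappa for one nominal registry variable from paired abstractions; the abstracted values are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Paired abstractions of one nominal variable (hypothetical)
hospital_abstractor = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
audit_abstractor    = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"]

kappa = cohen_kappa_score(hospital_abstractor, audit_abstractor)
print(f"kappa = {kappa:.2f}")  # >0.75 excellent, 0.40-0.75 moderate, <0.40 poor
```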
Fortier, Isabel; Doiron, Dany; Little, Julian; Ferretti, Vincent; L’Heureux, François; Stolk, Ronald P; Knoppers, Bartha M; Hudson, Thomas J; Burton, Paul R
2011-01-01
Background Proper understanding of the roles of, and interactions between genetic, lifestyle, environmental and psycho-social factors in determining the risk of development and/or progression of chronic diseases requires access to very large high-quality databases. Because of the financial, technical and time burdens related to developing and maintaining very large studies, the scientific community is increasingly synthesizing data from multiple studies to construct large databases. However, the data items collected by individual studies must be inferentially equivalent to be meaningfully synthesized. The DataSchema and Harmonization Platform for Epidemiological Research (DataSHaPER; http://www.datashaper.org) was developed to enable the rigorous assessment of the inferential equivalence, i.e. the potential for harmonization, of selected information from individual studies. Methods This article examines the value of using the DataSHaPER for retrospective harmonization of established studies. Using the DataSHaPER approach, the potential to generate 148 harmonized variables from the questionnaires and physical measures collected in 53 large population-based studies (6.9 million participants) was assessed. Variable and study characteristics that might influence the potential for data synthesis were also explored. Results Out of all assessment items evaluated (148 variables for each of the 53 studies), 38% could be harmonized. Certain characteristics of variables (i.e. relative importance, individual targeted, reference period) and of studies (i.e. observational units, data collection start date and mode of questionnaire administration) were associated with the potential for harmonization. For example, for variables deemed to be essential, 62% of assessment items paired could be harmonized. Conclusion The current article shows that the DataSHaPER provides an effective and flexible approach for the retrospective harmonization of information across studies. To implement data synthesis, some additional scientific, ethico-legal and technical considerations must be addressed. The success of the DataSHaPER as a harmonization approach will depend on its continuing development and on the rigour and extent of its use. The DataSHaPER has the potential to take us closer to a truly collaborative epidemiology and offers the promise of enhanced research potential generated through synthesized databases. PMID:21804097
Riddell, Michaela A; Edwards, Nancy; Thompson, Simon R; Bernabe-Ortiz, Antonio; Praveen, Devarsetty; Johnson, Claire; Kengne, Andre P; Liu, Peter; McCready, Tara; Ng, Eleanor; Nieuwlaat, Robby; Ovbiagele, Bruce; Owolabi, Mayowa; Peiris, David; Thrift, Amanda G; Tobe, Sheldon; Yusoff, Khalid
2017-03-15
The imperative to improve global health has prompted transnational research partnerships to investigate common health issues on a larger scale. The Global Alliance for Chronic Diseases (GACD) is an alliance of national research funding agencies. To enhance research funded by GACD members, this study aimed to standardise data collection methods across the 15 GACD hypertension research teams and evaluate the uptake of these standardised measurements. Furthermore, we describe concerns and difficulties associated with the data harmonisation process highlighted and debated during annual meetings of the GACD-funded investigators. With these concerns and issues in mind, a working group comprising representatives from the 15 studies iteratively identified and proposed a set of common measures for inclusion in each of the teams' data collection plans. One year later all teams were asked which consensus measures had been implemented. Important issues were identified during the data harmonisation process relating to data ownership, sharing methodologies and ethical concerns. Measures were assessed across eight domains: demographic; dietary; clinical and anthropometric; medical history; hypertension knowledge; physical activity; behavioural (smoking and alcohol); and biochemical. Identifying validated measures relevant across a variety of settings presented some difficulties. The resulting GACD hypertension data dictionary comprises 67 consensus measures. Of the 14 responding teams, only two included more than 50 consensus variables, five included between 25 and 50, and four included between 6 and 24; one team did not provide details of the variables collected, and two teams did not include any of the consensus variables because the project had already commenced or the measures were not relevant to their study. Deriving consensus measures across diverse research projects and contexts was challenging. The major barrier to their implementation was the time taken to develop and present these measures. Inclusion of consensus measures in future funding announcements would facilitate researchers integrating these measures within application protocols. We suggest that adoption of the consensus measures developed here, across the field of hypertension, would help advance the science in this area, allowing for more comparable data sets and generalizable inferences.
Inic-Kanada, Aleksandra; Nussbaumer, Andrea; Montanaro, Jacqueline; Belij, Sandra; Schlacher, Simone; Stein, Elisabeth; Bintner, Nora; Merio, Margarethe; Zlabinger, Gerhard J; Barisani-Asenbauer, Talin
2012-01-01
Evaluating cytokine profiles in tears could shed light on the pathogenesis of various ocular surface diseases. When collecting tears with the methods currently available, it is often not possible to avoid the tear reflex, which may give a different cytokine profile compared to basal tears. More importantly, tear collection with glass capillaries, the most widely used method for taking samples and the best method for avoiding tear reflex, is impractical for remote area field studies because it is tedious and time-consuming for health workers, who cannot collect tears from a large number of patients with this method in one day. Furthermore, this method is uncomfortable for anxious patients and children. Thus, tears are frequently collected using ophthalmic sponges. These sponges have the advantage that they are well tolerated by the patient, especially children, and enable standardization of the tear collection volume. The aim of this study was to compare various ophthalmic sponges and extraction buffers to optimize the tear collection method for field studies for subsequent quantification of cytokines in tears using the Luminex technology. Three ophthalmic sponges, Merocel, Pro-ophta, and Weck-Cel, were tested. Sponges were presoaked with 25 cytokines/chemokines of known concentrations and eluted with seven different extraction buffers (EX1-EX7). To assess possible interference in the assay from the sponges, two standard curves were prepared in parallel: 1) cytokines of known concentrations with the extraction buffers and 2) cytokines of known concentrations loaded onto the sponges with the extraction buffers. Subsequently, a clinical assessment of the chosen sponge-buffer combination was performed with tears collected from four healthy subjects using 1) aspiration and 2) sponges. To quantify cytokine/chemokine recovery and the concentration in the tears, a 25-plex Cytokine Panel and the Luminex xMap were used. This platform enables simultaneous measurement of proinflammatory cytokines, Th1/Th2 distinguishing cytokines, nonspecific acting cytokines, and chemokines. We demonstrated the following: (i) 25 cytokines/chemokines expressed highly variable interactions with buffers and matrices. Several buffers enabled recovery of similar cytokine values (regulated and normal T cell expressed and secreted [RANTES], interleukin [IL]-13, IL-6, IL-8, IL-2R, and granulocyte-macrophage colony-stimulating factor [GM-CSF]); others were highly variable (monocyte chemotactic protein-1 [MCP-1], monokine induced by interferon-gamma [MIG], IL-1β, IL-4, IL-7, and eotaxin). (ii) Various extraction buffers displayed significantly different recovery rates on the same sponge for the same cytokine/chemokine. (iii) The highest recovery rates were obtained with the Merocel ophthalmic sponge except for tumor necrosis factor-α: the Weck-Cel ophthalmic sponge showed the best results, either with cytokine standards loaded onto sponges or with tears collected from the inner canthus of the eye, using the sponge. (iv) IL-5, IL-10, and interferon-α were not detected in any tear sample from four normal human subjects. Twenty-two cytokines/chemokines that we detected were extracted from the Merocel sponge to a satisfactory recovery percentage. The recovery of IL-7 was significantly lower in the extracted Merocel sponge compared to the diluted tear samples. The cytokine/chemokine extraction from tears showed the same pattern of extraction that we observed for extracting the standards.
Simultaneous measurement of various cytokines using ophthalmic sponges yielded diverse results for various cytokines as the level of extraction differs noticeably for certain cytokines. A second set of controls (standard curves "with sponges") should be used to delineate the extent of extraction for each cytokine to be analyzed. Many cytokines/chemokines were detected in tear samples collected with the Merocel sponge, including many that have been implicated in ocular surface disease. Luminex detection of cytokine/chemokine profiles of tears collected with Merocel sponges and extracted with buffer EX1 may be useful in clinical studies, for example, to assess cytokine profiles evaluation in ocular surface diseases.
Giddings, E.M.; Moorman, Michelle; Cuffney, Thomas F.; McMahon, Gerard; Harned, Douglas A.
2007-01-01
This report provides summarized physical, chemical, and biological data collected during a study of the effects of urbanization on stream ecosystems as part of the U.S. Geological Survey's National Water-Quality Assessment study. The purpose of this study was to examine differences in biological, chemical, and physical characteristics of streams across a gradient of urban intensity. Thirty sites were selected along an urbanization gradient that represents conditions in the North Carolina Piedmont ecoregion, including the cities of Raleigh, Durham, Cary, Greensboro, Winston-Salem, High Point, Asheboro, and Oxford. Data collected included streamflow variability, stream temperature, instream chemistry, instream aquatic habitat, and collections of the algal, macroinvertebrate, and fish communities. In addition, ancillary data describing land use, socioeconomic conditions, and urban infrastructure were compiled for each basin using a geographic information system analysis. All data were processed and summarized for analytical use and are presented in downloadable data tables, along with the methods of data collection and processing.
Summary of suspended-sediment concentration data, San Francisco Bay, California, water year 2010
Buchanan, Paul A.; Morgan, Tara L.
2014-01-01
Suspended-sediment concentration data were collected by the U.S. Geological Survey in San Francisco Bay during water year 2010 (October 1, 2009–September 30, 2010). Turbidity sensors and water samples were used to monitor suspended-sediment concentration at two sites in Suisun Bay, one site in San Pablo Bay, three sites in Central San Francisco Bay, and one site in South San Francisco Bay. Sensors were positioned at two depths at most sites to help define the vertical variability of suspended sediments. Water samples were collected periodically and analyzed for concentrations of suspended sediment. The results of the analyses were used to calibrate the output of the turbidity sensors so that a record of suspended-sediment concentrations could be computed. This report presents the data-collection methods used and summarizes, in graphs, the suspended-sediment concentration data collected from October 2009 through September 2010. Calibration curves and plots of the processed data for each sensor also are presented.
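A minimal sketch of the sensor-calibration step described above, assuming a simple linear fit of lab-analyzed suspended-sediment concentration (SSC) against concurrent turbidity readings; the paired values are hypothetical, and the actual study may have used a different curve form.

```python
import numpy as np

# Paired observations: sensor turbidity vs. lab-analyzed SSC (hypothetical)
turbidity_fnu = np.array([12., 25., 40., 66., 90., 130.])
ssc_mg_l = np.array([18., 35., 61., 98., 140., 205.])

slope, intercept = np.polyfit(turbidity_fnu, ssc_mg_l, 1)

# Apply the calibration to the continuous sensor record
continuous_turbidity = np.array([15., 50., 110.])
estimated_ssc = slope * continuous_turbidity + intercept
print(np.round(estimated_ssc, 1))
```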
Adaptively biased molecular dynamics for free energy calculations
NASA Astrophysics Data System (ADS)
Babin, Volodymyr; Roland, Christopher; Sagui, Celeste
2008-04-01
We present an adaptively biased molecular dynamics (ABMD) method for the computation of the free energy surface of a reaction coordinate using nonequilibrium dynamics. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential and is inspired by the metadynamics method. The ABMD method has several useful features, including a small number of control parameters and an O(t) numerical cost with molecular dynamics time t. The ABMD method naturally allows for extensions based on multiple walkers and replica exchange, where different replicas can have different temperatures and/or collective variables. This is beneficial not only in terms of the speed and accuracy of a calculation, but also in terms of the amount of useful information that may be obtained from a given simulation. The workings of the ABMD method are illustrated via a study of the folding of the Ace-GGPGGG-Nme peptide in a gaseous and solvated environment.
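A minimal 1D sketch in the spirit of an evolving biasing potential: Gaussian kernels are deposited along the trajectory of a collective variable evolving under overdamped Langevin dynamics. The potential, parameters, and update schedule are illustrative and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_potential(x):
    return 4 * x**3 - 4 * x  # double well V(x) = x^4 - 2x^2

centers, height, width = [], 0.05, 0.2  # deposited Gaussian kernels

def grad_bias(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return np.sum(height * (-(x - c) / width**2) * np.exp(-(x - c)**2 / (2 * width**2)))

x, dt, gamma, kT = -1.0, 1e-3, 1.0, 0.1
for step in range(50_000):
    force = -grad_potential(x) - grad_bias(x)
    x += force * dt / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    if step % 500 == 0:
        centers.append(x)  # grow the bias slowly along the trajectory

# At long times the accumulated bias approximates the negative free energy (up to a constant)
print(f"final position {x:.2f}, kernels deposited {len(centers)}")
```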
Zenebe, Chernet Baye; Adefris, Mulat; Yenit, Melaku Kindie; Gelaw, Yalemzewod Assefa
2017-09-06
Despite the fact that long-acting family planning methods reduce population growth and improve maternal health, their utilization remains poor. Therefore, this study assessed the prevalence of long-acting and permanent family planning method utilization and associated factors among women in reproductive age groups who have decided not to have more children in Gondar city, northwest Ethiopia. An institution-based cross-sectional study was conducted from August to October, 2015. Three hundred seventeen women who had decided not to have more children were selected consecutively into the study. A structured and pretested questionnaire was used to collect data. Both bivariate and multivariable logistic regression analyses were used to identify factors associated with the utilization of long-acting and permanent family planning methods. The Adjusted Odds Ratio (AOR) with the corresponding 95% Confidence Interval (CI) was used to show the strength of associations, and variables with a P-value of <0.05 were considered statistically significant. In this study, the overall prevalence of long-acting and permanent contraceptive method (LAPCM) utilization was 34.7% (95% CI: 29.5-39.9). According to the multivariable logistic regression analysis, utilization of long-acting and permanent contraceptive methods was significantly associated with secondary school education (AOR: 2.28, 95% CI: 1.17, 4.44), college and above education (AOR: 2.91, 95% CI: 1.36, 6.24), history of previous utilization (AOR: 3.02, 95% CI: 1.69, 5.38), and information about LAPCM (AOR: 8.85, 95% CI: 2.04, 38.41). In this study the prevalence of long-acting and permanent family planning method utilization among women who have decided not to have more children was high compared with previous studies conducted elsewhere. Advanced educational status, previous utilization of LAPCM, and information on LAPCM were significantly associated with the utilization of LAPCM. As a result, strengthening behavioral change communication channels to make information accessible is highly recommended.
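A minimal sketch of how adjusted odds ratios (AOR) and 95% CIs like those above come out of a multivariable logistic regression; the covariates and data are simulated placeholders, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 317  # study sample size; the data themselves are simulated
secondary_edu = rng.integers(0, 2, n)
prior_use = rng.integers(0, 2, n)

X = sm.add_constant(np.column_stack([secondary_edu, prior_use]).astype(float))
logit_p = -1.0 + 0.8 * secondary_edu + 1.1 * prior_use
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = sm.Logit(y, X).fit(disp=0)
print("AOR:", np.round(np.exp(fit.params[1:]), 2))        # exponentiated coefficients, intercept skipped
print("95% CI:", np.round(np.exp(fit.conf_int()[1:]), 2))
```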
NASA Astrophysics Data System (ADS)
Goss, Natasha R.; Mladenov, Natalie; Seibold, Christine M.; Chowanski, Kurt; Seitz, Leslie; Wellemeyer, T. Barret; Williams, Mark W.
2013-12-01
Atmospheric wet and dry deposition are important sources of carbon for remote alpine lakes and soils. The carbon inputs from dry deposition in alpine National Atmospheric Deposition Program (NADP) collectors, including aeolian dust and biological material, are not well constrained due to difficulties in retaining particulate matter in the collectors. Here, we developed and tested a marble insert for dry deposition collection at the Niwot Ridge Long Term Ecological Research Station (NWT LTER) Soddie site (3345 m) between 24 May and 8 November 2011. We conducted laboratory tests of the insert's effect on particulate matter (PM) mass and non-purgeable organic carbon (DOC) and found that the insert did not significantly change either measurement. Thus, the insert may enable dry deposition collection of PM and DOC at NADP sites. We then developed a method for enumerating the collected wet and dry deposition with the Flow Cytometer and Microscope (FlowCAM), a dynamic-image particle analysis tool. The FlowCAM has the potential to establish morphology, which affects particle settling and retention, through particle diameter and aspect ratio. Particle images were used to track the abundance of pollen grains over time. Qualitative image examination revealed that most particles were biological in nature, such as intact algal cells and pollen. Dry deposition loading to the Soddie site as determined by FlowCAM measurements was highly variable, ranging from 100 to >230 g ha⁻¹ d⁻¹ in June-August 2011 and peaking in late June. No significant difference in diameter or aspect ratio was found between wet and dry deposition, suggesting fundamental similarities between those deposition types. Although FlowCAM statistics and identification of particle types proved insightful, our total-particle enumeration method had a high variance and underestimated the total number of particles when compared to imaging of relatively large volumes (60-125 mL) from a single sample. We recommend use of the FlowCAM, especially for subclasses of particles, but in light of uncertainty in particle counts, believe that it should be paired with traditional methods such as microscopy in this stage of the technique's development. Analysis of well-mixed samples produced lower variability than settling methods used for algae samples. Use of the marble inserts in the dry deposition collector in the NADP context is recommended, and the implications of various particle counting and identification methods are explored.
Overmyer, Katherine A.; Thonusin, Chanisa; Qi, Nathan R.; Burant, Charles F.; Evans, Charles R.
2015-01-01
A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handling, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy for mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia. PMID:25658945
Pitiranggon, Masha; Perzanowski, Matthew S; Kinney, Patrick L; Xu, Dongqun; Chillrud, Steven N; Yan, Beizhan
2014-10-01
Exhaled breath condensate (EBC) provides a relatively easy, non-invasive method for measuring biomarkers of inflammation and oxidative stress in the airways. However, the levels of these biomarkers in EBC are influenced, not only by their levels in lung lining fluid but also by the volume of water vapor that also condenses during EBC collection. For this reason, the use of a biomarker of dilution has been recommended. Urea has been proposed and utilized as a promising dilution biomarker due to its even distribution throughout the body and relatively low volatility. Current EBC urea analytical methods either are not sensitive enough, necessitating large volumes of EBC, or are labor intensive, requiring a derivatization step or other pretreatment. We report here a straightforward and reliable LC-MS approach that we developed that does not require derivatization or large sample volume (∼36 µL). An Acclaim mixed-mode hydrophilic interaction chromatography column was selected because it can produce good peak symmetry and efficiently separate urea from other polar and nonpolar compounds. To achieve a high recovery rate, a slow and incomplete evaporation method was used followed by a solvent-phase exchange. Among EBC samples collected from 28 children, urea levels were found to be highly variable, with a relative standard deviation of 234%, suggesting high variability in dilution of the lung lining fluid component of EBC. The limit of detection was found to be 0.036 µg/mL. Published by Oxford University Press [2013]. This work is written by (a) US Government employee(s) and is in the public domain in the US.
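A minimal sketch of the dilution-correction idea that motivates measuring urea: assuming urea concentrations in lung lining fluid and serum are (near-)equal, each EBC sample's dilution factor can be estimated and applied to other analytes. All concentrations below are hypothetical.

```python
serum_urea = 300.0    # µg/mL, paired blood sample (hypothetical)
ebc_urea = 0.15       # µg/mL, measured by the LC-MS assay (hypothetical)
ebc_biomarker = 0.02  # µg/mL, some inflammation marker in the same EBC sample

dilution_factor = serum_urea / ebc_urea
lining_fluid_biomarker = ebc_biomarker * dilution_factor
print(f"dilution ~{dilution_factor:.0f}-fold; "
      f"estimated lining-fluid concentration = {lining_fluid_biomarker:.0f} µg/mL")
```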
Model-Based Control of Observer Bias for the Analysis of Presence-Only Data in Ecology
Warton, David I.; Renner, Ian W.; Ramp, Daniel
2013-01-01
Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter “observer bias”). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly – by modelling presence locations as a function of known observer bias variables (such as accessibility variables) in addition to environmental variables, then conditioning on a common level of bias to make predictions of species occurrence free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, that implicitly addresses the “pseudo-absence problem” of where to locate pseudo-absences (and how many). The proposed method of bias-correction is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions as compared to pseudo-absence or “inventory” methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species. PMID:24260167
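A minimal sketch of the bias-correction idea: fit presence on environmental plus accessibility covariates, then predict with the accessibility (observer bias) covariate fixed at a common level. An L1-penalized logistic model stands in here for the paper's LASSO point process model; all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
env = rng.normal(size=(n, 2))                # e.g., temperature, rainfall
dist_to_road = rng.exponential(1.0, size=n)  # accessibility (observer bias) covariate
eta = 1.0 * env[:, 0] - 0.5 * env[:, 1] - 1.2 * dist_to_road
presence = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = np.column_stack([env, dist_to_road])
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, presence)

# Predict at new sites with the bias covariate held at a common level (here 0),
# so differences in predictions reflect environment rather than observer effort
X_new = np.column_stack([rng.normal(size=(5, 2)), np.zeros(5)])
print(model.predict_proba(X_new)[:, 1])
```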
A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew
2013-12-01
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
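A minimal sketch of the benchmarking arithmetic as we read it from the abstract: amounts in fish are scaled by the recovery of the absorbable benchmark (PCB53) and amounts in feces by the recovery of the nonabsorbable benchmark (decabromodiphenyl ethane, DBDPE). The quantities, and the exact scaling, are assumptions for illustration.

```python
dose = 100.0           # ng of test chemical fed (hypothetical)
in_fish = 42.0         # ng of test chemical recovered in fish
in_feces = 31.0        # ng of test chemical recovered in feces
pcb53_recovery = 0.88  # fraction of the PCB53 dose recovered in fish
dbdpe_recovery = 0.79  # fraction of the DBDPE dose recovered in feces

fish_benchmarked = in_fish / pcb53_recovery    # corrects incomplete extraction from fish
feces_benchmarked = in_feces / dbdpe_recovery  # corrects incomplete feces collection

absorption_efficiency = fish_benchmarked / dose
mass_balance = (fish_benchmarked + feces_benchmarked) / dose
print(f"benchmarked gross absorption efficiency ~ {100 * absorption_efficiency:.0f}%")
print(f"benchmarked mass balance ~ {100 * mass_balance:.0f}%")
```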
A simple method to predict body temperature of small reptiles from environmental temperature.
Vickers, Mathew; Schwarzkopf, Lin
2016-05-01
To study behavioral thermoregulation, it is common to use thermal sensors and physical models to collect environmental temperatures from which organism body temperature is predicted. Many techniques involve expensive or numerous types of sensors (cast copper models, or temperature, humidity, radiation, and wind speed sensors) to collect the microhabitat data necessary to predict body temperatures. The expense and diversity of requisite sensors can limit sampling resolution and the accessibility of these methods. We compare body temperature predictions of small lizards from iButtons, DS18B20 sensors, and simple copper models, in both laboratory and natural conditions. Our aim was to develop an inexpensive yet accurate method for body temperature prediction. Each method was applicable given appropriate parameterization of the heat transfer equation used. The simplest and cheapest method was DS18B20 sensors attached to a small recording computer, with little if any deficit in precision or accuracy compared with other published methods. We show how the heat transfer equation can be parameterized, and how it can also be used to predict body temperature from historically collected data, allowing strong comparisons between current and previous environmental temperatures using the most modern techniques. Our simple method uses very cheap sensors and loggers to sample habitat temperature extensively, improving our understanding of microhabitat structure and thermal variability with respect to small ectotherms. While our method was quite precise, we feel any potential loss in accuracy is offset by the increase in sampling resolution, which is important because it is increasingly apparent that, particularly for small ectotherms, habitat thermal heterogeneity is the strongest influence on transient body temperature.
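The paper's heat transfer equation is not reproduced here, so the sketch below uses a one-parameter Newtonian heat-exchange model, dTb/dt = k (Te − Tb), as a plausible stand-in: the rate constant k is fitted by least squares to paired sensor and body-temperature traces, after which the same equation can be stepped along any environmental temperature record, including historical ones.

```python
# Minimal sketch: predict body temperature (Tb) from environmental
# temperature (Te) with a one-parameter Newtonian heat-exchange model,
#   dTb/dt = k * (Te - Tb),
# a plausible stand-in for the heat transfer equation referenced above.
import numpy as np
from scipy.optimize import minimize_scalar

dt = 60.0                                    # logging interval (s)
t = np.arange(0, 3600, dt)
te = 25 + 10 * np.sin(2 * np.pi * t / 3600)  # synthetic sensor trace (degC)

def predict_tb(k, te, tb0):
    """Step the Newtonian model along the environmental trace."""
    tb = np.empty_like(te)
    tb[0] = tb0
    for i in range(1, len(te)):
        tb[i] = tb[i - 1] + k * (te[i - 1] - tb[i - 1]) * dt
    return tb

# Synthetic "observed" body temperatures from a known k, plus noise.
rng = np.random.default_rng(1)
tb_obs = predict_tb(0.002, te, tb0=25.0) + rng.normal(0, 0.2, len(te))

# Fit k by least squares against the observations.
loss = lambda k: np.sum((predict_tb(k, te, 25.0) - tb_obs) ** 2)
fit = minimize_scalar(loss, bounds=(1e-5, 0.02), method="bounded")
print(f"fitted k = {fit.x:.4f} per second")
```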
Anderson, Geoffrey A; Ilcisin, Lenka; Abesiga, Lenard; Mayanja, Ronald; Portal Benetiz, Noralis; Ngonzi, Joseph; Kayima, Peter; Shrime, Mark G
2017-06-01
The Lancet Commission on Global Surgery recommends that every country report its surgical volume and postoperative mortality rate. Little is known, however, about the number of operations performed and the associated postoperative mortality rate in low-income countries, or about how best to collect these data. For one month, every patient who underwent an operation at a referral hospital in western Uganda was observed, and these patients and their outcomes were followed until discharge. Prospective data were compared with data obtained from logbooks and patient charts to determine the validity of using retrospective methods for collecting these metrics. Surgical volume at this regional hospital in Uganda is 8,515 operations/y, compared with 4,000 operations/y reported in the only other published data. The postoperative mortality rate at this hospital is 2.4%, similar to other hospitals in low-income countries. Finding patient files in the medical records department was time consuming and yielded only 62% of the files; furthermore, the missing charts differed significantly from the found charts. Logbooks, on the other hand, captured 99% of the operations and 94% of the deaths. Our results describe a simple, reproducible, accurate, and inexpensive method for collecting the Lancet Commission on Global Surgery variables using logbooks that already exist in most hospitals in low-income countries. While some have suggested using the risk-adjusted postoperative mortality rate as a more equitable variable, our data suggest that only limited risk adjustment is possible given the available data.
A Preliminary Study of Biomonitoring for Bisphenol-A in Human Sweat.
Porucznik, Christina A; Cox, Kyley J; Wilkins, Diana G; Anderson, David J; Bailey, Nicole M; Szczotka, Kathryn M; Stanford, Joseph B
2015-09-01
Measurement of human exposure to the endocrine disruptor bisphenol-A (BPA) is hampered by ubiquitous but transient exposure for most individuals, coupled with a short metabolic half-life, which leads to high inter- and intra-individual variability. We investigated the possibility of measuring multiday exposure to BPA in human sweat among volunteer participants, with the goal of identifying an exposure assessment method less affected by temporal variability. We recruited 50 participants to wear a sweat collection patch (PharmChek®) for 7 days with concurrent collection of daily first-morning urine. Urine specimens and sweat patch extracts were analyzed by quantitative LC-MS-MS using a method we previously validated. In addition, a human volunteer consumed one can of commercially available soup (16 oz, 473 cm³) daily for 3 days and collected urine; sweat patches (n = 2, 1 per arm) were worn for the 3 days of the study. BPA was detected in quality control specimens prepared by fortification of BPA onto sweat patches, but was detected at 5× above average background on only three participant patches. Although the highest measured urine BPA concentration was 195 ng/mL for an individual with deliberate exposure, no BPA was detected above background in the corresponding sweat patches. In this preliminary investigation, sweat patches worn primarily on the upper-outer arm did not detect BPA exposures that were documented by urine monitoring. The absence of BPA in sweat patches may be due to several factors, including insufficient quantity of specimen per patch or extremely low concentrations of BPA in naturally occurring sweat, among others.
Islam, M Mofizul; Topp, Libby; Conigrave, Katherine M; van Beek, Ingrid; Maher, Lisa; White, Ann; Rodgers, Craig; Day, Carolyn A
2012-01-01
Research with injecting drug users (IDUs) suggests greater willingness to report sensitive and stigmatised behaviour via audio computer-assisted self-interviewing (ACASI) than during face-to-face interviews (FFIs); however, previous studies were unable to verify this within the same individuals at the same time point. This study examines the relative willingness of IDUs to report sensitive information via ACASI and during a face-to-face clinical assessment administered in health services for IDUs. During recruitment for a randomised controlled trial undertaken at two IDU-targeted health services, assessments were undertaken as per clinical protocols, followed by referral of eligible clients to the trial, in which baseline self-report data were collected via ACASI. Five questions about sensitive injecting and sexual risk behaviours were administered to participants both during the clinical interview and at baseline research data collection. "Percentage agreement" quantified the concordance/discordance in responses across interview methods, while tests appropriate to the data format assessed the statistical significance of this variation. Results for all five variables suggest that, relative to ACASI, FFI elicited responses that may be perceived as more socially desirable, and the discordance was statistically significant for four of the five variables examined. Participants who reported a history of sex work were more likely to provide discordant responses to at least one socially sensitive item. In health services for IDUs, information collected via ACASI may therefore be more reliable and valid than that collected via FFI. Adoption of a universal precautionary approach to complement individually tailored assessment of, and advice regarding, health risk behaviours for IDUs may address this issue.
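As an illustration of the analysis structure, the sketch below computes percentage agreement and McNemar's test for one paired binary item asked under both modes; the responses are synthetic, and the paper's actual "tests appropriate to data format" may differ.

```python
# Minimal sketch: percentage agreement and McNemar's test for one paired
# binary item asked in both a face-to-face interview (FFI) and ACASI.
# Responses are hypothetical 0/1 arrays, one element per participant.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(2)
acasi = rng.integers(0, 2, 200)                   # reports via computer
ffi = np.where(rng.random(200) < 0.15, 0, acasi)  # some FFI under-reporting

agreement = np.mean(acasi == ffi)
print(f"percentage agreement: {agreement:.0%}")

# 2x2 table of paired responses; McNemar tests whether discordance is
# symmetric, i.e. whether one mode systematically elicits more "yes".
table = np.array([
    [np.sum((ffi == 1) & (acasi == 1)), np.sum((ffi == 1) & (acasi == 0))],
    [np.sum((ffi == 0) & (acasi == 1)), np.sum((ffi == 0) & (acasi == 0))],
])
print(mcnemar(table, exact=True))
```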
Yuan, Ke-Hai; Jiang, Ge; Cheng, Ying
2017-11-01
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies have shown that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, so LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To bridge this gap between statistical theory and empirical practice, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than the existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics, as well as sandwich-type standard errors following the ridge GLS methods, also perform reasonably well.
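A minimal sketch of the ridge GLS idea for a one-factor model follows: the asymptotic covariance matrix of the polychoric correlations is ridged with a constant times the identity before being inverted as the GLS weight matrix. The inputs (a stand-in correlation matrix and a diagonal stand-in for the asymptotic covariance) are synthetic; in practice both would come from a polychoric estimation step, and the choice of ridge constant matters.

```python
# Minimal sketch of ridge GLS for a one-factor model: ridge the
# asymptotic covariance (Gamma) of the polychoric correlations before
# inverting it as a GLS weight matrix. R and Gamma are stand-in inputs.
import numpy as np
from scipy.optimize import minimize

p = 4
lam_true = np.array([0.8, 0.7, 0.6, 0.5])
R = lam_true[:, None] * lam_true[None, :]    # implied correlations
np.fill_diagonal(R, 1.0)

idx = np.tril_indices(p, k=-1)
r = R[idx]                                   # unique off-diagonal entries
q = len(r)
Gamma = 0.05 * np.eye(q)                     # stand-in asymptotic covariance
ridge_k = 0.1                                # ridge tuning constant

W_inv = np.linalg.inv(Gamma + ridge_k * np.eye(q))

def discrepancy(lam):
    """Ridge GLS fit function for a one-factor model."""
    model = (lam[:, None] * lam[None, :])[idx]
    d = r - model
    return d @ W_inv @ d

fit = minimize(discrepancy, x0=np.full(p, 0.5), method="BFGS")
print("estimated loadings:", np.round(fit.x, 3))
```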
Unrein, Julia R.; Morris, Jeffrey M.; Chitwood, Rob S.; Lipton, Joshua; Peers, Jennifer; van de Wetering, Stan; Schreck, Carl B.
2016-01-01
Many anthropogenic disturbances have contributed to the decline of Pacific lampreys (Entosphenus tridentatus), but the potential negative effects of contaminants on lampreys are unclear. Lamprey ammocoetes are the only detritivorous fish in the lower Willamette River, Oregon, USA, and have been observed in Portland Harbor sediments. Their long benthic larval stage places them at risk from the effects of contaminated sediment. The authors developed experimental methods to assess the effects of contaminated sediment on the growth and behavior of field-collected ammocoetes reared in a laboratory, specifically methods to assess individual growth and burrowing behavior. Burrowing performance was highly variable among contaminated sediments; however, ammocoetes presented with noncontaminated reference sediment initiated burrowing more rapidly and completed it faster. Ammocoete reemergence from contaminated sediments suggests avoidance of some chemical compounds. The authors conducted long-term exposure experiments on individually held ammocoetes collected from their native Siletz River, using the following sediments: contaminated sediments collected from 9 sites within Portland Harbor, 2 uncontaminated reference sediments collected upstream, 1 uncontaminated sediment with characteristics similar to Portland Harbor sediments, and clean sand. They determined that a 24-h depuration period was sufficient to evaluate weight changes, and they observed no mortality or growth effects in fish exposed to any of the contaminated sediments. However, burrowing behavior appeared to be a sensitive endpoint, with potentially significant implications for predator avoidance.
Soft computing in design and manufacturing of advanced materials
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y; Vary, Alex
1993-01-01
The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding all aspects of the manufacture of advanced materials such as ceramics is addressed. In the design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real-time quality control of the parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced, and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods, such as the Taguchi method, and are demonstrated using data collected at NASA Lewis Research Center. Future research directions are also discussed.
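As a toy illustration of the neural-network half of this approach, the sketch below fits a small multilayer perceptron mapping processing variables to a material property and ranks the variables by permutation importance. The data and variable names are synthetic stand-ins, not the NASA Lewis measurements.

```python
# Minimal sketch: learn the mapping from processing variables to a
# material property with a small neural network, then rank the variables
# by permutation importance. Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 400
# Hypothetical processing variables: sintering temperature, pressure, time.
X = rng.uniform(0, 1, (n, 3))
strength = 2.0 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, n)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, strength)

imp = permutation_importance(net, X, strength, n_repeats=10, random_state=0)
for name, score in zip(["temperature", "pressure", "time"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")   # temperature should dominate
```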
On the period determination of ASAS eclipsing binaries
NASA Astrophysics Data System (ADS)
Mayangsari, L.; Priyatikanto, R.; Putra, M.
2014-03-01
Variable stars, and eclipsing binaries in particular, are essential astronomical phenomena. Surveys are the backbone of astronomy, and many discoveries of variable stars are the results of surveys. The All-Sky Automated Survey (ASAS) is one such observing project, whose ultimate goal is photometric monitoring of variable stars. Since its first light in 1997, ASAS has catalogued 50,099 variable stars, among them 11,076 eclipsing binaries. In the present work we focus on period determination for the eclipsing binaries. Because each ASAS eclipsing-binary light curve is sparsely sampled, period determination is not a straightforward process. For 30 such systems we compare the Lomb-Scargle algorithm, which has a Fast Fourier Transform (FFT) basis, with the Phase Dispersion Minimization (PDM) method, which does not. PDM is shown to perform better on eclipsing detached (ED) systems, whose variability is non-sinusoidal. Moreover, using semi-automatic recipes, we obtain better period solutions and satisfactorily improve the phased light curves for 53% of the selected objects, although the approach fails for another 7%. In addition, we highlight 4 interesting objects for further investigation.
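The comparison can be sketched as follows: astropy's Lomb-Scargle periodogram and a minimal PDM implementation are run on a synthetic, sparsely sampled light curve with a narrow, non-sinusoidal eclipse. The sampling pattern and eclipse shape are simplified stand-ins for real ASAS data.

```python
# Minimal sketch: Lomb-Scargle vs. a simple Phase Dispersion Minimization
# (PDM) on a sparse, non-sinusoidal light curve mimicking a detached
# eclipsing binary (one eclipse per cycle, for simplicity).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(4)
true_period = 2.73                            # days
t = np.sort(rng.uniform(0, 400, 250))         # sparse ASAS-like sampling
phase = (t / true_period) % 1.0
mag = 10.0 + np.where(np.abs(phase - 0.5) < 0.05, 0.6, 0.0)  # narrow eclipse
mag += rng.normal(0, 0.02, t.size)

periods = np.linspace(0.5, 10.0, 20000)
freqs = 1.0 / periods

# Lomb-Scargle: the power maximum marks the candidate period.
power = LombScargle(t, mag).power(freqs)
p_ls = periods[np.argmax(power)]

def pdm_theta(period, nbins=10):
    """Ratio of mean within-phase-bin variance to total variance."""
    ph = (t / period) % 1.0
    bins = np.minimum((ph * nbins).astype(int), nbins - 1)
    s2 = [mag[bins == b].var() for b in range(nbins) if np.sum(bins == b) > 1]
    return np.mean(s2) / mag.var()

# PDM: the theta minimum marks the candidate period.
trial = periods[::10]
theta = np.array([pdm_theta(p) for p in trial])
p_pdm = trial[np.argmin(theta)]
print(f"Lomb-Scargle: {p_ls:.3f} d, PDM: {p_pdm:.3f} d (true {true_period} d)")
```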
Quantitatively measured tremor in hand-arm vibration-exposed workers.
Edlund, Maria; Burström, Lage; Hagberg, Mats; Lundström, Ronnie; Nilsson, Tohr; Sandén, Helena; Wastensson, Gunilla
2015-04-01
The aim of the present study was to investigate a possible increase in hand tremor in relation to hand-arm vibration (HAV) exposure in a cohort of exposed and unexposed workers. Participants were 178 male workers with or without exposure to HAV. The study is cross-sectional with respect to the tremor outcome and longitudinal with respect to exposure. The dose of HAV exposure was collected via questionnaires and measurements at several follow-ups. The CATSYS Tremor Pen® was used to measure postural tremor. Multiple linear regression methods were used to analyze associations between different tremor variables and HAV exposure, along with predictor variables of biological relevance. There were no statistically significant associations between the different tremor variables and cumulative or current HAV exposure. Age was a statistically significant predictor of variation in tremor outcomes for three of the four tremor variables, whereas nicotine use was a statistically significant predictor (for the left hand, the right hand, or both) for all four tremor variables. In the present study, there was no evidence of an exposure-response association between HAV exposure and measured postural tremor; increasing age and nicotine use appeared to be the strongest predictors of tremor.
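The regression structure described above amounts to an ordinary least squares model of a tremor outcome on exposure plus covariates, as in the sketch below; the data are synthetic stand-ins constructed so that age and nicotine, not exposure, drive the outcome, mirroring the reported finding.

```python
# Minimal sketch of the regression structure: one tremor outcome modelled
# on cumulative HAV exposure plus covariates with biological relevance.
# All values are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 178
df = pd.DataFrame({
    "cum_hav": rng.exponential(50, n),   # cumulative exposure (arbitrary units)
    "age": rng.uniform(20, 65, n),
    "nicotine": rng.integers(0, 2, n),   # current nicotine use (0/1)
})
# Synthetic tremor intensity driven by age and nicotine, not exposure.
df["tremor"] = 0.01 * df.age + 0.15 * df.nicotine + rng.normal(0, 0.1, n)

fit = smf.ols("tremor ~ cum_hav + age + nicotine", data=df).fit()
print(fit.summary().tables[1])
```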
What Counts? An Ethnographic Study of Infection Data Reported to a Patient Safety Program
Dixon-Woods, Mary; Leslie, Myles; Bion, Julian; Tarrant, Carolyn
2012-01-01
Context: Performance measures are increasingly widely used in health care and have an important role in quality. However, field studies of what organizations are doing when they collect and report performance measures are rare. An opportunity for such a study was presented by a patient safety program requiring intensive care units (ICUs) in England to submit monthly data on central venous catheter bloodstream infections (CVC-BSIs). Methods: We conducted an ethnographic study involving ∼855 hours of observational fieldwork and 93 interviews in 17 ICUs, plus 29 telephone interviews. Findings: Variability was evident within and between ICUs in how they applied inclusion and exclusion criteria for the program, the data collection systems they established, practices in sending blood samples for analysis, microbiological support and laboratory techniques, and procedures for collecting and compiling data on possible infections. Those making decisions about what to report were not making decisions about the same things, nor were they making decisions in the same way. Rather than providing objective and clear criteria, the definitions used for classifying infections were seen as subjective, messy, and admitting the possibility of unfairness. Reported infection rates reflected localized interpretations rather than a standardized dataset across all ICUs. Variability arose not because of wily workers deliberately concealing, obscuring, or deceiving but because counting was as much a social practice as a technical practice. Conclusions: Rather than objective measures of incidence, differences in reported infection rates may reflect, at least to some extent, underlying social practices in data collection and reporting and variations in clinical practice. The variability we identified was largely artless rather than artful: currently dominant assumptions of gaming as responses to performance measures do not properly account for how categories and classifications operate in the pragmatic conduct of health care. These findings have important implications for assumptions about what can be achieved in infection reduction and quality improvement strategies.
Exhaled breath condensate pH assays are not influenced by oral ammonia
Wells, K; Vaughan, J; Pajewski, T; Hom, S; Ngamtrakulpanit, L; Smith, A; Nguyen, A; Turner, R; Hunt, J
2005-01-01
Background: Measurement of pH in exhaled breath condensate (EBC) is robust and simple. Acidic source fluid (airway lining fluid) traps bases while volatilising acids, leading to EBC acidification in many lung diseases. Lower airway ammonia is one determinant of airway lining fluid pH, raising the concern that addition of the base ammonia by contamination from the mouth might confound EBC pH assays. Methods: Three discrete methods were used to limit oral ammonia contamination of EBC collections: endotracheal intubation, oral rinsing, and −40°C condenser temperatures. Separately, ammonia was removed from collected EBC samples by lyophilisation and resuspension. Intraweek and intraday variability of ammonia concentration was determined in 76 subjects, and ammonia and pH from a further 235 samples were graphically compared. Ammonia was assayed spectrophotometrically and pH was assessed after deaeration. Results: Data from 1091 samples are presented. Ammonia was reduced in EBC by all methods. Endotracheal intubation decreased EBC ammonia from a mean (SD) of 619 (124) µM to 80 (24) µM (p<0.001, n = 32). Oral rinsing before collection also led to a decline in EBC ammonia from 573 (307) µM to 224 (80) µM (p = 0.016, n = 7). The colder the condensation temperature used, the less ammonia was trapped in the EBC. Lyophilisation removed 99.4 (1.9)% of ammonia. Most importantly, the pH of EBC never decreased after removal of ammonia by any of these methods. Intraweek and intraday coefficients of variation for ammonia were 64 (27)% and 60 (32)%, which is substantially more variable than EBC pH assays. Conclusions: Although ammonia and pH appear to correlate in EBC, the oral ammonia concentration is not an important determinant of EBC pH. No precautions need to be taken to exclude oral ammonia when EBC pH is of interest. The low pH and low ammonia found in EBC from patients with lung diseases appear to be independent effects of volatile compounds arising from the airway.
RNA:DNA ratios as a proxy of egg production rates of Acartia
NASA Astrophysics Data System (ADS)
Cruz, Joana; Teodósio, M. Alexandra; Ben-Hamadou, Radhouane; Chícharo, Luís; Garrido, Susana; Ré, Pedro; Santos, A. Miguel P.
2017-03-01
Estimates of copepod secondary production are of great importance for inferring global organic matter fluxes in aquatic ecosystems and species-specific responses of zooplankton to hydrologic variability. However, there is still no routine method for determining copepod secondary production that eliminates time-consuming experimental analyses. We therefore determined whether Egg Production Rates (EPR) and RNA:DNA ratios are correlated in Acartia species, measuring their seasonal and spatial variability and the influence of environmental factors for Acartia sp. collected in the Guadiana river estuary. The EPR of Acartia tonsa was positively related to chlorophyll a concentration, freshwater inflow, and dinoflagellate biomass, while that of Acartia clausi was related only to dinoflagellates. Dinoflagellates appear to be the optimal food item influencing the reproduction of both Acartia species in the studied area. The biochemical index RNA:DNA was positively related to EPR, indicating that it is a good proxy of copepod production and a promising method for estimating secondary production in the future.
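Validating the proxy reduces to regressing measured EPR on the RNA:DNA index, as in the sketch below; the values are synthetic stand-ins, not the Guadiana estuary measurements.

```python
# Minimal sketch: test RNA:DNA as a proxy for egg production rate (EPR)
# with a simple linear regression. Values are synthetic stand-ins.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(6)
rna_dna = rng.uniform(1.0, 6.0, 40)              # biochemical index
epr = 4.0 * rna_dna + rng.normal(0, 3.0, 40)     # eggs female^-1 day^-1

fit = linregress(rna_dna, epr)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.2g}")
# A strong positive slope with high r supports RNA:DNA as an EPR proxy.
```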
A Kernel Embedding-Based Approach for Nonstationary Causal Model Inference.
Hu, Shoubo; Chen, Zhitang; Chan, Laiwan
2018-05-01
Although nonstationary data are common in the real world, most existing causal discovery methods do not take nonstationarity into consideration. In this letter, we propose a kernel embedding-based approach, ENCI, for nonstationary causal model inference where data are collected from multiple domains with varying distributions. In ENCI, we transform the complicated relation of a cause-effect pair into a linear model over variables whose observations correspond to the kernel embeddings of the cause and effect distributions in different domains. In this way, we are able to estimate the causal direction by exploiting the causal asymmetry of the transformed linear model. Furthermore, we extend ENCI to causal graph discovery for multiple variables by transforming the relations among them into a linear non-Gaussian acyclic model. We show that, by exploiting the nonstationarity of distributions, both cause-effect pairs and two kinds of causal graphs are identifiable under mild conditions. Experiments on synthetic and real-world data are conducted to demonstrate the efficacy of ENCI over major existing methods.
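ENCI's multi-domain embedding machinery is too involved to reproduce here, but the underlying asymmetry principle can be sketched with a simpler additive-noise-model test: regress each variable on the other and score the independence of residuals from the putative cause with HSIC; only the true direction should yield independent residuals. This is a single-domain stand-in for the asymmetry idea, not the authors' algorithm.

```python
# Minimal sketch of causal-direction inference via asymmetry: an
# additive-noise model with an HSIC independence score, a simplified
# stand-in for ENCI's kernel-embedding criterion.
import numpy as np

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels."""
    n = len(x)
    def gram(v):
        d = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d / (2 * sigma ** 2))
    K, L = gram(x), gram(y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def direction_score(cause, effect):
    """Fit a cubic polynomial, return HSIC(residuals, cause)."""
    coef = np.polyfit(cause, effect, deg=3)
    resid = effect - np.polyval(coef, cause)
    return hsic(cause, resid)

rng = np.random.default_rng(7)
x = rng.uniform(-2, 2, 300)
y = x ** 3 + rng.uniform(-1, 1, 300)          # true direction: x -> y

s_xy = direction_score(x, y)                  # should be small (independent)
s_yx = direction_score(y, x)                  # should be larger
print("inferred:", "x -> y" if s_xy < s_yx else "y -> x")
```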
Shen, Xia; De Jonge, Jennifer; Forsberg, Simon K. G.; Pettersson, Mats E.; Sheng, Zheya; Hennig, Lars; Carlborg, Örjan
2014-01-01
As Arabidopsis thaliana has colonized a wide range of habitats across the world, it is an attractive model for studying the genetic mechanisms underlying environmental adaptation. Here, we used public data from two collections of A. thaliana accessions to associate genetic variability at individual loci with differences in climate at the sampling sites. We use a novel method to screen the genome for plastic alleles that tolerate a broader climate range than the major allele. This approach reduces confounding with population structure and increases power compared to standard genome-wide association methods. Sixteen novel loci were found, including an association between Chromomethylase 2 (CMT2) and temperature seasonality in which genome-wide CHH methylation differed for the group of accessions carrying the plastic allele. Cmt2 mutants were shown to be more tolerant to heat stress, suggesting genetic regulation of epigenetic modifications as a likely mechanism underlying natural adaptation to variable temperatures, potentially through differential allelic plasticity to temperature stress.
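One simple way to operationalise "tolerates a broader climate range" is a variance-heterogeneity test: compare the spread of climate at the sampling sites between carriers of the minor and major alleles. The sketch below uses a Brown-Forsythe (median-centered Levene) test on synthetic genotypes and climates as a simplified stand-in for the paper's screening method.

```python
# Minimal sketch: flag a "plastic" allele when its carriers span a
# broader climate range than major-allele carriers, via a median-based
# Levene (Brown-Forsythe) test. Genotypes and climates are synthetic.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(8)
n = 1000
genotype = rng.random(n) < 0.3               # True = minor-allele carrier
temp_seasonality = np.where(
    genotype,
    rng.normal(0, 2.0, n),                   # broader climate range
    rng.normal(0, 1.0, n),                   # narrower range (major allele)
)

stat, p = levene(temp_seasonality[genotype],
                 temp_seasonality[~genotype],
                 center="median")
print(f"Brown-Forsythe W = {stat:.1f}, p = {p:.2g}")
# Genome-wide, this would be run per SNP with multiple-testing control;
# a significantly broader spread flags a candidate plastic allele.
```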
Martini, Daniela; Biasini, Beatrice; Rossi, Stefano; Zavaroni, Ivana; Bedogni, Giorgio; Musci, Marilena; Pruneti, Carlo; Passeri, Giovanni; Ventura, Marco; Galli, Daniela; Mirandola, Prisco; Vitale, Marco; Dei Cas, Alessandra; Bonadonna, Riccardo C; Del Rio, Daniele
2018-06-01
All requests for authorisation to bear health claims under Articles 13(5) and 14 in the context of appetite ratings and weight management have received a negative opinion from the European Food Safety Authority (EFSA), mainly because of insufficient substantiation of the claimed effects (CEs). This manuscript results from an investigation aimed at collecting, collating and critically analysing the information related to outcome variables (OVs) and methods of measurement (MMs) in the context of appetite ratings and weight management compliant with Regulation 1924/2006. Based on a literature review, the appropriateness of OVs and MMs was evaluated for specific CEs. This work might help EFSA in developing updated guidance for stakeholders interested in bearing health claims in the area of weight management. Moreover, it could guide applicants in the design of randomised controlled trials aimed at substantiating such claims.
Biasini, Beatrice; Marchi, Laura; Angelino, Donato; Bedogni, Giorgio; Zavaroni, Ivana; Pruneti, Carlo; Galli, Daniela; Mirandola, Prisco; Vitale, Marco; Dei Cas, Alessandra; Bonadonna, Riccardo C; Passeri, Giovanni; Ventura, Marco; Del Rio, Daniele; Martini, Daniela
2018-01-29
Most requests for authorisation to use health claims pursuant to Regulation EC 1924/2006 related to the gastrointestinal (GI) tract have received a negative opinion from the European Food Safety Authority (EFSA), mainly because of insufficient substantiation of the claimed effect (CE). The present manuscript reports the collection, collation and critical analysis of outcome variables (OVs) and methods of measurement (MMs) related to the GI tract compliant with Regulation 1924/2006. The critical evaluation of OVs and MMs was based on a literature review, with the final aim of defining their appropriateness in the context of a specific CE. The results are relevant for choosing the best OVs and MMs to be used in randomised controlled trials aimed at substantiating claims on the GI tract. Moreover, the results can be used by EFSA in updating the guidance on the scientific requirements for such health claims.