Sample records for efficient sampling strategies

  1. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

    In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata, and researchers must decide between prioritizing overall representativeness and the precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to the stratum population; an equal sample for all strata; and allocation proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered the fact that, for a preset sample size, the dispersion index of stratum sampling fractions is correlated with the population estimator error, while the dispersion index of stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. The balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in turn: an equal sample for each stratum; allocation proportional to the logarithm, to the cubic root, and to the square root of the stratum population; and allocation proportional to the stratum population itself. Depending on sample size, each of the strategies evaluated could be considered for optimizing the sample to preserve both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
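
    The allocation rules this record compares are easy to make concrete. Below is a minimal Python sketch (not from the paper; the stratum populations and total sample size are hypothetical) that distributes a preset sample across strata under each of the five strategies:

    ```python
    import math

    def allocate(populations, total_n, transform):
        """Distribute total_n among strata proportionally to transform(N_h).
        Rounding means the parts may differ from total_n by a unit or two."""
        weights = [transform(n) for n in populations]
        s = sum(weights)
        return [round(total_n * w / s) for w in weights]

    # Hypothetical stratum populations and overall sample size.
    pops, n = [100_000, 25_000, 5_000, 1_000], 400

    strategies = {
        "proportional": lambda x: x,
        "sqrt": math.sqrt,
        "cbrt": lambda x: x ** (1 / 3),
        "log": math.log,
        "equal": lambda x: 1.0,
    }
    for name, f in strategies.items():
        print(f"{name:>12}: {allocate(pops, n, f)}")
    ```

    Running this makes the trade-off visible: proportional allocation nearly starves the smallest stratum, equal allocation over-samples it relative to its share, and the log, cubic-root and square-root transforms fall in between.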

  2. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee that the desired accuracy is achieved. The numerical results for several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by National Natural Science Foundation of China grants No. 41030746 and 41172206.
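
    The exploration/refinement balance and the accuracy-based stopping rule described here can be illustrated with a toy loop. The sketch below is not the authors' method: it uses a plain first-order "slope times gap" score on a 1-D test function as a stand-in for their Taylor-expansion metric, and a cheap linear-interpolation surrogate in place of a real one:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def true_model(x):                      # stand-in for an expensive simulator
        return np.sin(4 * x) + 0.5 * x

    def adaptive_sample(n_init=4, n_cand=200, tol=1e-2, max_iter=50):
        x = np.linspace(0.0, 2.0, n_init)   # small initial design
        y = true_model(x)
        for _ in range(max_iter):
            cand = rng.uniform(0.0, 2.0, n_cand)
            # First-order score: prefer candidates in steep regions
            # (refinement) and far from existing samples (exploration).
            slope = np.interp(cand, x, np.abs(np.gradient(y, x)))
            gap = np.abs(cand[:, None] - x[None, :]).min(axis=1)
            best = cand[np.argmax((slope + 1.0) * gap)]
            y_new = true_model(best)        # one new expensive model run
            if abs(y_new - np.interp(best, x, y)) < tol:
                break                       # accuracy-at-new-point stopping rule
            order = np.argsort(np.append(x, best))
            x, y = np.append(x, best)[order], np.append(y, y_new)[order]
        return x, y

    x, y = adaptive_sample()
    print(f"surrogate built with {len(x)} model runs")
    ```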

  3. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
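
    The "pre-sampling plus simulation" idea in this record can be sketched directly: draw synthetic counts from a negative binomial fitted to pre-sampling data, then check how fast the sample mean converges for candidate sample sizes. A minimal sketch (the mean density and clumping parameter k below are hypothetical, not the balsam gall midge estimates):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical "pre-sampling" parameters: mean galls per tree (mu) and
    # negative binomial clumping parameter k (not the paper's fitted values).
    mu, k = 3.0, 0.8
    p = k / (k + mu)                 # numpy's (n, p) parameterization

    def mean_relative_error(n, reps=2000):
        """Simulate `reps` surveys of n trees each and report how far the
        sample mean density typically lands from the true mean."""
        counts = rng.negative_binomial(k, p, size=(reps, n))
        return np.mean(np.abs(counts.mean(axis=1) - mu) / mu)

    for n in (10, 25, 40, 100):
        print(f"n={n:3d} trees: mean relative error = {mean_relative_error(n):.1%}")
    ```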

  4. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend toward the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some notable characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
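
    Of the two proposed walks, the no-retracing (NR) rule is simple to state in code: never step straight back along the edge just traversed. A minimal sketch on a toy adjacency list (the graph is illustrative, not one of the paper's test networks):

    ```python
    import random

    def nr_random_walk(adj, seed, steps):
        """No-retracing random walk: avoid the node we just came from,
        falling back to retracing only at a dead end."""
        walk, prev, node = [seed], None, seed
        for _ in range(steps):
            choices = [v for v in adj[node] if v != prev] or adj[node]
            prev, node = node, random.choice(choices)
            walk.append(node)
        return walk

    # Tiny illustrative graph as adjacency lists.
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    sample = nr_random_walk(adj, seed=0, steps=20)
    print(sample, "unique nodes visited:", len(set(sample)))
    ```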

  5. Sampling error in timber surveys

    Treesearch

    Austin Hasel

    1938-01-01

    Various sampling strategies are evaluated for efficiency in an interior ponderosa pine forest. In a 5,760-acre tract, efficiency was gained by stratifying into quarter-acre blocks and sampling randomly within them. A systematic cruise was found to be superior for volume estimation.

  6. Are Uncultivated Bacteria Really Uncultivable?

    PubMed Central

    Puspita, Indun Dewi; Kamagata, Yoichi; Tanaka, Michiko; Asano, Kozo; Nakatsu, Cindy H.

    2012-01-01

    Many strategies have been used to increase the number of bacterial cells that can be grown from environmental samples, but cultivation efficiency remains a challenge for microbial ecologists. The difficulty of cultivating a fraction of the bacteria in environmental samples can be classified into two non-exclusive categories. The first is bacterial taxa with no cultivated representatives, for which the laboratory conditions necessary for growth are yet to be identified. The other is cells in a non-dividing state (also known as dormant, or viable but non-culturable, cells) that require the removal or addition of certain factors to re-initiate growth. A number of strategies, from simple to high-throughput techniques, are reviewed that have been used to increase the cultivation efficiency of environmental samples. Some of the underlying mechanisms that contribute to the success of these cultivation strategies are described. Overall, this review emphasizes the need for researchers to first understand the factors that are hindering cultivation in order to identify the best strategies for improving cultivation efficiency. PMID:23059723

  7. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
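
    The sampling scheme itself (a random start, then sites at fixed intervals) takes only a few lines of code. The sketch below is a generic illustration of systematic random sampling over a rectangular region, not the authors' software; the section extent and step sizes are arbitrary:

    ```python
    import random

    def systematic_grid(width, height, dx, dy):
        """Systematic random sampling: one uniformly random offset per axis,
        then sites at fixed (dx, dy) intervals across the region."""
        x0, y0 = random.uniform(0, dx), random.uniform(0, dy)
        xs = [x0 + i * dx for i in range(int((width - x0) // dx) + 1)]
        ys = [y0 + j * dy for j in range(int((height - y0) // dy) + 1)]
        return [(round(x, 1), round(y, 1)) for x in xs for y in ys]

    # Hypothetical section extent and sampling intervals, in micrometres.
    sites = systematic_grid(width=2000, height=1500, dx=400, dy=400)
    print(len(sites), "sites, first three:", sites[:3])
    ```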

  8. An Immunization Strategy for Hidden Populations.

    PubMed

    Chen, Saran; Lu, Xin

    2017-06-12

    Hidden populations, such as injecting drug users (IDUs), sex workers (SWs) and men who have sex with men (MSM), are considered at high risk of contracting and transmitting infectious diseases such as AIDS, gonorrhea and syphilis. However, public health interventions in such groups are hindered by strong privacy concerns and a lack of global information, which traditional strategies such as targeted immunization and acquaintance immunization require. In this study, we introduce an innovative intervention strategy to be used in combination with respondent-driven sampling (RDS), a sampling approach that is widely used for hidden populations. The RDS strategy is implemented in two steps. First, RDS is used to estimate the average degree (personal network size) and degree distribution of the target population from sample data. Second, a cut-off threshold is calculated and used to screen the respondents to be immunized. Simulations on model networks and real-world networks reveal that the efficiency of the RDS strategy is close to that of the targeted strategy. As the new strategy can be implemented within the RDS sampling process, it provides a cost-efficient and feasible approach to disease intervention and control for hidden populations.
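
    The two steps (estimate degrees from the RDS sample, then screen against a cut-off) can be sketched as follows. The inverse-degree weighting is the standard RDS-II style correction for the over-recruitment of well-connected respondents; the Pareto degree data and the "twice the mean" cut-off are hypothetical stand-ins, since the paper derives its own threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical self-reported degrees (personal network sizes) from an RDS
    # sample of 500 respondents; heavy-tailed, as is typical of such networks.
    degrees = rng.pareto(2.0, size=500) * 3.0 + 1.0

    # Step 1: estimate the population mean degree. Because RDS recruits
    # high-degree individuals more often, a harmonic-mean (inverse-degree
    # weighted) estimator corrects the bias.
    mean_degree = degrees.size / np.sum(1.0 / degrees)

    # Step 2: screen respondents against a cut-off threshold. "Twice the
    # estimated mean" is a hypothetical choice, for illustration only.
    cutoff = 2.0 * mean_degree
    print(f"estimated mean degree: {mean_degree:.1f}")
    print(f"fraction selected for immunization: {(degrees > cutoff).mean():.1%}")
    ```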

  9. Comparison of Depletion Strategies for the Enrichment of Low-Abundance Proteins in Urine.

    PubMed

    Filip, Szymon; Vougas, Konstantinos; Zoidakis, Jerome; Latosinska, Agnieszka; Mullen, William; Spasovski, Goce; Mischak, Harald; Vlahou, Antonia; Jankowski, Joachim

    2015-01-01

    Proteome analysis of complex biological samples for biomarker identification remains challenging, among other reasons because of the extended range of protein concentrations. High-abundance proteins, such as albumin and IgG in plasma and urine, may interfere with the detection of potential disease biomarkers. Currently, several options are available for the depletion of abundant proteins in plasma. However, the applicability of these methods to urine has not been thoroughly investigated. In this study, we compared different commercially available immunodepletion and ion-exchange based approaches on urine samples from both healthy subjects and CKD patients, for their reproducibility and efficiency in protein depletion. A starting urine volume of 500 μL was used to simulate the conditions of a multi-institutional biomarker discovery study. All depletion approaches showed satisfactory reproducibility (n=5) in protein identification as well as protein abundance. Comparison of the depletion efficiency between the unfractionated and fractionated samples and across the different depletion strategies showed efficient depletion in all cases, with the exception of the ion-exchange kit. The depletion efficiency was slightly higher in normal than in CKD samples, and normal samples yielded more protein identifications than CKD samples when using both the initial and the corresponding depleted fractions. Along these lines, a decrease in the amount of albumin and other targets, as applicable, was observed following depletion. Nevertheless, these depletion strategies did not yield a higher number of identifications in urine from either normal subjects or CKD patients. Collectively, when analyzing urine in the context of CKD biomarker identification, no added value of depletion strategies was observed, and analysis of unfractionated starting urine appears to be preferable.

  10. Comparison of Depletion Strategies for the Enrichment of Low-Abundance Proteins in Urine

    PubMed Central

    Filip, Szymon; Vougas, Konstantinos; Zoidakis, Jerome; Latosinska, Agnieszka; Mullen, William; Spasovski, Goce; Mischak, Harald; Vlahou, Antonia; Jankowski, Joachim

    2015-01-01

    Proteome analysis of complex biological samples for biomarker identification remains challenging, among other reasons because of the extended range of protein concentrations. High-abundance proteins, such as albumin and IgG in plasma and urine, may interfere with the detection of potential disease biomarkers. Currently, several options are available for the depletion of abundant proteins in plasma. However, the applicability of these methods to urine has not been thoroughly investigated. In this study, we compared different commercially available immunodepletion and ion-exchange based approaches on urine samples from both healthy subjects and CKD patients, for their reproducibility and efficiency in protein depletion. A starting urine volume of 500 μL was used to simulate the conditions of a multi-institutional biomarker discovery study. All depletion approaches showed satisfactory reproducibility (n=5) in protein identification as well as protein abundance. Comparison of the depletion efficiency between the unfractionated and fractionated samples and across the different depletion strategies showed efficient depletion in all cases, with the exception of the ion-exchange kit. The depletion efficiency was slightly higher in normal than in CKD samples, and normal samples yielded more protein identifications than CKD samples when using both the initial and the corresponding depleted fractions. Along these lines, a decrease in the amount of albumin and other targets, as applicable, was observed following depletion. Nevertheless, these depletion strategies did not yield a higher number of identifications in urine from either normal subjects or CKD patients. Collectively, when analyzing urine in the context of CKD biomarker identification, no added value of depletion strategies was observed, and analysis of unfractionated starting urine appears to be preferable. PMID:26208298

  11. Learning Efficiency of Two ICT-Based Instructional Strategies in Greek Sheep Farmers

    ERIC Educational Resources Information Center

    Bellos, Georgios; Mikropoulos, Tassos A.; Deligeorgis, Stylianos; Kominakis, Antonis

    2016-01-01

    Purpose: The objective of the present study was to compare the learning efficiency of two information and communications technology (ICT)-based instructional strategies (multimedia presentation (MP) and concept mapping) in a sample (n = 187) of Greek sheep farmers operating mainly in Western Greece. Design/methodology/approach: In total, 15…

  12. A Comparison of Three Online Recruitment Strategies for Engaging Parents

    PubMed Central

    Dworkin, Jodi; Hessel, Heather; Gliske, Kate; Rudi, Jessie H.

    2017-01-01

    Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design. PMID:28804184

  13. A Comparison of Three Online Recruitment Strategies for Engaging Parents.

    PubMed

    Dworkin, Jodi; Hessel, Heather; Gliske, Kate; Rudi, Jessie H

    2016-10-01

    Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design.

  14. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    USDA-ARS?s Scientific Manuscript database

    Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...

  15. Hospital electronic medical record enterprise application strategies: do they matter?

    PubMed

    Fareed, Naleef; Ozcan, Yasar A; DeShazo, Jonathan P

    2012-01-01

    Successful implementations and the ability to reap the benefits of electronic medical record (EMR) systems may be correlated with the type of enterprise application strategy that an administrator chooses when acquiring an EMR system. Moreover, identifying the most optimal enterprise application strategy is a task that may have important linkages with hospital performance. This study explored whether hospitals that have adopted differential EMR enterprise application strategies concomitantly differ in their overall efficiency. Specifically, the study examined whether hospitals with a single-vendor strategy had a higher likelihood of being efficient than those with a best-of-breed strategy and whether hospitals with a best-of-suite strategy had a higher probability of being efficient than those with best-of-breed or single-vendor strategies. A conceptual framework was used to formulate testable hypotheses. A retrospective cross-sectional approach using data envelopment analysis was used to obtain efficiency scores of hospitals by EMR enterprise application strategy. A Tobit regression analysis was then used to determine the probability of a hospital being inefficient as related to its EMR enterprise application strategy, while moderating for the hospital's EMR "implementation status" and controlling for hospital and market characteristics. The data envelopment analysis of hospitals suggested that only 32 hospitals were efficient in the study's sample of 2,171 hospitals. The results from the post hoc analysis showed partial support for the hypothesis that hospitals with a best-of-suite strategy were more likely to be efficient than those with a single-vendor strategy. This study underscores the importance of understanding the differences between the three strategies discussed in this article. On the basis of the findings, hospital administrators should consider the efficiency associations that a specific strategy may have compared with another prior to moving toward an enterprise application strategy.

  16. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Treesearch

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  17. Preserving correlations between trajectories for efficient path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingrich, Todd R.; Geissler, Phillip L.; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-06-21

    Importance sampling of trajectories has proved a uniquely successful strategy for exploring rare dynamical behaviors of complex systems in an unbiased way. Carrying out this sampling, however, requires an ability to propose changes to dynamical pathways that are substantial, yet sufficiently modest to obtain reasonable acceptance rates. Satisfying this requirement becomes very challenging in the case of long trajectories, due to the characteristic divergences of chaotic dynamics. Here, we examine schemes for addressing this problem, which engineer correlation between a trial trajectory and its reference path, for instance using artificial forces. Our analysis is facilitated by a modern perspective on Markov chain Monte Carlo sampling, inspired by non-equilibrium statistical mechanics, which clarifies the types of sampling strategies that can scale to long trajectories. Viewed in this light, the most promising such strategy guides a trial trajectory by manipulating the sequence of random numbers that advance its stochastic time evolution, as done in a handful of existing methods. In cases where this “noise guidance” synchronizes trajectories effectively, such as the Glauber dynamics of a two-dimensional Ising model, we show that efficient path sampling can be achieved for even very long trajectories.
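
    The "noise guidance" idea (propose a new path by perturbing the driving noise rather than the path itself) can be demonstrated on a toy stochastic system. The sketch below is a loose illustration, not the paper's Ising/Glauber setting: an overdamped double-well Langevin walker whose path is a deterministic function of its stored noise sequence, so a mild noise perturbation yields a strongly correlated trial trajectory:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def trajectory(noise, x0=0.0, dt=1e-3):
        """Overdamped Langevin dynamics in the double well V(x) = (x^2 - 1)^2,
        driven by a stored noise sequence, so the path is a deterministic
        function of that sequence."""
        x = np.empty(len(noise) + 1)
        x[0] = x0
        for i, xi in enumerate(noise):
            force = -4.0 * x[i] * (x[i] ** 2 - 1.0)
            x[i + 1] = x[i] + force * dt + np.sqrt(2.0 * dt) * xi
        return x

    T = 5000
    ref_noise = rng.standard_normal(T)
    ref = trajectory(ref_noise)

    # Noise-guidance proposal: perturb the driving noise slightly instead of
    # the path itself; the trial path stays strongly correlated with the
    # reference, which is what keeps acceptance rates reasonable.
    alpha = 0.95
    trial_noise = alpha * ref_noise + np.sqrt(1.0 - alpha ** 2) * rng.standard_normal(T)
    trial = trajectory(trial_noise)
    print("path correlation:", round(float(np.corrcoef(ref, trial)[0, 1]), 3))
    ```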

  18. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  19. Beating the curse of dimension with accurate statistics for the Fokker-Planck equation in complex turbulent systems.

    PubMed

    Chen, Nan; Majda, Andrew J

    2017-12-05

    Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.

  20. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing numbers of sample points and input parameter dimensions. Since the computational time and effort for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
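
    The two initial designs being compared (random LHS vs. midpoint LHS) differ only in where each point sits within its hypercube interval. The sketch below generates both and scores them with a simple minimum-pairwise-distance space-filling measure; it compares raw initial designs only, before any optimization of the kind the paper studies:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def lhs(n, d, midpoint=False):
        """Latin hypercube sample: each dimension is cut into n intervals and
        each interval is used exactly once. midpoint=True centers every point
        in its interval; otherwise the position is uniform within it."""
        u = 0.5 if midpoint else rng.uniform(size=(n, d))
        perms = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (perms + u) / n

    def min_pairwise_distance(x):          # crude space-filling measure
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d[d > 0].min()

    for flag in (False, True):
        design = lhs(50, 2, midpoint=flag)
        print(f"midpoint={flag}: min pairwise distance = "
              f"{min_pairwise_distance(design):.3f}")
    ```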

  1. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets with unbalanced sample-size distributions are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset: it has multiple classes with small sample sizes, making it a suitable test case for our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmia, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite a high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
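
    A generic version of this pipeline (rebalance classes by simple random resampling, then train a Random Forest) is easy to reproduce with scikit-learn. The synthetic dataset below merely mimics the paper's imbalanced multiclass setting; it is not the arrhythmia data, and the paper's correlation-based feature selection step is omitted:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.utils import resample

    # Synthetic imbalanced multiclass data standing in for the arrhythmia set.
    X, y = make_classification(n_samples=450, n_classes=3, n_informative=6,
                               weights=[0.8, 0.15, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Simple random over-sampling: resample each minority class with
    # replacement up to the majority class size before training.
    n_max = max(np.bincount(y_tr))
    parts = [resample(X_tr[y_tr == c], y_tr[y_tr == c],
                      n_samples=n_max, random_state=0) for c in np.unique(y_tr)]
    X_bal = np.vstack([p[0] for p in parts])
    y_bal = np.concatenate([p[1] for p in parts])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
    ```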

  2. Efficiency and factors influencing efficiency of Community Health Strategy in providing Maternal and Child Health services in Mwingi District, Kenya: an expert opinion perspective

    PubMed Central

    Nzioki, Japheth Mativo; Onyango, Rosebella Ogutu; Ombaka, James Herbert

    2015-01-01

    Introduction The Community Health Strategy (CHS) is a new Primary Health Care (PHC) model designed to provide PHC services in Kenya. In 2011, the CHS was initiated in Mwingi district as one of the components of the APHIA plus Kamili program. The objectives of this study were to evaluate the efficiency of the CHS in providing Maternal and Child Health (MCH) services in Mwingi district and to establish the factors influencing that efficiency. Methods This was a qualitative study. Fifteen key informants were sampled from key stakeholders. Sampling was done using purposive and maximum variation sampling methods. Semi-structured in-depth interviews were used for data collection. Data were managed and analyzed using NVivo. Framework analysis and quasi-statistics were used in data analysis. Results Expert opinion data indicated that the CHS was efficient in providing MCH services. Factors influencing the efficiency of the CHS in the provision of MCH services were: challenges facing Community Health Workers (CHWs), social, cultural and economic factors influencing MCH in the district, and motivation among CHWs. Conclusion Though the CHS was found to be efficient in providing MCH services, this was an expert opinion perspective; a quantitative Cost Effectiveness Analysis (CEA) to confirm these findings is recommended. To improve the efficiency of the CHS in the district, the challenges facing CHWs and the social, cultural and economic factors that influence its efficiency need to be addressed. PMID:26090046

  3. Facebook or Twitter?: Effective recruitment strategies for family caregivers.

    PubMed

    Herbell, Kayla; Zauszniewski, Jaclene A

    2018-06-01

    This brief details recent recruitment insights from a large, all-online study of family caregivers that aimed to develop a measure of how family caregivers manage daily stresses. Online recruitment strategies included the use of Twitter and Facebook. Overall, 800 individuals responded to the recruitment strategy; 230 completed all study procedures. The most effective online recruitment strategy for targeting family caregivers was Facebook, which yielded 86% of the sample. Future researchers may find social media recruitment methods appealing because they are inexpensive, simple, and efficient means of obtaining national samples. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Efficient Simulation of Tropical Cyclone Pathways with Stochastic Perturbations

    NASA Astrophysics Data System (ADS)

    Webber, R.; Plotkin, D. A.; Abbot, D. S.; Weare, J.

    2017-12-01

    Global Climate Models (GCMs) are known to statistically underpredict intense tropical cyclones (TCs) because they fail to capture the rapid intensification and high wind speeds characteristic of the most destructive TCs. Stochastic parametrization schemes have the potential to improve the accuracy of GCMs. However, current analysis of these schemes through direct sampling is limited by the computational expense of simulating a rare weather event at fine spatial gridding. The present work introduces a stochastically perturbed parametrization tendency (SPPT) scheme to increase simulated intensity of TCs. We adapt the Weighted Ensemble algorithm to simulate the distribution of TCs at a fraction of the computational effort required in direct sampling. We illustrate the efficiency of the SPPT scheme by comparing simulations at different spatial resolutions and stochastic parameter regimes. Stochastic parametrization and rare event sampling strategies have great potential to improve TC prediction and aid understanding of tropical cyclogenesis. Since rising sea surface temperatures are postulated to increase the intensity of TCs, these strategies can also improve predictions about climate change-related weather patterns. The rare event sampling strategies used in the current work are not only a novel tool for studying TCs, but they may also be applied to sampling any range of extreme weather events.

  5. Evaluation of a Urine Pooling Strategy for the Rapid and Cost-Efficient Prevalence Classification of Schistosomiasis.

    PubMed

    Lo, Nathan C; Coulibaly, Jean T; Bendavid, Eran; N'Goran, Eliézer K; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I; Andrews, Jason R

    2016-08-01

    A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. We performed a cross-sectional study involving 114 children aged 6-15 years in six neighborhoods in Azaguié Ahoua, south Côte d'Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0-97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy.

  6. Evaluation of a Urine Pooling Strategy for the Rapid and Cost-Efficient Prevalence Classification of Schistosomiasis

    PubMed Central

    Coulibaly, Jean T.; Bendavid, Eran; N’Goran, Eliézer K.; Utzinger, Jürg; Keiser, Jennifer; Bogoch, Isaac I.; Andrews, Jason R.

    2016-01-01

    Background A key epidemiologic feature of schistosomiasis is its focal distribution, which has important implications for the spatial targeting of preventive chemotherapy programs. We evaluated the diagnostic accuracy of a urine pooling strategy using a point-of-care circulating cathodic antigen (POC-CCA) cassette test for detection of Schistosoma mansoni, and employed simulation modeling to test the classification accuracy and efficiency of this strategy in determining where preventive chemotherapy is needed in low-endemicity settings. Methodology We performed a cross-sectional study involving 114 children aged 6–15 years in six neighborhoods in Azaguié Ahoua, south Côte d’Ivoire to characterize the sensitivity and specificity of the POC-CCA cassette test with urine samples that were tested individually and in pools of 4, 8, and 12. We used a Bayesian latent class model to estimate test characteristics for individual POC-CCA and quadruplicate Kato-Katz thick smears on stool samples. We then developed a microsimulation model and used lot quality assurance sampling to test the performance, number of tests, and total cost per school for each pooled testing strategy to predict the binary need for school-based preventive chemotherapy using a 10% prevalence threshold for treatment. Principal Findings The sensitivity of the urine pooling strategy for S. mansoni diagnosis using pool sizes of 4, 8, and 12 was 85.9%, 79.5%, and 65.4%, respectively, when POC-CCA trace results were considered positive, and 61.5%, 47.4%, and 30.8% when POC-CCA trace results were considered negative. The modeled specificity ranged from 94.0–97.7% for the urine pooling strategies (when POC-CCA trace results were considered negative). The urine pooling strategy, regardless of the pool size, gave comparable and often superior classification performance to stool microscopy for the same number of tests. The urine pooling strategy with a pool size of 4 reduced the number of tests and total cost compared to classical stool microscopy. Conclusions/Significance This study introduces a method for rapid and efficient S. mansoni prevalence estimation through examining pooled urine samples with POC-CCA as an alternative to widely used stool microscopy. PMID:27504954
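
    The efficiency gain from pooling can be sanity-checked with a short simulation. The sketch below uses generic two-stage (Dorfman-style) pooling with a perfect assay, retesting members of positive pools individually; the study's actual protocol classifies school-level prevalence directly from pool results and accounts for imperfect POC-CCA sensitivity, so the numbers here are only indicative:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def expected_tests(prevalence, pool_size, n_people=114, reps=5000):
        """Two-stage pooling with a perfect assay: test every pool once,
        then retest each member of the positive pools individually."""
        total = 0
        for _ in range(reps):
            status = rng.random(n_people) < prevalence
            n_pools = int(np.ceil(n_people / pool_size))
            tests = n_pools
            for i in range(n_pools):
                pool = status[i * pool_size:(i + 1) * pool_size]
                if pool.any():
                    tests += pool.size       # individual follow-up tests
            total += tests
        return total / reps

    for size in (4, 8, 12):
        print(f"pool size {size:2d}: ~{expected_tests(0.10, size):.0f} tests "
              f"on average vs 114 individual tests")
    ```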

  7. TU-AB-BRC-11: Moving a GPU-OpenCL-Based Monte Carlo (MC) Dose Engine Towards Routine Clinical Use: Automatic Beam Commissioning and Efficient Source Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Folkerts, M; Jiang, S

    Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian Truebeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control point by control point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle, according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the iso-center plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of depth dose in build-up regions by 36.2% on average, bringing it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences in output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10FFF. Our new sampling strategy was tested on a Head&Neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ∼2.8 for a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
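
    The inverse sampling step described in the Methods (assigning each source particle a control-point index in proportion to MLC leaf-open duration, instead of sampling control points one by one and rejecting blocked particles) amounts to inverse-CDF sampling over a discrete distribution. A minimal sketch with made-up leaf-open durations:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical MLC leaf-open durations per control point at one pixel
    # (arbitrary units); longer-open control points should receive more
    # source particles.
    open_time = np.array([0.0, 1.5, 3.0, 0.5, 2.0])

    # Inverse-CDF sampling of the control-point index: build the cumulative
    # distribution of open durations, then map uniform deviates through it.
    cdf = np.cumsum(open_time) / open_time.sum()
    u = rng.random(100000)
    cp_index = np.searchsorted(cdf, u)

    counts = np.bincount(cp_index, minlength=len(open_time))
    print("sampled fractions:", np.round(counts / counts.sum(), 3))
    print("target fractions: ", np.round(open_time / open_time.sum(), 3))
    ```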

  8. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  9. VARIANCE ESTIMATION FOR SPATIALLY BALANCED SAMPLES OF ENVIRONMENTAL RESOURCES

    EPA Science Inventory

    The spatial distribution of a natural resource is an important consideration in designing an efficient survey or monitoring program for the resource. We review a unified strategy for designing probability samples of discrete, finite resource populations, such as lakes within som...

  10. Use of an online extraction liquid chromatography quadrupole time-of-flight tandem mass spectrometry method for the characterization of polyphenols in Citrus paradisi cv. Changshanhuyu peel.

    PubMed

    Tong, Chaoying; Peng, Mijun; Tong, Runna; Ma, Ruyi; Guo, Keke; Shi, Shuyun

    2018-01-19

    Chemical profiling of natural products by high performance liquid chromatography (HPLC) is critical for understanding their clinical bioactivities, and sample pretreatment steps have been considered a bottleneck for analysis. Currently, concerted efforts are being made to develop sample pretreatment methods with high efficiency and low solvent and time consumption. Here, a simple and efficient online extraction (OLE) strategy coupled with HPLC-diode array detector-quadrupole time-of-flight tandem mass spectrometry (HPLC-DAD-QTOF-MS/MS) was developed for rapid chemical profiling. In the OLE strategy, a guard column packed with ground sample (2 mg), instead of a sample loop, is connected to the manual injection valve, so that components are extracted and transferred directly to the HPLC-DAD-QTOF-MS/MS system by the mobile phase alone, without any extra time, solvent, instrumentation or operation. Compared with offline heat-reflux extraction of Citrus paradisi cv. Changshanhuyu (Changshanhuyu) peel, the OLE strategy showed higher extraction efficiency, perhaps because of the high pressure and gradient elution mode. A total of twenty-two secondary metabolites were detected according to their retention times, UV spectra, exact masses, and fragment ions in MS/MS spectra, and nine of them were, to our knowledge, discovered in Changshanhuyu peel for the first time. It is concluded that the developed OLE-HPLC-DAD-QTOF-MS/MS system offers new perspectives for rapid chemical profiling of natural products. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Online extraction-high performance liquid chromatography-diode array detector-quadrupole time-of-flight tandem mass spectrometry for rapid flavonoid profiling of Fructus aurantii immaturus.

    PubMed

    Tong, Runna; Peng, Mijun; Tong, Chaoying; Guo, Keke; Shi, Shuyun

    2018-03-01

    Chemical profiling of natural products by high performance liquid chromatography (HPLC) is critical for understanding their clinical bioactivities, and sample pretreatment steps have been considered a bottleneck for analysis. Currently, concerted efforts are being made to develop sample pretreatment methods with high efficiency and low solvent and time consumption. Here, a simple and efficient online extraction (OLE) strategy coupled with HPLC-diode array detector-quadrupole time-of-flight tandem mass spectrometry (HPLC-DAD-QTOF-MS/MS) was developed for rapid chemical profiling. In the OLE strategy, a guard column packed with ground sample (2 mg), instead of a sample loop, is connected to the manual injection valve, so that components are extracted and transferred directly to the HPLC-DAD-QTOF-MS/MS system by the mobile phase alone, without any extra time, solvent, instrumentation or operation. Compared with offline heat-reflux extraction of Fructus aurantii immaturus (Zhishi), the OLE strategy showed higher extraction efficiency, perhaps because of the high pressure and gradient elution mode. A total of eighteen flavonoids were detected according to their retention times, UV spectra, exact masses, and fragment ions in MS/MS spectra, and compound 9, natsudaidain-3-O-glucoside, was discovered in Zhishi for the first time. It is concluded that the developed OLE-HPLC-DAD-QTOF-MS/MS system offers new perspectives for rapid chemical profiling of natural products. Copyright © 2018. Published by Elsevier B.V.

  12. Multi-source recruitment strategies for advancing addiction recovery research beyond treated samples

    PubMed Central

    Subbaraman, Meenakshi Sabina; Laudet, Alexandre B.; Ritter, Lois A.; Stunz, Aina; Kaskutas, Lee Ann

    2014-01-01

    Background The lack of established sampling frames makes it difficult to reach individuals in recovery from substance problems. Although general population studies are the most generalizable, the low prevalence of individuals in recovery makes this strategy costly and inefficient. Though more efficient, treatment samples are biased. Aims To describe multi-source recruitment for capturing participants from heterogeneous pathways to recovery; to assess which sources produced the most respondents within subgroups; and to compare treatment and non-treatment samples to address generalizability. Results Family/friends, Craigslist, social media and non-12-step groups produced the most respondents from hard-to-reach groups, such as racial minorities and treatment-naïve individuals. Recovery organizations yielded twice as many African-Americans and more rural dwellers, while social media yielded twice as many young people as other sources. Treatment samples had proportionally fewer females and older individuals than non-treated samples. Conclusions Future research on recovery should utilize previously neglected recruiting strategies to maximize the representativeness of samples. PMID:26166909

  13. Pooled HIV-1 viral load testing using dried blood spots to reduce the cost of monitoring antiretroviral treatment in a resource-limited setting.

    PubMed

    Pannus, Pieter; Fajardo, Emmanuel; Metcalf, Carol; Coulborn, Rebecca M; Durán, Laura T; Bygrave, Helen; Ellman, Tom; Garone, Daniela; Murowa, Michael; Mwenda, Reuben; Reid, Tony; Preiser, Wolfgang

    2013-10-01

    Rollout of routine HIV-1 viral load monitoring is hampered by high costs and logistical difficulties associated with sample collection and transport. New strategies are needed to overcome these constraints. Dried blood spots from finger pricks have been shown to be more practical than the use of plasma specimens, and pooling strategies using plasma specimens have been demonstrated to be an efficient method to reduce costs. This study found that combination of finger-prick dried blood spots and a pooling strategy is a feasible and efficient option to reduce costs, while maintaining accuracy in the context of a district hospital in Malawi.

  14. Adaptive sampling strategies with high-throughput molecular dynamics

    NASA Astrophysics Data System (ADS)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now based mostly on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling lies in designing strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high-dimensional dynamical systems and on optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.

  15. Effective surveillance strategies following a potential classical Swine Fever incursion in a remote wild pig population in North-Western Australia.

    PubMed

    Leslie, E; Cowled, B; Graeme Garner, M; Toribio, J-A L M L; Ward, M P

    2014-10-01

    Early disease detection and efficient methods of proving disease freedom can substantially improve the response to incursions of important transboundary animal diseases in previously free regions. We used a spatially explicit, stochastic disease spread model to simulate the spread of classical swine fever in wild pigs in a remote region of northern Australia and to assess the performance of disease surveillance strategies to detect infection at different time points and to delineate the size of the resulting outbreak. Although disease would likely be detected, simple random sampling was suboptimal. Radial and leapfrog sampling improved the effectiveness of surveillance at various stages of the simulated disease incursion. This work indicates that at earlier stages, radial sampling can reduce epidemic length and achieve faster outbreak delineation and control, but at later stages leapfrog sampling will outperform radial sampling in relation to supporting faster disease control with a less-extensive outbreak area. Due to the complexity of wildlife population dynamics and group behaviour, a targeted approach to surveillance needs to be implemented for the efficient use of resources and time. Using a more situation-based surveillance approach and accounting for disease distribution and the time period over which an epidemic has occurred is the best way to approach the selection of an appropriate surveillance strategy. © 2013 Blackwell Verlag GmbH.

  16. Sampling strategies for efficient estimation of tree foliage biomass

    Treesearch

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  17. Alternative Loglinear Smoothing Models and Their Effect on Equating Function Accuracy. Research Report. ETS RR-09-48

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2009-01-01

    This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…

  18. Consistency assessment with global and bridging development strategies in emerging markets.

    PubMed

    Li, Gang; Chen, Josh; Quan, Hui; Shentu, Yue

    2013-11-01

    A global trial strategy with the participation of all major regions, including countries from emerging markets, clearly increases new drug development efficiency. Nevertheless, there are circumstances in which some countries in emerging markets cannot join the original global trial. To evaluate the extrapolability of the original trial results to a new country, a bridging trial must be conducted in that country. In this paper, we first evaluate the efficiency loss of the bridging trial strategy relative to the global trial strategy as a function of between-study variability, from a consistency assessment perspective. The evidence provided should encourage countries in emerging markets to make a greater effort to participate in the original global trial. We then discuss the sample size required to achieve a desired assurance probability for consistency assessment, based on various approaches, for both the global and bridging trial strategies. Examples are presented for numerical demonstration and comparison. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    PubMed

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data for polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulations were performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria than the other two approaches. A fixed sampling site requires the smallest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to required sample sizes, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
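
    The sample-size logic of the study can be reproduced in miniature with a Monte Carlo power calculation. The sketch below assumes lognormal contaminant concentrations, an invented starting level of 100 ng/g, and a coefficient of variation of 0.5 (these values, and the use of scipy's linregress, are illustrative assumptions rather than the study's actual simulation):

        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(42)

        def trend_power(n_per_year, years=10, annual_change=-0.05, cv=0.5, reps=500):
            # fraction of simulated monitoring series in which a log-linear
            # regression detects the trend at the 5% significance level
            t = np.repeat(np.arange(years), n_per_year)
            sigma = np.sqrt(np.log(1 + cv ** 2))     # lognormal scale from the CV
            hits = 0
            for _ in range(reps):
                y = np.log(100.0) + t * np.log(1 + annual_change) \
                    + rng.normal(0.0, sigma, t.size)
                hits += linregress(t, y).pvalue < 0.05
            return hits / reps

        # smallest yearly sample size reaching 80% power
        for n in range(2, 30):
            if trend_power(n) >= 0.80:
                print("required sample size per year:", n)
                break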

  20. Comparison of fuel value and combustion characteristics of two different RDF samples.

    PubMed

    Sever Akdağ, A; Atımtay, A; Sanin, F D

    2016-01-01

    Generation of Municipal Solid Waste (MSW) tends to increase with the growing population and economic development of society; therefore, establishing environmentally sustainable waste management strategies is crucial. In this sense, waste-to-energy strategies have come into prominence, since they increase resource efficiency and replace fossil fuels with renewable energy sources by enabling material and energy recovery instead of landfill disposal of the wastes. Refuse Derived Fuel (RDF), an alternative fuel produced from energy-rich MSW materials diverted from landfills, is one of the waste-to-energy strategies gaining increasing attention. This study aims to investigate the thermal characteristics and co-combustion efficiency of two RDF samples in Turkey. Proximate, ultimate and thermogravimetric analyses (TGA) were conducted on these samples. Furthermore, the elemental compositions of ash from the RDF samples were determined by X-Ray Fluorescence (XRF) analysis. The RDF samples were combusted alone and co-combusted in mixtures with coal and petroleum coke in a lab-scale reactor at set percentages on an energy basis (3%, 5%, 10%, 20% and 30%), and the co-combustion processes and efficiencies were investigated. It was found that the calorific values of the RDF samples on a dry basis were close to that of coal and slightly lower than that of the petroleum coke used in this study. The analysis also indicated that when RDF in the mixture exceeded 10%, the CO concentration in the flue gas increased and the combustion efficiency therefore decreased, while the combustion characteristics changed from char combustion to volatile combustion. However, RDF addition to the fuel mixtures decreased the SO2 emission and did not change the NOx profiles. XRF analysis also showed that the slagging and fouling potential of RDF combustion was a function of the RDF portion in the fuel blend. When the RDF was combusted alone, the slagging and fouling indices of its ash were found to exceed the limit values, indicating that slagging and fouling would occur. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROM. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% efficiency improvement over the estimated mean squared error prediction algorithm while delivering the same level of prediction accuracy.
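
    As a rough illustration of the adding-point idea, the sketch below runs a plain fuzzy c-means over existing snapshot locations, uses a (pretend) error indicator to flag the cluster where the ROM is least precise, and adds the candidate point nearest that cluster's centre. The error indicator, cluster count, and candidate pool are all invented for the example; the paper's exact indicator is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def fuzzy_cmeans(X, c=4, m=2.0, iters=100):
            # standard fuzzy c-means: returns cluster centers and memberships U (n x c)
            U = rng.dirichlet(np.ones(c), size=len(X))
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                p = 2.0 / (m - 1.0)
                U = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
            return centers, U

        X = rng.uniform(-1, 1, size=(40, 2))           # initial snapshot locations
        err = np.exp(-np.sum((X - 0.5) ** 2, axis=1))  # pretend ROM error indicator

        centers, U = fuzzy_cmeans(X)
        worst = np.argmax((U.T @ err) / U.sum(axis=0))  # cluster with highest mean error

        cand = rng.uniform(-1, 1, size=(500, 2))        # candidate pool
        new_point = cand[np.argmin(np.linalg.norm(cand - centers[worst], axis=1))]
        print("add sample at", new_point)               # refine the low-precision region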

  2. Column-coupling strategies for multidimensional electrophoretic separation techniques.

    PubMed

    Kler, Pablo A; Sydes, Daniel; Huhn, Carolin

    2015-01-01

    Multidimensional electrophoretic separations represent one of the most common strategies for dealing with the analysis of complex samples. In recent years we have witnessed the explosive growth of separation techniques for the analysis of complex samples in applications ranging from the life sciences to industry. In this sense, electrophoretic separations offer several strategic advantages, such as excellent separation efficiency, different methods with a broad range of separation mechanisms, and low liquid consumption, generating fewer waste effluents and lower costs per analysis, among others. Despite their impressive separation efficiency, multidimensional electrophoretic separations present some drawbacks that have delayed their widespread use: the volumes of the columns, and consequently of the injected sample, are significantly smaller than in other analytical techniques, so the coupling interfaces between two separation components must be very efficient in terms of providing geometrical precision with low dead volume. Likewise, very sensitive detection systems are required. Additionally, in electrophoretic separation techniques, the surface properties of the columns play a fundamental role in electroosmosis as well as in the unwanted adsorption of proteins or other complex biomolecules. The requirements for an efficient coupling of electrophoretic separation techniques therefore involve several aspects related to microfluidics and the physicochemical interactions of the electrolyte solutions with the solid capillary walls. It is interesting to see how these multidimensional electrophoretic separation techniques have been used jointly with different detection techniques, for intermediate detection as well as for final identification and quantification, which is particularly important in the case of mass spectrometry. In this work we present a critical review of the different strategies for coupling two or more electrophoretic separation techniques and of the different intermediate and final detection methods implemented for such separations.

  3. Consistently Sampled Correlation Filters with Space Anisotropic Regularization for Visual Tracking

    PubMed Central

    Shi, Guokai; Xu, Tingfa; Luo, Jiqiang; Li, Yuankun

    2017-01-01

    Most existing correlation filter-based tracking algorithms, which use fixed patches and cyclic shifts as training and detection measures, assume that the training samples are reliable and ignore the inconsistencies between training samples and detection samples. We propose to construct and study a consistently sampled correlation filter with space anisotropic regularization (CSSAR) to solve these two problems simultaneously. Our approach uses a spatiotemporally consistent sampling strategy to alleviate the redundancy in training samples caused by the cyclic shifts and to eliminate the inconsistencies between training and detection samples, and introduces space anisotropic regularization to constrain the correlation filter and alleviate drift caused by occlusion. Moreover, an optimization strategy based on the Gauss-Seidel method was developed to obtain robust and efficient online learning. Both qualitative and quantitative evaluations demonstrate that our tracker outperforms state-of-the-art trackers on object tracking benchmarks (OTBs). PMID:29231876

  4. Determining best practices in reconnoitering sites for habitability potential on Mars using a semi-autonomous rover: A GeoHeuristic Operational Strategies Test.

    PubMed

    Yingst, R A; Berger, J; Cohen, B A; Hynek, B; Schmidt, M E

    2017-03-01

    We tested science operations strategies developed for use in remote mobile spacecraft missions, to determine whether reconnoitering a site of potential habitability prior to in-depth study (a walkabout-first strategy) can be a more efficient use of time and resources than the linear approach commonly used by planetary rover missions. Two field teams studied a sedimentary sequence in Utah to assess habitability potential. At each site one team commanded a human "rover" to execute observations and conducted data analysis and made follow-on decisions based solely on those observations. Another team followed the same traverse using traditional terrestrial field methods, and the results of the two teams were compared. Test results indicate that for a mission with goals similar to our field case, the walkabout-first strategy may save time and other mission resources, while improving science return. The approach enabled more informed choices and higher team confidence in choosing where to spend time and other consumable resources. The walkabout strategy may prove most efficient when many close sites must be triaged to a smaller subset for detailed study or sampling. This situation would arise when mission goals include finding, identifying, characterizing or sampling a specific material, feature or type of environment within a certain area.

  5. Self-paced model learning for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin

    2017-01-01

    In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which involves many details that can affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the human cognitive process, in which learning generally proceeds from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking so that reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we establish the missing link between sample selection and model learning, combining the two into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve for the real-valued learning weights of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
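
    The single-objective idea can be made concrete with the classic hard-weighted self-paced scheme, alternating between selecting "easy" samples and refitting the model while the age parameter grows. The regression setting, thresholds, and schedule below are illustrative assumptions, not the tracker's actual appearance model:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true + rng.normal(0.0, 0.1, 200)
        y[:20] += rng.normal(0.0, 5.0, 20)       # corrupted, "hard" samples

        w, lam = np.zeros(3), 1.0                # lam is the self-paced age parameter
        for _ in range(5):
            loss = (X @ w - y) ** 2
            v = (loss < lam).astype(float)       # easy samples get weight 1, hard get 0
            w = np.linalg.lstsq(X * v[:, None], y * v, rcond=None)[0]
            lam *= 2.0                           # grow lam: admit harder samples over time
        print(np.round(w, 2))                    # the largest corruptions are never selected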

  6. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    PubMed

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of sampling points for each layer was calculated using the Hammond-McCullagh equation; thirdly, each sample point was located in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA were compared. Results The method proposed in this study (SOPA) had the smallest absolute error, 0.213 8; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.924 4. Conclusion The snail sampling strategy proposed in this study (SOPA) achieves higher estimation accuracy than the other four methods.
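
    For illustration, the stratified-allocation step can be approximated with Neyman allocation (sample size per stratum proportional to the stratum weight times its standard deviation), used here as a stand-in since the Hammond-McCullagh computation itself is not reproduced; the stratum shares and standard deviations are invented numbers:

        import numpy as np

        # five plant-abundance strata: area share W_h and (assumed) snail-density SD S_h
        share = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
        sd    = np.array([0.5, 1.0, 1.8, 2.6, 3.5])       # rising with plant abundance
        n_total = 150

        # Neyman allocation: n_h proportional to W_h * S_h
        n_h = np.round(n_total * share * sd / np.sum(share * sd)).astype(int)
        print(n_h, n_h.sum())    # rounding may shift the total by a unit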

  7. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling.

    PubMed

    Yang, Y Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and to calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy combining metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked it in the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and to investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.
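
    A minimal sketch of the metadynamics half of the idea, using Metropolis Monte Carlo on a 1D double well instead of molecular dynamics, and omitting the ITS/SITS component entirely; all parameters are illustrative:

        import numpy as np

        def potential(x):                # 1D double-well stand-in for the PES
            return x ** 4 - 2 * x ** 2

        hills, height, width = [], 0.1, 0.2

        def bias(x):
            # history-dependent bias: sum of Gaussian hills deposited so far
            if not hills:
                return 0.0
            c = np.asarray(hills)
            return float(np.sum(height * np.exp(-(x - c) ** 2 / (2 * width ** 2))))

        rng = np.random.default_rng(0)
        x, beta = -1.0, 3.0
        for step in range(20000):
            x_try = x + rng.normal(0.0, 0.1)
            dE = potential(x_try) + bias(x_try) - potential(x) - bias(x)
            if rng.random() < np.exp(-beta * dE):
                x = x_try
            if step % 200 == 0:
                hills.append(x)          # deposit a hill, discouraging revisits

        print(f"{len(hills)} hills deposited; final position {x:.2f}")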

  8. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    NASA Astrophysics Data System (ADS)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing; Yang, Lijiang; Gao, Yi Qin

    2016-03-01

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and to calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy combining metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked it in the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and to investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  9. Efficient sampling over rough energy landscapes with high barriers: A combination of metadynamics with integrated tempering sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. Isaac; Zhang, Jun; Che, Xing

    2016-03-07

    In order to efficiently overcome high free energy barriers embedded in a complex energy landscape and to calculate overall thermodynamic properties using molecular dynamics simulations, we developed and implemented a sampling strategy combining metadynamics with the (selective) integrated tempering sampling (ITS/SITS) method. The dominant local minima on the potential energy surface (PES) are partially exalted by accumulating history-dependent potentials as in metadynamics, and the sampling over the entire PES is further enhanced by ITS/SITS. With this hybrid method, the simulated system can be rapidly driven across the dominant barrier along selected collective coordinates. Then, ITS/SITS ensures a fast convergence of the sampling over the entire PES and an efficient calculation of the overall thermodynamic properties of the simulation system. To test the accuracy and efficiency of this method, we first benchmarked it in the calculation of the ϕ-ψ distribution of alanine dipeptide in explicit solvent. We further applied it to examine the design of template molecules for aromatic meta-C-H activation in solutions and to investigate solution conformations of the nonapeptide Bradykinin involving slow cis-trans isomerizations of three proline residues.

  10. Optimizing direct amplification of forensic commercial kits for STR determination.

    PubMed

    Caputo, M; Bobillo, M C; Sala, A; Corach, D

    2017-04-01

    Direct DNA amplification in forensic genotyping reduces analytical time when large sample sets are being analyzed. Amplification success depends mainly upon two factors: on one hand, the PCR chemistry and, on the other, the type of solid substrate on which the samples are deposited. We developed a workflow strategy aiming to optimize time and cost when starting from blood samples spotted onto diverse absorbent substrates. A set of 770 blood samples spotted onto Blood cards, Whatman® 3 MM paper, FTA™ Classic cards, and Whatman® Grade 1 was analyzed by a unified working strategy including a low-cost pre-treatment, a PCR amplification volume scale-down, and the use of the 3500 Genetic Analyzer as the analytical platform. Samples were analyzed using three different commercial multiplex STR direct amplification kits. The efficiency of the strategy was evidenced by a higher percentage of high-quality profiles obtained (over 94%), a reduced number of re-injections (average 3.2%), and a reduced amplification failure rate (lower than 5%). The average peak height ratio among the different commercial kits was 0.91, and the intra-locus balance showed values ranging from 0.92 to 0.94. A comparison with previously reported results was performed, demonstrating the efficiency of the proposed modifications. The protocol described herein showed high performance, producing optimal-quality profiles while being both time- and cost-effective. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  11. Outcomes of Zika virus infection during pregnancy: contributions to the debate on the efficiency of cohort studies.

    PubMed

    Duarte, Elisabeth Carmen; Garcia, Leila Posenato; de Araújo, Wildo Navegantes; Velez, Maria P

    2017-12-02

    Zika virus infection during pregnancy (ZIKVP) is known to be associated with adverse outcomes. Studies on this matter involve both rare outcomes and rare exposures, and the methodological choices are not straightforward. Cohort studies will surely offer more robust evidence, but their efficiency must be enhanced. We aim to contribute to the debate on sample selection strategies in cohort studies assessing outcomes associated with ZIKVP. A study can be statistically more efficient than another if its estimates are more accurate (precise and valid), even if the studies involve the same number of subjects. Sample size and specific design strategies can enhance or impair the statistical efficiency of a study, depending on how the subjects are distributed in the subgroups pertinent to the analysis. In most ZIKVP cohort studies to date there is an a priori identification of the source population (pregnant women, regardless of their exposure status), which is then sampled or included in its entirety (census). Subsequently, the group of pregnant women is classified according to exposure (presence or absence of ZIKVP), respecting the exposed:unexposed ratio in the source population. We propose that sample selection instead be based on the a priori identification of groups of pregnant women exposed and unexposed to ZIKVP. This method allows oversampling (even 100%) of the pregnant women with ZIKVP and an optimized sample from the general population of pregnant women unexposed to ZIKVP, saving resources in the unexposed group and improving the expected number of incident cases (outcomes) overall. We hope that this proposal will broaden the methodological debate on improving the statistical power and protocol harmonization of cohort studies that aim to evaluate the association between Zika infection during pregnancy and outcomes for the offspring, as well as those with similar objectives.
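
    The expected gain in outcome events can be shown with back-of-the-envelope arithmetic. All numbers below (cohort size, exposure prevalence, and risks) are hypothetical, chosen only to illustrate why identifying exposed and unexposed groups a priori improves efficiency at a fixed budget:

        # fixed budget of N enrolled pregnancies (all numbers hypothetical)
        N = 2000
        prev_exposure = 0.05     # ZIKVP prevalence in the source population
        risk_exposed, risk_unexposed = 0.05, 0.005

        # scheme A: sample the source population as-is
        exposed_A = N * prev_exposure
        cases_A = exposed_A * risk_exposed + (N - exposed_A) * risk_unexposed

        # scheme B: a priori identification and oversampling of exposed pregnancies
        exposed_B = N * 0.5
        cases_B = exposed_B * risk_exposed + (N - exposed_B) * risk_unexposed

        print(f"expected outcome events: {cases_A:.1f} (population sampling) "
              f"vs {cases_B:.1f} (exposure-based sampling)")

    With these invented figures, the exposure-based design yields roughly 55 expected events versus about 14, at the same enrolment cost.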

  12. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverage, or simple random sampling are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here we suggest a new iterative algorithm for characterizing the structure of the search space that works independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modeled. Evaluating new algorithms through benchmarks is compulsory, given the lack of prior evidence about their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on future machine learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is investigated.

  13. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    PubMed

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the highly sensitive genus-specific Schistosoma CAA strip test, which allows detection of single-worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides substantial cost reduction, pooling of samples rather than individual testing can provide valuable data for large-scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid, user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single-worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, as the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure of worm burden. Pooling strategies allowing this type of large-scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. This allows cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally suited to large-scale surveillance generating hard data on the performance of MDA programs and to strategic planning when moving towards transmission interruption and elimination.
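
    The arithmetic behind negative-pool savings follows classic two-stage (Dorfman) pooling, sketched below. Note this captures only the retest-negative-pools logic for an individual-level yes/no result; the CAA strip test's quantitative pooled readout (average CAA level) is a separate feature not modelled here:

        import numpy as np

        def tests_per_person(p, k):
            # Dorfman two-stage pooling: one pooled test per k people, plus k
            # individual tests whenever the pool turns out positive
            return 1.0 / k + (1.0 - (1.0 - p) ** k)

        for p in (0.01, 0.05, 0.20):
            ks = np.arange(2, 51)
            e = np.array([tests_per_person(p, k) for k in ks])
            i = int(np.argmin(e))
            print(f"prevalence {p:.0%}: best pool size {ks[i]}, "
                  f"{e[i]:.2f} tests per person")

    As prevalence falls, the optimal pool size grows and the expected number of tests per person drops, which is exactly the low-endemicity behaviour described above.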

  14. Field results for line intersect distance sampling of coarse woody debris

    Treesearch

    David L. R. Affleck

    2009-01-01

    A growing recognition of the importance of downed woody materials in forest ecosystem processes and global carbon budgets has sharpened the need for efficient sampling strategies that target this resource. Often the aggregate volume, biomass, or carbon content of the downed wood is of primary interest, making recently developed probability proportional-to-volume...

  15. Strategies for the design of bright upconversion nanoparticles for bioanalytical applications

    NASA Astrophysics Data System (ADS)

    Wiesholler, Lisa M.; Hirsch, Thomas

    2018-06-01

    In recent years upconversion nanoparticles (UCNPs) have received great attention because of their outstanding optical properties. Especially in bioanalytical applications, this class of materials can overcome limitations of common probes, such as high background fluorescence or blinking. Nevertheless, the requirements for UCNPs to be applicable in biological samples, e.g. small size, water-dispersibility, and excitation at low power density, are in contradiction with the demand for high brightness. Therefore, much attention is paid to the enhancement of upconversion luminescence. This review discusses recent trends and strategies to boost the brightness of UCNPs, classified into three main directions: a) improving the efficiency of energy absorption by the sensitizer via coupling to plasmonic or photonic structures or via attachment of ligands for light harvesting; b) minimizing non-radiative deactivation by variations in the architecture of UCNPs; and c) changing the excitation wavelength to obtain bright particles at low excitation power density for applications in aqueous systems. These strategies are critically reviewed, including current limitations as well as future perspectives for the design of efficient UCNPs, especially for sensing applications in biological samples or cells.

  16. Learning to explore the structure of kinematic objects in a virtual environment

    PubMed Central

    Buckmann, Marcus; Gaschler, Robert; Höfer, Sebastian; Loeben, Dennis; Frensch, Peter A.; Brock, Oliver

    2015-01-01

    The current study tested the quantity and quality of human exploration learning in a virtual environment. Given the everyday experience of humans with physical object exploration, we document substantial practice gains in the time, force, and number of actions needed to classify the structure of virtual chains, marking the joints as revolute, prismatic, or rigid. In line with current work on skill acquisition, participants could generalize the new and efficient psychomotor patterns of object exploration to novel objects. On the one hand, practice gains in exploration performance could be captured by a negative exponential practice function. On the other hand, they could be linked to strategies and strategy change. After quantifying how much was learned in object exploration and identifying the time course of practice-related gains in exploration efficiency (speed), we identified what was learned. First, we identified strategy components that were associated with efficient (fast) exploration performance: sequential processing, simultaneous use of both hands, low use of pulling rather than pushing, and low use of force. Only the latter was beneficial irrespective of the characteristics of the other strategy components. Second, we therefore characterized efficient exploration behavior by strategies that simultaneously take into account the abovementioned strategy components. We observed that participants maintained a high level of flexibility, sampling from a pool of exploration strategies trading the level of psycho-motoric challenges with exploration speed. We discuss the findings pursuing the aim of advancing intelligent object exploration by combining analytic (object exploration in humans) and synthetic work (object exploration in robots) in the same virtual environment. PMID:25904878

  17. Chitosan-based water-propelled micromotors with strong antibacterial activity.

    PubMed

    Delezuk, Jorge A M; Ramírez-Herrera, Doris E; Esteban-Fernández de Ávila, Berta; Wang, Joseph

    2017-02-09

    A rapid and efficient micromotor-based bacteria-killing strategy is described. The new antibacterial approach couples the attractive antibacterial properties of chitosan with the efficient water-powered propulsion of magnesium (Mg) micromotors. These Janus micromotors consist of Mg microparticles coated with the biodegradable and biocompatible polymers poly(lactic-co-glycolic acid) (PLGA), alginate (Alg) and chitosan (Chi), with the latter responsible for the antibacterial properties of the micromotor. The distinct speed and efficiency advantages of the new micromotor-based, environmentally friendly antibacterial approach have been demonstrated in various control experiments by treating drinking water contaminated with model Escherichia coli (E. coli) bacteria. The new dynamic antibacterial strategy offers dramatic improvements in antibacterial efficiency compared to static chitosan-coated microparticles (e.g., a 27-fold enhancement), with a 96% killing efficiency within 10 min. Potential real-life applications of these chitosan-based micromotors for environmental remediation have been demonstrated by the efficient treatment of seawater and fresh water samples contaminated with unknown bacteria. Coupling the efficient water-driven propulsion of such biodegradable and biocompatible micromotors with the antibacterial properties of chitosan holds considerable promise for advanced antimicrobial water treatment operations.

  18. Optimized breeding strategies for multiple trait integration: II. Process efficiency in event pyramiding and trait fixation.

    PubMed

    Peng, Ting; Sun, Xiaochun; Mumm, Rita H

    2014-01-01

    Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in light of the overall goal of MTI: recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Building on earlier results optimizing single event introgression (Peng et al., Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression, Mol Breed, 2013), which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm and ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps of MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding eight events into the female RP and seven into the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines, Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (or seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity, as well as seed chipping and tissue sampling approaches to facilitate genotyping. With the selfing approaches, two generations of selfing rather than one were used for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al., Strategies for efficient implementation of molecular markers in wheat breeding, Mol Breed 15:75-85, 2005) to eliminate bottlenecking due to extremely low frequencies of the desired genotypes in the population. Efficiency indicators such as the total number of plants grown across generations, the total number of marker data points, the total number of generations, the number of seeds sampled by seed chipping, the number of plants requiring tissue sampling, and the number of pollinations (i.e. selfing and crossing) were considered in comparing breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of the female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
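
    The bottleneck that motivates two generations of selfing can be seen from simple genotype-frequency arithmetic: with n unlinked heterozygous events, a single F2 plant is homozygous for all of them with probability (1/4)^n. A short sketch (the 95% recovery target is an illustrative choice):

        import math

        def f2_plants_needed(n_events, p_success=0.95):
            # probability a single F2 plant is homozygous for all n unlinked events
            q = 0.25 ** n_events
            # population size giving >= 1 such plant with probability p_success
            return math.ceil(math.log(1.0 - p_success) / math.log(1.0 - q))

        for n in (2, 4, 8):
            print(f"{n} events: {f2_plants_needed(n):,} F2 plants")

    For eight events the naive one-generation requirement runs to roughly 2 x 10^5 plants, which is exactly the kind of bottleneck that staged enrichment over two selfing generations is meant to avoid.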

  19. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
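
    A minimal proxy for the design loop, using ensemble predictive variance at candidate locations as a cheap stand-in for the information metrics named above (SD, DFS, RE); the ensemble itself is synthetic:

        import numpy as np

        rng = np.random.default_rng(3)

        # ensemble of model predictions at candidate measurement locations:
        # rows = ensemble members (parameter samples), columns = candidate locations
        predictions = rng.normal(size=(100, 12)) * rng.uniform(0.2, 2.0, 12)

        # greedy one-at-a-time design: measure where the ensemble disagrees most,
        # i.e. where a measurement is expected to be most informative
        next_location = int(np.argmax(predictions.var(axis=0)))
        print("next measurement at candidate location", next_location)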

  20. Determining best practices in reconnoitering sites for habitability potential on Mars using a semi-autonomous rover: A GeoHeuristic Operational Strategies Test

    PubMed Central

    Yingst, R.A.; Berger, J.; Cohen, B.A.; Hynek, B.; Schmidt, M.E.

    2017-01-01

    We tested science operations strategies developed for use in remote mobile spacecraft missions, to determine whether reconnoitering a site of potential habitability prior to in-depth study (a walkabout-first strategy) can be a more efficient use of time and resources than the linear approach commonly used by planetary rover missions. Two field teams studied a sedimentary sequence in Utah to assess habitability potential. At each site one team commanded a human “rover” to execute observations and conducted data analysis and made follow-on decisions based solely on those observations. Another team followed the same traverse using traditional terrestrial field methods, and the results of the two teams were compared. Test results indicate that for a mission with goals similar to our field case, the walkabout-first strategy may save time and other mission resources, while improving science return. The approach enabled more informed choices and higher team confidence in choosing where to spend time and other consumable resources. The walkabout strategy may prove most efficient when many close sites must be triaged to a smaller subset for detailed study or sampling. This situation would arise when mission goals include finding, identifying, characterizing or sampling a specific material, feature or type of environment within a certain area. PMID:29307922

  1. Natural attenuation, biostimulation and bioaugmentation of landfill leachate management

    NASA Astrophysics Data System (ADS)

    Er, X. Y.; Seow, T. W.; Lim, C. K.; Ibrahim, Z.

    2018-04-01

    Landfills used for solid waste management lead to leachate production. Proper leachate management is essential to protect the environment and the health and safety of living organisms. In this study, the remedial strategies used for leachate management were natural attenuation, biostimulation and bioaugmentation. All treatment samples were treated via a 42-day combined anaerobic-aerobic treatment, and the treatment efficiency was studied by measuring the removal rates of COD and ammonia nitrogen. All remedial strategies showed different degrees of contaminant removal. The lowest contaminant removal rate was achieved via bioaugmentation with B. panacihumi strain ZB1, which removed 39.4% of COD and 37.6% of ammonia nitrogen from the leachate sample. Higher contaminant removal rates were achieved via natural attenuation and biostimulation. The native microbial population was able to remove 41% of COD and 59% of ammonia nitrogen from the leachate sample. The removal efficiency could be further improved via biostimulation to trigger microbial growth and increase the decontamination rate. Through biostimulation, 58% of COD and 51.8% of ammonia nitrogen were removed from the leachate sample. In conclusion, natural attenuation and biostimulation should be the main choices for leachate management, to avoid any unexpected impacts due to the introduction of exogenous species.

  2. Direct and long-term detection of gene doping in conventional blood samples.

    PubMed

    Beiter, T; Zimmermann, M; Fragasso, A; Hudemann, J; Niess, A M; Bitzer, M; Lauer, U M; Simon, P

    2011-03-01

    The misuse of somatic gene therapy for the purpose of enhancing athletic performance is perceived as a coming threat to the world of sports and categorized as 'gene doping'. This article describes a direct detection approach for gene doping that gives a clear yes-or-no answer based on the presence or absence of transgenic DNA in peripheral blood samples. By exploiting a priming strategy to specifically amplify intronless DNA sequences, we developed PCR protocols allowing the detection of very small amounts of transgenic DNA in genomic DNA samples to screen for six prime candidate genes. Our detection strategy was verified in a mouse model, giving positive signals from minute amounts (20 μl) of blood samples for up to 56 days following intramuscular adeno-associated virus-mediated gene transfer, one of the most likely candidate vector systems to be misused for gene doping. To make our detection strategy amenable for routine testing, we implemented a robust sample preparation and processing protocol that allows cost-efficient analysis of small human blood volumes (200 μl) with high specificity and reproducibility. The practicability and reliability of our detection strategy was validated by a screening approach including 327 blood samples taken from professional and recreational athletes under field conditions.

  3. Strategy to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis.

    PubMed

    Vázquez-Martínez, Guadalupe; Rodriguez, Mario H; Hernández-Hernández, Fidel; Ibarra, Jorge E

    2004-04-01

    An efficient strategy, based on a combination of procedures, was developed to obtain axenic cultures from field-collected samples of the cyanobacterium Phormidium animalis. Samples were initially cultured in solid ASN-10 medium, and a crude separation of major contaminants from P. animalis filaments was achieved by washing through a series of centrifugations and resuspensions in liquid medium. Manageable filament fragments were then obtained by probe sonication. Fragmentation was followed by forceful washing, using vacuum-driven filtration through an 8-μm pore-size membrane and an excess of water. Washed fragments were cultured and treated with sequential exposure to four different antibiotics. Finally, axenic cultures were obtained from serial dilutions of the treated fragments. Monitoring by microscopic examination and by inoculation on Luria-Bertani (LB) agar plates indicated either axenicity or the degree of contamination throughout the procedure.

  4. Adaptive Sampling for Urban Air Quality through Participatory Sensing

    PubMed Central

    Zeng, Yuanyuan; Xiang, Kai

    2017-01-01

    Air pollution is one of the major problems of the modern world. The popularity and powerful functions of smartphone applications enable people to participate in urban sensing and better understand the air problems surrounding them. Data sampling is one of the most important problems affecting sensing performance. In this paper, we propose an Adaptive Sampling Scheme for Urban Air Quality (AS-air) through participatory sensing. Firstly, we propose to find pattern rules of air quality from the historical data contributed by participants using the Apriori algorithm. Based on these rules, we predict air quality online and use the prediction to accelerate the learning process that chooses and adapts the sampling parameter based on Q-learning. The evaluation results show that AS-air provides an energy-efficient sampling strategy, which adapts to the varying outside air environment with good sampling efficiency. PMID:29099766
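
    A toy version of the Q-learning step might look as follows; the volatility states, candidate intervals, reward weights, and random state transitions are all invented for illustration and are not the paper's formulation:

        import numpy as np

        rng = np.random.default_rng(0)

        intervals = np.array([5, 15, 60])    # candidate sampling intervals (minutes)
        n_states = 3                         # discretized air-quality volatility: low/med/high
        Q = np.zeros((n_states, len(intervals)))
        alpha, gamma, eps = 0.1, 0.9, 0.1

        def reward(state, action):
            # toy reward trading energy use (frequent sampling) against the risk
            # of missing dynamics (sparse sampling while volatility is high)
            energy_cost = 1.0 / intervals[action]
            miss_cost = 0.02 * intervals[action] * state
            return -(energy_cost + miss_cost)

        state = 0
        for _ in range(5000):
            a = rng.integers(len(intervals)) if rng.random() < eps \
                else int(np.argmax(Q[state]))
            r = reward(state, a)
            next_state = int(rng.integers(n_states))   # toy volatility transition
            Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
            state = next_state

        print(np.round(Q, 3))    # per-state preference over sampling intervals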

  5. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun

    2017-12-01

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations at a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples, so as to reduce the computational cost of surrogate construction and consequently improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of different dimensionality and complexity, in comparison with two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
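
    One plausible reading of the hybrid score, sketched in 1D: candidates are scored by the distance to the nearest existing sample (exploration) multiplied by the mismatch between a first-order Taylor extrapolation from that sample and the surrogate's own prediction (a cheap nonlinearity flag). The RBF surrogate, toy function, and multiplicative combination are illustrative assumptions, not TEAD's exact formula:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        f = lambda x: np.sin(4 * x) + x ** 2              # "expensive" model (toy)

        X = np.linspace(-1, 1, 6)[:, None]                # small initial design
        y = f(X[:, 0])
        h = 1e-4

        for _ in range(10):
            sur = RBFInterpolator(X, y)
            cand = rng.uniform(-1, 1, (300, 1))
            near = np.argmin(np.abs(cand - X[:, 0]), axis=1)   # nearest sample index
            dist = np.abs(cand[:, 0] - X[near, 0])
            # surrogate gradient at each sample (central difference)
            g = (sur(X + h) - sur(X - h)) / (2 * h)
            taylor = y[near] + g[near] * (cand[:, 0] - X[near, 0])
            residual = np.abs(sur(cand) - taylor)         # Taylor vs surrogate mismatch
            score = dist * residual                       # hybrid exploration/nonlinearity score
            x_new = cand[np.argmax(score)]
            X = np.vstack([X, x_new[None, :]])
            y = np.append(y, f(x_new[0]))

        print(len(X), "samples after adaptive design")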

  6. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    PubMed

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for the sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, the extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved by optimizing the sample extraction pH. In contrast to the acidic-only extraction used in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, while antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more pronounced for polar compounds than for non-polar compounds. Optimization of the extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with values reported in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples from a heavily populated urban city, where macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  7. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    NASA Astrophysics Data System (ADS)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation provides, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they require a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two state-of-the-art window-based active learning algorithms are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
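
    The general shape of such a strategy: train on a few labelled windows, score the unlabelled pool by classification margin, and request labels for the most ambiguous windows. The sketch below substitutes a generic tabular dataset and a random forest for the paper's imagery and classifier:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                                   n_classes=3, random_state=0)
        X_pool, y_pool, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

        rng = np.random.default_rng(0)
        labeled = list(rng.choice(2000, 20, replace=False))
        unlabeled = [i for i in range(2000) if i not in labeled]

        for _ in range(10):                    # 10 annotation rounds, 20 windows each
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X_pool[labeled], y_pool[labeled])
            proba = clf.predict_proba(X_pool[unlabeled])
            top2 = np.sort(proba, axis=1)[:, -2:]
            margin = top2[:, 1] - top2[:, 0]   # small margin = ambiguous window
            picks = np.argsort(margin)[:20]
            for i in sorted(picks, reverse=True):
                labeled.append(unlabeled.pop(i))

        print("accuracy of the last fitted model:", clf.score(X_test, y_test))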

  8. Choosing experiments to accelerate collective discovery

    PubMed Central

    Rzhetsky, Andrey; Foster, Jacob G.; Foster, Ian T.

    2015-01-01

    A scientist’s choice of research problem affects his or her personal career trajectory. Scientists’ combined choices affect the direction and efficiency of scientific discovery as a whole. In this paper, we infer preferences that shape problem selection from patterns of published findings and then quantify their efficiency. We represent research problems as links between scientific entities in a knowledge network. We then build a generative model of discovery informed by qualitative research on scientific problem selection. We map salient features from this literature to key network properties: an entity’s importance corresponds to its degree centrality, and a problem’s difficulty corresponds to the network distance it spans. Drawing on millions of papers and patents published over 30 years, we use this model to infer the typical research strategy used to explore chemical relationships in biomedicine. This strategy generates conservative research choices focused on building up knowledge around important molecules. These choices become more conservative over time. The observed strategy is efficient for initial exploration of the network and supports scientific careers that require steady output, but is inefficient for science as a whole. Through supercomputer experiments on a sample of the network, we study thousands of alternatives and identify strategies much more efficient at exploring mature knowledge networks. We find that increased risk-taking and the publication of experimental failures would substantially improve the speed of discovery. We consider institutional shifts in grant making, evaluation, and publication that would help realize these efficiencies. PMID:26554009
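
    The mapping from the paper's qualitative features to network properties can be sketched with networkx; the scoring function below is an invented illustration of trading importance against difficulty, not the paper's fitted generative model:

        import networkx as nx

        # toy knowledge network: nodes are scientific entities (e.g., molecules),
        # edges are published relationships
        G = nx.barabasi_albert_graph(200, 2, seed=1)

        importance = nx.degree_centrality(G)   # entity importance = degree centrality

        def difficulty(u, v):
            # difficulty of the candidate problem "does u relate to v?" is the
            # network distance the new claim would span
            return nx.shortest_path_length(G, u, v)

        # a conservative strategy rewards important endpoints and penalizes
        # distance; a riskier strategy relaxes the distance penalty
        def score(u, v, risk=0.2):
            return importance[u] * importance[v] / difficulty(u, v) ** (1 - risk)

        print(score(0, 50), score(0, 50, risk=0.9))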

  9. Insights into biodiversity sampling strategies for freshwater microinvertebrate faunas through bioblitz campaigns and DNA barcoding.

    PubMed

    Laforest, Brandon J; Winegardner, Amanda K; Zaheer, Omar A; Jeffery, Nicholas W; Boyle, Elizabeth E; Adamowicz, Sarah J

    2013-04-04

    Biodiversity surveys have long depended on traditional methods of taxonomy to inform sampling protocols and to determine when a representative sample of a given species pool of interest has been obtained. Questions remain as to how to design appropriate sampling efforts to accurately estimate total biodiversity. Here we consider the biodiversity of freshwater ostracods (crustacean class Ostracoda) from the region of Churchill, Manitoba, Canada. Through an analysis of observed species richness and complementarity, accumulation curves, and richness estimators, we conduct an a posteriori analysis of five bioblitz-style collection strategies that differed in terms of total duration, number of sites, protocol flexibility to heterogeneous habitats, sorting of specimens for analysis, and primary purpose of collection. We used DNA barcoding to group specimens into molecular operational taxonomic units for comparison. Forty-eight provisional species were identified through genetic divergences, up from the 30 species previously known and documented in literature from the Churchill region. We found differential sampling efficiency among the five strategies, with liberal sorting of specimens for molecular analysis, protocol flexibility (and particularly a focus on covering diverse microhabitats), and a taxon-specific focus to collection having strong influences on garnering more accurate species richness estimates. Our findings have implications for the successful design of future biodiversity surveys and citizen-science collection projects, which are becoming increasingly popular and have been shown to produce reliable results for a variety of taxa despite relying on largely untrained collectors. We propose that efficiency of biodiversity surveys can be increased by non-experts deliberately selecting diverse microhabitats; by conducting two rounds of molecular analysis, with the numbers of samples processed during round two informed by the singleton prevalence during round one; and by having sub-teams (even if all non-experts) focus on select taxa. Our study also provides new insights into subarctic diversity of freshwater Ostracoda and contributes to the broader "Barcoding Biotas" campaign at Churchill. Finally, we comment on the associated implications and future research directions for community ecology analyses and biodiversity surveys through DNA barcoding, which we show here to be an efficient technique enabling rapid biodiversity quantification in understudied taxa.

  10. Evaluating Monitoring Strategies to Detect Precipitation-Induced Microbial Contamination Events in Karstic Springs Used for Drinking Water

    PubMed Central

    Besmer, Michael D.; Hammes, Frederik; Sigrist, Jürg A.; Ort, Christoph

    2017-01-01

    Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies. PMID:29213255

  11. Evaluating Monitoring Strategies to Detect Precipitation-Induced Microbial Contamination Events in Karstic Springs Used for Drinking Water.

    PubMed

    Besmer, Michael D; Hammes, Frederik; Sigrist, Jürg A; Ort, Christoph

    2017-01-01

    Monitoring of microbial drinking water quality is a key component for ensuring safety and understanding risk, but conventional monitoring strategies are typically based on low sampling frequencies (e.g., quarterly or monthly). This is of concern because many drinking water sources, such as karstic springs, are often subject to changes in bacterial concentrations on much shorter time scales (e.g., hours to days), for example after precipitation events. Microbial contamination events are crucial from a risk assessment perspective and should therefore be targeted by monitoring strategies to establish both the frequency of their occurrence and the magnitude of bacterial peak concentrations. In this study, we used monitoring data from two specific karstic springs. We assessed the performance of conventional monitoring based on historical records and tested a number of alternative strategies based on a high-resolution data set of bacterial concentrations in spring water collected with online flow cytometry (FCM). We quantified the effect of increasing sampling frequency and found that for the specific case studied, at least bi-weekly sampling would be needed to detect precipitation events with a probability of >90%. We then proposed an optimized monitoring strategy with three targeted samples per event, triggered by precipitation measurements. This approach is more effective and efficient than simply increasing overall sampling frequency. It would enable the water utility to (1) analyze any relevant event and (2) limit median underestimation of peak concentrations to approximately 10%. We conclude with a generalized perspective on sampling optimization and argue that the assessment of short-term dynamics causing microbial peak loads initially requires increased sampling/analysis efforts, but can be optimized subsequently to account for limited resources. This offers water utilities and public health authorities systematic ways to evaluate and optimize their current monitoring strategies.
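
    As a rough illustration of the event-triggered scheme proposed above, the sketch below schedules three targeted samples per precipitation event. This is a minimal sketch only: the 4 mm/h trigger threshold and the sample offsets are assumed values chosen for illustration, not those derived in the study.

    ```python
    # Minimal sketch of precipitation-triggered sampling; the trigger
    # threshold and sample offsets are assumed values, not the study's.
    def triggered_sample_times(rain_mm_per_h, trigger_mm=4.0,
                               offsets_h=(2.0, 8.0, 24.0)):
        """Return sampling times (hours) for each detected precipitation event.

        rain_mm_per_h: list of hourly rainfall totals.
        trigger_mm:    hourly rainfall that counts as an event onset.
        offsets_h:     delays of the three targeted samples after onset,
                       spread to catch the rise, peak and tail of the
                       bacterial concentration pulse.
        """
        samples, armed = [], True
        for hour, rain in enumerate(rain_mm_per_h):
            if armed and rain >= trigger_mm:
                samples.append([hour + dt for dt in offsets_h])
                armed = False          # one trigger per event
            elif rain < trigger_mm:
                armed = True           # re-arm once rainfall subsides
        return samples

    print(triggered_sample_times([0, 1, 6, 7, 2, 0, 0, 5, 1]))
    # -> [[4.0, 10.0, 26.0], [9.0, 15.0, 31.0]]  (two events detected)
    ```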

  12. Simple, quick and cost-efficient: A universal RT-PCR and sequencing strategy for genomic characterisation of foot-and-mouth disease viruses.

    PubMed

    Dill, V; Beer, M; Hoffmann, B

    2017-08-01

    Foot-and-mouth disease (FMD) is a major contributor to poverty and food insecurity in Africa and Asia, and it is one of the biggest threats to agriculture in highly developed countries. As FMD is extremely contagious, strategies for its prevention, early detection, and the immediate characterisation of outbreak strains are of great importance. The generation of whole-genome sequences enables phylogenetic characterisation and the epidemiological tracing of virus transmission pathways, and it supports disease control strategies. This study describes the development and validation of a rapid, universal and cost-efficient RT-PCR system to generate genome sequences of FMDV, spanning from the IRES to the end of the open reading frame. The method was evaluated using twelve different virus strains covering all seven serotypes of FMDV. Additionally, samples from experimentally infected animals were tested to mimic diagnostic field samples. All primer pairs showed robust amplification with high sensitivity for all serotypes. In summary, the described assay is suitable for the generation of FMDV sequences from all serotypes to allow immediate phylogenetic analysis, detailed genotyping and molecular epidemiology. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Rapid Characterization of Constituents in Tribulus terrestris from Different Habitats by UHPLC/Q-TOF MS.

    PubMed

    Zheng, Wei; Wang, Fangxu; Zhao, Yang; Sun, Xinguang; Kang, Liping; Fan, Ziquan; Qiao, Lirui; Yan, Renyi; Liu, Shuchen; Ma, Baiping

    2017-11-01

    A strategy for rapid identification of the chemical constituents from crude extracts of Tribulus terrestris was proposed using an informatics platform for UHPLC/Q-TOF MSE data analysis. This strategy mainly utilizes neutral losses, characteristic fragments, and an in-house library to rapidly identify the structures of the compounds. With this strategy, rapid characterization of the chemical components of T. terrestris from Beijing, China was successfully achieved. A total of 82 steroidal saponins and nine flavonoids were identified or tentatively identified from T. terrestris. Among them, 15 new components were deduced based on retention times and characteristic MS fragmentation patterns. Furthermore, the chemical components of T. terrestris in two additional samples, from the Xinjiang Uygur Autonomous Region, China, and Rome, Italy, were also identified with this strategy. Altogether, 141 chemical components were identified from these three samples, of which 39 components were identified or tentatively identified as new compounds, including 35 groups of isomers. This demonstrates that the strategy provides an efficient protocol for the rapid identification of chemical constituents in complex samples, such as traditional Chinese medicines (TCMs), by UHPLC/Q-TOF MSE with an informatics platform.

  14. Rapid Characterization of Constituents in Tribulus terrestris from Different Habitats by UHPLC/Q-TOF MS

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Wang, Fangxu; Zhao, Yang; Sun, Xinguang; Kang, Liping; Fan, Ziquan; Qiao, Lirui; Yan, Renyi; Liu, Shuchen; Ma, Baiping

    2017-08-01

    A strategy for rapid identification of the chemical constituents from crude extracts of Tribulus terrestris was proposed using an informatics platform for UHPLC/Q-TOF MSE data analysis. This strategy mainly utilizes neutral losses, characteristic fragments, and an in-house library to rapidly identify the structures of the compounds. With this strategy, rapid characterization of the chemical components of T. terrestris from Beijing, China was successfully achieved. A total of 82 steroidal saponins and nine flavonoids were identified or tentatively identified from T. terrestris. Among them, 15 new components were deduced based on retention times and characteristic MS fragmentation patterns. Furthermore, the chemical components of T. terrestris in two additional samples, from the Xinjiang Uygur Autonomous Region, China, and Rome, Italy, were also identified with this strategy. Altogether, 141 chemical components were identified from these three samples, of which 39 components were identified or tentatively identified as new compounds, including 35 groups of isomers. This demonstrates that the strategy provides an efficient protocol for the rapid identification of chemical constituents in complex samples, such as traditional Chinese medicines (TCMs), by UHPLC/Q-TOF MSE with an informatics platform.

  15. Multisite tumor sampling enhances the detection of intratumor heterogeneity at all different temporal stages of tumor evolution.

    PubMed

    Erramuzpe, Asier; Cortés, Jesús M; López, José I

    2018-02-01

    Intratumor heterogeneity (ITH) is an inherent process of tumor development that has received much attention in previous years, as it has become a major obstacle for the success of targeted therapies. ITH is also temporally unpredictable across tumor evolution, which makes its precise characterization even more problematic since detection success depends on the precise temporal snapshot at which ITH is analyzed. New and more efficient strategies for tumor sampling are needed to overcome these difficulties which currently rely entirely on the pathologist's interpretation. Recently, we showed that a new strategy, the multisite tumor sampling, works better than the routine sampling protocol for the ITH detection when the tumor time evolution was not taken into consideration. Here, we extend this work and compare the ITH detections of multisite tumor sampling and routine sampling protocols across tumor time evolution, and in particular, we provide in silico analyses of both strategies at early and late temporal stages for four different models of tumor evolution (linear, branched, neutral, and punctuated). Our results indicate that multisite tumor sampling outperforms routine protocols in detecting ITH at all different temporal stages of tumor evolution. We conclude that multisite tumor sampling is more advantageous than routine protocols in detecting intratumor heterogeneity.

  16. Aerosol dilution as a simple strategy for analysis of complex samples by ICP-MS.

    PubMed

    Barros, Ariane I; Pinheiro, Fernanda C; Amaral, Clarice D B; Lorençatto, Rodolfo; Nóbrega, Joaquim A

    2018-02-01

    This study investigated the capability of the High Matrix Introduction (HMI) strategy for analysis of dialysis solution and urine samples using inductively coupled plasma mass spectrometry. The use of HMI enables the direct introduction of urine samples and dialysis solutions 2-fold diluted with 0.14 mol/L HNO3. Bismuth, Ge, Ir, Li, Pt, Rh, Sc and Tl were evaluated as internal standards for Al, Ag, As, Be, Cd, Cr, Pb, Sb, Se, Tl, and Hg determination in dialysis solution and As, Cd, Hg and Pb determination in urine samples. Helium collision cell mode (4.5 mL/min) was efficient in overcoming polyatomic interferences in As, Se and Cr determinations. Mercury memory effects were evaluated by washing with 0.12 mol/L HCl or an alkaline diluent solution prepared with n-butanol, NH4OH, EDTA, and Triton X-100. The latter solution was efficient in avoiding Hg memory effects over 6 h of analysis. Linear calibration curves were obtained for all analytes, and detection limits were lower than the maximum amounts allowed by Brazilian legislation. Recoveries for all analytes in dialysis solutions and urine samples ranged from 82% to 125%, and relative standard deviations for all elements and samples were lower than 7%. Results for internal control urine samples were in agreement with certified values at the 95% confidence level (t-test; p < 0.05). Copyright © 2017 Elsevier B.V. All rights reserved.
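
    The validation figures quoted above (recoveries, relative standard deviations) follow from standard spike-recovery arithmetic; a small sketch with hypothetical concentrations:

    ```python
    # Spike-recovery and precision arithmetic used to validate trace-element
    # determinations; all concentrations below are hypothetical.
    import statistics

    def recovery_percent(spiked_result, unspiked_result, spike_added):
        """Recovery (%) = (found in spiked sample - found in sample) / added * 100."""
        return (spiked_result - unspiked_result) / spike_added * 100.0

    replicates = [2.05, 1.98, 2.10, 2.02]   # ug/L, repeated measurements
    rsd = statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

    print(f"recovery: {recovery_percent(4.1, 2.0, 2.0):.0f}%")  # 105%
    print(f"RSD: {rsd:.1f}%")
    ```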

  17. High-Flow-Rate Impinger for the Study of Concentration, Viability, Metabolic Activity, and Ice-Nucleation Activity of Airborne Bacteria.

    PubMed

    Šantl-Temkiv, Tina; Amato, Pierre; Gosewinkel, Ulrich; Thyrhaug, Runar; Charton, Anaïs; Chicot, Benjamin; Finster, Kai; Bratbak, Gunnar; Löndahl, Jakob

    2017-10-03

    The study of airborne bacteria relies on a sampling strategy that preserves their integrity and in situ physiological state, e.g., viability, cultivability, metabolic activity, and ice-nucleation activity. Because ambient air harbors low concentrations of bacteria, an effective bioaerosol sampler should have a high sampling efficiency and a high airflow. We characterized a high-flow-rate impinger with respect to particle collection and retention efficiencies in the range 0.5-3.0 μm, and we investigated its ability to preserve the physiological state of selected bacterial species and a seawater bacterial community in comparison with four commercial bioaerosol samplers. The collection efficiency increased with particle size and the cutoff diameter was between 0.5 and 1 μm. During sampling periods of 120-300 min, the impinger retained the cultivability, metabolic activity, viability, and ice-nucleation activity of the investigated bacteria. Field studies in semiurban, high-altitude, and polar environments included periods of low bacterial air concentrations, thus demonstrating the benefits of the impinger's high flow rate. In conclusion, the impinger described here has many advantages compared with other bioaerosol samplers currently on the market: a potential for long sampling times, a high flow rate, high sampling and retention efficiencies, low costs, and applicability for diverse downstream microbiological and molecular analyses.

  18. Methods for Multiplex Template Sampling in Digital PCR Assays

    PubMed Central

    Petriv, Oleh I.; Heyries, Kevin A.; VanInsberghe, Michael; Walker, David; Hansen, Carl L.

    2014-01-01

    The efficient use of digital PCR (dPCR) for precision copy number analysis requires high concentrations of target molecules that may be difficult or impossible to obtain from clinical samples. To solve this problem we present a strategy, called Multiplex Template Sampling (MTS), that effectively increases template concentrations by detecting multiple regions of fragmented target molecules. Three alternative assay approaches are presented for implementing MTS analysis of chromosome 21, providing a 10-fold concentration enhancement while preserving assay precision. PMID:24854517

  19. Methods for multiplex template sampling in digital PCR assays.

    PubMed

    Petriv, Oleh I; Heyries, Kevin A; VanInsberghe, Michael; Walker, David; Hansen, Carl L

    2014-01-01

    The efficient use of digital PCR (dPCR) for precision copy number analysis requires high concentrations of target molecules that may be difficult or impossible to obtain from clinical samples. To solve this problem we present a strategy, called Multiplex Template Sampling (MTS), that effectively increases template concentrations by detecting multiple regions of fragmented target molecules. Three alternative assay approaches are presented for implementing MTS analysis of chromosome 21, providing a 10-fold concentration enhancement while preserving assay precision.
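
    For readers unfamiliar with dPCR quantification, the standard Poisson correction below (a sketch, not code from the paper; the partition volume is an assumed droplet size) shows why raising the effective template concentration matters: the estimate rests on the fraction of positive partitions, and detecting k regions per fragmented molecule multiplies the countable targets roughly k-fold.

    ```python
    # Standard digital-PCR Poisson estimate (not code from the paper): the
    # fraction of positive partitions gives the mean targets per partition.
    import math

    def dpcr_concentration(positive, total, partition_volume_nl=0.85):
        """Copies per microlitre from dPCR counts via lambda = -ln(1 - p)."""
        lam = -math.log(1.0 - positive / total)      # mean copies per partition
        return lam / (partition_volume_nl * 1e-3)    # nL -> uL

    # Detecting k independent regions per fragmented molecule multiplies the
    # countable targets roughly k-fold, e.g. k = 3 for a 3-plex MTS assay.
    single = dpcr_concentration(500, 20000)
    print(f"~{single:.1f} copies/uL per assayed region")
    ```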

  20. [Improving the feed conversion rate in the pig fattening industry by optimising hygienic management].

    PubMed

    Riedl, Antonia M; Völkel, Inger; Schlindwein, Bernhard; Czerny, Claus-Peter

    2013-01-01

    Considering continuously increasing forage costs, the feed conversion rate has a major impact on the economic efficiency of hog fattening. The influence of hygienic management strategies on animal health and feed efficiency was evaluated in an online study comprising animal health management data from 202 German pig fatteners. Data analysis included a simple comparison of averages, a linear regression analysis, and a cluster analysis. Owing to the geographical distribution and size of the premises, the random sample was not representative but nevertheless yielded significant results. The total impact of hygienic management on feed conversion was calculated to be 23.9%. Professional performance of rodent control (beta = 0.357; p < or = 0.001), efficient insect larvae control (beta = 0.276; p = 0.008), requiring visitors to wear protective gear (beta = 0.261; p = 0.009), and immediately performed cleaning and disinfection of emptied pens (beta = 0.247; p = 0.017) were the top-ranking variables. Furthermore, significantly better feed efficiency was observed in companies reporting stables in a good state of repair or performing further preventive strategies to control animal health at herd level (storage of retained fodder samples, health screening based on blood and faecal samples, cross-sections to verify unclear death cases). For pig fatteners, the benefit resulting from improved feed conversion ranged from Euro 1.15 to Euro 2.53 per pig. Likewise, optimized growth performance as a result of improved hygienic management could partly compensate for increasing feed costs. The results of this online study reveal the need to establish reliable HACCP systems at farm level.

  1. A Preview of Coming Attractions: Classroom Teacher's Idea Notebook.

    ERIC Educational Resources Information Center

    Morin, Joy Ann

    1995-01-01

    Contends that it is important for students to be motivated and well prepared for class units and activities. Describes a "previews of coming attractions" instructional strategy that uses advance organizers to increase information processing efficiency. Includes a sample unit outline illustrating this approach. (CFR)

  2. Semi-supervised learning for photometric supernova classification

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Homrighausen, Darren; Freeman, Peter E.; Schafer, Chad M.; Poznanski, Dovi

    2012-01-01

    We present a semi-supervised method for photometric supernova typing. Our approach is to first use the non-linear dimension reduction technique diffusion map to detect structure in a data base of supernova light curves and subsequently employ random forest classification on a spectroscopically confirmed training set to learn a model that can predict the type of each newly observed supernova. We demonstrate that this is an effective method for supernova typing. As supernova numbers increase, our semi-supervised method efficiently utilizes this information to improve classification, a property not enjoyed by template-based methods. Applied to supernova data simulated by Kessler et al. to mimic those of the Dark Energy Survey, our methods achieve (cross-validated) 95 per cent Type Ia purity and 87 per cent Type Ia efficiency on the spectroscopic sample, but only 50 per cent Type Ia purity and 50 per cent efficiency on the photometric sample due to their spectroscopic follow-up strategy. To improve the performance on the photometric sample, we search for better spectroscopic follow-up procedures by studying the sensitivity of our machine-learned supernova classification on the specific strategy used to obtain training sets. With a fixed amount of spectroscopic follow-up time, we find that, despite collecting data on a smaller number of supernovae, deeper magnitude-limited spectroscopic surveys are better for producing training sets. For supernova Ia (II-P) typing, we obtain a 44 per cent (1 per cent) increase in purity to 72 per cent (87 per cent) and a 30 per cent (162 per cent) increase in efficiency to 65 per cent (84 per cent) of the sample using a 25th (24.5th) magnitude-limited survey instead of the shallower spectroscopic sample used in the original simulations. When redshift information is available, we incorporate it into our analysis using a novel method of altering the diffusion map representation of the supernovae. Incorporating host redshifts leads to a 5 per cent improvement in Type Ia purity and 13 per cent improvement in Type Ia efficiency. A web service for the supernova classification method used in this paper is available online.
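
    A minimal sketch of the two-stage idea, embedding all light curves with a diffusion map and classifying with a random forest trained on the labeled (spectroscopic) subset only; the toy data, kernel bandwidth, and number of retained coordinates are illustrative assumptions, not the paper's settings:

    ```python
    # Minimal sketch: diffusion-map embedding of all objects, random-forest
    # classification trained on the labeled subset only.  Toy data and the
    # kernel bandwidth `eps` are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))          # stand-in for light-curve features
    y = rng.integers(0, 2, size=300)        # stand-in for SN types
    labeled = np.arange(60)                 # spectroscopically confirmed subset

    # Diffusion map: Gaussian kernel -> row-normalized Markov matrix -> spectrum.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    eps = np.median(d2)
    W = np.exp(-d2 / eps)
    P = W / W.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)[1:6]     # skip the trivial top eigenvector
    coords = vecs.real[:, order] * vals.real[order]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(coords[labeled], y[labeled])
    print("held-out accuracy:", clf.score(coords[60:], y[60:]))
    ```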

  3. The profiling of the metabolites of hirsutine in rat by ultra-high performance liquid chromatography coupled with linear ion trap Orbitrap mass spectrometry: An improved strategy for the systematic screening and identification of metabolites in multi-samples in vivo.

    PubMed

    Wang, Jianwei; Qi, Peng; Hou, Jinjun; Shen, Yao; Yang, Min; Bi, Qirui; Deng, Yanping; Shi, Xiaojian; Feng, Ruihong; Feng, Zijin; Wu, Wanying; Guo, Dean

    2017-02-05

    The identification of drug metabolites and the construction of metabolic profiles are meaningful work for drug discovery and development. The great challenge during this process is the structural clarification of possible metabolites in the complicated biological matrix, which often results in huge data sets, especially for multiple samples in vivo. Analyzing these complex data manually is time-consuming and laborious. The objective of this study was to develop a practical strategy for screening and identifying metabolites from multiple biological samples efficiently. Using hirsutine (HTI), an active component of Uncaria rhynchophylla (Gouteng in Chinese), as a model, its plasma, urine, bile, feces and various tissue samples were analyzed with data processing software (Metwork), a data mining tool (Progenesis QI), and HR-MSn data acquired by ultra-high performance liquid chromatography/linear ion trap-Orbitrap mass spectrometry (U-HPLC/LTQ-Orbitrap-MS). A total of 67 metabolites of HTI in rat biological samples were tentatively identified with the established library, and to our knowledge most of these are reported for the first time. The possible metabolic pathways were subsequently proposed; hydroxylation, dehydrogenation, oxidation, N-oxidation, hydrolysis, reduction and glucuronide conjugation were mainly involved according to the metabolic profile. The results proved that this improved strategy is efficient, rapid, and reliable for metabolic profiling of components in multiple biological samples and could significantly expand our understanding of the metabolic fate of TCM in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Efficient evaluation of sampling quality of molecular dynamics simulations by clustering of dihedral torsion angles and Sammon mapping.

    PubMed

    Frickenhaus, Stephan; Kannan, Srinivasaraghavan; Zacharias, Martin

    2009-02-01

    A direct conformational clustering and mapping approach for peptide conformations based on backbone dihedral angles has been developed and applied to compare conformational sampling of Met-enkephalin using two molecular dynamics (MD) methods. Efficient clustering in dihedrals has been achieved by evaluating all combinations resulting from independent clustering of each dihedral angle distribution, thus resolving all conformational substates. In contrast, Cartesian clustering was unable to accurately distinguish between all substates. Projection of clusters on dihedral principal component (PCA) subspaces did not result in efficient separation of highly populated clusters. However, representation in a nonlinear metric by Sammon mapping was able to separate well the 48 highest populated clusters in just two dimensions. In addition, this approach also allowed us to visualize the transition frequencies between clusters efficiently. Significantly, higher transition frequencies between more distinct conformational substates were found for a recently developed biasing-potential replica exchange MD simulation method allowing faster sampling of possible substates compared to conventional MD simulations. Although the number of theoretically possible clusters grows exponentially with peptide length, in practice, the number of clusters is only limited by the sampling size (typically much smaller), and therefore the method is well suited also for large systems. The approach could be useful to rapidly and accurately evaluate conformational sampling during MD simulations, to compare different sampling strategies and eventually to detect kinetic bottlenecks in folding pathways.
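
    The per-dihedral clustering idea can be sketched as follows: cluster each backbone dihedral independently (on the unit circle, to respect periodicity) and define a conformational substate as the tuple of per-angle cluster labels. The toy angles and the per-angle cluster count below are assumptions for illustration:

    ```python
    # Sketch of per-dihedral clustering: cluster each backbone dihedral
    # separately, then define a conformational substate as the tuple of
    # per-angle cluster labels.  Toy angles; k per angle is an assumption.
    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    angles = rng.uniform(-180, 180, size=(5000, 8))   # frames x dihedrals

    labels = []
    for j in range(angles.shape[1]):
        # embed each circular angle on the unit circle before clustering
        xy = np.column_stack([np.cos(np.radians(angles[:, j])),
                              np.sin(np.radians(angles[:, j]))])
        labels.append(KMeans(n_clusters=3, n_init=10,
                             random_state=0).fit_predict(xy))

    states = [tuple(row) for row in np.array(labels).T]  # one tuple per frame
    populations = Counter(states)
    print("distinct substates:", len(populations))
    print("most populated:", populations.most_common(3))
    ```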

  5. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE PAGES

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...

    2017-12-27

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.

  6. A Taylor Expansion-Based Adaptive Design Strategy for Global Surrogate Modeling With Applications in Groundwater Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing

    Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
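
    A hedged sketch of the hybrid-score idea: score each candidate point by the sum of an exploration term (distance to the existing design) and an exploitation term (disagreement between the surrogate and its first-order Taylor extrapolation from the nearest design point). The RBF surrogate, candidate pool size, and equal weighting below are illustrative choices, not the paper's exact formulation:

    ```python
    # Hedged sketch of a Taylor-expansion-based adaptive design; the RBF
    # surrogate and the equal weighting of the two terms are assumptions.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(2)
    f = lambda x: np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])  # stand-in model

    X = rng.uniform(0, 1, (10, 2))        # small initial design
    y = f(X)
    h = 1e-4                              # finite-difference step

    for _ in range(20):                   # add 20 points adaptively
        surro = RBFInterpolator(X, y)     # cheap global surrogate
        cand = rng.uniform(0, 1, (500, 2))                  # candidate pool
        d = np.linalg.norm(cand[:, None] - X[None], axis=2)
        nearest = d.argmin(axis=1)
        # surrogate gradient at each design point by finite differences
        grad = np.stack([(surro(X + h * np.eye(2)[k]) - y) / h
                         for k in (0, 1)], axis=1)
        # first-order Taylor prediction from the nearest design point
        taylor = y[nearest] + np.einsum('ij,ij->i', grad[nearest],
                                        cand - X[nearest])
        # hybrid score: exploration (distance) + exploitation (disagreement)
        score = d.min(axis=1) + np.abs(surro(cand) - taylor)
        xnew = cand[score.argmax()][None]
        X, y = np.vstack([X, xnew]), np.append(y, f(xnew))

    print("final design size:", len(X))
    ```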

  7. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  8. Nonprobability and probability-based sampling strategies in sexual science.

    PubMed

    Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah

    2015-01-01

    With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.

  9. Multi-scale ecosystem monitoring: an application of scaling data to answer multiple ecological questions

    USDA-ARS?s Scientific Manuscript database

    Background/Question/Methods Standardized monitoring data collection efforts using a probabilistic sample design, such as in the Bureau of Land Management’s (BLM) Assessment, Inventory, and Monitoring (AIM) Strategy, provide a core suite of ecological indicators, maximize data collection efficiency,...

  10. Multiscale ecosystem monitoring: an application of scaling data to answer multiple ecological questions

    USDA-ARS?s Scientific Manuscript database

    Standardized monitoring data collection efforts using a probabilistic sample design, such as in the Bureau of Land Management’s (BLM) Assessment, Inventory, and Monitoring (AIM) Strategy, provide a core suite of ecological indicators, maximize data collection efficiency, and promote reuse of monitor...

  11. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  12. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta

    PubMed Central

    Zhang, Zhe; Schindler, Christina E. M.; Lange, Oliver F.; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
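
    The temperature replica-exchange step underlying such schemes is the standard Metropolis swap criterion; a minimal sketch (not the Rosetta implementation) is given below:

    ```python
    # Standard temperature replica-exchange swap test (a minimal sketch, not
    # the Rosetta implementation): neighboring replicas exchange
    # configurations with probability
    #   min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)]).
    import math, random

    def try_swap(E_i, E_j, T_i, T_j, kB=1.0):
        """Metropolis criterion for swapping replicas at temperatures T_i < T_j."""
        delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
        return delta >= 0 or random.random() < math.exp(delta)

    # Example: a high-temperature replica that found a lower-energy docking
    # geometry is likely to hand it down to the low-temperature replica.
    print(try_swap(E_i=-120.0, E_j=-150.0, T_i=1.0, T_j=2.0))
    ```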

  13. A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.

    PubMed

    Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang

    2018-01-01

    The strategy of Divide-and-Conquer (D&C) is one of the frequently used programming patterns for designing efficient algorithms in computer science, and it has been parallelized on both shared-memory and distributed-memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the famous D&C algorithm QuickHull, to serve as a sample and guide for parallelizing D&C algorithms on the GPU. The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers develop their own efficient GPU implementations with less effort.
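
    For reference, QuickHull's divide-and-conquer structure, the split-recurse pattern that the GPU paradigm parallelizes across segments, can be sketched on the CPU in a few lines (2-D for brevity):

    ```python
    # CPU sketch of QuickHull's divide-and-conquer structure (2-D for
    # brevity); the GPU version parallelizes this split-recurse pattern.
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def quickhull_side(pts, a, b):
        """Hull points strictly left of directed segment a->b (divide step)."""
        left = [p for p in pts if cross(a, b, p) > 0]
        if not left:
            return []
        far = max(left, key=lambda p: cross(a, b, p))   # farthest point = pivot
        # conquer: recurse on the two sub-segments outside triangle (a, far, b)
        return quickhull_side(left, a, far) + [far] + quickhull_side(left, far, b)

    def quickhull(pts):
        a, b = min(pts), max(pts)          # extreme points split the set
        return [a] + quickhull_side(pts, a, b) + [b] + quickhull_side(pts, b, a)

    print(quickhull([(0, 0), (2, 0), (1, 1), (1, 3), (0, 2), (2, 2)]))
    # -> [(0, 0), (0, 2), (1, 3), (2, 2), (2, 0)]
    ```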

  14. A method for cone fitting based on certain sampling strategy in CMM metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Guo, Chaopeng

    2018-04-01

    A method for cone fitting in engineering is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculations of the initial geometric parameters are imprecise, causing poor accuracy in surface fitting. A geometric distance function of the cone is constructed first; then a certain sampling strategy is defined to calculate the initial geometric parameters, after which the nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method can obtain the initial geometric parameters simply and efficiently, fit the surface precisely, and provide a new, accurate approach to cone fitting in coordinate measurement.
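
    A hedged sketch of the fitting step: minimize a geometric distance residual over apex, axis direction, and half-angle with a nonlinear least-squares solver. The parametrization, synthetic data, and initial guess below are illustrative choices, not the paper's:

    ```python
    # Hedged sketch of cone fitting by nonlinear least squares; the
    # parametrization and initial guess are illustrative choices.
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, pts):
        ax, ay, az, theta_ax, phi_ax, half = params
        apex = np.array([ax, ay, az])
        axis = np.array([np.sin(theta_ax) * np.cos(phi_ax),
                         np.sin(theta_ax) * np.sin(phi_ax),
                         np.cos(theta_ax)])            # unit axis from two angles
        v = pts - apex
        along = v @ axis                               # height along the axis
        radial = np.linalg.norm(v - np.outer(along, axis), axis=1)
        # signed perpendicular distance from each point to the cone surface
        return radial * np.cos(half) - along * np.sin(half)

    # synthetic CMM-style points on a cone: apex at origin, axis +z, 20 deg
    rng = np.random.default_rng(3)
    h = rng.uniform(1, 5, 200)
    ang = rng.uniform(0, 2 * np.pi, 200)
    r = h * np.tan(np.radians(20))
    pts = np.column_stack([r * np.cos(ang), r * np.sin(ang), h])
    pts += rng.normal(scale=0.01, size=pts.shape)

    fit = least_squares(residuals, x0=[0.1, -0.1, 0.2, 0.1, 0.0, 0.3], args=(pts,))
    print("half-angle estimate (deg):", np.degrees(fit.x[5]))
    ```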

  15. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization in a conductive, non-ferromagnetic tube structure by exploiting the eddy current testing (ECT) signal. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy is adopted that combines feature extraction with a customized version of output space filling (OSF) adaptive sampling in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
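
    The offline/online split can be sketched with off-the-shelf components: PLS compresses the raw signal into a few latent features offline, and an SVR trained on those features performs the quasi real-time inversion online. Toy signals stand in for simulated ECT data, and the hyperparameters are assumptions:

    ```python
    # Sketch of the offline/online split: PLS for feature extraction, SVR
    # for prediction.  Toy signals stand in for simulated ECT data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    depth = rng.uniform(0.1, 2.0, 400)                  # groove depth (mm)
    t = np.linspace(0, 1, 256)
    signals = depth[:, None] * np.exp(-((t - 0.5) ** 2) / 0.01)  # fake scans
    signals += rng.normal(scale=0.01, size=signals.shape)

    # ---- offline phase: feature extraction + regressor training ----
    pls = PLSRegression(n_components=5).fit(signals[:300], depth[:300])
    feats = pls.transform(signals[:300])
    svr = SVR(C=10.0, epsilon=0.01).fit(feats, depth[:300])

    # ---- online phase: quasi real-time inversion of new signals ----
    pred = svr.predict(pls.transform(signals[300:]))
    print("mean abs error (mm):", np.abs(pred - depth[300:]).mean())
    ```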

  16. Detection of Toxoplasma gondii oocysts in water: proposition of a strategy and evaluation in Champagne-Ardenne Region, France.

    PubMed

    Aubert, D; Villena, I

    2009-03-01

    Water is a vehicle for disseminating human and veterinary toxoplasmosis due to oocyst contamination. Several outbreaks of toxoplasmosis throughout the world have been related to contaminated drinking water. We have developed a method for the detection of Toxoplasma gondii oocysts in water and we propose a strategy for the detection of multiple waterborne parasites, including Cryptosporidium spp. and Giardia. Water samples were filtered to recover Toxoplasma oocysts and, after the detection of Cryptosporidium oocysts and Giardia cysts by immunofluorescence, as recommended by French norm procedure NF T 90-455, the samples were purified on a sucrose density gradient. Detection of Toxoplasma was based on PCR amplification and mouse inoculation to determine the presence and infectivity of recovered oocysts. After experimental seeding assays, we determined that the PCR assay was more sensitive than the bioassay. This strategy was then applied to 482 environmental water samples collected since 2001. We detected Toxoplasma DNA in 37 environmental samples (7.7%), including public drinking water; however, none of them were positive by bioassay. This strategy efficiently detects Toxoplasma oocysts in water and may be suitable as a public health sentinel method. Alternative methods can be used in conjunction with this one to determine the infectivity of parasites that were detected by molecular methods.

  17. Dual Modifications Strategy to Quantify Neutral and Sialylated N-Glycans Simultaneously by MALDI-MS

    PubMed Central

    2015-01-01

    Differences in ionization efficiency among neutral and sialylated glycans prevent direct quantitative comparison by their respective mass spectrometric signals. To overcome this challenge, we developed an integrated chemical strategy, Dual Reactions for Analytical Glycomics (DRAG), to quantitatively compare neutral and sialylated glycans simultaneously by MALDI-MS. Initially, two glycan samples to be compared undergo reductive amination with 2-aminobenzoic acid and 2-[13C6]-aminobenzoic acid, respectively. The different isotope-incorporated glycans are then combined and subjected to methylamidation of the sialic acid residues in one mixture, homogenizing the ionization responses for all neutral and sialylated glycans. By this approach, the expression change of relevant glycans between two samples is proportional to the ratio of doublet signals with a static 6 Da mass difference in MALDI-MS, and the change in relative abundance of any glycan within samples can also be determined. The strategy was chemically validated using well-characterized N-glycans from bovine fetuin and IgG from human serum. By comparing the N-glycomes from a first morning (AM) versus an afternoon (PM) urine sample obtained from a single donor, we further demonstrated the ability of the DRAG strategy to measure subtle quantitative differences in numerous urinary N-glycans. PMID:24766348

  18. Dual modifications strategy to quantify neutral and sialylated N-glycans simultaneously by MALDI-MS.

    PubMed

    Zhou, Hui; Warren, Peter G; Froehlich, John W; Lee, Richard S

    2014-07-01

    Differences in ionization efficiency among neutral and sialylated glycans prevent direct quantitative comparison by their respective mass spectrometric signals. To overcome this challenge, we developed an integrated chemical strategy, Dual Reactions for Analytical Glycomics (DRAG), to quantitatively compare neutral and sialylated glycans simultaneously by MALDI-MS. Initially, two glycan samples to be compared undergo reductive amination with 2-aminobenzoic acid and 2-[13C6]-aminobenzoic acid, respectively. The different isotope-incorporated glycans are then combined and subjected to methylamidation of the sialic acid residues in one mixture, homogenizing the ionization responses for all neutral and sialylated glycans. By this approach, the expression change of relevant glycans between two samples is proportional to the ratio of doublet signals with a static 6 Da mass difference in MALDI-MS, and the change in relative abundance of any glycan within samples can also be determined. The strategy was chemically validated using well-characterized N-glycans from bovine fetuin and IgG from human serum. By comparing the N-glycomes from a first morning (AM) versus an afternoon (PM) urine sample obtained from a single donor, we further demonstrated the ability of the DRAG strategy to measure subtle quantitative differences in numerous urinary N-glycans.
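
    The quantification arithmetic is the intensity ratio of doublet peaks 6 Da apart (light versus heavy tag); a sketch with a hypothetical peak list:

    ```python
    # Relative abundance read off as the ratio of doublet peaks 6 Da apart
    # (light 12C6-tag vs heavy 13C6-tag).  The peak list is hypothetical.
    def doublet_ratios(peaks, delta=6.0, tol=0.02):
        """peaks: {m/z: intensity}. Returns light/heavy ratio per doublet."""
        ratios = {}
        for mz, inten in peaks.items():
            for mz2, inten2 in peaks.items():
                if abs((mz2 - mz) - delta) <= tol:
                    ratios[round(mz, 2)] = inten / inten2
        return ratios

    peaks = {1647.59: 8.1e4, 1653.59: 4.0e4,     # one glycan, sample A vs B
             2069.75: 2.2e4, 2075.76: 2.3e4}     # another, nearly unchanged
    print(doublet_ratios(peaks))   # {1647.59: ~2.0, 2069.75: ~0.96}
    ```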

  19. Compared analysis of different sampling strategies for the monitoring of pesticide contamination in streams

    NASA Astrophysics Data System (ADS)

    Liger, Lucie; Margoum, Christelle; Guillemain, Céline; Carluer, Nadia

    2014-05-01

    The implementation of the WFD (Water Framework Directive) requires European Union member states to achieve good qualitative and quantitative status of all water bodies by 2015. The monitoring of organic micropollutants such as pesticides is an essential step to assess the chemical and biological state of streams, to understand the reasons for degradation and to implement sound mitigation solutions in the watershed. In particular, the water sampling, which can be performed according to several strategies, has to be closely adapted to the experimental goals. In this study, we present and compare three different active sampling strategies: grab sampling, and time-related and flow-dependent automatic sampling. In addition, the last two can be fractionated (i.e., several samples collected, each one contained in a single bottle) or averaged (i.e., several samples mixed in the same bottle). Time-related samples allow the assessment of average exposure concentrations of organic micropollutants, whereas flow-dependent samples lead to average flux concentrations. The three sampling strategies were applied and compared during the monitoring of pesticide contamination of a river located in a French vineyard watershed (the Morcille River, 60 km north of Lyon, in the Beaujolais region). Data were collected between 2007 and 2011, during different seasons and for a range of hydrological events. The Morcille watershed is characterized by contrasted hydrological events with a very short response time due to its small size (5 km²), steep slopes (20 to 28%) and highly permeable sandy soils. These features make it particularly difficult to monitor water quality, due to fast variations of pesticide concentrations depending on rain events. This comparative study is performed in two steps. First, we compare the timestamps of each sample composing the weekly-averaged samples, and those of the grab samples, with hydrological data. This allows us to evaluate the efficiency of these two sampling strategies in integrating flow variations, and therefore pesticide concentration variations, during the sampling campaign. In a second step, we use the fractionated sample data collected during flood events to calculate the concentrations of virtual averaged samples of the events. Different time or flow steps were used for the calculation, to assess their impact on the averaged pesticide concentrations or fluxes. These analyses highlight the benefits and drawbacks of each sampling strategy. They show that the sampling strategy should be carefully chosen and designed depending on the final aim of the study and on the watershed characteristics (in particular its hydrological dynamics). This study may help to design future water quality monitoring. Key words: sampling strategies, surface water, concentration, flux, pesticides.
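
    The two averages being compared, time-averaged (exposure) and flow-weighted (flux) concentration, reduce to simple weighted means over the fractionated samples; a sketch with hypothetical numbers:

    ```python
    # Time-averaged (exposure) vs flow-weighted (flux) mean concentration,
    # computed from hypothetical fractionated-sample data.
    def time_weighted_mean(conc, dt):
        return sum(c * t for c, t in zip(conc, dt)) / sum(dt)

    def flow_weighted_mean(conc, flow, dt):
        volume = sum(q * t for q, t in zip(flow, dt))
        flux = sum(c * q * t for c, q, t in zip(conc, flow, dt))
        return flux / volume

    conc = [0.1, 2.4, 0.8, 0.2]      # ug/L per fraction (peak during the flood)
    flow = [20, 180, 90, 30]         # L/s per fraction
    dt = [6, 6, 6, 6]                # hours per fraction

    print("exposure mean:", time_weighted_mean(conc, dt))         # 0.875 ug/L
    print("flux mean:    ", flow_weighted_mean(conc, flow, dt))   # 1.6 ug/L
    ```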

  20. The correspondence of surface climate parameters with satellite and terrain data

    NASA Technical Reports Server (NTRS)

    Dozier, Jeff; Davis, Frank

    1987-01-01

    One of the goals of the research was to develop a ground sampling strategy for calibrating remotely sensed measurements of surface climate parameters. The initial sampling strategy involved the stratification of the terrain based on important ancillary surface variables such as slope, exposure, insolation, geology, drainage, fire history, etc. For a spatially heterogeneous population, sampling error is reduced and efficiency increased by stratification of the landscape into more homogeneous sub-areas and by employing periodic random spacing of samples. These concepts were applied in the initial stratification of the study site for the purpose of locating and allocating instrumentation.

  1. Sample size determination for bibliographic retrieval studies

    PubMed Central

    Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian

    2008-01-01

    Background Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
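
    The sample-size calculation behind W follows from the normal-approximation confidence interval for a proportion: a 95% CI for a sensitivity p has total width W = 2 x 1.96 x sqrt(p(1-p)/n), so n = 4 x 1.96^2 x p(1-p) / W^2. A sketch (the sensitivities are hypothetical values, not the study's):

    ```python
    # Number of high-quality articles needed for a 95% CI of total width W
    # around a sensitivity p: n = 4 * 1.96^2 * p(1-p) / W^2.
    import math

    def articles_needed(p, width):
        return math.ceil(4 * 1.96 ** 2 * p * (1 - p) / width ** 2)

    for p in (0.80, 0.90, 0.95):
        print(f"sensitivity {p:.2f}: n = {articles_needed(p, width=0.10)}")
    # sensitivity 0.80: n = 246 ; 0.90: n = 139 ; 0.95: n = 73
    ```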

  2. The Sampling Design of the China Family Panel Studies (CFPS)

    PubMed Central

    Xie, Yu; Lu, Ping

    2018-01-01

    The China Family Panel Studies (CFPS) is an on-going, nearly nationwide, comprehensive, longitudinal social survey that is intended to serve research needs on a large variety of social phenomena in contemporary China. In this paper, we describe the sampling design of the CFPS sample for its 2010 baseline survey and methods for constructing weights to adjust for sampling design and survey nonresponses. Specifically, the CFPS used a multi-stage probability strategy to reduce operation costs and implicit stratification to increase efficiency. Respondents were oversampled in five provinces or administrative equivalents for regional comparisons. We provide operation details for both sampling and weights construction. PMID:29854418

  3. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance sampling technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly, yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free energy, and the discrete-valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
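
    A hedged sketch of the partial-biasing idea on a toy double well: per-set log-weights are learnt on the fly, the set currently visited is penalized, and the dynamics is biased by only a fraction of the learnt weights. The step sizes, update schedule, and bin definition are illustrative, not the paper's:

    ```python
    # Hedged sketch of partial biasing on a toy double well; schedules and
    # bin definitions are illustrative, not the paper's.
    import math, random

    V = lambda x: 5.0 * (x * x - 1.0) ** 2          # double-well potential
    which = lambda x: min(max(int((x + 2.0) / 0.4), 0), 9)   # 10 bins on [-2, 2]

    logw = [0.0] * 10     # learnt log-weights of the bins (free-energy-like)
    x, beta, frac = -1.0, 3.0, 0.5
    for t in range(1, 200001):
        y = x + random.uniform(-0.3, 0.3)
        if -2.0 < y < 2.0:
            # Metropolis step on the biased target exp(-beta*V - frac*logw[bin])
            dE = beta * (V(y) - V(x)) + frac * (logw[which(y)] - logw[which(x)])
            if dE <= 0 or random.random() < math.exp(-dE):
                x = y
        logw[which(x)] += 1.0 / t     # penalize the currently visited bin
    print(["%.1f" % w for w in logw])
    ```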

  4. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from low-likelihood areas to high-likelihood areas gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
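
    A minimal nested-sampling loop for the marginal likelihood (toy 2-D Gaussian likelihood under a uniform prior) is sketched below. The replacement step here uses naive prior rejection, which is exactly the local sampling procedure the paper proposes to replace with a more efficient sampler such as DREAMzs:

    ```python
    # Minimal nested-sampling loop for the evidence Z = integral of L dprior
    # (toy 2-D Gaussian likelihood, uniform prior on [-5, 5]^2).
    import math, random

    def logaddexp(a, b):
        m = max(a, b)
        return m + math.log1p(math.exp(min(a, b) - m))

    def logL(th):                                     # toy Gaussian likelihood
        return -0.5 * (th[0]**2 + th[1]**2) - math.log(2 * math.pi)

    N, iters = 200, 1200                              # live points, iterations
    live = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(N)]
    logZ, logX = float("-inf"), 0.0                   # evidence, prior volume
    for i in range(iters):
        worst = min(range(N), key=lambda k: logL(live[k]))
        Lw = logL(live[worst])
        logX_new = -(i + 1) / N                       # expected log shrinkage
        logZ = logaddexp(logZ, Lw + math.log(math.exp(logX) - math.exp(logX_new)))
        logX = logX_new
        while True:                                   # replacement with L > Lw;
            cand = [random.uniform(-5, 5) for _ in range(2)]  # naive rejection
            if logL(cand) > Lw:                       # here, MCMC in practice
                live[worst] = cand
                break
    for p in live:                                    # remaining live points
        logZ = logaddexp(logZ, logL(p) + logX - math.log(N))
    print(f"log-evidence: {logZ:.2f} (analytic: {math.log(1/100):.2f})")
    ```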

  5. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    PubMed Central

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.

    2014-01-01

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844

  6. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    DOE PAGES

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...

    2014-11-18

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  7. Database extraction strategies for low-template evidence.

    PubMed

    Bleka, Øyvind; Dørum, Guro; Haned, Hinda; Gill, Peter

    2014-03-01

    Often in forensic cases, the profile of at least one of the contributors to a DNA evidence sample is unknown and a database search is needed to discover possible perpetrators. In this article we consider two types of search strategies to extract suspects from a database using methods based on probability arguments. The performance of the proposed match scores is demonstrated by carrying out a study of each match score relative to the level of allele drop-out in the crime sample, simulating low-template DNA. The efficiency was measured by random man simulation and we compared the performance using the SGM Plus kit and the ESX 17 kit for the Norwegian population, demonstrating that the latter has greatly enhanced power to discover perpetrators of crime in large national DNA databases. The code for the database extraction strategies will be prepared for release in the R-package forensim. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Nanosphere-based one-step strategy for efficient and nondestructive detection of circulating tumor cells.

    PubMed

    Wu, Ling-Ling; Wen, Cong-Ying; Hu, Jiao; Tang, Man; Qi, Chu-Bo; Li, Na; Liu, Cui; Chen, Lan; Pang, Dai-Wen; Zhang, Zhi-Ling

    2017-08-15

    Detecting viable circulating tumor cells (CTCs) without disruption to their functions for in vitro culture and functional study could unravel the biology of metastasis and promote the development of personalized anti-tumor therapies. However, existing CTC detection approaches commonly include CTC isolation and subsequent destructive identification, which damages CTC viability and functions and generates substantial CTC loss. To address the challenge of efficiently detecting viable CTCs for functional study, we develop a nanosphere-based cell-friendly one-step strategy. Immunonanospheres with prominent magnetic/fluorescence properties and extraordinary stability in complex matrices enable simultaneous efficient magnetic capture and specific fluorescence labeling of tumor cells directly in whole blood. The collected cells with fluorescent tags can be reliably identified, free of the tedious and destructive manipulations of conventional CTC identification. Hence, as few as 5 tumor cells in ca. 1 mL of whole blood can be efficiently detected with only a 20 min incubation, and this strategy also shows good reproducibility, with a relative standard deviation (RSD) of 8.7%. Moreover, due to the time-saving and gentle processing and the minimal disruption of immunonanospheres to cells, 93.8±0.1% of detected tumor cells retain cell viability and proliferation ability with negligible changes in cell functions, enabling functional studies of cell migration, invasion and glucose uptake. Additionally, this strategy achieved successful CTC detection in 10/10 peripheral blood samples from cancer patients. Therefore, this nanosphere-based cell-friendly one-step strategy enables viable CTC detection and further functional analyses, which will help to unravel tumor metastasis and guide treatment selection. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
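
    As a concrete illustration of this cost-benefit idea, the following minimal Python sketch selects stratified calibration subsets of increasing size for a toy 1-D regression (a stand-in for the report's PNN; the data, the ten-strata scheme and the polynomial model are all invented for illustration) and shows how the goodness-of-fit benefit flattens as more computer time is spent:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 5000)                  # the full data set
        y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

        def stratified_indices(x, n, n_strata=10, rng=rng):
            """Pick roughly n points spread evenly across strata of the x-range."""
            edges = np.linspace(x.min(), x.max(), n_strata + 1)
            per = max(1, n // n_strata)
            idx = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                pool = np.flatnonzero((x >= lo) & (x <= hi))
                idx.extend(rng.choice(pool, min(per, pool.size), replace=False))
            return np.array(idx)

        for n in (20, 80, 320, 1280):                 # increasing "computer cost"
            idx = stratified_indices(x, n)
            coef = np.polyfit(x[idx], y[idx], deg=9)  # stand-in for PNN calibration
            rmse = np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))
            print(f"n={n:5d}  RMSE={rmse:.4f}")       # benefit flattens as n grows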

  10. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  11. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to detect nearly all (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans, while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
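
    For the randomly dispersed case, the detection sample size follows from a textbook binomial argument: a sample of n individuals misses every resistant one with probability (1-f)^n at resistance frequency f, so the smallest n giving detection probability P is n = ceil(ln(1-P)/ln(1-f)). A minimal sketch (this baseline ignores patchiness; the study's own figures were obtained by simulation and are somewhat larger at the higher frequencies):

        import math

        def n_detect(f, P=0.95):
            """Smallest n with P(at least one resistant individual in sample) >= P."""
            return math.ceil(math.log(1 - P) / math.log(1 - f))

        for f in (0.01, 0.10, 0.20):
            print(f"frequency {f:.0%}: n >= {n_detect(f)}")
        # prints n >= 299, 29 and 14 for f = 1%, 10%, 20%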

  12. Time- and energy-efficient solution combustion synthesis of binary metal tungstate nanoparticles with enhanced photocatalytic activity.

    PubMed

    Thomas, Abegayl; Janáky, Csaba; Samu, Gergely F; Huda, Muhammad N; Sarker, Pranab; Liu, J Ping; van Nguyen, Vuong; Wang, Evelyn H; Schug, Kevin A; Rajeshwar, Krishnan

    2015-05-22

    In the search for stable and efficient photocatalysts beyond TiO2, the tungsten-based oxide semiconductors silver tungstate (Ag2WO4), copper tungstate (CuWO4), and zinc tungstate (ZnWO4) were prepared using solution combustion synthesis (SCS). The tungsten precursor's influence on the product was of particular relevance to this study, and the most significant effects are highlighted. Each sample's photocatalytic activity towards methyl orange degradation was studied and benchmarked against the respective commercial oxide sample obtained by solid-state ceramic synthesis. Based on the results herein, we conclude that SCS is a time- and energy-efficient method to synthesize crystalline binary tungstate nanomaterials even without additional excessive heat treatment. As many of these photocatalysts possess excellent photocatalytic activity, the discussed synthetic strategy may open sustainable materials chemistry avenues to solar energy conversion and environmental remediation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
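
    A minimal sketch of the bootstrap step described above, using invented per-history scores in place of brachytherapy tallies: resample the scores, recompute the statistic, and report the shortest interval containing 95% of the replicates:

        import numpy as np

        rng = np.random.default_rng(1)
        # toy per-history dose-difference scores (heavy-tailed on purpose)
        scores = rng.standard_t(df=3, size=2000) * 0.1 + 1.0

        def efficiency_gain(s):
            # stand-in statistic: inverse sample variance, not the paper's definition
            return 1.0 / np.var(s, ddof=1)

        boot = np.array([efficiency_gain(rng.choice(scores, scores.size))
                         for _ in range(5000)])
        boot.sort()
        k = int(0.95 * boot.size)
        widths = boot[k:] - boot[:boot.size - k]   # all 95%-coverage windows
        i = int(np.argmin(widths))                 # the shortest one
        print(f"shortest 95% CI: [{boot[i]:.3f}, {boot[i + k]:.3f}]")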

  14. EcoTurf - a case study: genetic variation and agronomic potential of bermudagrass (Cynodon spp.) germplasm collected from Australian biodiversity

    USDA-ARS?s Scientific Manuscript database

    Australian Cynodon germplasm has not been comprehensively exploited for bermudagrass improvement. In this paper we describe ‘EcoTurf’, a four-year (2007-2011) project to develop water and nutrient use efficient bermudagrasses from Australian biodiversity. We describe the sampling strategies of A...

  15. Efficiency of two-way weirs and prepositioned electrofishing for sampling potamodromous fish migrations

    USGS Publications Warehouse

    Favrot, Scott D.; Kwak, Thomas J.

    2016-01-01

    Potamodromy (i.e., migration entirely in freshwater) is a common life history strategy of North American lotic fishes, and efficient sampling methods for potamodromous fishes are needed to formulate conservation and management decisions. Many potamodromous fishes inhabit medium-sized rivers and are mobile during spawning migrations, which complicates sampling with conventional gears (e.g., nets and electrofishing). We compared the efficiency of a passive migration technique (resistance board weirs) and an active technique (prepositioned areal electrofishers [PAEs]) for sampling migrating potamodromous fishes in Valley River, a southern Appalachian Mountain river, from March through July 2006 and 2007. A total of 35 fish species from 10 families were collected, 32 species by PAE and 19 species by weir. Species richness and diversity were higher for PAE catch, and species dominance (i.e., proportion of assemblage composed of the three most abundant species) was higher for weir catch. Prepositioned areal electrofisher catch by number was considerably higher than weir catch, but biomass was lower for PAE catch. Weir catch decreased following the spawning migration, while PAEs continued to collect fish. Sampling bias associated with water velocity was detected for PAEs, but not weirs, and neither gear demonstrated depth bias in wadeable reaches. Mean fish mortality from PAEs was five times greater than that from weirs. Catch efficiency and composition comparisons indicated that weirs were effective at documenting migration chronology, sampling nocturnal migration, and yielding samples unbiased by water velocity or habitat, with low mortality. Prepositioned areal electrofishers are an appropriate sampling technique for seasonal fish occupancy objectives, while weirs are more suitable for quantitatively describing spawning migrations. Our comparative results may guide fisheries scientists in selecting an appropriate sampling gear and regime for research, monitoring, conservation, and management of potamodromous fishes.

  16. Optimization of a metatranscriptomic approach to study the lignocellulolytic potential of the higher termite gut microbiome.

    PubMed

    Marynowska, Martyna; Goux, Xavier; Sillam-Dussès, David; Rouland-Lefèvre, Corinne; Roisin, Yves; Delfosse, Philippe; Calusinska, Magdalena

    2017-09-01

    Thanks to specific adaptations developed over millions of years, the efficiency of lignin, cellulose and hemicellulose decomposition of the higher termite symbiotic system exceeds that of many other lignocellulose-utilizing environments. In particular, examination of its symbiotic microbes should reveal interesting carbohydrate-active enzymes, which are of primary interest for industry. Previous metatranscriptomic reports (high-throughput mRNA sequencing) highlight the high representation and overexpression of cellulose- and hemicellulose-degrading genes in termite hindgut digestomes, indicating the potential of this technology in the search for new enzymes. Nevertheless, several factors associated with the material sampling and library preparation steps make metatranscriptomic studies of termite gut prokaryotic symbionts challenging. In this study, we first examined the influence of the sampling strategy, including the whole termite gut and luminal fluid, on the diversity and the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. Secondly, we evaluated different commercially available kits, combined in two library preparative pipelines, for the best bacterial mRNA enrichment strategy. We showed that the sampling strategy did not significantly impact the generated results, both in terms of the representation of the microbes and their transcriptomic profiles. Nevertheless, collecting luminal fluid reduces the co-amplification of unwanted RNA species of host origin. Furthermore, for the four studied higher termite species, the library preparative pipeline employing the Ribo-Zero Gold rRNA Removal Kit "Epidemiology" in combination with the Poly(A) Purist MAG kit resulted in more efficient rRNA and poly-A mRNA depletion (up to 98.44% rRNA removed) than the pipeline utilizing the MICROBExpress and MICROBEnrich kits. High correlation of both the Ribo-Zero- and MICROBExpress-depleted gene expression profiles with total non-depleted RNA-seq data was shown for all studied samples, indicating no systematic skewing by either pipeline. We have extensively evaluated the impact of the sampling strategy and library preparation steps on the metatranscriptomic profiles of the higher termite gut symbiotic bacteria. The presented methodological approach has great potential to enhance metatranscriptomic studies of the higher termite intestinal flora and to unravel novel carbohydrate-active enzymes.

  17. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    NASA Technical Reports Server (NTRS)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs, such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE), must today face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
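
    The redundancy test described above can be made concrete in a few lines of Python: thin a synthetic, purely illustrative deep-ocean TCO2 profile, reconstruct it from the remaining bottles by linear interpolation, and check the error against the +/- 4 micromol/kg experimental accuracy:

        import numpy as np

        depth = np.arange(1000, 5001, 100.0)               # sampled depths (m)
        tco2 = 2300 - 80 * np.exp(-(depth - 1000) / 1500)  # synthetic TCO2 (umol/kg)

        kept = slice(None, None, 2)                        # keep every 2nd bottle
        recon = np.interp(depth, depth[kept], tco2[kept])
        err = np.max(np.abs(recon - tco2))
        print(f"max interpolation error: {err:.2f} umol/kg "
              f"({'redundant' if err < 4 else 'needed'} sampling)")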

  18. Variable memory strategy use in children's adaptive intratask learning behavior: developmental changes and working memory influences in free recall.

    PubMed

    Lehmann, Martin; Hasselhorn, Marcus

    2007-01-01

    Variability in strategy use within single trials in free recall was analyzed longitudinally from second to fourth grades (ages 8-10 years). To control for practice effects, another sample of fourth graders was included (age 10 years). Video analyses revealed that children employed different strategies when preparing for free recall. A gradual shift from labeling to cumulative rehearsal was present both with increasing age and across different list positions. Whereas cumulative rehearsal was frequent at early list positions, labeling was dominant at later list positions. Working memory capacity predicted the extent of cumulative rehearsal usage, which became more efficient with increasing age. Results are discussed in the context of the adaptive strategy choice model.

  19. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') while: (1) each sub-set is Latin hypercube and achieves maximum stratification in any one dimensional projection; (2) the progressive addition of sub-sets remains Latin hypercube; and thus (3) the entire sample set is Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
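
    A hand-rolled sketch may help make the slicing idea concrete. The function below generates one ordinary Latin hypercube; naively concatenating several independent hypercubes, as done here, does not preserve the Latin property of the union, and guaranteeing that property progressively is precisely the contribution of PLHS, which this sketch does not attempt to reproduce:

        import numpy as np

        def lhs(n, dim, rng):
            """One n-point Latin hypercube in [0,1)^dim."""
            u = rng.random((n, dim))
            # one randomly permuted stratum index per dimension
            cells = np.column_stack([rng.permutation(n) for _ in range(dim)])
            return (cells + u) / n   # one point per stratum in each 1-D projection

        rng = np.random.default_rng(42)
        slices = [lhs(25, 2, rng) for _ in range(4)]   # four slices of 25 points
        sample = np.vstack(slices)                     # 100 points in total
        print(sample.shape, sample.min(), sample.max())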

  20. To mix or not to mix venous blood samples collected in vacuum tubes?

    PubMed

    Parenmark, Anna; Landberg, Eva

    2011-09-08

    There are recommendations to mix venous blood samples by inverting the tubes immediately after venipuncture. Though mixing allows efficient anticoagulation in plasma tubes and fast initiation of coagulation in serum tubes, the effect on laboratory analyses and the risk of haemolysis has not been thoroughly evaluated. Venous blood samples were collected by venipuncture in vacuum tubes from 50 patients (10 or 20 patients in each group). Four types of tubes and 18 parameters used in routine clinical chemistry were evaluated. For each patient and tube, three types of mixing strategies were used: instant mixing, no mixing and 5 min of rest followed by mixing. Most analyses did not differ significantly in samples subjected to different mixing strategies. Plasma lactate dehydrogenase and haemolysis index showed a small but significant increase in samples subjected to instant mixing compared to samples without mixing. However, in one out of twenty non-mixed samples, activated partial thromboplastin time was seriously affected. These results indicate that mixing blood samples after venipuncture is not mandatory for all types of tubes. Instant mixing may introduce interference for those analyses susceptible to haemolysis. However, tubes with liquid-based citrate buffer for coagulation testing should be mixed to avoid clotting.

  1. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    PubMed

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  2. The Heterogeneous Investment Horizon and Dynamic Strategies for Asset Allocation

    NASA Astrophysics Data System (ADS)

    Xiong, Heping; Xu, Yiheng; Xiao, Yi

    This paper discusses the influence of the portfolio rebalancing strategy on the efficiency of long-term investment portfolios under the assumption of independent stationary distribution of returns. By comparing the efficient sets of the stochastic rebalancing strategy, the simple rebalancing strategy and the buy-and-hold strategy with specific data examples, we find that the stochastic rebalancing strategy is optimal, while the simple rebalancing strategy is of the lowest efficiency. In addition, the simple rebalancing strategy lowers the efficiency of the portfolio instead of improving it.
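
    The comparison can be illustrated with a toy Monte Carlo experiment under i.i.d. returns (all return parameters below are invented, and the paper's stochastic rebalancing strategy is not modeled, only buy-and-hold versus simple periodic rebalancing):

        import numpy as np

        rng = np.random.default_rng(7)
        T, n_paths = 120, 10000                   # months, simulated paths
        r = rng.normal([0.008, 0.003], [0.05, 0.01], size=(n_paths, T, 2))

        w = np.array([0.6, 0.4])                  # target portfolio weights

        # buy-and-hold: let the holdings drift with returns
        bh = (w * np.prod(1 + r, axis=1)).sum(axis=1)

        # simple rebalancing: restore the target weights every period
        reb = np.prod(1 + (r * w).sum(axis=2), axis=1)

        for name, wealth in (("buy-and-hold", bh), ("rebalanced", reb)):
            print(f"{name:12s} mean={wealth.mean():.3f}  std={wealth.std():.3f}")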

  3. A comparison study on a sulfonated graphene-polyaniline nanocomposite coated fiber for analysis of nicotine in solid samples through the traditional and vacuum-assisted HS-SPME.

    PubMed

    Ghiasvand, Alireza; Koonani, Samira; Yazdankhah, Fatemeh; Farhadi, Saeid

    2018-02-05

    A simple, rapid, and reliable headspace solid-phase microextraction (HS-SPME) procedure, reinforced by applying vacuum in the extraction vial, was developed. It was applied to the extraction of nicotine from solid samples prior to determination by gas chromatography-flame ionization detection (GC-FID). First, the surface of a narrow stainless steel wire was made porous and adhesive by platinization to obtain a durable, resistant fiber with a higher surface area. Then, a thin film of sulfonated graphene/polyaniline (Sulf-G/PANI) nanocomposite was synthesized and simultaneously coated on the platinized fiber using the electrophoretic deposition (EPD) method. It was demonstrated that the extraction efficiency increased remarkably under the reduced-pressure condition in the extraction vial. To evaluate the conventional HS-SPME and vacuum-assisted HS-SPME (VA-HS-SPME) platforms, all experimental parameters affecting the extraction efficiency, including desorption time and temperature, extraction time and temperature, and the moisture content of the sample matrix, were optimized. The highest extraction efficiency was obtained at 60 °C and 10 min (extraction temperature and time) and 280 °C and 2 min (desorption conditions) for the VA-HS-SPME strategy, while for conventional HS-SPME the extraction and desorption conditions were found to be 100 °C for 30 min and 280 °C for 2 min, respectively. The Sulf-G/PANI-coated fiber showed high thermal stability, good chemical/mechanical resistance, and a long lifetime. For analysis of nicotine in solid samples using VA-HS-SPME-GC-FID, the linear dynamic range (LDR) was 0.01-30 μg g⁻¹ (R² = 0.996), the relative standard deviation (RSD%, n = 6) for analyses of 1 μg g⁻¹ nicotine was 3.4%, and the limit of detection (LOD) was 0.002 μg g⁻¹. The VA-HS-SPME-GC-FID strategy was successfully applied to the quantitation of nicotine in real hair and tobacco samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Effects of Training in Planfulness on the Performance of Eighth Graders.

    ERIC Educational Resources Information Center

    O'Loughlin, Michael; And Others

    Recent research in metacognition suggests that efficient studying reflects the ability to employ deliberately planful or self-regulative study strategies. An instructional program based on this approach was developed to teach eighth graders how to study text. The sample of 50 eighth graders was divided into experimental (N=24) and control (N=26)…

  5. An Internal Standard for Assessing Phosphopeptide Recovery from Metal Ion/Oxide Enrichment Strategies

    NASA Astrophysics Data System (ADS)

    Paulo, Joao A.; Navarrete-Perea, Jose; Erickson, Alison R.; Knott, Jeffrey; Gygi, Steven P.

    2018-04-01

    Phosphorylation-mediated signaling pathways have major implications in cellular regulation and disease. However, proteins with roles in these pathways are frequently less abundant and phosphorylation is often sub-stoichiometric. As such, the efficient enrichment, and subsequent recovery, of phosphorylated peptides is vital. Mass spectrometry-based proteomics is a well-established approach for quantifying thousands of phosphorylation events in a single experiment. We designed a peptide internal standard-based assay directed toward sample preparation strategies for mass spectrometry analysis to better understand phosphopeptide recovery from enrichment strategies. We coupled mass-differential tandem mass tag (mTMT) reagents (specifically, TMTzero and TMTsuper-heavy), nine mass spectrometry-amenable phosphopeptides (phos9), and peak area measurements from extracted ion chromatograms to determine phosphopeptide recovery. We showcase this mTMT/phos9 recovery assay by evaluating three phosphopeptide enrichment workflows. Our assay provides data on the recovery of phosphopeptides, which complement other metrics, namely the number of identified phosphopeptides and enrichment specificity. Our mTMT/phos9 assay is applicable to any enrichment protocol in a typical experimental workflow, irrespective of sample origin or labeling strategy.
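
    The recovery arithmetic implied by this design can be sketched in a few lines: standards spiked before enrichment suffer the losses, standards spiked after do not, and per-peptide recovery is the ratio of their extracted-ion peak areas. Peptide names and areas below are invented placeholders, not the actual phos9 panel:

        # spiked_before went through enrichment; spiked_after is the reference
        spiked_before = {"pepA": 5.2e6, "pepB": 2.9e6, "pepC": 3.3e6}
        spiked_after  = {"pepA": 9.8e6, "pepB": 4.1e6, "pepC": 7.7e6}

        recovery = {p: 100 * spiked_before[p] / spiked_after[p] for p in spiked_before}
        for pep, r in recovery.items():
            print(f"{pep}: {r:.1f}% recovered")
        print(f"mean recovery: {sum(recovery.values()) / len(recovery):.1f}%")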

  6. Improving automatic peptide mass fingerprint protein identification by combining many peak sets.

    PubMed

    Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim

    2004-08-05

    An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry-standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set contains spectra ranging from strong to weak, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry-standard method and a human operator, and equal in performance to these on strong and medium-strong spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
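
    A minimal sketch of the multiple-peak-set idea (not the authors' algorithm): pick peaks from the same synthetic spectrum at several signal-to-noise thresholds with scipy.signal.find_peaks, then keep the m/z values that survive at least two of the resulting peak sets:

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(3)
        mz = np.linspace(800, 3000, 4000)
        spec = rng.random(mz.size) * 2                     # noise floor
        for c, h in [(1000, 40), (1500, 15), (2200, 6)]:   # three "true" peptide peaks
            spec += h * np.exp(-0.5 * ((mz - c) / 0.8) ** 2)

        noise = np.median(spec)
        votes = {}
        for snr in (2, 4, 8, 16):                          # one peak set per threshold
            idx, _ = find_peaks(spec, height=snr * noise)
            for i in idx:
                key = round(mz[i], 1)
                votes[key] = votes.get(key, 0) + 1

        # keep peaks confirmed by at least two of the peak sets
        reliable = sorted(m for m, v in votes.items() if v >= 2)
        print("peaks kept for database search:", reliable)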

  7. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
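
    The simulation idea can be sketched for the one-compartment intravenous bolus model: each simulated animal contributes a single destructive sample at one of the design time points, and population clearance and volume are recovered from a pooled log-linear fit. All parameter values and the design below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(11)
        CL, V, dose = 1.0, 10.0, 100.0            # "true" population values
        times = np.array([0.5, 4.0, 12.0])        # a three-time-point design
        n_per = 8                                 # animals per time point

        t = np.repeat(times, n_per)
        cl_i = CL * np.exp(0.2 * rng.standard_normal(t.size))  # inter-animal CV ~20%
        conc = (dose / V) * np.exp(-cl_i / V * t)               # C(t) = (D/V)e^(-CL/V t)
        conc *= np.exp(0.1 * rng.standard_normal(t.size))       # residual error

        slope, intercept = np.polyfit(t, np.log(conc), 1)       # pooled log-linear fit
        V_hat = dose / np.exp(intercept)
        CL_hat = -slope * V_hat
        print(f"CL_hat={CL_hat:.2f} (true {CL}), V_hat={V_hat:.2f} (true {V})")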

  8. [Application of marketing strategies for the management of public hospitals from the viewpoint of the staff members].

    PubMed

    Riveros S, Jorge; Berné M, Carmen

    2006-03-01

    The implementation of the marketing strategies in public hospitals provides management advantages and improves the relationship between customers and staff. To analyze the application of marketing strategies in a public hospital, from the perspective of the staff. A structured survey that asked about perceptions in 50 items about communication between personnel and customers/users, customer satisfaction, participation in the development of new policies and incentives for efficiency was applied to a stratified sample of the staff. Factorial and regression analyses were performed to define the impact of marketing strategies on the degree of preoccupation and orientation of the organization towards the satisfaction of customer needs. The survey was applied to 74 males and 122 females. The survey showed that the orientation of the hospital towards the satisfaction of its beneficiaries basically depends on the generation of an organizational culture oriented towards them and the implementation of adequate policies in staff management and quality of service. These basic aspects can be accompanied with practices associated to the new marketing approaches such as a market orientation, customer orientation and relational marketing. All these factors presented positive and significant relations. New marketing strategies should be applied, to achieve an efficient and customer oriented hospital management.

  9. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  10. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data.

    PubMed

    Su, Xiaoquan; Xu, Jian; Ning, Kang

    2012-10-01

    Comparing different microbial communities (also referred to as 'metagenomic samples' here) on a large scale has long intrigued scientists: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar metagenomic sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; and on the other hand, methods to measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that could systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic data sets from the public domain and in-house facilities, and tested the Meta-Storms method on these datasets. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it could achieve similar accuracies compared with the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples.
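
    A highly simplified stand-in for the search component (illustrative only): samples stored as taxon-abundance vectors and ranked by plain Bray-Curtis similarity. Meta-Storms itself uses a quantitative phylogeny-based score and a hierarchical taxonomy index, which this sketch omits; all sample names and profiles are invented:

        def bray_curtis_sim(a, b):
            """Bray-Curtis similarity of two taxon-abundance dicts."""
            taxa = set(a) | set(b)
            num = sum(min(a.get(t, 0.0), b.get(t, 0.0)) for t in taxa)
            den = (sum(a.values()) + sum(b.values())) / 2
            return num / den

        database = {
            "gut_A":  {"Bacteroides": 0.5, "Prevotella": 0.1, "Firmicutes": 0.4},
            "soil_B": {"Acidobacteria": 0.6, "Actinobacteria": 0.4},
            "gut_C":  {"Bacteroides": 0.4, "Firmicutes": 0.6},
        }
        query = {"Bacteroides": 0.45, "Firmicutes": 0.5, "Prevotella": 0.05}

        hits = sorted(database.items(),
                      key=lambda kv: bray_curtis_sim(query, kv[1]), reverse=True)
        for name, profile in hits:
            print(f"{name}: {bray_curtis_sim(query, profile):.3f}")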

  11. An integrative strategy for quantitative analysis of the N-glycoproteome in complex biological samples.

    PubMed

    Wang, Ji; Zhou, Chuang; Zhang, Wei; Yao, Jun; Lu, Haojie; Dong, Qiongzhu; Zhou, Haijun; Qin, Lunxiu

    2014-01-15

    The complexity of protein glycosylation makes it difficult to characterize glycosylation patterns on a proteomic scale. In this study, we developed an integrated strategy for comparatively analyzing N-glycosylation/glycoproteins quantitatively from complex biological samples in a high-throughput manner. This strategy entailed separating and enriching glycopeptides/glycoproteins using lectin affinity chromatography, and then tandem labeling them with 18O/16O to generate a mass shift of 6 Da between the paired glycopeptides, and finally analyzing them with liquid chromatography-mass spectrometry (LC-MS) and the automatic quantitative method we developed based on Mascot Distiller. The accuracy and repeatability of this strategy were first verified using standard glycoproteins; linearity was maintained within a range of 1:10-10:1. The peptide concentration ratios obtained by the self-build quantitative method were similar to both the manually calculated and theoretical values, with a standard deviation (SD) of 0.023-0.186 for glycopeptides. The feasibility of the strategy was further confirmed with serum from hepatocellular carcinoma (HCC) patients and healthy individuals; the expression of 44 glycopeptides and 30 glycoproteins were significantly different between HCC patient and control serum. This strategy is accurate, repeatable, and efficient, and may be a useful tool for identification of disease-related N-glycosylation/glycoprotein changes.

  12. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
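
    The design-based correction at the heart of the standard approach can be sketched with inverse-probability (Horvitz-Thompson) weighting; the venue-driven selection probabilities below are invented, and the maximum likelihood approach proposed in the article is not shown:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 1000
        # selection probability varies by venue, and the outcome is correlated with it
        p_select = rng.uniform(0.05, 0.5, n)
        y = rng.random(n) < (0.1 + 0.4 * p_select)

        sampled = rng.random(n) < p_select
        w = 1.0 / p_select[sampled]               # inverse-probability weights

        naive = y[sampled].mean()                 # ignores the design: biased
        weighted = np.sum(w * y[sampled]) / np.sum(w)   # Hajek-style weighted mean
        print(f"true={y.mean():.3f}  naive={naive:.3f}  weighted={weighted:.3f}")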

  13. Recruitment Techniques and Strategies in a Community-Based Colorectal Cancer Screening Study of Men and Women of African Ancestry.

    PubMed

    Davis, Stacy N; Govindaraju, Swapamthi; Jackson, Brittany; Williams, Kimberly R; Christy, Shannon M; Vadaparampil, Susan T; Quinn, Gwendolyn P; Shibata, David; Roetzheim, Richard; Meade, Cathy D; Gwede, Clement K

    Recruiting ethnically diverse Black participants to an innovative, community-based research study to reduce colorectal cancer screening disparities requires multipronged recruitment techniques. This article describes active, passive, and snowball recruitment techniques, and challenges and lessons learned in recruiting a diverse sample of Black participants. For each of the three recruitment techniques, data were collected on strategies, enrollment efficiency (participants enrolled/participants evaluated), and reasons for ineligibility. Five hundred sixty individuals were evaluated, and 330 individuals were enrolled. Active recruitment yielded the highest number of enrolled participants, followed by passive and snowball. Snowball recruitment was the most efficient technique, with enrollment efficiency of 72.4%, followed by passive (58.1%) and active (55.7%) techniques. There were significant differences in gender, education, country of origin, health insurance, and having a regular physician by recruitment technique (p < .05). Multipronged recruitment techniques should be employed to increase reach, diversity, and study participation rates among Blacks. Although each recruitment technique had a variable enrollment efficiency, the use of multipronged recruitment techniques can lead to successful enrollment of diverse Blacks into cancer prevention and control interventions.

  14. Combination of biochar amendment and mycoremediation for polycyclic aromatic hydrocarbons immobilization and biodegradation in creosote-contaminated soil.

    PubMed

    García-Delgado, Carlos; Alfaro-Barta, Irene; Eymar, Enrique

    2015-03-21

    Soils impregnated with creosote contain high concentrations of polycyclic aromatic hydrocarbons (PAH). To bioremediate these soils and avoid PAH spreading, different bioremediation strategies were tested, based on natural attenuation, biochar application, wheat straw biostimulation, Pleurotus ostreatus mycoremediation, and a novel sequential application of biochar for 21 days followed by P. ostreatus for a further 21 days. Soil was sampled 21 and 42 days after the remediation treatments were applied. The efficiency and effectiveness of each remediation treatment were assessed according to PAH degradation and immobilization, fungal and bacterial development, soil eco-toxicity and legal considerations. Natural attenuation and biochar treatments did not achieve adequate PAH removal and soil eco-toxicity reduction. Biostimulation showed the highest bacterial development but a low PAH degradation rate. Mycoremediation achieved the best PAH degradation rate and the lowest bioavailable fraction and soil eco-toxicity. This bioremediation strategy achieved PAH concentrations below the Spanish legislation for contaminated soils (RD 9/2005). Sequential application of biochar and P. ostreatus was the second most effective treatment for PAH biodegradation and immobilization. However, prior biochar application enhanced the activity of P. ostreatus and thereby increased PAH degradation efficiency. Therefore, the combined strategy has high potential to increase remediation efficiency. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. The development of strategy use in elementary school children: working memory and individual differences.

    PubMed

    Imbo, Ineke; Vandierendonck, André

    2007-04-01

    The current study tested the development of working memory involvement in children's arithmetic strategy selection and strategy efficiency. To this end, an experiment in which the dual-task method and the choice/no-choice method were combined was administered to 10- to 12-year-olds. Working memory was needed in retrieval, transformation, and counting strategies, but the ratio between available working memory resources and arithmetic task demands changed across development. More frequent retrieval use, more efficient memory retrieval, and more efficient counting processes reduced the working memory requirements. Strategy efficiency and strategy selection were also modified by individual differences such as processing speed, arithmetic skill, gender, and math anxiety. Short-term memory capacity, in contrast, was not related to children's strategy selection or strategy efficiency.

  16. Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling

    PubMed Central

    Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.

    2004-01-01

    Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m2) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m2 for wet sampling and 100.5 ± 10.2 CFU/m2 for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898

  17. Nontargeted Screening Method for Illegal Additives Based on Ultrahigh-Performance Liquid Chromatography-High-Resolution Mass Spectrometry.

    PubMed

    Fu, Yanqing; Zhou, Zhihui; Kong, Hongwei; Lu, Xin; Zhao, Xinjie; Chen, Yihui; Chen, Jia; Wu, Zeming; Xu, Zhiliang; Zhao, Chunxia; Xu, Guowang

    2016-09-06

    Identification of illegal additives in complex matrixes is important in the food safety field. In this study, a nontargeted screening strategy was developed to find illegal additives based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS). First, an analytical method for possible illegal additives in complex matrixes was established, including fast sample pretreatment, accurate UHPLC separation, and HRMS detection. Second, an efficient data processing and differential analysis workflow was suggested and applied to find potential risk compounds. Third, structure elucidation of risk compounds was performed by (1) searching online databases [Metlin and the Human Metabolome Database (HMDB)] and an in-house database, which was established under the above-defined UHPLC-HRMS conditions and contains information on retention times, mass spectra (MS), and tandem mass spectra (MS/MS) of 475 illegal additives, (2) analyzing fragment ions, and (3) referring to fragmentation rules. Fish was taken as an example to show the usefulness of the nontargeted screening strategy, and six additives were found in suspected fish samples. Quantitative analysis was further carried out to determine the contents of these compounds. The satisfactory application of this strategy in fish samples means that it can also be used in the screening of illegal additives in other kinds of food samples.
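
    The differential-analysis and database-lookup steps can be sketched as follows; every mass, intensity, tolerance and database entry here is hypothetical:

        # flag features far more intense in the suspect sample than in controls,
        # then look up candidates in an in-house accurate-mass table
        suspect = {256.1332: 9e5, 312.2302: 4e5, 198.0521: 2e6}   # m/z -> intensity
        control = {256.1332: 1e4, 198.0521: 1.9e6}
        inhouse_db = {256.1327: "hypothetical_additive_X"}

        TOL = 0.005     # Da, accurate-mass match window
        FOLD = 10.0     # minimum suspect/control intensity ratio

        risk = {mz: i for mz, i in suspect.items()
                if i / control.get(mz, 1.0) >= FOLD}

        for mz in risk:
            hits = [name for dbmz, name in inhouse_db.items()
                    if abs(dbmz - mz) <= TOL]
            print(f"m/z {mz}: {'candidate ' + hits[0] if hits else 'unknown'}")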

  18. Real-time label-free quantitative fluorescence microscopy-based detection of ATP using a tunable fluorescent nano-aptasensor platform

    NASA Astrophysics Data System (ADS)

    Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung

    2015-11-01

    Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.

  19. "Shoot and Sense" Janus Micromotors-Based Strategy for the Simultaneous Degradation and Detection of Persistent Organic Pollutants in Food and Biological Samples.

    PubMed

    Rojas, D; Jurado-Sánchez, B; Escarpa, A

    2016-04-05

    A novel Janus micromotor-based strategy for the direct determination of diphenyl phthalate (DPP) in food and biological samples is presented. Mg/Au Janus micromotors are employed as novel analytical platforms for the degradation of the non-electroactive DPP into phenol, which is directly measured by differential pulse voltammetry on disposable screen-printed electrodes. The self-movement of the micromotors through the samples results in the generation of hydrogen microbubbles and hydroxyl ions for DPP degradation. The increased fluid transport dramatically improves the analytical signal, increasing the sensitivity while lowering the detection potential. The method has been successfully applied to the direct analysis of DPP in selected food and biological samples, without any sample treatment and avoiding any potential contamination from laboratory equipment. The developed approach is fast (∼5 min) and accurate, with recoveries of ∼100%. In addition, efficient propulsion of multiple Mg/Au micromotors in complex samples has also been demonstrated. The advantages of the micromotor-assisted technology, i.e., disposability, portability, and the possibility to carry out multiple analyses simultaneously, hold considerable promise for applications in food and biological analysis.

  20. On the vertical resolution for near-nadir looking spaceborne rain radar

    NASA Astrophysics Data System (ADS)

    Kozu, Toshiaki

    A definition of radar resolution for an arbitrary direction is proposed and used to calculate the vertical resolution for a near-nadir looking spaceborne rain radar. Based on the calculation result, a scanning strategy is proposed which efficiently distributes the measurement time to each angle bin and thus increases the number of independent samples compared with simple linear scanning.

  1. Comparisons of Online Recruitment Strategies for Convenience Samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk

    ERIC Educational Resources Information Center

    Antoun, Christopher; Zhang, Chan; Conrad, Frederick G.; Schober, Michael F.

    2016-01-01

    The rise of social media websites (e.g., Facebook) and online services such as Google AdWords and Amazon Mechanical Turk (MTurk) offers new opportunities for researchers to recruit study participants. Although researchers have started to use these emerging methods, little is known about how they perform in terms of cost efficiency and, more…

  2. Simple, Efficient, and Rapid Methods to Determine the Potential for Vapor Intrusion into the Home: Temporal Trends, Vapor Intrusion Forecasting, Sampling Strategies, and Contaminant Migration Routes

    EPA Science Inventory

    Current practice for evaluating the vapor intrusion pathway involves a multiple line of evidence approach based on direct measurements of volatile organic compound (VOC) concentrations in groundwater, external soil gas, subslab soil gas, and/or indoor air. No single line of evide...

  3. Edge guided image reconstruction in linear scan CT by weighted alternating direction TV minimization.

    PubMed

    Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin

    2014-01-01

    Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency, but its data sets are under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge-guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge detection strategy proposed in this paper can recover the true edges while keeping errors within an acceptable range. Based on comparisons on both simulation studies and a real CT data set, EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. With weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data sets.
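
    The edge-weighting idea can be sketched in NumPy. Note this is plain gradient descent on a smoothed, edge-weighted TV denoising objective with a heuristic weight, not the authors' ADMM reconstruction from projection data:

        import numpy as np

        def grad(u):                                # forward differences (periodic)
            return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

        def div(px, py):                            # negative adjoint of grad
            return px - np.roll(px, 1, 0) + py - np.roll(py, 1, 1)

        def edge_weighted_tv_denoise(y, lam=0.2, n_iter=200, eps=1e-3, step=0.2):
            x = y.copy()
            for _ in range(n_iter):
                gx, gy = grad(x)
                mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
                w = np.exp(-mag / mag.mean())       # small weight across strong edges
                x -= step * ((x - y) - lam * div(w * gx / mag, w * gy / mag))
            return x

        rng = np.random.default_rng(9)
        img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # piecewise-constant phantom
        noisy = img + 0.2 * rng.standard_normal(img.shape)
        den = edge_weighted_tv_denoise(noisy)
        print(f"RMS error before={np.sqrt(((noisy - img) ** 2).mean()):.3f}, "
              f"after={np.sqrt(((den - img) ** 2).mean()):.3f}")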

  4. A Highly Efficient Design Strategy for Regression with Outcome Pooling

    PubMed Central

    Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.

    2014-01-01

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822

  5. A highly efficient design strategy for regression with outcome pooling.

    PubMed

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Perkins, Neil J; Schisterman, Enrique F

    2014-12-10

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Heat Management Strategies for Solid-state NMR of Functional Proteins

    PubMed Central

    Fowler, Daniel J.; Harris, Michael J.; Thompson, Lynmarie K.

    2012-01-01

    Modern solid-state NMR methods can acquire high-resolution protein spectra for structure determination. However, these methods use rapid sample spinning and intense decoupling fields that can heat and denature the protein being studied. Here we present a strategy to avoid destroying valuable samples. We advocate first creating a sacrificial sample, which contains unlabeled protein (or no protein) in buffer conditions similar to the intended sample. This sample is then doped with the chemical shift thermometer Sm2Sn2O7. We introduce a pulse scheme called TCUP (for Temperature Calibration Under Pulseload) that can characterize the heating of this sacrificial sample rapidly, under a variety of experimental conditions, and with high temporal resolution. Sample heating is discussed with respect to different instrumental variables such as spinning speed, decoupling strength and duration, and cooling gas flow rate. The effects of different sample preparation variables are also discussed, including ionic strength, the inclusion of cryoprotectants, and the physical state of the sample (i.e. liquid, solid, or slurry). Lastly, we discuss probe detuning as a measure of sample thawing that does not require retuning the probe or using chemical shift thermometer compounds. Use of detuning tests and chemical shift thermometers with representative sample conditions makes it possible to maximize the efficiency of the NMR experiment while retaining a functional sample. PMID:22868258

  7. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes comparable in accuracy to WHAM can be generated at a small fraction of the cost.
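
    To make the regression idea concrete, here is a rough one-dimensional sketch under simplifying assumptions: writing F(x_b) - f_i = -kT ln p_i(x_b) - w_i(x_b) for every populated histogram bin b of every harmonic umbrella window i, then solving the resulting overdetermined linear system for the bin free energies F and window offsets f_i by least squares. This captures the spirit of the approach, not the authors' exact formulation; the function name, kT value, and gauge choice f_0 = 0 are assumptions.

    ```python
    import numpy as np

    kT = 0.6  # assumed thermal energy, in the units of the bias (e.g. kcal/mol)

    def regress_fes(samples, centers, kspring, bins):
        """Combine umbrella windows by least squares instead of iterating WHAM:
        for every populated bin b of window i impose
            F(x_b) - f_i = -kT*ln p_i(x_b) - 0.5*kspring*(x_b - x0_i)^2,
        then solve jointly for bin free energies F and window offsets f_i."""
        nwin, nbin = len(samples), len(bins) - 1
        mid = 0.5 * (bins[:-1] + bins[1:])
        rows, rhs = [], []
        for i, (xs, x0) in enumerate(zip(samples, centers)):
            p, _ = np.histogram(xs, bins=bins, density=True)
            for b in np.flatnonzero(p > 0):
                row = np.zeros(nbin + nwin)
                row[b] = 1.0              # coefficient of F(x_b)
                row[nbin + i] = -1.0      # coefficient of f_i
                rows.append(row)
                rhs.append(-kT * np.log(p[b]) - 0.5 * kspring * (mid[b] - x0) ** 2)
        A = np.delete(np.array(rows), nbin, axis=1)   # gauge fixing: f_0 = 0
        sol, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
        F = sol[:nbin]      # bins never visited stay at the minimum-norm value 0
        return mid, F - F.min()

    # usage sketch: mid, F = regress_fes(window_samples, window_centers,
    #                                    kspring=50.0, bins=np.linspace(-2, 2, 41))
    ```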

  8. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  9. An efficient and cost-effective method for preparing transmission electron microscopy samples from powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, Haiming; Lin, Yaojun; Seidman, David N.

    The preparation of transmission electron microscopy (TEM) samples from powders with particle sizes larger than ~100 nm poses a challenge. The existing methods are complicated and expensive, or have a low probability of success. Herein, we report a modified methodology for preparation of TEM samples from powders, which is efficient, cost-effective, and easy to perform. This method involves mixing powders with an epoxy on a piece of weighing paper, curing the powder–epoxy mixture to form a bulk material, grinding the bulk to obtain a thin foil, punching TEM discs from the foil, dimpling the discs, and ion milling the dimpled discs to electron transparency. Compared with the well-established and robust grinding–dimpling–ion-milling method for TEM sample preparation for bulk materials, our modified approach for preparing TEM samples from powders only requires two additional simple steps. In this article, step-by-step procedures for our methodology are described in detail, and important strategies to ensure success are elucidated. Furthermore, our methodology has been applied successfully for preparing TEM samples with large thin areas and high quality for many different mechanically milled metallic powders.

  10. An efficient and cost-effective method for preparing transmission electron microscopy samples from powders

    DOE PAGES

    Wen, Haiming; Lin, Yaojun; Seidman, David N.; ...

    2015-09-09

    The preparation of transmission electron microscopy (TEM) samples from powders with particle sizes larger than ~100 nm poses a challenge. The existing methods are complicated and expensive, or have a low probability of success. Herein, we report a modified methodology for preparation of TEM samples from powders, which is efficient, cost-effective, and easy to perform. This method involves mixing powders with an epoxy on a piece of weighing paper, curing the powder–epoxy mixture to form a bulk material, grinding the bulk to obtain a thin foil, punching TEM discs from the foil, dimpling the discs, and ion milling the dimpled discs to electron transparency. Compared with the well-established and robust grinding–dimpling–ion-milling method for TEM sample preparation for bulk materials, our modified approach for preparing TEM samples from powders only requires two additional simple steps. In this article, step-by-step procedures for our methodology are described in detail, and important strategies to ensure success are elucidated. Furthermore, our methodology has been applied successfully for preparing TEM samples with large thin areas and high quality for many different mechanically milled metallic powders.

  11. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolution strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which in combination have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolution strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolution strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102, Springer. Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948, Berlin: Springer. Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.
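
    The surrogate pre-screening idea, spending expensive model runs only on the most promising fraction of each generation, can be sketched as follows. This is a deliberately simplified (mu, lambda)-ES with a nearest-neighbor surrogate, not CMA-ES with local meta-models as in Kern et al. (2006); the function name and all parameter defaults are illustrative assumptions.

    ```python
    import numpy as np

    def surrogate_es(f, x0, sigma=0.5, lam=20, mu=5, n_true=8, iters=50, seed=1):
        """(mu, lam)-ES that ranks offspring with a 1-NN surrogate built from an
        archive of past true evaluations, then spends true evaluations only on
        the n_true most promising offspring of each generation."""
        rng = np.random.default_rng(seed)
        mean = np.asarray(x0, float)
        ax, ay = [mean.copy()], [f(mean)]           # archive of true evaluations
        for _ in range(iters):
            pop = mean + sigma * rng.normal(size=(lam, mean.size))
            A = np.array(ax)
            # surrogate value = fitness of the nearest archived point
            d = np.linalg.norm(pop[:, None, :] - A[None, :, :], axis=2)
            guess = np.array(ay)[np.argmin(d, axis=1)]
            cand = pop[np.argsort(guess)[:n_true]]  # pre-screen with surrogate
            vals = np.array([f(x) for x in cand])   # expensive true model runs
            ax.extend(cand); ay.extend(vals)
            elite = cand[np.argsort(vals)[:mu]]     # recombine the best
            mean = elite.mean(axis=0)
            sigma *= 0.95                           # naive step-size decay
        return mean, min(ay)

    # e.g. surrogate_es(lambda x: ((x - 1.0) ** 2).sum(), np.zeros(4))
    ```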

  12. Computational Aerothermodynamics in Aeroassist Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.

  13. Development of liquid chromatography high resolution mass spectrometry strategies for the screening of complex organic matter: Application to astrophysical simulated materials.

    PubMed

    Eddhif, Balkis; Allavena, Audrey; Liu, Sylvie; Ribette, Thomas; Abou Mrad, Ninette; Chiavassa, Thierry; d'Hendecourt, Louis Le Sergeant; Sternberg, Robert; Danger, Gregoire; Geffroy-Rodier, Claude; Poinot, Pauline

    2018-03-01

    The present work aims at developing two LC-HRMS setups for the screening of organic matter in astrophysical samples. Their analytical development was demonstrated on a 100-µg residue derived from the photo-thermo-chemical processing of a cometary ice analog produced in the laboratory. The first 1D-LC-HRMS setup combines a serially coupled column configuration with HRMS detection. It made it possible to discriminate among different chemical families (amino acids, sugars, nucleobases and oligopeptides) in a single chromatographic run, without a priori acid hydrolysis or chemical derivatisation. The second setup is a dual-LC configuration which connects a series of trapping columns with analytical reverse-phase columns. By coupling these two distinct LC units on-line with HRMS detection, high mass compounds (350

  14. Efficient identification of context dependent subgroups of risk from genome wide association studies

    PubMed Central

    Dyson, Greg; Sing, Charles F.

    2014-01-01

    We have developed a modified Patient Rule-Induction Method (PRIM) as an alternative strategy for analyzing representative samples of non-experimental human data to estimate and test the role of genomic variations as predictors of disease risk in etiologically heterogeneous sub-samples. A computational limit of the proposed strategy is encountered when the number of genomic variations (predictor variables) under study is large (>500), because permutations are used to generate a null distribution to test the significance of a term (defined by values of particular variables) that characterizes a sub-sample of individuals through the peeling and pasting processes. As an alternative, in this paper we introduce a theoretical strategy that facilitates the quick calculation of Type I and Type II errors (which are, respectively, underestimated and unavailable when a permutation-based hypothesis test is employed) in the evaluation of terms in the peeling and pasting processes carried out in the execution of a PRIM analysis. The resultant savings in computational time makes it possible to consider larger numbers of genomic variations (an example genome-wide association study is given) in the selection of statistically significant terms in the formulation of PRIM prediction models. PMID:24570412

  15. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which, however, comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is part of the inter-picture prediction process, typically consumes a large share of computational resources while significantly increasing coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information at a fractional sample level, motion search algorithms operating at the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding-efficiency penalty in terms of bit-rate overhead. Through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early-termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high-resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. A similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content, while trading off a bit-rate overhead of up to 5.2%.
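
    For reference, pattern-based integer search with early termination, the two ingredient families named above, can be sketched as a classic diamond search. This is a generic textbook algorithm, not the framework proposed in the paper; the block size and SAD stop threshold are illustrative assumptions.

    ```python
    import numpy as np

    def sad(cur, ref, bx, by, dx, dy, B=16):
        """Sum of absolute differences between the current block and a
        displaced reference block; infinite cost outside the frame."""
        h, w = ref.shape
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + B > w or y + B > h:
            return float("inf")
        return float(np.abs(cur[by:by+B, bx:bx+B].astype(int)
                            - ref[y:y+B, x:x+B].astype(int)).sum())

    def diamond_search(cur, ref, bx, by, B=16, stop_sad=2 * 16 * 16):
        """Large-diamond pattern search with small-diamond refinement and a
        simple early-termination threshold on the SAD cost."""
        LDP = [(2, 0), (-2, 0), (0, 2), (0, -2), (1, 1), (1, -1), (-1, 1), (-1, -1)]
        SDP = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        mv, best = (0, 0), sad(cur, ref, bx, by, 0, 0, B)
        while best > stop_sad:                    # early termination check
            cost, cand = min((sad(cur, ref, bx, by, mv[0]+dx, mv[1]+dy, B),
                              (mv[0]+dx, mv[1]+dy)) for dx, dy in LDP)
            if cost >= best:                      # centre is best: refine, stop
                cost, cand = min((sad(cur, ref, bx, by, mv[0]+dx, mv[1]+dy, B),
                                  (mv[0]+dx, mv[1]+dy)) for dx, dy in SDP)
                return (cand, cost) if cost < best else (mv, best)
            mv, best = cand, cost
        return mv, best
    ```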

  16. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    NASA Astrophysics Data System (ADS)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
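
    A toy version of the idea, letting an evolutionary search tune the neighborhood parameters against a data-driven criterion, might look like the following. To stay short it optimizes only a circular search radius and scores candidates by leave-one-out inverse-distance error, a simplified stand-in for the kriging estimator and the anisotropic ellipse parameters used in the study; all names and constants are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def loo_error(radius, pts, vals, power=2.0):
        """Leave-one-out RMSE of inverse-distance estimation restricted to a
        circular search neighborhood of the given radius."""
        err = []
        for i in range(len(pts)):
            d = np.linalg.norm(pts - pts[i], axis=1)
            mask = (d > 0) & (d <= radius)
            if not mask.any():
                return np.inf                       # neighborhood too small
            w = 1.0 / d[mask] ** power
            err.append(vals[i] - np.sum(w * vals[mask]) / w.sum())
        return float(np.sqrt(np.mean(np.square(err))))

    def ga_radius(pts, vals, pop=20, gens=30, lo=0.05, hi=1.0):
        """Tiny real-coded GA: tournament selection, blend crossover, mutation."""
        P = rng.uniform(lo, hi, pop)
        for _ in range(gens):
            fit = np.array([loo_error(r, pts, vals) for r in P])
            children = []
            for _ in range(pop):
                i, j = rng.integers(pop, size=2)
                a = P[i] if fit[i] < fit[j] else P[j]        # tournament pick 1
                k, l = rng.integers(pop, size=2)
                b = P[k] if fit[k] < fit[l] else P[l]        # tournament pick 2
                c = 0.5 * (a + b) + rng.normal(scale=0.02)   # blend + mutate
                children.append(np.clip(c, lo, hi))
            P = np.array(children)
        fit = np.array([loo_error(r, pts, vals) for r in P])
        return P[np.argmin(fit)]

    # e.g. pts = rng.uniform(size=(150, 2)); vals = np.sin(3*pts[:, 0]) + pts[:, 1]
    #      best_radius = ga_radius(pts, vals)
    ```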

  17. State-of-the-art molecular approaches to elucidate the genetic inventory of spacecraft surfaces and associated environments

    NASA Astrophysics Data System (ADS)

    Venkateswaran, Kasthuri; La Duc, Myron; James; Osman, Shariff; Andersen, Gary; Huber, Julie; Sogin, Mitchell

    The scientific literature teems with reports of microbial diversity from seemingly every niche imaginable, from deep within Antarctic ice to ocean-floor hydrothermal systems. The fields of applied microbiology and molecular biology have made enormous technological advancements over the past two decades, from the development of PCR-amplification of DNA to the forensic detection of what many consider to be "miniscule" amounts of blood and other such biomatter. Despite advances in the specificity and sensitivity of molecular biological technologies, the abilities to efficiently sample and extract nucleic acids from low-biomass matrices, and to accurately describe the true microbial diversity housed in such samples, remain significant challenges. To minimize the likelihood of forward contamination of Mars, Europa, or any other extraterrestrial environment, significant effort is invested to ensure that environments in which spacecraft are assembled are maintained appropriately and kept as free of microbial contamination as possible. To this end, routine analyses, largely based on spore counts and cultivation-based approaches, are carried out to validate the cleanliness of such surfaces. However, only by applying the most efficient and accurate molecular means of analysis can conclusions be drawn on the actual bioburden and microbial diversity associated with these environments. For any measure of sample-derived bioburden, a large portion is inevitably lost in sampling. Since the surface area of a spacecraft is fixed, it is not possible to simply increase sample size to improve yield. It is therefore critical to assure that current methods of purification of biomolecules sampled from this limited resource are 1) optimal for achieving total yield of biota present and 2) conserving of the true microbial diversity of the sampled environment. This project focuses on the development of capabilities to effectively and efficiently generate a genetic inventory of microbes present on the surfaces of spacecraft and associated clean-room facilities. This entails the evaluation and optimization of molecular-based strategies designed to assess microbial burden and diversity arising from samples of low biomass. Such strategies include conventional clone library analysis, DNA microarray screening, and V6-Tag Sequencing. The capabilities resulting from this work will enable NASA to establish genetic inventories of spacecraft, as recommended by the National Research Council, to better understand the risk of forward contamination.

  18. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementations, and a lack of reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
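
    The "balanced regional parallelization" notion, splitting the genome into near-equal spans of sequence rather than whole chromosomes so that workers finish at roughly the same time, can be sketched as below. The contig table, worker count, and per-region placeholder are hypothetical, and the deterministic handling of variants at region boundaries that Churchill performs is omitted here.

    ```python
    from multiprocessing import Pool

    # Hypothetical contig lengths; a real pipeline would read these from a .fai index
    CONTIGS = {"chr1": 248_956_422, "chr2": 242_193_529, "chr3": 198_295_559}

    def make_regions(contigs, n_workers):
        """Cut the genome into near-equal spans of sequence, ignoring chromosome
        boundaries, so each worker receives a similar amount of work."""
        chunk = sum(contigs.values()) // n_workers + 1
        regions = []
        for name, length in contigs.items():
            pos = 0
            while pos < length:
                regions.append((name, pos, min(pos + chunk, length)))
                pos += chunk
        return regions

    def call_variants(region):
        name, start, end = region
        # placeholder for the per-region work (local realignment, variant calling)
        return f"{name}:{start}-{end} done"

    if __name__ == "__main__":
        with Pool(processes=8) as pool:
            for msg in pool.map(call_variants, make_regions(CONTIGS, n_workers=8)):
                print(msg)
    ```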

  19. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.
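
    A small simulation in the spirit of the sub-sampling comparison above: hypothetical nightly capture records with hour-dependent species activity, scored by the species richness retained under a first-six-hours scheme versus a split early/late six-hour scheme. The activity model and all numbers are invented for illustration, not the Amazonia/Atlantic Forest/Cerrado datasets.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def richness(records, hours_kept):
        """Species richness when only captures from the given hours are kept.
        `records` is a list of (hour, species) capture events for one night."""
        return len({sp for h, sp in records if h in hours_kept})

    # Hypothetical survey: 40 nights, 12 night hours, 30 species with
    # hour-dependent activity (some species are mostly late-night).
    n_nights, n_sp = 40, 30
    peak = rng.uniform(0, 12, n_sp)
    nights = []
    for _ in range(n_nights):
        recs = []
        for sp in range(n_sp):
            for h in range(12):
                rate = 0.5 * np.exp(-0.5 * ((h - peak[sp]) / 2.0) ** 2)
                if rng.random() < rate:
                    recs.append((h, sp))
        nights.append(recs)

    first6 = set(range(6))                    # conventional scheme
    split6 = {0, 1, 2, 9, 10, 11}             # alternative: 3 h early + 3 h late

    for name, keep in [("first 6 h", first6), ("split 6 h", split6),
                       ("all 12 h", set(range(12)))]:
        mean_rich = np.mean([richness(night, keep) for night in nights])
        print(f"{name}: mean nightly richness = {mean_rich:.1f}")
    ```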

  20. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated. PMID:28334046

  1. The integrated quality assessment of Chinese commercial dry red wine based on a method of online HPLC-DAD-CL combined with HPLC-ESI-MS.

    PubMed

    Yu, Hai-Xiang; Sun, Li-Qiong; Qi, Jin

    2014-07-01

    The aim was to apply an integrated quality assessment strategy to investigate the quality of multiple Chinese commercial dry red wine samples. A comprehensive method was developed by combining a high performance liquid chromatography-diode array detector-chemiluminescence (HPLC-DAD-CL) online hyphenated system with an HPLC-ESI-MS technique. Chromatographic and H2O2-scavenging active fingerprints of thirteen batches of different, commercially available Chinese dry red wine samples were obtained and analyzed. Twenty-five compounds, including eighteen antioxidants, were identified and evaluated. The dominant and characteristic antioxidants in the samples were identified. The relationships between antioxidant potency and the cultivated variety of grape, producing area, cellaring period, and trade mark are also discussed. The results demonstrate that an integrated quality assessment strategy can be used efficiently and objectively in the quality assessment (especially of antioxidant activity) and identification of dry red wine. Copyright © 2014 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.

  2. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
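
    A toy rendering of the two NSM modes described above, assuming a networkx graph: a List mode drawing uniformly from revealed-but-unsampled nodes, and a Search mode preferring nodes that bridge into unexplored territory. This illustrates the mechanism only; the real NSM sampling probabilities and estimators are more carefully constructed than this sketch.

    ```python
    import random
    import networkx as nx

    def nsm_sample(G, seed, n, p_search=0.5):
        """Toy traversal in the spirit of NSM: sample n nodes starting from a
        seed, alternating between a uniform 'List' draw over revealed nodes
        and a 'Search' step that favors bridges to unexplored regions."""
        sampled, revealed = {seed}, set(G[seed])
        while len(sampled) < n and revealed:
            frontier = list(revealed - sampled)
            if not frontier:
                break
            if random.random() < p_search:
                # Search mode: favor bridge-like nodes (many unrevealed neighbors)
                scores = [len(set(G[v]) - revealed - sampled) for v in frontier]
                nxt = frontier[max(range(len(frontier)), key=scores.__getitem__)]
            else:
                nxt = random.choice(frontier)     # List mode: uniform draw
            sampled.add(nxt)
            revealed |= set(G[nxt])               # memory: grow the revealed list
        return sampled

    if __name__ == "__main__":
        G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=42)
        print(len(nsm_sample(G, seed=0, n=50)))
    ```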

  3. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  4. Field spectroscopy sampling strategies for improved measurement of Earth surface reflectance

    NASA Astrophysics Data System (ADS)

    Mac Arthur, A.; Alonso, L.; Malthus, T. J.; Moreno, J. F.

    2013-12-01

    Over the last two decades extensive networks of research sites have been established to measure the flux of carbon compounds and water vapour between the Earth's surface and the atmosphere using eddy covariance (EC) techniques. However, contributing Earth surface components cannot be determined and (as the 'footprints' are spatially constrained) these measurements cannot be extrapolated to regional cover using this technique. At many of these EC sites researchers have been integrating spectral measurements with EC and ancillary data to better understand light use efficiency and carbon dioxide flux. These spectroscopic measurements could also be used to assess contributing components and provide support for imaging spectroscopy, from airborne or satellite platforms, which can provide unconstrained spatial cover. Furthermore, there is increasing interest in 'smart' database and information retrieval systems such as those proposed by EcoSIS and OPTIMISE to store, analyse, QA and merge spectral and biophysical measurements and provide information to end users. However, as Earth surfaces are spectrally heterogeneous and imaging and field spectrometers sample different spatial extents, appropriate field sampling strategies need to be adopted. To sample Earth surfaces, spectroscopists adopt either single; random; regular grid; transect; or 'swiping' point sampling strategies, although little comparative work has been carried out to determine the most appropriate approach; the work by Goetz (2012) is a limited exception. Mac Arthur et al (2012) demonstrated that, for two full wavelength (400 nm to 2,500 nm) field spectroradiometers, the measurement area sampled is defined by each spectroradiometer/fore optic system's directional response function (DRF) rather than the field-of-view (FOV) specified by instrument manufacturers. Mac Arthur et al (2012) also demonstrated that each reflecting element within the sampled area is not weighted equally in the integrated measurement recorded: there were non-uniformities of spectral response, with the spectral 'weighting' per wavelength interval being positionally dependent and unique to each spectroradiometer/fore optic system investigated. However, Mac Arthur et al (2012) did not provide any advice on how to compensate for these systematic errors or on appropriate sampling strategies. The work reported here provides the first systematic study of the effect of field spectroscopy sampling strategies for a range of different Earth surface types. Synthetic Earth surface hyperspectral data cubes for each surface type were generated and convolved with a range of the spectrometer/fore optic system directional response functions generated by Mac Arthur et al (2013) to simulate spectroscopic measurements of Earth surfaces. This has enabled different field sampling strategies to be directly compared, their suitability for each measurement purpose and surface type to be assessed, and robust field spectroscopy sampling strategy recommendations to be made. This will be of particular interest to the carbon and water vapour flux communities and will assist the development of sampling strategies for field spectroscopy from rotary-wing Unmanned Aerial Vehicles, which will aid the acquisition of measurements in the spatial domain, and generally further the use of field spectroscopy for quantitative Earth observation.
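
    The central manipulation, weighting a synthetic surface by a directional response function instead of assuming a uniform field of view, can be sketched as follows. The Gaussian DRF, the two-cover synthetic cube, and the regular-grid strategy are illustrative assumptions standing in for the measured DRFs and surface types of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical synthetic "surface": a 100x100 grid of two cover types with
    # distinct 200-band spectra, plus noise.
    bands, N = 200, 100
    spec_a = np.linspace(0.1, 0.5, bands)
    spec_b = np.linspace(0.4, 0.2, bands)
    cover = rng.random((N, N)) < 0.3                   # ~30% type-b patches
    cube = np.where(cover[..., None], spec_b, spec_a)
    cube += rng.normal(scale=0.01, size=cube.shape)

    def drf_measurement(cube, cx, cy, fwhm=20.0):
        """Spectrometer reading as a DRF-weighted average: a Gaussian response
        centred on the optical axis instead of a uniform top-hat FOV."""
        yy, xx = np.mgrid[0:cube.shape[0], 0:cube.shape[1]]
        sig = fwhm / 2.355
        w = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sig ** 2))
        return (cube * w[..., None]).sum((0, 1)) / w.sum()

    uniform_mean = cube.mean((0, 1))                   # idealized top-hat estimate
    one_shot = drf_measurement(cube, 50, 50)           # single-point strategy
    grid = np.mean([drf_measurement(cube, x, y)        # regular-grid strategy
                    for x in range(10, 100, 20) for y in range(10, 100, 20)], axis=0)
    print(np.abs(one_shot - uniform_mean).mean(), np.abs(grid - uniform_mean).mean())
    ```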

  5. Solid-phase reductive amination for glycomic analysis.

    PubMed

    Jiang, Kuan; Zhu, He; Xiao, Cong; Liu, Ding; Edmunds, Garrett; Wen, Liuqing; Ma, Cheng; Li, Jing; Wang, Peng George

    2017-04-15

    Reductive amination is an indispensable method for glycomic analysis, as it tremendously facilitates glycan characterization and quantification by coupling functional tags at the reducing ends of glycans. However, the traditional in-solution derivatization-based approach for the preparation of reductively aminated glycans is quite tedious and time-consuming. Here, a simpler and more efficient strategy termed solid-phase reductive amination was investigated. The general concept underlying this new approach is to streamline glycan extraction, derivatization, and purification on non-porous graphitized carbon sorbents. Neutral and sialylated standard glycans were utilized to test the feasibility of the solid-phase method. As a result, almost complete labeling of those glycans with four common labels, aniline, 2-aminobenzamide (2-AB), 2-aminobenzoic acid (2-AA) and 2-amino-N-(2-aminoethyl)-benzamide (AEAB), was obtained, and negligible desialylation occurred during sample preparation. The labeled glycans derived from glycoproteins showed excellent reproducibility in high performance liquid chromatography (HPLC) and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) analysis. Direct comparisons based on fluorescent absorbance and relative quantification using isotopic labeling demonstrated that the solid-phase strategy enabled a 20-30% increase in sample recovery. In short, the solid-phase strategy is simple, reproducible, efficient, and sensitive for glycan analysis. This method was also successfully applied for N-glycan profiling of HEK 293 cells with MALDI-TOF MS, showing its attractive application in the high-throughput analysis of the mammalian glycome. Published by Elsevier B.V.

  6. Subtype-independent near full-length HIV-1 genome sequencing and assembly to be used in large molecular epidemiological studies and clinical management.

    PubMed

    Grossmann, Sebastian; Nowak, Piotr; Neogi, Ujjwal

    2015-01-01

    HIV-1 near full-length genome (HIV-NFLG) sequencing from plasma is an attractive multidimensional tool for large-scale population-based molecular epidemiological studies. It also enables genotypic resistance testing (GRT) for all drug target sites, allowing effective intervention strategies for control and prevention in high-risk population groups. Thus, the main objective of this study was to develop a simplified, subtype-independent, cost- and labour-efficient HIV-NFLG protocol that can be used in clinical management as well as in molecular epidemiological studies. Plasma samples (n=30) were obtained from HIV-1B (n=10), HIV-1C (n=10), CRF01_AE (n=5) and CRF01_AG (n=5) infected individuals with a minimum viral load >1120 copies/ml. The amplification was performed with two large amplicons of 5.5 kb and 3.7 kb, sequenced with 17 primers to obtain the HIV-NFLG. GRT was validated against the ViroSeq™ HIV-1 Genotyping System. After excluding four plasma samples with low-quality RNA, a total of 26 samples were attempted. Among them, NFLG was obtained from 24 (92%) samples, with the lowest viral load being 3000 copies/ml. High (>99%) concordance was observed between HIV-NFLG and ViroSeq™ in determining drug resistance mutations (DRMs). The N384I connection mutation was additionally detected by NFLG in two samples. Our high-efficiency subtype-independent HIV-NFLG is a simple and promising approach for use in large-scale molecular epidemiological studies. It will facilitate the understanding of HIV-1 pandemic population dynamics and help outline effective intervention strategies. Furthermore, it can potentially be applied in the clinical management of drug resistance by evaluating DRMs against all available antiretrovirals in a single assay.

  7. DNA pooling: a comprehensive, multi-stage association analysis of ACSL6 and SIRT5 polymorphisms in schizophrenia.

    PubMed

    Chowdari, K V; Northup, A; Pless, L; Wood, J; Joo, Y H; Mirnics, K; Lewis, D A; Levitt, P R; Bacanu, S-A; Nimgaonkar, V L

    2007-04-01

    Many candidate gene association studies have evaluated incomplete, unrepresentative sets of single nucleotide polymorphisms (SNPs), producing non-significant results that are difficult to interpret. Using a rapid, efficient strategy designed to investigate all common SNPs, we tested associations between schizophrenia and two positional candidate genes: ACSL6 (Acyl-Coenzyme A synthetase long-chain family member 6) and SIRT5 (silent mating type information regulation 2 homologue 5). We initially evaluated the utility of DNA sequencing traces to estimate SNP allele frequencies in pooled DNA samples. The mean variances for the DNA sequencing estimates were acceptable and were comparable to other published methods (mean variance: 0.0008, range 0-0.0119). Using pooled DNA samples from cases with schizophrenia/schizoaffective disorder (Diagnostic and Statistical Manual of Mental Disorders edition IV criteria) and controls (n=200, each group), we next sequenced all exons, introns and flanking upstream/downstream sequences for ACSL6 and SIRT5. Among 69 identified SNPs, case-control allele frequency comparisons revealed nine suggestive associations (P<0.2). Each of these SNPs was next genotyped in the individual samples composing the pools. A suggestive association with rs11743803 at ACSL6 remained (allele-wise P=0.02), with diminished evidence in an extended sample (448 cases, 554 controls, P=0.062). In conclusion, we propose a multi-stage method for comprehensive, rapid, efficient and economical genetic association analysis that enables simultaneous SNP detection and allele frequency estimation in large samples. This strategy may be particularly useful for research groups lacking access to high throughput genotyping facilities. Our analyses did not yield convincing evidence for associations of schizophrenia with ACSL6 or SIRT5.
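
    The pooled allele-frequency estimation step can be illustrated with a small simulation: each pool mixes a fixed number of individuals' chromosomes, and the assay reads the pool's allele fraction with additive measurement noise of roughly the magnitude of the variances reported above. The noise model and all parameters are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def pooled_freq_estimates(p_true, n_pools, pool_size, read_noise_sd=0.02):
        """Simulate estimating a SNP allele frequency from pooled DNA: each pool
        mixes 2*pool_size chromosomes, and the assay reads the pool's allele
        fraction with additive measurement noise (e.g. from sequencing traces)."""
        counts = rng.binomial(2 * pool_size, p_true, size=n_pools)
        return counts / (2 * pool_size) + rng.normal(0, read_noise_sd, n_pools)

    est = pooled_freq_estimates(p_true=0.15, n_pools=20, pool_size=10)
    print(est.mean(), est.var())  # mean ~0.15; variance reflects both the sampling
                                  # of chromosomes into pools and the assay noise
    ```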

  8. Effects of 16S rDNA sampling on estimates of the number of endosymbiont lineages in sucking lice

    PubMed Central

    Burleigh, J. Gordon; Light, Jessica E.; Reed, David L.

    2016-01-01

    Phylogenetic trees can reveal the origins of endosymbiotic lineages of bacteria and detect patterns of co-evolution with their hosts. Although taxon sampling can greatly affect phylogenetic and co-evolutionary inference, most hypotheses of endosymbiont relationships are based on few available bacterial sequences. Here we examined how different sampling strategies of Gammaproteobacteria sequences affect estimates of the number of endosymbiont lineages in parasitic sucking lice (Insecta: Phthiraptera: Anoplura). We estimated the number of louse endosymbiont lineages using both newly obtained and previously sequenced 16S rDNA bacterial sequences and more than 42,000 16S rDNA sequences from other Gammaproteobacteria. We also performed parametric and nonparametric bootstrapping experiments to examine the effects of phylogenetic error and uncertainty on these estimates. Sampling of 16S rDNA sequences affects the estimates of endosymbiont diversity in sucking lice until we reach a threshold of genetic diversity, the size of which depends on the sampling strategy. Sampling by maximizing the diversity of 16S rDNA sequences is more efficient than randomly sampling available 16S rDNA sequences. Although simulation results validate estimates of multiple endosymbiont lineages in sucking lice, the bootstrap results suggest that the precise number of endosymbiont origins is still uncertain. PMID:27547523
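
    The "maximize diversity" sampling strategy found to be more efficient above is essentially farthest-point selection on a sequence-distance matrix, sketched here on random points standing in for 16S rDNA distances; the distance construction and sizes are illustrative.

    ```python
    import numpy as np

    def maxmin_sample(D, k, start=0):
        """Farthest-point ('maximize diversity') selection: repeatedly add the
        item whose minimum distance to the current selection is largest.
        D is a precomputed pairwise-distance matrix between sequences."""
        chosen = [start]
        mind = D[start].copy()
        for _ in range(k - 1):
            nxt = int(np.argmax(mind))
            chosen.append(nxt)
            mind = np.minimum(mind, D[nxt])   # update distance-to-selection
        return chosen

    # Toy demonstration on random points standing in for sequence distances
    rng = np.random.default_rng(2)
    pts = rng.normal(size=(1000, 8))
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

    diverse = maxmin_sample(D, k=50)
    random_pick = rng.choice(1000, size=50, replace=False)
    # Coverage achieved by each scheme: mean distance to the nearest selected item
    print(D[:, diverse].min(1).mean(), D[:, random_pick].min(1).mean())
    ```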

  9. Selective isolation of gonyautoxins 1,4 from the dinoflagellate Alexandrium minutum based on molecularly imprinted solid-phase extraction.

    PubMed

    Lian, Ziru; Wang, Jiangtao

    2017-09-15

    Gonyautoxins 1,4 (GTX1,4) from Alexandrium minutum samples were selectively isolated and specifically recognized by an innovative and effective extraction procedure based on molecular imprinting technology. Novel molecularly imprinted polymer microspheres (MIPMs) were prepared by a double-templated imprinting strategy using caffeine and pentoxifylline as dummy templates. The synthesized polymers displayed good affinity to GTX1,4 and were applied as sorbents. Further, an off-line molecularly imprinted solid-phase extraction (MISPE) protocol was optimized, and an effective approach based on MISPE coupled with HPLC-FLD was developed for the selective isolation of GTX1,4 from cultured A. minutum samples. The separation method showed good extraction efficiency (73.2-81.5%) for GTX1,4, and efficient removal of interfering matrix components was also achieved during the MISPE process for the microalgal samples. The outcome demonstrated the superiority and great potential of the MISPE procedure for direct separation of GTX1,4 from marine microalgal extracts. Copyright © 2017. Published by Elsevier Ltd.

  10. Patient and Sample Identification. Out of the Maze?

    PubMed

    Lippi, Giuseppe; Chiozza, Laura; Mattiuzzi, Camilla; Plebani, Mario

    2017-04-01

    Patient and sample misidentification may cause significant harm or discomfort to patients, especially when incorrect data are used for performing specific healthcare activities. Efficient and high-quality care can hence only start from accurate patient identification. There are many opportunities for misidentification in healthcare and laboratory medicine, including homonymy, incorrect patient registration, reliance on wrong patient data, mistakes in order entry, collection of biological specimens from the wrong patients, inappropriate sample labeling, and inaccurate entry or erroneous transmission of test results through the laboratory information system. Many ongoing efforts are made to prevent this important healthcare problem, entailing streamlined strategies for identifying patients throughout the healthcare industry by means of traditional and innovative identifiers, as well as technologic tools that may enhance both the quality and efficiency of blood tube labeling. The aim of this article is to provide an overview of the liability of identification errors in healthcare and a pragmatic approach for addressing the so-called patient identification crisis.

  11. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending with non-taconite mineral processing applications.

  12. Facile and easily popularized synthesis of L-cysteine-functionalized magnetic nanoparticles based on one-step functionalization for highly efficient enrichment of glycopeptides.

    PubMed

    Feng, Xiaoyan; Deng, Chunhui; Gao, Mingxia; Zhang, Xiangmin

    2018-01-01

    Protein glycosylation is one of the most important post-translational modifications, and efficient enrichment and separation of glycopeptides from complex samples are crucial for the thorough analysis of glycosylation. Developing novel hydrophilic materials with facile and easily popularized synthesis is an urgent need in large-scale glycoproteomics research. Herein, for the first time, a one-step functionalization strategy based on metal-organic coordination was proposed, and Fe3O4 nanoparticles were directly functionalized with zwitterionic hydrophilic L-cysteine (L-Cys), greatly simplifying the synthetic procedure. The easily synthesized Fe3O4/L-Cys possessed excellent hydrophilicity and a simple composition, contributing to affinity for glycopeptides and reduced nonspecific interaction. Thus, Fe3O4/L-Cys nanoparticles showed outstanding sensitivity (25 amol/μL), high selectivity (a mixture of bovine serum albumin and horseradish peroxidase tryptic digests at a mass ratio of 100:1), good reusability (five repeated uses), and stability (room-temperature storage for 1 month). Encouragingly, in the glycosylation analysis of human serum, a total of 376 glycopeptides with 393 N-glycosylation sites corresponding to 118 glycoproteins were identified after enrichment with Fe3O4/L-Cys, which was superior to previously reported L-Cys-modified magnetic materials. Furthermore, applying the one-step functionalization strategy, cysteamine- and glutathione-functionalized Fe3O4 nanoparticles were successfully synthesized and also achieved efficient glycopeptide enrichment in human serum. The results indicate an efficient and easily popularized strategy for glycoproteomics as well as for the synthesis of novel materials. Graphical abstract: Fe3O4/L-Cys nanoparticles based on one-step functionalization for highly efficient enrichment of glycopeptides.

  13. A green protocol for efficient discovery of novel natural compounds: characterization of new ginsenosides from the stems and leaves of Panax ginseng as a case study.

    PubMed

    Qiu, Shi; Yang, Wen-Zhi; Shi, Xiao-Jian; Yao, Chang-Liang; Yang, Min; Liu, Xuan; Jiang, Bao-Hong; Wu, Wan-Ying; Guo, De-An

    2015-09-17

    Exploration of new natural compounds is of vital significance for drug discovery and development. Conventional approaches based on systematic phytochemical isolation are inefficient and consume large amounts of organic solvent. This study presents an integrated strategy that combines offline comprehensive two-dimensional liquid chromatography, hybrid linear ion-trap/Orbitrap mass spectrometry, and NMR analysis (2D LC/LTQ-Orbitrap-MS/NMR), aiming to establish a green protocol for the efficient discovery of new natural molecules. A comprehensive chemical analysis of the total ginsenosides of stems and leaves of Panax ginseng (SLP), a cardiovascular disease medicine, was performed following this strategy. An offline 2D LC system was constructed with an orthogonality of 0.79 and a practical peak capacity of 11,000. The much greener UHPLC separation and LTQ-Orbitrap-MS detection by data-dependent high-energy C-trap dissociation (HCD)/dynamic exclusion were employed for separation and characterization of ginsenosides from thirteen fractionated SLP samples. Consequently, a total of 646 ginsenosides were characterized, 427 of which had not previously been isolated from the genus Panax L. The ginsenosides identified from SLP exhibited distinct sapogenin diversity and molecular isomerism. NMR analysis was finally employed to verify and complement the MS-oriented structural characterization. The established 2D LC/LTQ-Orbitrap-MS/NMR approach outperforms conventional approaches in terms of significantly improved efficiency and much lower consumption of drug material and organic solvent. The integrated strategy enables a deep investigation of the therapeutic basis of an herbal medicine and facilitates new compound discovery in an efficient and environmentally friendly manner. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Assessing forest windthrow damage using single-date, post-event airborne laser scanning data

    Treesearch

    Gherardo Chirici; Francesca Bottalico; Francesca Giannetti; Barbara Del Perugia; Davide Travaglini; Susanna Nocentini; Erico Kutchartt; Enrico Marchi; Cristiano Foderi; Marco Fioravanti; Lorenzo Fattorini; Lorenzo Bottai; Ronald McRoberts; Erik Næsset; Piermaria Corona; Bernardo Gozzini

    2017-01-01

    One of many possible climate change effects in temperate areas is the increase in frequency and severity of windstorms; thus, fast and cost-efficient new methods are needed to evaluate wind-induced damages in forests. We present a method for assessing windstorm damages in forest landscapes based on a two-stage sampling strategy using single-date, post-event airborne...

  15. An integrative strategy for quantitative analysis of the N-glycoproteome in complex biological samples

    PubMed Central

    2014-01-01

    Background The complexity of protein glycosylation makes it difficult to characterize glycosylation patterns on a proteomic scale. In this study, we developed an integrated strategy for comparatively and quantitatively analyzing N-glycosylation/glycoproteins from complex biological samples in a high-throughput manner. This strategy entailed separating and enriching glycopeptides/glycoproteins using lectin affinity chromatography, tandem labeling them with 18O/16O to generate a mass shift of 6 Da between paired glycopeptides, and finally analyzing them with liquid chromatography-mass spectrometry (LC-MS) and the automatic quantitative method we developed based on Mascot Distiller. Results The accuracy and repeatability of this strategy were first verified using standard glycoproteins; linearity was maintained within a range of 1:10–10:1. The peptide concentration ratios obtained by the self-built quantitative method were similar to both the manually calculated and theoretical values, with a standard deviation (SD) of 0.023–0.186 for glycopeptides. The feasibility of the strategy was further confirmed with serum from hepatocellular carcinoma (HCC) patients and healthy individuals; the expression of 44 glycopeptides and 30 glycoproteins differed significantly between HCC patient and control serum. Conclusions This strategy is accurate, repeatable, and efficient, and may be a useful tool for identification of disease-related N-glycosylation/glycoprotein changes. PMID:24428921

  16. An automation-assisted generic approach for biological sample preparation and LC-MS/MS method validation.

    PubMed

    Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S

    2011-09-01

    Although it is well known that automation can significantly improve the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and of practical procedures for working with robotic liquid-handling systems. Several comprehensive automation-assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly, and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to widely implementing automation in the routine bioanalysis of samples in support of drug-development programs.

  17. A new strategy for complete identification of sea buckthorn cultivars by using random amplified polymorphic DNA markers.

    PubMed

    Yang, G; Ding, J; Wu, L R; Duan, Y D; Li, A Y; Shan, J Y; Wu, Y X

    2015-03-13

    DNA fingerprinting is a popular and important technique with several advantages in plant cultivar identification. However, this technique has not been used widely and efficiently in practical plant identification because the analysis and recording of data generated from fingerprinting and genotyping are tedious and difficult. We developed a novel approach, known as the cultivar identification diagram (CID) strategy, that uses DNA markers to separate plant individuals in a more efficient, practical, and referable manner. A CID was manually constructed, with a polymorphic marker generated from each polymerase chain reaction for sample separation. In this study, 67 important sea buckthorn cultivars cultivated in China were successfully separated with random amplified polymorphic DNA markers using the CID analysis strategy, with only seven 11-nucleotide primers employed. The utility of the CID of these 67 sea buckthorn cultivars was verified by identifying two randomly chosen groups of cultivars among the 67. The main advantages of this identification strategy are that fewer primers are needed and that all cultivars can be separated with the corresponding primers. This sea buckthorn CID was able to separate any cultivar among the 67 studied, which is useful for sea buckthorn cultivar identification, cultivar rights protection, and the sea buckthorn nursery industry in China.

  18. Integration mechanisms and hospital efficiency in integrated health care delivery systems.

    PubMed

    Wan, Thomas T H; Lin, Blossom Yen-Ju; Ma, Allen

    2002-04-01

    This study analyzes integration mechanisms that affect system performance, measured by indicators of efficiency, in integrated delivery systems (IDSs) in the United States. The research question is: do integration mechanisms improve IDSs' efficiency in hospital care? The American Hospital Association's Annual Survey (1998) and Dorenfest's Survey on Information Systems in Integrated Healthcare Delivery Systems (1998) were used to conduct the study, with the IDS as the unit of analysis. A covariance structure equation model of the effects of system integration mechanisms on IDS performance was formulated and validated by an empirical examination of IDSs. The study sample includes 973 hospital-based integrated health care delivery systems operating in the United States, drawn from the list of Dorenfest's Survey on Information Systems in Integrated Health Care Delivery Systems. The measurement indicators of system integration mechanisms are categorized into six related domains: informatic integration, case management, hybrid physician-hospital integration, forward integration, backward integration, and high-tech medical services. The multivariate analysis reveals that integration mechanisms in system operation are positively correlated and positively affect IDSs' efficiency. The six domains of integration mechanisms account for 58.9% of the total variance in hospital performance. Service differentiation strategies, such as offering more high-tech medical services, have a much stronger influence on efficiency than other integration mechanisms do. The beneficial effects of integration mechanisms have been realized in IDS performance. High efficiency in hospital care can be achieved by employing proper integration strategies in operations.

  19. Resource efficiency potential of selected technologies, products and strategies.

    PubMed

    Rohn, Holger; Pastewski, Nico; Lettenmeier, Michael; Wiesen, Klaus; Bienge, Katrin

    2014-03-01

    Despite rising prices for natural resources during the past 30 years, global consumption of natural resources is still growing, leading to ecological, economic and social problems. So far, however, limited effort has been made to decrease the natural resource use of goods and services. While resource efficiency is already on the political agenda (EU and national resource strategies), there are still substantial knowledge gaps on the effectiveness of resource efficiency improvement strategies in different fields. In this context, and within the project "Material Efficiency and Resource Conservation", the natural resource use of 22 technologies, products and strategies was calculated and their resource efficiency potential analysed. In a preliminary literature- and expert-based identification process, over 250 technologies, strategies and products regarded as resource efficient were identified. Of these, 22 subjects with high resource efficiency potential were selected. They cover a wide range of relevant technologies, products and strategies, such as energy supply and storage, Green IT, transportation, foodstuffs, agricultural engineering, design strategies, lightweight construction, as well as the concept "Using Instead of Owning". To assess the life-cycle-wide resource use of the selected subjects, the material footprint was applied as a reliable indicator. In addition, sustainability criteria were considered on a qualitative basis. The results presented in this paper show significant resource efficiency potential for many technologies, products and strategies. Copyright © 2013. Published by Elsevier B.V.

  20. An audit strategy for progression-free survival

    PubMed Central

    Dodd, Lori E.; Korn, Edward L.; Freidlin, Boris; Gray, Robert; Bhattacharya, Suman

    2010-01-01

    Summary In randomized clinical trials, the use of potentially subjective endpoints has led to frequent use of blinded independent central review (BICR) and event adjudication committees to reduce possible bias in treatment effect estimators based on local evaluations (LE). In oncology trials, progression-free survival (PFS) is one such endpoint. PFS requires image interpretation to determine whether a patient’s cancer has progressed, and BICR has been advocated to reduce the potential for endpoints to be biased by knowledge of treatment assignment. There is current debate, however, about the value of such reviews with time-to-event outcomes like PFS. We propose a BICR audit strategy as an alternative to a complete-case BICR to provide assurance of the presence of a treatment effect. We develop an auxiliary-variable estimator of the log-hazard ratio that is more efficient than simply using the audited (i.e., sampled) BICR data for estimation. Our estimator incorporates information from the LE on all the cases and the audited BICR cases, and is an asymptotically unbiased estimator of the log-hazard ratio from BICR. The estimator offers considerable efficiency gains that improve as the correlation between LE and BICR increases. A two-stage auditing strategy is also proposed and evaluated through simulation studies. The method is applied retrospectively to a large oncology trial that had a complete-case BICR, showing the potential for efficiency improvements. PMID:21210772

  1. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining such probabilities using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
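
    To make the subset-simulation idea concrete, the sketch below estimates a small tail probability by chaining conditional levels, each set at the p0 quantile of the previous population and refreshed with a component-wise modified Metropolis step working directly in standard normal space. The limit-state function g, the threshold b, and all tuning constants are illustrative assumptions, not the paper's biochemical model.

```python
# Minimal subset-simulation sketch on a toy rare event: P(g(X) > b) for
# X ~ N(0, I). Everything here is assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return x.sum(axis=-1)          # stand-in for a rare-event statistic

def subset_simulation(dim=10, b=12.0, n=2000, p0=0.1):
    x = rng.standard_normal((n, dim))
    y = g(x)
    prob, level = 1.0, -np.inf
    while level < b:
        n_seed = int(p0 * n)
        idx = np.argsort(y)[::-1][:n_seed]      # best p0 fraction
        level = min(y[idx[-1]], b)              # next intermediate level
        prob *= (y >= level).mean()             # conditional probability
        if level >= b:
            break
        # Modified Metropolis: component-wise Gaussian proposals from the
        # seeds, accepted only if the chain stays inside the current subset.
        xs, ys = [], []
        for xc in x[idx]:
            for _ in range(n // n_seed):
                cand = xc + 0.8 * rng.standard_normal(dim)
                keep = rng.random(dim) < np.exp(0.5 * (xc**2 - cand**2))
                prop = np.where(keep, cand, xc)
                if g(prop) >= level:
                    xc = prop
                xs.append(xc); ys.append(g(xc))
        x, y = np.array(xs), np.array(ys)
    return prob

print(subset_simulation())   # exact value: 1 - Phi(12/sqrt(10)) ~ 7.4e-5
```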

  2. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    PubMed

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows species richness, abundance and composition to be confidently estimated at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40,337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant for capturing the species composition pattern. An optimal sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) sample for 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 to suitably record the local species composition, and (4) separate trap groups by more than 5-10 km to avoid spatial autocorrelation. For the evaluation of other sampling protocols, we recommend first identifying the elements of the sampling design that could affect the sampling effort (the number of traps, sampling duration, type and proportion of bait) and their spatial distribution (spatial arrangement of the traps), and then evaluating how they affect richness, abundance and species composition estimates.

  3. A strategy for characterizing aerosol-sampling transport efficiency.

    NASA Astrophysics Data System (ADS)

    Schwarz, J. P.

    2017-12-01

    A fundamental concern when sampling aerosol in the laboratory or in situ, on the ground or (especially) from aircraft, is characterizing transport losses due to particles contacting the walls of the tubing used for transport. Depending on the size range of the aerosol, different mechanisms dominate these losses: diffusion for ultra-fine particles, and inertial and gravitational settling losses for the coarse mode. In the coarse mode, losses become intractable very quickly as particle size increases above 5 µm diameter. Here we present these issues, along with a concept approach to reducing aerosol losses via strategic dilution with porous tubing, including results of laboratory testing of a prototype. We infer the potential value of this approach to atmospheric aerosol sampling.

  4. Evaluation on the efficiencies of county-level Centers for Disease Control and Prevention in China: results from a national survey.

    PubMed

    Li, Chengyue; Sun, Mei; Shen, Jay J; Cochran, Christopher R; Li, Xiaojiao; Hao, Mo

    2016-09-01

    The Chinese government has greatly increased funding for disease control and prevention since the 2003 Severe Acute Respiratory Syndrome crisis, but it is also concerned with whether these increased resources have been used efficiently to improve public health services. We aimed to assess the efficiency of county-level Centers for Disease Control and Prevention (CDCs) in China and to identify strategies for optimising their performance. A total of 446 county-level CDCs were selected by systematic sampling throughout China. The data envelopment analysis framework was used to calculate efficiency scores for the sampled CDCs in 2010. The Charnes, Cooper and Rhodes (CCR) model was applied to calculate overall and scale efficiency, and the Banker, Charnes and Cooper (BCC) model was used to assess technical efficiency. The models included three inputs and seven outputs. A projection analysis was conducted to identify the difference between projected and actual values for inputs and outputs. The average overall efficiency score of the CDCs was 0.317, the average technical efficiency score was 0.442, and 88.3% of CDCs operated with decreasing returns to scale. Projection analysis indicated that all seven categories of outputs were underproduced. CDCs in the eastern region tended to perform better than CDCs in the central and western regions. Most county-level CDCs in China operated inefficiently. Emphasis should be put on increasing staff and general operating expenses through current governmental funding, upgrading healthcare providers' competencies and enhancing the standardisation of operational management, so that CDCs can utilise their resources more efficiently. © 2016 John Wiley & Sons Ltd.
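
    As a rough sketch of the DEA machinery named above, the code below solves the input-oriented envelopment linear program with scipy; vrs=False gives the CCR (constant-returns, "overall") score and vrs=True the BCC (variable-returns, "technical") score, whose ratio is the usual scale efficiency. The five toy decision-making units are invented, not the surveyed CDC data.

```python
# Hedged sketch of input-oriented DEA; toy data, not the CDC survey.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, vrs=False):
    """Efficiency of unit o. vrs=False: CCR model; vrs=True: BCC model.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    A_in = np.hstack([-X[[o]].T, X.T])           # lam.x_i <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # lam.y_r >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)][None] if vrs else None  # sum(lam) = 1
    b_eq = [1.0] if vrs else None
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lambda >= 0
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds, method="highs")
    return res.fun                               # theta in (0, 1]

X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])  # inputs
Y = np.ones((5, 1))                                               # output
overall = [dea_efficiency(X, Y, o) for o in range(5)]
technical = [dea_efficiency(X, Y, o, vrs=True) for o in range(5)]
scale = [a / b for a, b in zip(overall, technical)]
```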

  5. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    NASA Astrophysics Data System (ADS)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers and have very low efficiency. Intelligent sampling strategies are therefore required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then selected adaptively by determining which candidate position is most likely to lie outside the required tolerance zone, and the point is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency when measuring large structured surface areas.
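
    The selection rule can be sketched as follows, under assumed details (a hypothetical 1-D surface, an RBF kernel, and a symmetric tolerance zone of +/- tol); this is not the authors' code. Each iteration fits the GP and measures next wherever the posterior probability of violating the tolerance is largest.

```python
# GP-guided adaptive sampling sketch; surface, kernel, and tol are assumed.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def surface(x):                          # hypothetical measured profile
    return 0.05 * np.sin(8 * x) + 0.02 * x

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 5)[:, None]        # sparse initial scan
cand = np.linspace(0, 1, 500)[:, None]   # candidate probe positions
tol = 0.04                               # assumed tolerance zone

gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6)
for _ in range(20):
    gp.fit(X, surface(X).ravel())
    mu, sd = gp.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-9)            # guard against zero variance
    # Posterior probability that the surface lies outside [-tol, +tol]:
    p_out = norm.sf((tol - mu) / sd) + norm.cdf((-tol - mu) / sd)
    X = np.vstack([X, cand[[np.argmax(p_out)]]])   # measure there next
```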

  6. High-Resolution Melting (HRM) of Hypervariable Mitochondrial DNA Regions for Forensic Science.

    PubMed

    Dos Santos Rocha, Alípio; de Amorim, Isis Salviano Soares; Simão, Tatiana de Almeida; da Fonseca, Adenilson de Souza; Garrido, Rodrigo Grazinoli; Mencalha, Andre Luiz

    2018-03-01

    Forensic strategies commonly proceed by analysis of short tandem repeats (STRs); however, additional strategies have been proposed for forensic science. This article therefore standardizes high-resolution melting (HRM) analysis of DNA for forensic applications. For HRM, mitochondrial DNA (mtDNA) from eight individuals was extracted from mucosa swabs with DNAzol reagent; samples were amplified by PCR and submitted to HRM analysis to identify differences in hypervariable (HV) regions I and II. To confirm the HRM results, all PCR products were verified by DNA sequencing. The data suggest that it is possible to discriminate DNA from different samples by their HRM curves. In addition, an uncommon dual dissociation was identified in a single PCR product, extending HRM analysis through the evaluation of melting peaks. Thus, HRM is accurate and useful for screening small differences in the HVI and HVII regions of mtDNA and can increase the efficiency of laboratory routines based on forensic genetics. © 2017 American Academy of Forensic Sciences.

  7. Generalized essential energy space random walks to more effectively accelerate solute sampling in aqueous environment

    NASA Astrophysics Data System (ADS)

    Lv, Chao; Zheng, Lianqing; Yang, Wei

    2012-01-01

    Molecular dynamics sampling can be enhanced by promoting potential energy fluctuations, for instance based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, namely that enlarging event-irrelevant energy fluctuations can destroy sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in aqueous environments, in the current work we generalized the EESRW method to a two-dimensional EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that the two essential energy components have distinctive interplays in different conformational events. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimensional EESRW strategy; at the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher-dimensional schemes and can be employed in more advanced enhanced-sampling methods, such as the recent orthogonal space random walk method.

  8. Fast and high-efficiency magnetic surface imprinting based on microwave-accelerated reversible addition fragmentation chain transfer polymerization for the selective extraction of estrogen residues in milk.

    PubMed

    Chen, Fangfang; Wang, Jiayu; Lu, Ruicong; Chen, Huiru; Xie, Xiaoyu

    2018-08-10

    A novel microwave-accelerated reversible addition-fragmentation chain transfer (RAFT) polymerization strategy is introduced to shorten the reaction time and improve the polymerization efficiency of conventional RAFT-based molecular imprinting technology. Magnetic molecularly imprinted polymers (MMIPs) were synthesized much more efficiently using 17β-estradiol (E2) as a template for the determination of estrogen residues. The resultant MMIPs had a well-defined thin imprinted film, favoring fast mass transfer. Moreover, the reaction time, just 1/24 of that required by conventional heating, was significantly decreased, improving the reaction efficiency and reducing the probability of side reactions. The obtained polymers have a good capacity of 6.67 mg g-1 and satisfactory selectivity toward the template molecule, with an imprinting factor of 5.11. On this basis, a method combining the resultant MMIPs as solid-phase extraction sorbents with high-performance liquid chromatography was successfully established to determine three estrogen residues in milk samples. For E2, estrone, and estriol, the limits of detection were calculated to be 0.03, 0.08, and 0.06 ng mL-1, respectively, and the limits of quantification were 0.11, 0.27, and 0.21 ng mL-1, respectively. At spiked levels of 1, 5, and 10 ng mL-1, the recoveries of the three estrogens ranged from 69.1% to 91.9%, and the intra-day relative standard deviation (RSD) was less than 5.7%. In addition, the resultant MMIPs exhibited good reproducibility and reusability, with an inter-batch RSD of 5.3% and an intra-batch RSD of 6.2%. Overall, this strategy facilitates the preparation of MMIPs with good architecture and high reaction efficiency for the analysis of complicated real samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose, but these studies often ignore issues relevant to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which a wealth of observation data is available. The method utilizes the well-established D-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model and subsequently applied to an existing, complex regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
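
    The greedy, D-optimality-driven selection step can be sketched generically: given a Jacobian of candidate observations' sensitivities to parameters, repeatedly add the row giving the largest log-determinant gain, which the matrix determinant lemma makes cheap to evaluate. The Jacobian below is random mock data, and the paper's extra machinery (Null-Space Monte Carlo posterior samples, the minimax robustness loop) is deliberately omitted.

```python
# Greedy D-optimal observation selection sketch; J is invented, not output
# of the paper's workflow.
import numpy as np

def greedy_d_optimal(J, k, prior_var=1.0):
    """Pick k rows of J (candidate observations) that greedily maximise
    log det(J_S^T J_S + prior precision)."""
    n, p = J.shape
    M = np.eye(p) / prior_var              # prior precision
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            # det(M + j j^T) = det(M) * (1 + j^T M^-1 j), so the gain in
            # log-determinant is log1p of the quadratic form below.
            gain = np.log1p(J[i] @ np.linalg.solve(M, J[i]))
            if gain > best_gain:
                best, best_gain = i, gain
        M = M + np.outer(J[best], J[best])
        chosen.append(best)
    return chosen

rng = np.random.default_rng(2)
J = rng.normal(size=(200, 8))              # 200 candidate obs, 8 parameters
print(greedy_d_optimal(J, k=10))
```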

  10. Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats

    PubMed Central

    Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move the nets frequently; in bird surveys it could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579

  11. Integrated strategy for identifying minor components in complex samples combining mass defect, diagnostic ions and neutral loss information based on ultra-performance liquid chromatography-high resolution mass spectrometry platform: Folium Artemisiae Argyi as a case study.

    PubMed

    Ren, Dabing; Ran, Lu; Yang, Chong; Xu, Meilin; Yi, Lunzhao

    2018-05-18

    Ultra-performance liquid chromatography coupled to high-resolution mass spectrometry (UPLC-HRMS) has been used as a powerful tool to profile chemicals in traditional Chinese medicines. However, identification of potentially bioactive compounds remains challenging because of the large amount of information contained in raw UPLC-HRMS data. Ubiquitous matrix interference, in particular, makes it difficult to characterize minor components. Therefore, rapid recognition and efficient extraction of the corresponding parent ions is critically important for identifying compounds of interest in complex samples. Herein, we propose an integrated filtering strategy to remove unrelated or interfering MS1 ions from raw UPLC-HRMS data, which helps to retain the MS features of the target components and expose the compounds of interest as effectively as possible. The proposed strategy is based on a combination of different filtering methods, including the nitrogen rule, mass defect filtering, and neutral loss/diagnostic fragment ion filtering. The strategy was validated by the rapid screening and identification of 16 methoxylated flavonoids and 55 chlorogenic acid analogues from the raw UPLC-HRMS dataset of Folium Artemisiae Argyi. In particular, the successful detection of several minor components indicated that the integrated strategy has clear advantages over individual filtering methods, and it can be used as a promising method for screening and identifying compounds from complex samples, such as herbal medicines. Copyright © 2018 Elsevier B.V. All rights reserved.
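
    Two of the named filters are easy to sketch on a list of precursor masses. The defect-window parameters, the tolerance, and the example ions below are assumptions chosen for illustration (the neutral loss used is a caffeoyl unit, 162.0528 Da, typical of chlorogenic acid analogues), not values from the paper.

```python
# Toy mass-defect and neutral-loss filters over a list of m/z values.
import numpy as np

CAFFEOYL = 162.0528                     # assumed neutral loss of interest (Da)

def mass_defect_filter(mz, center=0.10, width=0.08):
    """Keep ions whose fractional mass sits in the expected defect window."""
    defect = mz - np.floor(mz)
    return np.abs(defect - center) <= width

def neutral_loss_pairs(mz, loss, tol=0.005):
    """Find ion pairs separated by a given neutral loss, within tol."""
    diffs = mz[:, None] - mz[None, :]
    i, j = np.where(np.abs(diffs - loss) <= tol)
    return list(zip(mz[i], mz[j]))

ions = np.array([355.1024, 193.0496, 516.1268, 354.0951, 191.0561])
survivors = ions[mass_defect_filter(ions)]
pairs = neutral_loss_pairs(ions, CAFFEOYL)   # -> [(355.1024, 193.0496)]
```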

  12. A two-magnet strategy for improved mixing and capture from biofluids

    PubMed Central

    Doyle, Andrew B.; Haselton, Frederick R.

    2016-01-01

    Magnetic beads are a popular method for concentrating biomolecules from solution and have been more recently used in multistep pre-arrayed microfluidic cartridges. Typical processing strategies rely on a single magnet, resulting in a tight cluster of beads and requiring long incubation times to achieve high capture efficiencies, especially in highly viscous patient samples. This report describes a two-magnet strategy to improve the interaction of the bead surface with the surrounding fluid inside of a pre-arrayed, self-contained assay-in-a-tube. In the two-magnet system, target biomarker capture occurs at a rate three times faster than the single-magnet system. In clinically relevant biomatrices, we find a 2.5-fold improvement in biomarker capture at lower sample viscosities with the two-magnet system. In addition, we observe a 20% increase in the amount of protein captured at high viscosity for the two-magnet configuration relative to the single magnet approach. The two-magnet approach offers a means to achieve higher biomolecule extraction yields and shorter assay times in magnetic capture assays and in self-contained processor designs. PMID:27158286

  13. Implementation of a reimbursed medication review program: Corporate and pharmacy level strategies.

    PubMed

    MacKeigan, Linda D; Ijaz, Nadine; Bojarski, Elizabeth A; Dolovich, Lisa

    In 2006, the Ontario drug plan greatly reduced community pharmacy reimbursement for generic drugs. In exchange, a fee-for-service medication review program was introduced to help patients better understand their medication therapy and ensure that medications were taken as prescribed. A qualitative study of community pharmacy implementation strategies was undertaken to inform a mixed methods evaluation of the program. To describe strategies used by community pharmacies to implement a government-funded medication review service. Key informant interviews were conducted with pharmacy corporate executives and managers, as well as independent pharmacy owners. All pharmacy corporations in the province were approached; owners were purposively sampled from the registry of the pharmacist licensing body to obtain diversity in pharmacy attributes; and pharmacy managers were identified through a mix of snowball and registry sampling. Thematic qualitative coding and analysis were applied to interview transcripts. 42 key informants, including 14 executives, 15 managers/franchisees, and 11 owners, participated. The most common implementation strategy was software adaptation to flag eligible patients and to document the service. Human resource management (task shifting to technicians and increasing the technician complement), staff training, and patient identification and recruitment processes were widely mentioned. Motivational strategies including service targets and financial incentives were less frequent but controversial. Strategies typically unfolded over time, and became multifaceted. Apart from the use of targets in chain pharmacies only, strategies were similar across pharmacy ownership types. Ontario community pharmacies appeared to have done little preplanning of implementation strategies. Strategies focused on service efficiency and quantity, rather than quality. Unlike other jurisdictions, many managers supported the use of targets as motivators, and very few reported feeling pressured. This detailed account of a range of implementation strategies may be of practical value to community pharmacy decision makers. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Accelerating assimilation development for new observing systems using EFSO

    NASA Astrophysics Data System (ADS)

    Lien, Guo-Yuan; Hotta, Daisuke; Kalnay, Eugenia; Miyoshi, Takemasa; Chen, Tse-Chun

    2018-03-01

    To successfully assimilate data from a new observing system, it is necessary to develop appropriate data selection strategies that assimilate only the generally useful data. This development work is usually done by trial and error using observing system experiments (OSEs), which are very time- and resource-consuming. This study proposes a new, efficient methodology to accelerate the development using ensemble forecast sensitivity to observations (EFSO). First, non-cycled assimilation of the new observation data is conducted to compute EFSO diagnostics for each observation within a large sample. Second, the average EFSO, conditionally sampled in terms of various factors, is computed. Third, potential data selection criteria are designed based on the non-cycled EFSO statistics and tested in cycled OSEs to verify the actual assimilation impact. The usefulness of this method is demonstrated with the assimilation of satellite precipitation data. It is shown that the EFSO-based method can efficiently suggest data selection criteria that significantly improve the assimilation results.
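
    The second step, conditionally sampled average EFSO, amounts to a grouped mean over observation classes. The sketch below uses a mock table whose column names, bins, and values are invented for illustration; the sign convention assumed here is that negative EFSO means the observation reduced forecast error.

```python
# Mock conditional sampling of per-observation EFSO impacts.
import pandas as pd

obs = pd.DataFrame({
    "rain_rate": [0.2, 5.0, 12.0, 0.5, 30.0, 8.0],      # mm/h (mock)
    "surface":   ["ocean", "ocean", "land", "land", "ocean", "land"],
    "efso":      [-0.8, -1.2, 0.6, -0.1, 2.3, 0.4],     # mock impacts
})
obs["rain_bin"] = pd.cut(obs["rain_rate"], [0, 1, 10, 100])

# Average impact per (surface type, rain-rate bin).
table = obs.groupby(["surface", "rain_bin"], observed=True)["efso"].mean()

# Candidate selection criterion: keep only classes that helped on average,
# then verify the shortlisted criteria in cycled OSEs.
keep = table[table < 0].index.tolist()
```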

  15. The potential of Fourier transform infrared spectroscopy of milk samples to predict energy intake and efficiency in dairy cows.

    PubMed

    McParland, S; Berry, D P

    2016-05-01

    Knowledge of animal-level and herd-level energy intake, energy balance, and feed efficiency affects day-to-day herd management strategies; information on these traits at the individual animal level is also useful in animal breeding programs. A paucity of data, of feed intake in particular and especially at the individual cow level, hinders the inclusion of such attributes in herd management decision-support tools and breeding programs. Dairy producers have access to an individual cow milk sample at least once daily during lactation, and consequently any low-cost phenotyping strategy should consider exploiting measurable properties in this biological sample that reflect the physiological status and performance of the cow. Infrared spectroscopy is the study of the interaction of an electromagnetic wave with matter, and it is used globally to predict milk quality parameters from routinely acquired individual cow milk samples and bulk tank samples. Thus, exploiting infrared spectroscopy in next-generation phenotyping will ensure potentially rapid application globally with negligible additional implementation cost, as the infrastructure already exists. Fourier transform infrared spectroscopy (FTIRS) analysis is already used to predict milk fat and protein concentrations, the ratio of which has been proposed as an indicator of energy balance. Milk FTIRS is also able to predict the concentration of various fatty acids in milk, the composition of which is known to change when body tissue is mobilized; that is, when the cow is in negative energy balance. Energy balance is mathematically very similar to residual energy intake (REI), a suggested measure of feed efficiency. Therefore, the prediction of energy intake, energy balance, and feed efficiency (i.e., REI) from milk FTIRS seems logical. In fact, the accuracy of prediction (i.e., the correlation between predicted and actual values, with the root mean square error in parentheses) for energy intake, energy balance, and REI from milk FTIRS in dairy cows was 0.88 (20.0 MJ), 0.78 (18.6 MJ), and 0.63 (22.0 MJ), respectively, based on cross-validation. These studies, however, are limited to results from one research group based on data from 2 contrasting production systems in the United Kingdom and Ireland, and would need to be replicated across a range of production systems because the prediction equations are not accurate when the variability encountered in validation is not represented in the calibration data set. Heritable genetic variation exists for all predicted traits. Phenotypic differences in energy intake also exist among animals stratified on genetic merit for energy intake predicted from milk FTIRS, substantiating the usefulness of such FTIR-predicted phenotypes not only for day-to-day herd management, but also as part of a breeding strategy to improve cow performance. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
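
    The prediction-equation workflow can be sketched with a standard chemometric regressor. Partial least squares is an assumption here (the abstract does not prescribe the regressor), and the spectra and trait values below are synthetic stand-ins; the cross-validated correlation and RMSE mirror the accuracy metrics quoted above.

```python
# PLS sketch for spectra-to-phenotype prediction; data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n, n_wavenumbers = 300, 1060
X = rng.normal(size=(n, n_wavenumbers))       # placeholder FTIR spectra
beta = np.zeros(n_wavenumbers)
beta[100:110] = 0.5                           # a few informative bands
y = X @ beta + rng.normal(scale=5.0, size=n)  # mock energy intake (MJ)

pls = PLSRegression(n_components=10)
y_hat = cross_val_predict(pls, X, y, cv=10).ravel()
accuracy = np.corrcoef(y, y_hat)[0, 1]        # cf. the 0.88 reported
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```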

  16. Evaluation of two-dimensional electrophoresis and liquid chromatography – tandem mass spectrometry for tissue-specific protein profiling of laser-microdissected plant samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schad, Martina; Lipton, Mary S.; Giavalisco, Patrick

    2005-07-14

    Laser microdissection (LM) allows the collection of homogeneous tissue- and cell-specific plant samples. The employment of this technique with subsequent protein analysis has thus far not been reported for plant tissues, probably due to the difficulties associated with preserving a reasonable cellular morphology while, in parallel, allowing efficient protein extraction from tissue samples. The relatively large sample amount needed for successful proteome analysis is an additional issue that complicates protein profiling at a tissue- or even cell-specific level. In contrast to transcript profiling, which can be performed from very small sample amounts thanks to efficient amplification strategies, there is as yet no amplification procedure available for proteins. In the current study, we compared different tissue preparation techniques prior to LM/laser pressure catapulting (LMPC) with respect to their suitability for protein retrieval. Cryosectioning was identified as the best compromise between tissue morphology and effective protein extraction. After collection of vascular bundles from Arabidopsis thaliana stem tissue by LMPC, proteins were extracted and subjected to protein analysis, either by classical two-dimensional gel electrophoresis (2-DE) or by high-efficiency liquid chromatography (LC) in conjunction with tandem mass spectrometry (MS/MS). Our results demonstrate that both methods can be used with LMPC-collected plant material. However, because of the significantly lower sample amount required for LC-MS/MS than for 2-DE, the combination of LMPC and LC-MS/MS has a higher potential to enable comprehensive proteome analysis of specific plant tissues.

  17. Settling Efficiency of Urban Particulate Matter Transported by Stormwater Runoff.

    PubMed

    Carbone, Marco; Penna, Nadia; Piro, Patrizia

    2015-09-01

    The main purpose of control measures in urban areas is to retain the particulate matter washed off impermeable surfaces by stormwater. In stormwater control measures, particulate matter removal typically occurs via sedimentation. Settling column tests were performed to examine the settling efficiency of such units using monodisperse and heterodisperse particulate matter (for which the particle size distributions were measured and modelled by the cumulative gamma distribution). To investigate the dependence of settling efficiency on the characteristics of the particulate matter, a variant of evolutionary polynomial regression (EPR) called EPR MOGA XL, a Microsoft Excel function based on the multi-objective EPR-MOGA technique, was used as a data-mining strategy. The results of this study show that settling efficiency is a function of the initial total suspended solids (TSS) concentration and of the median diameter (d50 index) obtained from the particle size distributions (PSDs) of the samples.
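
    Fitting the cumulative gamma model to a measured size sample and reading off the d50 index is a one-liner with scipy; the mock diameters below are assumptions for the sketch, not the study's data.

```python
# Gamma fit to a particle-size sample and the derived d50 index.
import numpy as np
from scipy import stats

d_um = np.array([12., 25., 38., 55., 75., 110., 150., 210., 300., 420.])

shape, loc, scale = stats.gamma.fit(d_um, floc=0)   # location fixed at 0
d50 = stats.gamma.ppf(0.5, shape, scale=scale)      # median diameter (um)
finer_than_75um = stats.gamma.cdf(75.0, shape, scale=scale)
```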

  18. Accurate Sample Assignment in a Multiplexed, Ultrasensitive, High-Throughput Sequencing Assay for Minimal Residual Disease.

    PubMed

    Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike

    2016-07-01

    High-throughput sequencing (HTS, also known as next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10^6 copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
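
    The spirit of probability-based dual-index demultiplexing can be sketched as follows: score each known index pair by the likelihood of the observed bases given their Phred qualities, rather than by exact or Hamming matching. This is a generic illustration, not the Error-Aware Demultiplexer itself; the index sequences and margin threshold are invented.

```python
# Toy likelihood-based demultiplexing over two hypothetical index pairs.
import math

SAMPLES = {"S1": ("ACGTACGT", "TTGCAGGA"),
           "S2": ("ACGTACGA", "TTGCAGGT")}

def log_prob(observed, quals, true_index):
    """log P(observed bases | true index) from per-base Phred qualities."""
    lp = 0.0
    for o, q, t in zip(observed, quals, true_index):
        e = 10 ** (-q / 10)                      # base-call error prob
        lp += math.log(1 - e) if o == t else math.log(e / 3)
    return lp

def assign(i7, q7, i5, q5, min_margin=3.0):
    scores = {s: log_prob(i7, q7, a) + log_prob(i5, q5, b)
              for s, (a, b) in SAMPLES.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Demand a clear margin over the runner-up; otherwise leave unassigned.
    if scores[ranked[0]] - scores[ranked[1]] >= min_margin:
        return ranked[0]
    return None

print(assign("ACGTACGT", [30] * 8, "TTGCAGGA", [30] * 8))   # -> S1
```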

  19. [Fundamental strategies to address the problem of public health service delivery insufficiency of disease prevention and control system of China].

    PubMed

    Shao, Jing-jing; Yu, Jing-jin; Yu, Ming-zhu; Duan, Yong; Gong, Xiangguang; Chen, Zheng; Wang, Hua; Shi, Peiwu; Liang, Zhankai; Yang, Feng; Wang, Dunzhi; Yue, Jianning; Luo, Shi; Luo, Li; Wang, Weicheng; Wang, Ying; Sun, Mei; Su, Zhongxin; Ma, Ning; Xie, Hongbin; Hao, Mo

    2005-03-01

    To develop and validate strategies addressing the insufficient public health service delivery of China's disease prevention and control system, 205 articles in 8 national academic journals concerning health service management were reviewed, and boundary analysis was employed to summarize the various reform strategies. Based on the causes and mechanisms of the insufficient public health service delivery of the disease prevention and control system, logic analysis was employed to develop fundamental strategies, which were then tested with intention questionnaires completed by 154 CDCs. The fundamental strategies endorsed by over 95% of the sampled CDCs were: ensuring that government assumes the financing function for disease prevention and control and secures feasible investment in centers for disease prevention and control, while improving the working efficiency of CDCs through strengthened management and reform of the government's investment approach.

  20. General method for rapid purification of native chromatin fragments.

    PubMed

    Kuznetsov, Vyacheslav I; Haws, Spencer A; Fox, Catherine A; Denu, John M

    2018-05-24

    Biochemical, proteomic and epigenetic studies of chromatin rely on the ability to isolate native nucleosomes efficiently, in high yield and purity. However, isolation of native chromatin suitable for many downstream experiments remains a challenging task. This is especially true for the budding yeast Saccharomyces cerevisiae, which continues to serve as an important model organism for the study of chromatin structure and function. Here, we developed a time- and cost-efficient universal protocol for the isolation of native chromatin fragments from yeast, insect, and mammalian cells. The protocol preserves histone posttranslational modifications in the native chromatin state and is applicable to both parallel multi-sample spin-column purification and large-scale isolation. It is based on the efficient and stable purification of polynucleosomes and features a combination of optimized cell lysis and purification conditions, three options for chromatin fragmentation, and a novel ion-exchange chromatographic purification strategy. The procedure will aid chromatin researchers interested in isolating native chromatin material for biochemical studies, and serves as a mild, acid- and detergent-free sample preparation method for mass spectrometry analysis. Published under license by The American Society for Biochemistry and Molecular Biology, Inc.

  1. Sex differences in navigation strategy and efficiency.

    PubMed

    Boone, Alexander P; Gong, Xinyi; Hegarty, Mary

    2018-05-22

    Research on human navigation has indicated that males and females differ in self-reported navigation strategy as well as objective measures of navigation efficiency. In two experiments, we investigated sex differences in navigation strategy and efficiency using an objective measure of strategy, the dual-solution paradigm (DSP; Marchette, Bakker, & Shelton, 2011). Although navigation by shortcuts and learned routes were the primary strategies used in both experiments, as in previous research on the DSP, individuals also utilized route reversals and sometimes found the goal location as a result of wandering. Importantly, sex differences were found in measures of both route selection and navigation efficiency. In particular, males were more likely to take shortcuts and reached their goal location faster than females, while females were more likely to follow learned routes and wander. Self-report measures of strategy were only weakly correlated with objective measures of strategy, casting doubt on their usefulness. This research indicates that the sex difference in navigation efficiency is large, and only partially related to an individual's navigation strategy as measured by the dual-solution paradigm.

  2. Sensor Measurement Strategies for Monitoring Offshore Wind and Wave Energy Devices

    NASA Astrophysics Data System (ADS)

    O'Donnell, Deirdre; Srbinovsky, Bruno; Murphy, Jimmy; Popovici, Emanuel; Pakrashi, Vikram

    2015-07-01

    While the potential of offshore wind and wave energy devices is well established and accepted, operations and maintenance issues are still not well researched or understood. In this regard, scaled model testing has gained popularity over time for such devices at various technology readiness levels. The dynamic responses of these devices are typically measured by different instruments during such scaled tests, but agreed guidelines for sensor choice, measurement and placement are still not in place. This paper compares the dynamic responses of some of these sensors from a scaled ocean wave test to highlight the importance of sensor measurement strategies. The possibility of using multiple cheaper sensors of seemingly inferior performance, as opposed to deploying a small number of expensive and accurate sensors, is also explored. An energy-aware adaptive sampling theory is applied to highlight the possibility of more efficient computing when large volumes of data are available from the tested structures. Efficient sensor measurement strategies are expected to have a positive impact on the development of a device at different technology readiness levels, and to help reduce operation and maintenance costs if such an approach is adopted for devices in operation.
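
    One common flavor of energy-aware adaptive sampling, sketched below, scales a sensor's sampling rate with short-window signal variance so quiet periods cost little energy. This is an illustrative rule under assumed hardware limits, not the specific theory the authors applied.

```python
# Variance-driven adaptive sampling-rate rule; all constants are assumed.
import numpy as np

def adaptive_rates(signal, fs_max=100.0, fs_min=5.0, win=64, var_ref=1e-3):
    rates = []
    for k in range(0, len(signal) - win, win):
        v = np.var(signal[k:k + win])            # local activity measure
        rates.append(np.clip(fs_max * v / var_ref, fs_min, fs_max))
    return np.array(rates)

t = np.linspace(0, 60, 6000)                             # 100 Hz record
x = np.where(t > 30, np.sin(2 * np.pi * 0.8 * t), 0.01)  # calm, then waves
print(adaptive_rates(x).round(1))   # ~5 Hz while calm, 100 Hz in waves
```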

  3. 75 FR 29993 - Department of Commerce: Trade Promotion Coordinating Committee Renewable Energy and Energy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... Coordinating Committee Renewable Energy and Energy Efficiency Export Strategy To Support the National Export... Trade Promotion Coordinating Committee's (TPCC) Renewable Energy and Energy Efficiency Working Group is developing a U.S. Renewable Energy and Energy Efficiency Export Strategy (the Strategy) to guide U.S...

  4. Desert Beetle-Inspired Superwettable Patterned Surfaces for Water Harvesting.

    PubMed

    Yu, Zhenwei; Yun, Frank F; Wang, Yanqin; Yao, Li; Dou, Shixue; Liu, Kesong; Jiang, Lei; Wang, Xiaolin

    2017-09-01

    With the impacts of climate change and the impending crisis of clean drinking water, designing functional materials with large water capacity for harvesting water from fog has received much attention in recent years. Nature has evolved different strategies for surviving dry, arid, and xeric conditions, and is a school for human beings. In this contribution, inspired by the Stenocara beetle, superhydrophilic/superhydrophobic patterned surfaces were fabricated on silica poly(dimethylsiloxane) (PDMS)-coated superhydrophobic surfaces using a pulsed laser deposition approach with masks. The resultant samples with patterned wettability demonstrate improved water-harvesting efficiency in comparison with both the silica PDMS-coated superhydrophobic surface and the Pt nanoparticle-coated superhydrophilic surface. The maximum water-harvesting efficiency can reach about 5.3 g cm-2 h-1. Both the size and the percentage of the Pt-coated superhydrophilic square regions on the patterned surface affect the condensation and coalescence of water droplets, as well as the final water-harvesting efficiency. The present water-harvesting strategy should provide an avenue to alleviate the water crisis facing mankind in certain arid regions of the world. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Preparation of solid-phase microextraction fibers by in-mold coating strategy for derivatization analysis of 24-epibrassinolide in pollen samples.

    PubMed

    Pan, Jialiang; Hu, Yuling; Liang, Tingan; Li, Gongke

    2012-11-02

    A novel and simple in-mold coating strategy is proposed for the preparation of uniform solid-phase microextraction (SPME) coatings. The strategy is based on the direct synthesis of the polymer coating on the surface of a solid fiber using a glass capillary as the mold. The capillary is then removed, and a polymer coating of well-controlled thickness can be applied to the silica fiber reproducibly. Following this strategy, a new poly(acrylamide-co-ethylene glycol dimethacrylate) (poly(AM-co-EGDMA)) coating was prepared for the preconcentration of 24-epibrassinolide (24-epiBL) from plant matrices. The coating had an enrichment factor of 32, and its extraction efficiency per unit thickness was 5 times higher than that of the commercial polydimethylsiloxane/divinylbenzene (PDMS/DVB) coating. A novel method based on SPME coupled with derivatization and large volume injection-high performance liquid chromatography (LVI-HPLC) was developed for the analysis of 24-epiBL. The linear range was 0.500-20.0 μg/L, with a detection limit of 0.13 μg/L. The amounts of endogenous 24-epiBL in wall-broken rape and sunflower pollen samples were determined with satisfactory recovery (77.8-104%) and reproducibility (3.9-7.9%). The SPME-DE/LVI-HPLC method is rapid, reliable, convenient and applicable to complicated plant samples. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Multidimensional electrostatic repulsion-hydrophilic interaction chromatography (ERLIC) for quantitative analysis of the proteome and phosphoproteome in clinical and biomedical research.

    PubMed

    Loroch, Stefan; Schommartz, Tim; Brune, Wolfram; Zahedi, René Peiman; Sickmann, Albert

    2015-05-01

    Quantitative proteomics and phosphoproteomics have become key disciplines in understanding cellular processes. Fundamental research can be done using cell culture, providing researchers with virtually unlimited sample amounts. In contrast, clinical, pre-clinical and biomedical research is often restricted to minute sample amounts and requires efficient analysis with only micrograms of protein. To address this issue, we generated a highly sensitive workflow for combined LC-MS-based quantitative proteomics and phosphoproteomics by refining an ERLIC-based 2D phosphoproteomics workflow into an ERLIC-based 3D workflow that covers the global proteome as well. The resulting 3D strategy was successfully used for an in-depth quantitative analysis of both the proteome and the phosphoproteome of murine cytomegalovirus-infected mouse fibroblasts, a model system for host cell manipulation by a virus. In a 2-plex SILAC experiment with 150 μg of tryptic digest per condition, the 3D strategy enabled the quantification of ~75% more proteins and even ~134% more peptides compared to the 2D strategy. Additionally, we could quantify ~50% more phosphoproteins by their non-phosphorylated peptides, concurrently yielding insights into changes at the levels of both protein expression and phosphorylation. Besides its sensitivity, our novel three-dimensional ERLIC strategy has the potential for semi-automated sample processing, rendering it a suitable prospect for clinical, pre-clinical and biomedical research. Copyright © 2015. Published by Elsevier B.V.

  7. Inverse Statistics and Asset Allocation Efficiency

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam

    In this paper, the effect of investment horizon on the efficiency of portfolio selection is examined using inverse statistics analysis. Inverse statistics analysis, also known as the probability distribution of exit times, is a general tool for detecting the distribution of the time at which a stochastic process first exits a given zone; it was used in Refs. 1 and 2 for studying financial return time series. This distribution provides an optimal investment horizon, which is the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to greater efficiency for large portfolios, whereas for stocks selected from the S&P 500, regardless of portfolio size, this strategy does not produce more efficient portfolios; rather, longer investment horizons provide greater efficiency.
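
    The exit-time computation behind this analysis is simple to sketch: for each start day, measure how long the cumulative log-return takes to first reach a target level rho, and take the mode of the resulting histogram as the optimal investment horizon. The return series and parameters below are synthetic assumptions, not TSE or S&P 500 data.

```python
# Inverse-statistics (exit-time) sketch on a synthetic return series.
import numpy as np

def exit_times(log_returns, rho=0.05, max_h=250):
    r = np.asarray(log_returns)
    times = []
    for t0 in range(len(r) - max_h):
        cum = np.cumsum(r[t0:t0 + max_h])
        hit = np.nonzero(cum >= rho)[0]
        if hit.size:                       # ignore paths that never exit
            times.append(hit[0] + 1)
    return np.array(times)

rng = np.random.default_rng(4)
r = rng.normal(2e-4, 0.01, 5000)           # synthetic daily log-returns
tau = exit_times(r)
hist, edges = np.histogram(tau, bins=50)
optimal_horizon = edges[np.argmax(hist)]   # most likely horizon for +5%
```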

  8. Antifoaming effect of chemical compounds in manure biogas reactors.

    PubMed

    Kougias, P G; Tsapekos, P; Boe, K; Angelidaki, I

    2013-10-15

    A precise and efficient antifoaming control strategy in bioprocesses is a challenging task, as foaming is a very complex phenomenon. Nevertheless, foam control is necessary, because foam is a major operational problem in biogas reactors. In the present study, the effect of 14 chemical compounds on foam reduction was evaluated at concentrations of 0.05%, 0.1% and 0.5% v/v in raw and digested manure. Moreover, two antifoam injection methods were compared for foam reduction efficiency. Natural oils (rapeseed and sunflower oil), fatty acids (oleic acid, octanoic acid and a derivative of natural fatty acids), a siloxane (polydimethylsiloxane) and an ester (tributylphosphate) were found to be the most efficient compounds for suppressing foam. The efficiency of the antifoamers depended on their physicochemical properties and correlated strongly with their foam-dissolving chemical characteristics. The antifoamers were more efficient in reducing foam when added directly into the liquid phase rather than into the headspace of the reactor. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Automation of DNA and miRNA co-extraction for miRNA-based identification of human body fluids and tissues.

    PubMed

    Kulstein, Galina; Marienfeld, Ralf; Miltner, Erich; Wiegand, Peter

    2016-10-01

    In recent years, microRNA (miRNA) analysis has come into focus in the field of forensic genetics, yet no standardized and recommendable protocols for the co-isolation of miRNA and DNA from forensically relevant samples have been developed so far. Hence, this study evaluated the performance of an automated Maxwell® 16 System-based strategy (Promega) for co-extraction of DNA and miRNA from forensically relevant samples (blood and saliva) compared to (semi-)manual extraction methods. Three procedures were compared on the basis of the recovered quantities of DNA and miRNA (as determined by real-time PCR and Bioanalyzer), miRNA profiling (shown by Cq values and extraction efficiency), STR profiles, duration, contamination risk and handling. Overall, the results highlight that the automated co-extraction procedure yielded the highest miRNA and DNA amounts from saliva and blood samples compared to both (semi-)manual protocols. For aged and genuine samples of forensically relevant traces, the miRNA and DNA yields were also sufficient for subsequent downstream analysis. Furthermore, the strategy allows miRNA extraction only in cases where it is relevant to obtain additional information about the sample type. In addition, this system enables flexible sample throughput and labor-saving sample processing with reduced risk of cross-contamination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Social learning strategies modify the effect of network structure on group performance.

    PubMed

    Barkoczi, Daniel; Galesic, Mirta

    2016-10-07

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.

  11. Social learning strategies modify the effect of network structure on group performance

    NASA Astrophysics Data System (ADS)

    Barkoczi, Daniel; Galesic, Mirta

    2016-10-01

    The structure of communication networks is an important determinant of the capacity of teams, organizations and societies to solve policy, business and science problems. Yet, previous studies reached contradictory results about the relationship between network structure and performance, finding support for the superiority of both well-connected efficient and poorly connected inefficient network structures. Here we argue that understanding how communication networks affect group performance requires taking into consideration the social learning strategies of individual team members. We show that efficient networks outperform inefficient networks when individuals rely on conformity by copying the most frequent solution among their contacts. However, inefficient networks are superior when individuals follow the best member by copying the group member with the highest payoff. In addition, groups relying on conformity based on a small sample of others excel at complex tasks, while groups following the best member achieve greatest performance for simple tasks. Our findings reconcile contradictory results in the literature and have broad implications for the study of social learning across disciplines.
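
    A toy agent-based sketch of the two social learning strategies compared above, run on a fully connected ("efficient") versus a ring-lattice ("inefficient") network. The payoff landscape, network sizes and parameters are illustrative assumptions; the paper's actual task environments and model details are not reproduced here.

        from collections import Counter
        import random
        import numpy as np

        def simulate(adj, strategy, payoff, steps=200, sample_k=3):
            # adj[i]: contacts of agent i; payoff[s]: quality of solution s
            n = len(adj)
            solutions = [random.randrange(len(payoff)) for _ in range(n)]
            for _ in range(steps):
                nxt = list(solutions)
                for i in range(n):
                    sample = random.sample(adj[i], min(sample_k, len(adj[i])))
                    if strategy == "conformity":
                        # copy the most frequent solution among sampled contacts
                        choice = Counter(solutions[j] for j in sample).most_common(1)[0][0]
                    else:
                        # "best member": copy the sampled contact with the highest payoff
                        choice = solutions[max(sample, key=lambda j: payoff[solutions[j]])]
                    if payoff[choice] > payoff[solutions[i]]:
                        nxt[i] = choice
                solutions = nxt
            return np.mean([payoff[s] for s in solutions])

        n = 50
        payoff = np.random.rand(1000)                                  # toy solution landscape
        full = [[j for j in range(n) if j != i] for i in range(n)]     # efficient network
        ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]          # inefficient network
        for name, adj in (("efficient", full), ("inefficient", ring)):
            for strat in ("conformity", "best_member"):
                print(name, strat, round(simulate(adj, strat, payoff), 3))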

  12. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
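
    A minimal numpy sketch of the ensemble correlation estimator mentioned above: rather than averaging per-pair correlation coefficients, the numerator and energy terms are pooled across all element pairs at a given lag before normalizing. The array shapes and the simulated channel data are illustrative assumptions.

        import numpy as np

        def ensemble_coherence(rf, lag):
            # rf: (n_elements, n_samples) zero-mean receive-channel data
            x, y = rf[:-lag], rf[lag:]                     # element pairs separated by `lag`
            num = np.sum(x * y)                            # pooled numerator
            den = np.sqrt(np.sum(x * x) * np.sum(y * y))   # pooled energy terms
            return num / den

        rng = np.random.default_rng(0)
        common = rng.standard_normal(64)                   # coherent signal component
        rf = common + 0.5 * rng.standard_normal((32, 64))  # 32 elements plus noise
        rf -= rf.mean()
        print([round(ensemble_coherence(rf, m), 3) for m in (1, 4, 8)])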

  13. Actuation of chitosan-aptamer nanobrush borders for pathogen sensing.

    PubMed

    Hills, Katherine D; Oliveira, Daniela A; Cavallaro, Nicholas D; Gomes, Carmen L; McLamore, Eric S

    2018-03-26

    We demonstrate a sensing mechanism for rapid detection of Listeria monocytogenes in food samples using the actuation of chitosan-aptamer nanobrush borders. The bio-inspired soft material and sensing strategy mimic natural symbiotic systems, where low levels of bacteria are selectively captured from complex matrices. To engineer this biomimetic system, we first develop reduced graphene oxide/nanoplatinum (rGO-nPt) electrodes, and characterize the fundamental electrochemical behavior in the presence and absence of chitosan nanobrushes during actuation (pH-stimulated osmotic swelling). We then characterize the electrochemical behavior of the nanobrush when receptors (antibodies or DNA aptamers) are conjugated to the surface. Finally, we test various techniques to determine the most efficient capture strategy based on nanobrush actuation, and then apply the biosensors in a food product. Maximum cell capture occurs when aptamers conjugated to the nanobrush bind cells in the extended conformation (pH < 6), followed by impedance measurement in the collapsed nanobrush conformation (pH > 6). The aptamer-nanobrush hybrid material was more efficient than the antibody-nanobrush material, which was likely due to the relatively high adsorption capacity for aptamers. The biomimetic material was used to develop a rapid test (17 min) for selectively detecting L. monocytogenes at concentrations ranging from 9 to 10(7) CFU mL(-1) with no pre-concentration, and in the presence of other Gram-positive cells (Listeria innocua and Staphylococcus aureus). Use of this bio-inspired material is among the most efficient for L. monocytogenes sensing to date, and does not require sample pretreatment, making nanobrush borders a promising new material for rapid pathogen detection in food.

  14. Computational modeling of electrically conductive networks formed by graphene nanoplatelet-carbon nanotube hybrid particles

    NASA Astrophysics Data System (ADS)

    Mora, A.; Han, F.; Lubineau, G.

    2018-04-01

    One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite’s conductivity based on these parameters.
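
    As a rough illustration of the efficiency measure described above (the fraction of nanofillers that contribute to the conductive network), the sketch below treats fillers as points, links any pair closer than a contact cutoff, and reports the fraction belonging to a cluster that spans the box. Point-like fillers and the chosen cutoff, box size and particle count are simplifying assumptions; the paper models full CNT and GNP geometries.

        import numpy as np

        def find(parent, a):
            # union-find with path halving
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        def conductive_fraction(pos, cutoff, box):
            n = len(pos)
            parent = list(range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(pos[i] - pos[j]) < cutoff:
                        parent[find(parent, i)] = find(parent, j)
            clusters = {}
            for i in range(n):
                clusters.setdefault(find(parent, i), []).append(i)
            in_network = set()
            for members in clusters.values():
                xs = pos[members, 0]
                if xs.min() < cutoff and xs.max() > box - cutoff:  # spans the box in x
                    in_network.update(members)
            return len(in_network) / n

        rng = np.random.default_rng(1)
        pos = rng.uniform(0, 10.0, size=(800, 3))   # 800 point fillers in a 10x10x10 box
        print(conductive_fraction(pos, cutoff=1.0, box=10.0))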

  15. A pragmatic examination of active and passive recruitment methods to improve the reach of community lifestyle programs: The Talking Health Trial.

    PubMed

    Estabrooks, Paul; You, Wen; Hedrick, Valisa; Reinholt, Margaret; Dohm, Erin; Zoellner, Jamie

    2017-01-19

    A primary challenge for behavior change strategies is ensuring that interventions can be effective while also attracting a broad and representative sample of the target population. The purpose of this case study was to report on (1) the reach of a randomized controlled trial targeting reduced sugary beverage intake, (2) potential differences in participant characteristics between active and passive recruitment strategies, and (3) recruitment strategy cost. Demographic and recruitment information was obtained for 8 counties and for individuals screened for participation. Personnel activities and time were tracked. Costs were calculated and compared for active versus passive recruitment. Of 1,056 individuals screened, 620 were eligible and 301 enrolled (77% women; 90% white; mean income $21,981 ± 16,443). Eighty-two percent of those responding to passive methods and 44% of those responding to active methods enrolled in the trial. However, active recruitment strategies yielded considerably more enrolled individuals (active = 199; passive = 102). Passive recruitment strategies yielded a less representative sample in terms of gender (more women), education (higher), and income (higher; p's < 0.05). The average cost of an actively recruited and enrolled participant was $278, compared to $117 for a passively recruited and enrolled participant. Although passive recruitment is more cost-efficient, it may reduce the reach of sugary drink reduction strategies among lower-educated and lower-income residents of rural communities. Clinicaltrials.gov; ID: NCT02193009, July 2014, retrospectively registered.

  16. Effects of forcefield and sampling method in all-atom simulations of inherently disordered proteins: Application to conformational preferences of human amylin

    PubMed Central

    Peng, Enxi; Todorova, Nevena

    2017-01-01

    Although several computational modelling studies have investigated the conformational behaviour of the inherently disordered protein (IDP) amylin, discrepancies in identifying its preferred solution conformations still exist between the various forcefields and sampling methods used. Human islet amyloid polypeptide has long been a subject of research, both experimental and theoretical, as the aggregation of this protein is believed to be the lead cause of type-II diabetes. In this work, we present a systematic forcefield assessment using one of the most advanced non-biased sampling techniques, Replica Exchange with Solute Tempering (REST2), by comparing the secondary structure preferences of monomeric amylin in solution. This study also aims to determine the ability of common forcefields to sample a transition of the protein from a helical membrane-bound conformation into the disordered solution state of amylin. Our results demonstrated that the CHARMM22* forcefield showed the best ability to sample the multiple conformational states inherent to amylin. REST2 yielded results qualitatively consistent with experiments and in quantitative agreement with other sampling methods, while being far more computationally efficient and free of bias. Therefore, combining an unbiased sampling technique such as REST2 with rigorous forcefield testing can be suggested as an important step in developing an efficient and robust strategy for simulating IDPs. PMID:29023509

  17. Effects of forcefield and sampling method in all-atom simulations of inherently disordered proteins: Application to conformational preferences of human amylin.

    PubMed

    Peng, Enxi; Todorova, Nevena; Yarovsky, Irene

    2017-01-01

    Although several computational modelling studies have investigated the conformational behaviour of the inherently disordered protein (IDP) amylin, discrepancies in identifying its preferred solution conformations still exist between the various forcefields and sampling methods used. Human islet amyloid polypeptide has long been a subject of research, both experimental and theoretical, as the aggregation of this protein is believed to be the lead cause of type-II diabetes. In this work, we present a systematic forcefield assessment using one of the most advanced non-biased sampling techniques, Replica Exchange with Solute Tempering (REST2), by comparing the secondary structure preferences of monomeric amylin in solution. This study also aims to determine the ability of common forcefields to sample a transition of the protein from a helical membrane-bound conformation into the disordered solution state of amylin. Our results demonstrated that the CHARMM22* forcefield showed the best ability to sample the multiple conformational states inherent to amylin. REST2 yielded results qualitatively consistent with experiments and in quantitative agreement with other sampling methods, while being far more computationally efficient and free of bias. Therefore, combining an unbiased sampling technique such as REST2 with rigorous forcefield testing can be suggested as an important step in developing an efficient and robust strategy for simulating IDPs.

  18. Systems analysis of singly and multiply O-glycosylated peptides in the human serum glycoproteome via EThcD and HCD mass spectrometry.

    PubMed

    Zhang, Yong; Xie, Xinfang; Zhao, Xinyuan; Tian, Fang; Lv, Jicheng; Ying, Wantao; Qian, Xiaohong

    2018-01-06

    Human serum has been intensively studied to identify biomarkers via global proteomic analysis. The altered O-glycoproteome is associated with human pathological states, including cancer and inflammatory and degenerative diseases, and is an attractive source of disease biomarkers. Because of the microheterogeneity and macroheterogeneity of O-glycosylation, site-specific O-glycosylation analysis in human serum is still challenging. Here, we developed a systematic strategy that combined multiple-enzyme digestion and multidimensional separation for sample preparation with high-resolution tandem MS and Byonic software for intact O-glycopeptide characterization. We demonstrated that multiple-enzyme digestion or multidimensional separation can make sample preparation more efficient and that EThcD is suitable for the identification not only of singly O-glycosylated peptides (50.3%) but also of doubly (21.2%) and triply (28.5%) O-glycosylated peptides. In total, under strict scoring criteria, 499 non-redundant intact O-glycopeptides, 173 O-glycosylation sites and 6 types of O-glycans originating from 49 O-glycoprotein groups were identified in human serum, including 121 novel O-glycosylation sites. Currently, this is the largest data set of the site-specific native O-glycoproteome from human serum samples. We expect that the strategies developed in this study will facilitate in-depth analyses of native O-glycoproteomes in human serum and provide opportunities to understand the functional roles of protein O-glycosylation in human health and disease. The altered O-glycoproteome is associated with human pathological states and is an attractive source of disease biomarkers. However, site-specific O-glycosylation analysis is challenging because of the microheterogeneity (different glycoforms attached to one glycosylation site) and macroheterogeneity (site occupancy) of O-glycosylation. In this work, we developed a systematic strategy for intact O-glycopeptide characterization. This study took advantage of the inherent properties of the new fragmentation method EThcD, which provides more complete fragmentation information for O-glycosylated peptides and more confident site localization of O-glycans than higher-energy collisional dissociation (HCD). We demonstrated that multiple-enzyme digestion or multidimensional separation can make sample preparation more efficient and that EThcD was suitable for the identification not only of singly O-glycosylated peptides (50.3%) but also of doubly (21.2%) and triply (28.5%) O-glycosylated peptides. Finally, we obtained the largest data set of the site-specific native O-glycoproteome from human serum samples. Furthermore, quantitative analysis of intact O-glycopeptides from the serum samples of IgA nephropathy (IgAN) patients and healthy donors was performed, and the results showed the potential of the strategy to discover O-glycosylation biomarkers. We expect that the strategies developed in this study will facilitate in-depth analyses of native O-glycoproteomes in human serum and lead to exciting opportunities to understand the functional roles of protein O-glycosylation in human health and disease. Copyright © 2017. Published by Elsevier B.V.

  19. Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.

    PubMed

    Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E

    2018-01-01

    The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.
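
    A sketch of the first methodology described above: a global coverage grid keyed by a low-dimensional projection, so that concurrent sampling runs can share density estimates without keeping every previously sampled conformation in memory. The random projection and the cell size are illustrative assumptions; the paper's specific projection is not reproduced.

        import numpy as np

        class CoverageGrid:
            def __init__(self, dim_high, dim_low=2, cell=0.5, seed=0):
                rng = np.random.default_rng(seed)
                # random projection to a low-dimensional space (an assumption)
                self.P = rng.standard_normal((dim_low, dim_high)) / np.sqrt(dim_low)
                self.cell = cell
                self.counts = {}              # one counter per grid cell

            def _key(self, conf):
                return tuple(np.floor(self.P @ conf / self.cell).astype(int))

            def add(self, conf):
                k = self._key(conf)
                self.counts[k] = self.counts.get(k, 0) + 1

            def density(self, conf):
                # low count => poorly covered region => preferred for expansion
                return self.counts.get(self._key(conf), 0)

        grid = CoverageGrid(dim_high=3 * 500)    # e.g. Cartesian coords of 500 residues
        conf = np.random.standard_normal(3 * 500)
        grid.add(conf)
        print(grid.density(conf), grid.density(conf + 10.0))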

  20. The application of DEA (Data Envelopment Analysis) window analysis in the assessment of influence on operational efficiencies after the establishment of branched hospitals.

    PubMed

    Jia, Tongying; Yuan, Huiyun

    2017-04-12

    Many large-scale public hospitals have established branched hospitals in China. The aim of this study is to provide evidence for strategy making on the management and development of multi-branched hospitals by evaluating and comparing the operational efficiencies of different hospitals before and after their establishment of branched hospitals. DEA (Data Envelopment Analysis) window analysis was performed on a 7-year data pool from five public hospitals provided by health authorities and institutional surveys. The operational efficiencies of the sample hospitals measured in this study (including technical efficiency, pure technical efficiency and scale efficiency) showed an overall increasing trend over the 7-year period; however, a temporary downturn occurred shortly after the establishment of branched hospitals, and pure technical efficiency contributed more to the improvement of technical efficiency than scale efficiency did. The establishment of branched hospitals did not lead to a long-term negative effect on hospital operational efficiencies. Our data indicate the importance of improving scale efficiency via the optimization of organizational management, as well as the advantage of a different form of branch establishment, namely merging and reorganization. This study offers insight into the practical application of DEA window analysis to the assessment of hospital operational efficiencies.
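
    For orientation, a minimal input-oriented CCR DEA sketch using scipy: a unit's technical efficiency score is the smallest uniform contraction of its inputs that a convex combination of peer units could still match. Window analysis then treats each hospital-year within a window as a separate unit. The toy inputs and outputs below are invented for illustration.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            # X: (m_inputs, n_units), Y: (s_outputs, n_units); score for unit o
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]          # minimize theta
            A = np.zeros((m + s, n + 1))
            A[:m, 0] = -X[:, o]                  # X @ lam <= theta * x_o
            A[:m, 1:] = X
            A[m:, 1:] = -Y                       # Y @ lam >= y_o
            b = np.r_[np.zeros(m), -Y[:, o]]
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
            return res.x[0]

        # toy data: 5 hospitals, inputs = (beds, staff), outputs = (visits, discharges)
        X = np.array([[90.0, 120, 100, 80, 150], [300, 420, 350, 260, 500]])
        Y = np.array([[40.0, 55, 48, 42, 60], [12, 14, 13, 12, 15]])
        print([round(ccr_efficiency(X, Y, o), 3) for o in range(X.shape[1])])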

  1. The NYC native air sampling pilot project: using HVAC filter data for urban biological incident characterization.

    PubMed

    Ackelsberg, Joel; Leykam, Frederic M; Hazi, Yair; Madsen, Larry C; West, Todd H; Faltesek, Anthony; Henderson, Gavin D; Henderson, Christopher L; Leighton, Terrance

    2011-09-01

    Native air sampling (NAS) is distinguished from dedicated air sampling (DAS) devices (e.g., BioWatch) that are deployed to detect aerosol disseminations of biological threat agents. NAS uses filter samples from heating, ventilation, and air conditioning (HVAC) systems in commercial properties for environmental sampling after DAS detection of biological threat agent incidents. It represents an untapped, scientifically sound, efficient, widely distributed, and comparatively inexpensive resource for postevent environmental sampling. Calculations predict that postevent NAS would be more efficient than environmental surface sampling by orders of magnitude. HVAC filter samples could be collected from pre-identified surrounding NAS facilities to corroborate the DAS alarm and delineate the path taken by the bioaerosol plume. The New York City (NYC) Native Air Sampling Pilot Project explored whether native air sampling would be acceptable to private sector stakeholders and could be implemented successfully in NYC. Building trade associations facilitated outreach to and discussions with property owners and managers, who expedited contact with building managers of candidate NAS properties that they managed or owned. Nominal NAS building requirements were determined; procedures to identify and evaluate candidate NAS facilities were developed; data collection tools and other resources were designed and used to expedite candidate NAS building selection and evaluation in Manhattan; and exemplar environmental sampling playbooks for emergency responders were completed. In this sample, modern buildings with a single corporate tenant or few corporate tenants were the best NAS candidate facilities. The Pilot Project successfully demonstrated that in one urban setting a native air sampling strategy could be implemented with effective public-private collaboration.

  2. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
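
    A sketch of the kind of very sparse random measurement matrix the abstract refers to (an Achlioptas/Li-style projection). The sparsity parameter and dimensions are illustrative assumptions, and the naive Bayes classification stage that follows in the tracker is not shown.

        import numpy as np

        def sparse_measurement_matrix(n_features, n_dims, s=None, seed=0):
            # entries +sqrt(s), 0, -sqrt(s) with probs 1/(2s), 1-1/s, 1/(2s)
            s = s or n_dims // 4                 # sparsity level (illustrative choice)
            rng = np.random.default_rng(seed)
            u = rng.random((n_features, n_dims))
            R = np.zeros((n_features, n_dims))
            R[u < 1 / (2 * s)] = np.sqrt(s)
            R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
            return R

        R = sparse_measurement_matrix(50, 10_000)  # compress 10,000-dim features to 50
        x = np.random.random(10_000)               # stand-in multiscale feature vector
        v = R @ x                                  # compressed features for the classifier
        print(v.shape, (R != 0).mean())            # 50 dims; tiny fraction of nonzeros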

  3. Sample preservation, transport and processing strategies for honeybee RNA extraction: Influence on RNA yield, quality, target quantification and data normalization.

    PubMed

    Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de

    2017-08-01

    Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high-quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and duration of transport) on six RNA quality parameters: yield, purity, integrity, cDNA synthesis efficiency, target detection and quantification. The use of water and extraction buffer were also compared for a primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24 h. Chemical stabilizers have the greatest impact on RNA quality, and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number) should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step takes no longer than 15 min. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
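
    For orientation, classic CS theory gives the familiar sufficient condition on the number of random measurements m needed to recover a k-sparse scene of dimension n (a textbook bound, not a result of this paper; the paper's information-theoretic analysis refines such counts using distortion thresholds and per-sample SNRs):

        \[
            m \;\ge\; C\,k\,\log(n/k),
        \]

    where C is a constant depending on the measurement matrix and the required reconstruction guarantee.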

  5. A robust high resolution reversed-phase HPLC strategy to investigate various metabolic species in different biological models.

    PubMed

    D'Alessandro, Angelo; Gevi, Federica; Zolla, Lello

    2011-04-01

    Recent advancements in the field of omics sciences have paved the way for further expansion of metabolomics. Originally tied to NMR spectroscopy, metabolomic disciplines increasingly involve HPLC and mass spectrometry (MS)-based analytical strategies. In this context, we propose a robust and efficient extraction protocol for metabolites from four different biological sources, which are subsequently analysed, identified and quantified through high-resolution reversed-phase fast HPLC and mass spectrometry. To this end, we demonstrate the elevated intra- and inter-day technical reproducibility and ease of an MRM-based MS method allowing simultaneous detection of up to 10 distinct features, as well as the robustness of multiple-metabolite detection and quantification in four different biological samples. This strategy might become routinely applicable to various samples/biological matrices, especially low-availability ones. In parallel, we compare the present strategy for targeted detection of a representative metabolite, L-glutamic acid, with our previously proposed chemical derivatization through dansyl chloride. A direct comparison of the present method against spectrophotometric assays is proposed as well. An application of the proposed method is also introduced, using the SAOS-2 cell line, either induced or not induced to express the TAp63 isoform of the p63 gene, as a model for determination of variations in glutamate concentrations.

  6. Seven Strategies for Improving the Quality and Efficiency of the Education System. Notes, Comments... No. 192 = Sept strategies visant a ameliorer la qualite et l'efficacite du systeme d'education.

    ERIC Educational Resources Information Center

    Schiefelbein, Ernesto

    Seven strategies for improving the quality and efficiency of educational systems in Latin America are delineated within the context of background information on the coverage and efficiency of school systems from 1970 to 1980, technical and institutional limitations to educational progress, and an estimate of the impact of the strategies.…

  7. Sample-Based Surface Coloring

    PubMed Central

    Bürger, Kai; Krüger, Jens; Westermann, Rüdiger

    2011-01-01

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392

  8. Supercoiled plasmid DNA purification by integrating membrane technology with a monolithic chromatography.

    PubMed

    Nunes, Catherine; Sousa, Angela; Nunes, José C; Morão, António M; Sousa, Fani; Queiroz, João A

    2014-06-01

    The present study describes the integration of membrane technology with monolithic chromatography to obtain plasmid DNA of high quality. Isolation and clarification of the plasmid DNA lysate were first conducted by a microfiltration step using a hydrophilic nylon microfiltration membrane, avoiding the need for centrifugation. For the total elimination of the remaining impurities, a suitable purification step is required, and monolithic stationary phases have been successfully applied as an alternative to conventional supports. Thus, the sample recovered from the membrane process was applied to a nongrafted CarbonylDiImidazole disk. Throughout the global procedure, a reduced level of impurities such as proteins and RNA was obtained, and no genomic DNA was detectable in the plasmid DNA sample. The chromatographic process demonstrated an efficient performance in terms of supercoiled plasmid DNA purity and recovery (100 and 84.44%, respectively). Thereby, combining membrane technology to eliminate some impurities from the lysate sample with an efficient chromatographic strategy to purify the supercoiled plasmid DNA emerges as a powerful approach for industrial-scale plasmid DNA purification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. IT solutions for privacy protection in biobanking.

    PubMed

    Eder, J; Gottweis, H; Zatloukal, K

    2012-01-01

    Biobanks containing human biological samples and associated data are key resources for the advancement of medical research. Efficient access to samples and data increases competitiveness in medical research, reduces the effort and time needed to achieve scientific results, and promotes scientific progress. In order to address upcoming health challenges, there is an increasing need for transnational collaboration. This requires innovative solutions improving the interoperability of biobanks in fields such as sample and data management as well as governance, including ethical and legal frameworks. In this context, the rights and expectations of donors to determine the usage of their biological material and data and to ensure their privacy have to be observed. We discuss the benefits of biobanks, the needs to support medical research and the societal demands and regulations, in particular securing the rights of donors, and present IT solutions that make it possible both to maintain the security of personal data and to increase the efficiency of access to data in biobanks. Disclosure filters are discussed as a strategy to combine European public expectations concerning informed consent with the requirements of biobank research. Copyright © 2012 S. Karger AG, Basel.

  10. Banana peel as an adsorbent for removing atrazine and ametryne from waters.

    PubMed

    Silva, Claudineia R; Gomes, Taciana F; Andrade, Graziela C R M; Monteiro, Sergio H; Dias, Ana C R; Zagatto, Elias A G; Tornisielo, Valdemar L

    2013-03-13

    The feasibility of using banana peel for removal of the pesticides atrazine and ametryne from river and treated waters has been demonstrated, allowing the design of an efficient, fast, and low-cost strategy for remediation of polluted waters. The conditions for removal of these pesticides on a laboratory scale were optimized as sample volume = 50 mL, banana mass = 3.0 g, stirring time = 40 min, and no pH adjustment necessary. KF(sor) values for atrazine and ametryne were evaluated as 35.8 and 54.1 μg g(-1) (μL mL(-1)) by using liquid scintillation spectrometry. Adsorption was also evaluated by LC-ESI-MS/MS. As the quantification limits were 0.10 and 0.14 μg L(-1) for the two pesticides, respectively, sample preconcentration was not needed. Linear analytical curves (up to 10 μg L(-1)), precise results (RSD < 4.5%), good recoveries (82.9-106.6%), and a > 90% removal efficiency were attained for both pesticides. Water samples collected near an intensively cultivated area were adequately remediated.

  11. Classification of amyotrophic lateral sclerosis disease based on convolutional neural network and reinforcement sample learning algorithm.

    PubMed

    Sengur, Abdulkadir; Akbulut, Yaman; Guo, Yanhui; Bajaj, Varun

    2017-12-01

    Electromyogram (EMG) signals contain useful information about neuromuscular diseases such as amyotrophic lateral sclerosis (ALS). ALS is a well-known neurodegenerative disease that progressively degenerates the motor neurons. In this paper, we propose a deep learning based method for efficient classification of ALS and normal EMG signals. Spectrogram, continuous wavelet transform (CWT), and smoothed pseudo Wigner-Ville distribution (SPWVD) have been employed for time-frequency (T-F) representation of EMG signals. A convolutional neural network (CNN) is employed to classify these features; the architecture consists of two convolution layers, two pooling layers, a fully connected layer and a loss function layer. The CNN is trained with the reinforcement sample learning strategy. The efficiency of the proposed implementation is tested on a publicly available EMG dataset containing 89 ALS and 133 normal EMG signals with a 24 kHz sampling frequency. Experimental results show 96.80% accuracy. The obtained results are also compared with those of other methods, which shows the superiority of the proposed method.
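
    A minimal PyTorch sketch matching the stated layer counts (two convolution layers, two pooling layers, one fully connected layer, and a cross-entropy loss stage). The filter sizes, channel counts and the 64x64 input resolution are illustrative assumptions; the paper's exact architecture and its reinforcement sample learning strategy are not reproduced.

        import torch
        import torch.nn as nn

        class EMGNet(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),                                   # pooling layer 1
                    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),                                   # pooling layer 2
                )
                self.classifier = nn.Linear(32 * 16 * 16, n_classes)   # fully connected

            def forward(self, x):                  # x: (batch, 1, 64, 64) T-F image
                return self.classifier(self.features(x).flatten(1))

        model = EMGNet()
        criterion = nn.CrossEntropyLoss()          # the "loss function layer"
        x = torch.randn(8, 1, 64, 64)              # batch of 8 spectrogram patches
        loss = criterion(model(x), torch.randint(0, 2, (8,)))
        loss.backward()
        print(loss.item())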

  12. Application experiments to trace N-P interactions in forest ecosystems

    NASA Astrophysics Data System (ADS)

    Krüger, Jaane; Niederberger, Jörg; Schulz, Stefanie; Lang, Friederike

    2017-04-01

    Phosphorus is a limited resource, and there is increasing debate regarding the principles of tight P recycling. Forest ecosystems commonly show high P use efficiencies, but the processes behind this phenomenon are still unresolved. Within the framework of the priority program "SPP 1685 Ecosystem nutrition - Forest strategies for limited phosphorus resources", around 70 researchers from different disciplines collaborate to unravel these processes. The overall hypothesis to be tested is that the P nutrition strategy of forest ecosystems at sites rich in mineral P is characterized by high P uptake efficiency (acquiring systems). In contrast, the P strategy of forest ecosystems facing low soil P stocks is characterized by highly efficient mechanisms of P recycling. To test this hypothesis, we analyzed five beech forest ecosystems on silicate rock with different parent materials representing a gradient of total P stocks (160-900 g P m(-2), down to 1 m soil depth). In fact, we found evidence confirming our hypothesis, but the controls and drivers of P strategies are still unknown, as other environmental variables differ. One of those might be the N content, as organisms strive to reach a specific internal N:P ratio; thus, an additional application of N might also alter P nutrition. To test this, we established a factorial P x N application experiment at three of the study sites. With our presentation we will introduce this experiment and review published P x N experiments, discussing the advantages and disadvantages of different basic conditions (e.g., amount and application form, doses, sampling and statistical design, monitoring periods, budget calculation, isotopic tracing). Finally, we want to initiate a common discussion on the standardization of P x N field experiments to enable interdisciplinary and across-compartment comparisons (e.g., different land uses, different climate zones, terrestrial and aquatic ecosystems).

  13. Proposal for a new protein therapeutic immunogenicity titer assay cutpoint.

    PubMed

    Wakshull, Eric; Hendricks, Robert; Amaya, Caroline; Coleman, Daniel

    2011-12-01

    Generally, immunogenicity assessment strategies follow this assay triage schema: screen→confirm→titer. Each requires the determination of a threshold value (cutpoint) for decision making. No guidance documents exist for the determination of a specific titration assay cutpoint. The default practice is to use the screening assay cutpoint, frequently leading to controls or samples not reaching this cutpoint. We propose a method for determination of a titration cutpoint based upon the variance of the negative-control sample. Positive-control samples that did not cross a screening cutpoint did cross the titer cutpoint, albeit generating slightly lower titer values. Our approach is consistent with the statistical methods currently recommended for the screening and confirmatory assay cutpoints and is operationally simple and efficient.
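
    The abstract does not spell out the formula; one common parametric construction for a variance-based cutpoint (negative-control mean plus a normal quantile times the negative-control standard deviation, e.g., 1.645 SD for a 5% false-positive rate) would look like the following sketch. The signal values are synthetic.

        import numpy as np
        from scipy.stats import norm

        def titer_cutpoint(neg_control_signals, alpha=0.05):
            # mean + z_(1-alpha) * SD: upper (1-alpha) bound on negative-control response
            x = np.asarray(neg_control_signals, dtype=float)
            return x.mean() + norm.ppf(1 - alpha) * x.std(ddof=1)

        neg = np.random.normal(1.00, 0.08, 60)   # 60 negative-control wells (synthetic)
        cp = titer_cutpoint(neg)
        # report titer as the reciprocal of the last dilution whose signal exceeds cp
        print(round(cp, 3))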

  14. Tracking of climatic niche boundaries under recent climate change.

    PubMed

    La Sorte, Frank A; Jetz, Walter

    2012-07-01

    1. Global climate has changed significantly during the past 30 years, especially in northern temperate regions, which have experienced poleward shifts in temperature regimes. While there is evidence that some species have responded by moving their distributions to higher latitudes, the efficiency of this response in tracking species' climatic niche boundaries over time has yet to be addressed. 2. Here, we provide a continental assessment of the temporal structure of species responses to recent spatial shifts in climatic conditions. We examined geographic associations with minimum winter temperature for 59 species of winter avifauna at 476 Christmas Bird Count circles in North America from 1975 to 2009 under three sampling schemes that account for spatial and temporal sampling effects. 3. Minimum winter temperature associated with species occurrences showed an overall increase, with a weakening trend after 1998. Species displayed highly variable responses that, on average and across sampling schemes, contained a strong lag effect that weakened in strength over time. In general, the conservation of minimum winter temperature was relevant when all species were considered together, but only after an initial lag period (c. 35 years) was overcome. The delayed niche tracking observed at the combined species level was likely supported by the post-1998 lull in the warming trend. 4. There are limited geographic and ecological explanations for the observed variability, suggesting that the efficiency of species' responses under climate change is likely to be highly idiosyncratic and difficult to predict. This outcome is likely to be even more pronounced, and time lags more persistent, for less vagile taxa, particularly during periods of consistent or accelerating warming. Current modelling efforts and conservation strategies need to better appreciate the variation, strength and duration of lag effects and their association with climatic variability. Conservation strategies in particular will benefit from identifying and maintaining dispersal corridors that accommodate diverging dispersal strategies and timetables. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.

  15. A novel strategy for highly efficient isolation and analysis of circulating tumor-specific cell-free DNA from lung cancer patients using a reusable conducting polymer nanostructure.

    PubMed

    Lee, HyungJae; Jeon, SeungHyun; Seo, Jin-Suck; Goh, Sung-Ho; Han, Ji-Youn; Cho, Youngnam

    2016-09-01

    We have developed a reusable nanostructured polypyrrole nanochip and demonstrated its use in the electric field-mediated recovery of circulating cell-free DNA (cfDNA) from the plasma of lung cancer patients. Although cfDNA has been recognized and widely studied as a versatile and promising biomarker for the diagnosis and prognosis of cancers, the lack of efficient strategies to directly isolate cfDNA from plasma has greatly hindered its potential clinical use. As a proof-of-concept study, we demonstrated a technique for the rapid and efficient isolation of cfDNA with high yield and purity. In particular, the synergistic effects of the electro-activity and the nanostructured features of the polypyrrole polymer enabled repeated retrieval of cfDNA using a single platform. Moreover, the polypyrrole nanochip facilitated the amplification of tumor-specific DNA fragments from the plasma samples of patients with lung cancer characterized by mutations in exon 21 of the epidermal growth factor receptor gene (EGFR). Overall, the proposed polypyrrole nanochip has enormous potential for industrial and clinical applications with significantly enhanced efficiency in the recovery of tumor-associated circulating cfDNA. This may ultimately contribute to more robust and reliable evaluation of gene mutations in peripheral blood. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Structure optimisation by thermal cycling for the hydrophobic-polar lattice model of protein folding

    NASA Astrophysics Data System (ADS)

    Günther, Florian; Möbius, Arnulf; Schreiber, Michael

    2017-03-01

    The function of a protein depends strongly on its spatial structure. Therefore the transition from an unfolded stage to the functional fold is one of the most important problems in computational molecular biology. Since the corresponding free energy landscapes exhibit huge numbers of local minima, the search for the lowest-energy configurations is very demanding. Because of that, efficient heuristic algorithms are of high value. In the present work, we investigate whether and how the thermal cycling (TC) approach can be applied to the hydrophobic-polar (HP) lattice model of protein folding. Evaluating the efficiency of TC for a set of two- and three-dimensional examples, we compare the performance of this strategy with that of multi-start local search (MSLS) procedures and that of simulated annealing (SA). For this aim, we incorporated several simple but rather efficient modifications into the standard procedures: in particular, a strong improvement was achieved by also allowing energy conserving state modifications. Furthermore, the consideration of ensembles instead of single samples was found to greatly improve the efficiency of TC. In the framework of different benchmarks, for all considered HP sequences, we found TC to be far superior to SA, and to be faster than Wang-Landau sampling.
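
    A generic sketch of the thermal cycling idea evaluated above: alternate short heated "kicks" with zero-temperature quenches, always restarting from an archive of the best state found, and also accept energy-conserving moves during the quench (the modification highlighted in the abstract). The toy landscape, temperature and cycle counts are illustrative assumptions; genuine HP-model lattice moves (end moves, corner flips, crankshafts) are not implemented here.

        import math
        import random

        def thermal_cycling(state, energy, neighbor, cycles=50, kicks=20,
                            quench_steps=500, T=2.0):
            best, Ebest = state, energy(state)
            for _ in range(cycles):
                s, E = best, Ebest                   # restart each cycle from the archive
                for _ in range(kicks):               # heating: allow uphill moves
                    t = neighbor(s)
                    dE = energy(t) - E
                    if dE <= 0 or random.random() < math.exp(-dE / T):
                        s, E = t, E + dE
                for _ in range(quench_steps):        # quench: downhill or energy-conserving
                    t = neighbor(s)
                    dE = energy(t) - E
                    if dE <= 0:
                        s, E = t, E + dE
                if E < Ebest:
                    best, Ebest = s, E
            return best, Ebest

        # toy usage: minimize a rugged one-dimensional integer landscape
        f = lambda x: (x % 7) + 0.01 * (x - 300) ** 2
        nb = lambda x: x + random.choice((-1, 1))
        print(thermal_cycling(0, f, nb))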

  17. Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme

    PubMed Central

    Li, Shanbin; Sauter, Dominique; Xu, Bugong

    2011-01-01

    In this paper, the sensor data is transmitted only when the absolute value of the difference between the current sensor value and the previously transmitted one is greater than a given threshold. Based on this send-on-delta scheme, which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves resource utilization with graceful degradation of fault estimation performance. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
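
    A minimal sketch of the send-on-delta trigger described above; the threshold and the signal are illustrative, and the modified Kalman-filter fault isolation stage is not shown.

        import math

        def send_on_delta(samples, delta):
            # transmit only when the reading moves more than delta from the last sent value
            sent, last = [], None
            for k, y in enumerate(samples):
                if last is None or abs(y - last) > delta:
                    sent.append((k, y))
                    last = y
            return sent

        y = [math.sin(0.1 * k) for k in range(100)]
        tx = send_on_delta(y, delta=0.15)
        print(f"{len(tx)} of {len(y)} samples transmitted")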

  18. GMOseek: a user friendly tool for optimized GMO testing.

    PubMed

    Morisset, Dany; Novak, Petra Kralj; Zupanič, Darko; Gruden, Kristina; Lavrač, Nada; Žel, Jana

    2014-08-01

    With the increasing pace of new Genetically Modified Organisms (GMOs) authorized or in the pipeline for commercialization worldwide, the task of the laboratories in charge of testing the compliance of food, feed or seed samples with the relevant regulations has become difficult and costly. Many of them have already adopted the so-called "matrix approach" to rationalize the resources and efforts used to increase their efficiency within a limited budget. Most of the time, the "matrix approach" is implemented using limited information and a proprietary (if any) computational tool to efficiently use the available data. The developed GMOseek software is designed to support decision making in all phases of routine GMO laboratory testing, including the interpretation of wet-lab results. The tool makes use of a tabulated matrix of GM events and their genetic elements, of the laboratory analysis history, and of the available information about the sample at hand. The tool uses an optimization approach to suggest the screening assays best suited to the given sample. The practical GMOseek user interface allows the user to customize the search for a cost-efficient combination of screening assays to be employed on a given sample. It further guides the user in selecting appropriate analyses to determine the presence of individual GM events in the analyzed sample, and it helps in taking a final decision regarding the GMO composition of the sample. GMOseek can also be used to evaluate new, previously unused GMO screening targets and to estimate the profitability of developing new GMO screening methods. The presented freely available software tool offers GMO testing laboratories the possibility to select combinations of assays (e.g., quantitative real-time PCR tests) needed for their task, by allowing the expert to express his/her preferences in terms of multiplexing and cost. The utility of GMOseek is exemplified by analyzing selected food, feed and seed samples from a national reference laboratory for GMO testing and by comparing its performance to existing tools that use the matrix approach. GMOseek proves superior when tested on real samples in terms of GMO coverage and the cost efficiency of its screening strategies, including its capacity for simple interpretation of the testing results.
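
    The optimization behind such a matrix approach can be pictured as a cost-weighted set cover: keep picking the screening assay with the best coverage-per-cost until every GM event of concern is detectable. A greedy sketch under that assumption follows; the element/event matrix and costs are invented for illustration and are not GMOseek's actual data or algorithm.

        def select_assays(assay_targets, events, cost):
            # greedy cost-weighted set cover over the events to be screened
            uncovered, chosen = set(events), []
            while uncovered:
                best = max(assay_targets,
                           key=lambda a: len(assay_targets[a] & uncovered) / cost[a])
                gain = assay_targets[best] & uncovered
                if not gain:
                    raise ValueError("remaining events not covered by any assay")
                chosen.append(best)
                uncovered -= gain
            return chosen

        # toy screening matrix: genetic element -> GM events containing it
        assays = {"P-35S": {"MON810", "Bt11", "GT73"},
                  "T-nos": {"MON810", "NK603"},
                  "bar": {"T25"},
                  "pat": {"Bt11", "T25"}}
        cost = {"P-35S": 1.0, "T-nos": 1.0, "bar": 1.2, "pat": 1.2}
        print(select_assays(assays, {"MON810", "Bt11", "NK603", "T25", "GT73"}, cost))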

  19. Improved Efficient Routing Strategy on Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liang, Man-Gui

    Since the betweenness of nodes in complex networks can theoretically represent the traffic load of nodes under the currently used routing strategy, we propose an improved efficient (IE) routing strategy to enhance the network traffic capacity, based on betweenness centrality. Any node with the highest betweenness is susceptible to traffic congestion, so an efficient way to improve the network traffic capacity is to redistribute the heavy traffic load from these central nodes to non-central nodes. In this paper, we first define a path cost function as the sum of node betweenness values, with a tunable parameter β, along the actual path. Then, by minimizing the path cost, our IE routing strategy achieves a clear improvement in network transport efficiency. Simulations on scale-free Barabási-Albert (BA) networks confirm the effectiveness of our strategy compared with the efficient routing (ER) and shortest path (SP) routing strategies.
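
    Reconstructing the cost from the description above (the exact functional form is our assumption), each path P is assigned

        \[
            C(P) \;=\; \sum_{i \in P} B_i^{\beta},
        \]

    where B_i is the betweenness of node i, and packets follow the path minimizing C(P). For β = 0 every node costs the same and the rule reduces to shortest-path routing, while larger β steers traffic away from high-betweenness hubs.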

  20. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables, whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the number of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and a significant improvement in efficiency over existing approaches. (c) 2004 American Institute of Physics.
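
    A minimal sketch of a single binomial leap update as described above: each channel's firing count is drawn from a binomial whose trial count is capped by the available molecules, so populations cannot go negative. The toy reaction, rate constant and stepsize are illustrative assumptions, and the paper's treatment of species shared between channels and its stepsize conditions are not reproduced.

        import numpy as np

        def binomial_tau_leap_step(x, rates, stoich, limits, tau, rng):
            # K_j ~ Binomial(N_j, a_j*tau/N_j): mean a_j*tau, support capped at N_j
            a = rates(x)
            for j, aj in enumerate(a):
                Nj = limits(x, j)                # max firings channel j can sustain
                if Nj <= 0 or aj <= 0:
                    continue
                k = rng.binomial(Nj, min(1.0, aj * tau / Nj))
                x = x + k * stoich[j]
            return x

        # toy system: A -> B with propensity c*A
        rng = np.random.default_rng(0)
        x = np.array([1000, 0])
        for _ in range(10):
            x = binomial_tau_leap_step(
                x, rates=lambda s: [0.1 * s[0]], stoich=[np.array([-1, 1])],
                limits=lambda s, j: s[0], tau=0.5, rng=rng)
        print(x)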

  1. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As is usually observed, in generalized ensemble simulations hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  2. Sampling efficiency of the Moore egg collector

    USGS Publications Warehouse

    Worthington, Thomas A.; Brewer, Shannon K.; Grabowski, Timothy B.; Mueller, Julia

    2013-01-01

    Quantitative studies focusing on the collection of semibuoyant fish eggs, which are associated with a pelagic broadcast-spawning reproductive strategy, are often conducted to evaluate reproductive success. Many of the fishes in this reproductive guild have suffered significant reductions in range and abundance. However, the efficiency of the sampling gear used to evaluate reproduction is often unknown and renders interpretation of the data from these studies difficult. Our objective was to assess the efficiency of a modified Moore egg collector (MEC) using field and laboratory trials. Gear efficiency was assessed by releasing a known quantity of gellan beads with a specific gravity similar to that of eggs from representatives of this reproductive guild (e.g., the Arkansas River Shiner Notropis girardi) into an outdoor flume and recording recaptures. We also used field trials to determine how discharge and release location influenced gear efficiency given current methodological approaches. The flume trials indicated that gear efficiency ranged between 0.0% and 9.5% (n = 57) in a simple 1.83-m-wide channel and was positively related to discharge. Efficiency in the field trials was lower, ranging between 0.0% and 3.6%, and was negatively related to bead release distance from the MEC and discharge. The flume trials indicated that the gellan beads were not distributed uniformly across the channel, although aggregation was reduced at higher discharges. This clustering of passively drifting particles should be considered when selecting placement sites for an MEC; further, the use of multiple devices may be warranted in channels with multiple areas of concentrated flow.

  3. Value of recruitment strategies used in a primary care practice-based trial.

    PubMed

    Ellis, Shellie D; Bertoni, Alain G; Bonds, Denise E; Clinch, C Randall; Balasubramanyam, Aarthi; Blackwell, Caroline; Chen, Haiying; Lischke, Michael; Goff, David C

    2007-05-01

    "Physicians-recruiting-physicians" is the preferred recruitment approach for practice-based research. However, yields are variable; and the approach can be costly and lead to biased, unrepresentative samples. We sought to explore the potential efficiency of alternative methods. We conducted a retrospective analysis of the yield and cost of 10 recruitment strategies used to recruit primary care practices to a randomized trial to improve cardiovascular disease risk factor management. We measured response and recruitment yields and the resources used to estimate the value of each strategy. Providers at recruited practices were surveyed about motivation for participation. Response to 6 opt-in marketing strategies was 0.40% (53/13290), ranging from 0% to 2.86% by strategy; 33.96% (18/53) of responders were recruited to the study. Of those recruited from opt-out strategies, 8.68% joined the study, ranging from 5.35% to 41.67% per strategy. A strategy that combined both opt-in and opt-out approaches resulted in a 51.14% (90/176) response and a 10.80% (19/90) recruitment rate. Cost of recruitment was $613 per recruited practice. Recruitment approaches based on in-person meetings (41.67%), previous relationships (33.33%), and borrowing an Area Health Education Center's established networks (10.80%), yielded the most recruited practices per effort and were most cost efficient. Individual providers who chose to participate were motivated by interest in improving their clinical practice (80.5%); contributing to CVD primary prevention (54.4%); and invigorating their practice with new ideas (42.1%). This analysis provides suggestions for future recruitment efforts and research. Translational studies with limited funds could consider multi-modal recruitment approaches including in-person presentations to practice groups and exploitation of previous relationships, which require the providers to opt-out, and interactive opt-in approaches which rely on borrowed networks. These approaches can be supplemented with non-relationship-based opt-out strategies such as cold calls strategically targeted to underrepresented provider groups.

  4. Monitoring the endogenous steroid profile disruption in urine and blood upon nandrolone administration: An efficient and innovative strategy to screen for nandrolone abuse in entire male horses.

    PubMed

    Kaabia, Z; Dervilly-Pinel, G; Popot, M A; Bailly-Chouriberry, L; Plou, P; Bonnaire, Y; Le Bizec, B

    2014-04-01

    Nandrolone (17β-hydroxy-4-estren-3-one) is amongst the most misused endogenous steroid hormones in entire male horses. The detection of such a substance is challenging because of its endogenous presence. The current international threshold for nandrolone misuse is based on the urinary concentration ratio of 5α-estrane-3β,17α-diol (EAD) to 5(10)-estrene-3β,17α-diol (EED). This ratio, however, can be influenced by a number of factors owing to intra- and inter-individual variability, standing respectively for the variation in endogenous steroid concentration levels within a single subject and the variation in those same concentration levels observed between different subjects. Targeting an efficient detection of nandrolone misuse in entire male horses, an analytical strategy was set up to profile a group of endogenous steroids in nandrolone-treated and non-treated equines. Experimental plasma and urine samples were collected regularly over more than three months from a stallion administered nandrolone laurate (1 mg/kg). Control plasma and urine samples were collected monthly from seven non-treated stallions over a one-year period. A large panel of steroids of interest (n = 23) was extracted from equine urine and plasma samples using a C18 cartridge. Following a methanolysis step, liquid-liquid and solid-phase extraction purifications were performed before derivatization and quantification by gas chromatography-tandem mass spectrometry (GC-MS/MS). Statistical processing of the collected data made it possible to establish statistical models capable of discriminating control samples from those collected during the three months following administration. Furthermore, these statistical models succeeded in predicting the compliance status of additional samples collected from racing horses. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Polarized ensembles of random pure states

    NASA Astrophysics Data System (ADS)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe

    2013-08-01

    A new family of polarized ensembles of random pure states is presented. These ensembles are obtained by linear superposition of two random pure states with suitable distributions, and are quite manageable. We will use the obtained results for two purposes: on the one hand we will be able to derive an efficient strategy for sampling states from isopurity manifolds. On the other, we will characterize the deviation of a pure quantum state from separability under the influence of noise.
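
    The abstract does not give the explicit distributions, but the construction it describes, a normalized linear superposition of two independent random pure states, is easy to sketch. The following illustrative Python snippet (the function names and the mixing angle theta are our own notation, not the paper's) draws Haar-random states as normalized complex Gaussian vectors and superposes them:

    ```python
    import numpy as np

    def haar_random_state(dim, rng):
        """Draw a Haar-random pure state as a normalized complex Gaussian vector."""
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        return v / np.linalg.norm(v)

    def polarized_superposition(dim, theta, rng):
        """Normalized linear superposition of two independent random pure states;
        theta is an illustrative stand-in for the ensemble's polarization parameter."""
        psi = np.cos(theta) * haar_random_state(dim, rng) \
            + np.sin(theta) * haar_random_state(dim, rng)
        return psi / np.linalg.norm(psi)

    rng = np.random.default_rng(0)
    psi = polarized_superposition(dim=16, theta=np.pi / 8, rng=rng)
    # Purity of the reduced state of a 4x4 bipartition, as a simple check.
    m = psi.reshape(4, 4)
    rho_a = m @ m.conj().T
    print(np.real(np.trace(rho_a @ rho_a)))
    ```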

  6. Efficient genotype compression and analysis of large genetic variation datasets

    PubMed Central

    Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.

    2015-01-01

    Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443 fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772

  7. Large-Scale Low-Cost NGS Library Preparation Using a Robust Tn5 Purification and Tagmentation Protocol

    PubMed Central

    Hennig, Bianca P.; Velten, Lars; Racke, Ines; Tu, Chelsea Szu; Thoms, Matthias; Rybin, Vladimir; Besir, Hüseyin; Remans, Kim; Steinmetz, Lars M.

    2017-01-01

    Efficient preparation of high-quality sequencing libraries that well represent the biological sample is a key step for using next-generation sequencing in research. Tn5 enables fast, robust, and highly efficient processing of limited input material while scaling to the parallel processing of hundreds of samples. Here, we present a robust Tn5 transposase purification strategy based on an N-terminal His6-Sumo3 tag. We demonstrate that libraries prepared with our in-house Tn5 are of the same quality as those processed with a commercially available kit (Nextera XT), while they dramatically reduce the cost of large-scale experiments. We introduce improved purification strategies for two versions of the Tn5 enzyme. The first version carries the previously reported point mutations E54K and L372P, and stably produces libraries of constant fragment size distribution, even if the Tn5-to-input molecule ratio varies. The second Tn5 construct carries an additional point mutation (R27S) in the DNA-binding domain. This construct allows for adjustment of the fragment size distribution based on enzyme concentration during tagmentation, a feature that opens new opportunities for use of Tn5 in customized experimental designs. We demonstrate the versatility of our Tn5 enzymes in different experimental settings, including a novel single-cell polyadenylation site mapping protocol as well as ultralow input DNA sequencing. PMID:29118030

  8. A decade of aquatic invasive species (AIS) early detection ...

    EPA Pesticide Factsheets

    As an invasion-prone location, the St. Louis River Estuary (SLRE) has been a case study for ongoing research to develop the framework for a practical Great Lakes monitoring network for early detection of aquatic invasive species (AIS). Early detection, however, necessitates finding new invaders before they become common. Here we outline our research approach and findings (2005-present), including strategies to increase detection efficiency by optimizing specimen collection and identification methods. Initial surveys were designed to over-sample to amass data as the basis for numerical experiments investigating the effort required for a given detection probability. Later surveys tested the outcome of implementing these strategies, examined the potential benefits of sampling larval fish instead of adults, and explored the prospect of using advanced DNA-based methods as an alternative to traditional taxonomy. To date we have identified several previously undetected invertebrate invaders, developed survey design and gear recommendations, and refined the search strategy for systems beyond the SLRE. In addition, because we have accumulated such a large body of data, we now have the basis to show spatial-temporal trends for native and non-native species in the SLRE.

  9. Efficient mitigation strategies for epidemics in rural regions.

    PubMed

    Scoglio, Caterina; Schumm, Walter; Schumm, Phillip; Easton, Todd; Roy Chowdhury, Sohini; Sydney, Ali; Youssef, Mina

    2010-07-13

    Containing an epidemic at its origin is the most desirable mitigation. Epidemics have often originated in rural areas, with rural communities among the first affected. Disease dynamics in rural regions have received limited attention, and results of general studies cannot be directly applied, since population densities and human mobility factors are very different in rural regions from those in cities. We create a network model of a rural community in Kansas, USA, by collecting data on the contact patterns and computing rates of contact among a sampled population. We model the impact of different mitigation strategies by detecting closely connected groups of people and frequently visited locations. Within those groups and locations, we compare the effectiveness of random and targeted vaccinations using a Susceptible-Exposed-Infected-Recovered compartmental model on the contact network. Our simulations show that targeted vaccination of only 10% of the sampled population reduced the size of the epidemic by 34.5%. Additionally, if 10% of the population visiting one of the most popular locations is randomly vaccinated, the epidemic size is reduced by 19%. Our results suggest a new implementation of a highly effective strategy for targeted vaccinations through the use of popular locations in rural communities.
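
    As a rough illustration of why targeted vaccination outperforms random vaccination on a contact network, the sketch below runs a discrete-time SEIR simulation in Python with networkx. It is not the authors' model: the synthetic small-world graph, the rate parameters, and degree-based targeting (a simple stand-in for the paper's detection of connected groups and popular locations) are all assumptions made for illustration.

    ```python
    import random
    import networkx as nx

    def seir_outbreak(G, vaccinated, beta=0.05, sigma=0.25, gamma=0.1,
                      seeds=5, steps=200, rng=None):
        """Discrete-time SEIR simulation on a contact network. Vaccinated nodes
        start as Recovered (immune). Returns the final epidemic size."""
        rng = rng or random.Random(0)
        state = {n: "S" for n in G}
        for n in vaccinated:
            state[n] = "R"
        for n in rng.sample([n for n in G if state[n] == "S"], seeds):
            state[n] = "I"
        for _ in range(steps):
            new = dict(state)
            for n in G:
                if state[n] == "S":
                    k = sum(state[m] == "I" for m in G[n])   # infectious contacts
                    if k and rng.random() < 1 - (1 - beta) ** k:
                        new[n] = "E"
                elif state[n] == "E" and rng.random() < sigma:
                    new[n] = "I"
                elif state[n] == "I" and rng.random() < gamma:
                    new[n] = "R"
            state = new
        # Everyone not Susceptible was ever exposed; exclude the vaccinated.
        return sum(s != "S" for s in state.values()) - len(vaccinated)

    rng = random.Random(1)
    G = nx.watts_strogatz_graph(2000, 8, 0.1, seed=1)   # toy rural contact network
    frac = int(0.10 * G.number_of_nodes())
    random_vax = rng.sample(list(G), frac)
    targeted_vax = [n for n, _ in sorted(G.degree, key=lambda kv: -kv[1])][:frac]
    print("random:  ", seir_outbreak(G, random_vax, rng=random.Random(2)))
    print("targeted:", seir_outbreak(G, targeted_vax, rng=random.Random(2)))
    ```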

  10. Using continuous monitoring of physical parameters to better estimate phosphorus fluxes in a small agricultural catchment

    NASA Astrophysics Data System (ADS)

    Minaudo, Camille; Dupas, Rémi; Moatar, Florentina; Gascuel-Odoux, Chantal

    2016-04-01

    Phosphorus fluxes in streams are subject to high temporal variation, which calls into question the relevance of the monitoring strategies (generally monthly sampling) used under EU Directives to capture phosphorus fluxes and their variations over time. The objective of this study was to estimate the annual and seasonal P flux uncertainties associated with several monitoring strategies of varying sampling frequency, while also taking into account simultaneous and continuous time-series of parameters such as turbidity, conductivity, groundwater level and precipitation. Total Phosphorus (TP), Soluble Reactive Phosphorus (SRP) and Total Suspended Solids (TSS) concentrations were surveyed at a fine temporal frequency between 2007 and 2015 at the outlet of a small agricultural catchment in Brittany (Naizin, 5 km2). Sampling occurred every 3 to 6 days between 2007 and 2012 and daily between 2013 and 2015. Additionally, 61 storms were intensively surveyed (1 sample every 30 minutes) since 2007. Water discharge, turbidity, conductivity, groundwater level and precipitation were also monitored on a sub-hourly basis. A strong temporal decoupling between SRP and particulate P (PP) was found (Dupas et al., 2015). The phosphorus-discharge relationships displayed two types of hysteretic patterns (clockwise and counterclockwise). In both cases, time-series of PP and SRP were estimated continuously for the whole period using an empirical model linking P concentrations with the hydrological and physico-chemical variables. The errors associated with the estimated P concentrations were also assessed. These "synthetic" PP and SRP time-series allowed us to discuss the most efficient monitoring strategies, first taking into account different sampling strategies based on Monte Carlo random simulations, and then adding the information from continuous data such as turbidity, conductivity and groundwater depth based on empirical modelling. Dupas et al. (2015), Distinct export dynamics for dissolved and particulate phosphorus reveal independent transport mechanisms in an arable headwater catchment, Hydrological Processes, 29(14), 3162-3178.
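
    The Monte Carlo evaluation of sampling strategies described above can be illustrated with a minimal sketch: build a synthetic daily concentration-discharge series, compute a reference annual flux, and measure how a monthly grab-sampling strategy scatters around it. The synthetic data, the flow-concentration relationship, and the ratio estimator below are generic illustrations, not the study's empirical model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = 365
    q = np.exp(rng.normal(0.0, 0.8, days))            # synthetic daily discharge
    c = 0.05 + 0.02 * q + rng.normal(0, 0.01, days)   # synthetic flow-dependent TP concentration
    true_flux = np.sum(c * q)                          # reference "annual flux" (arbitrary units)

    def subsample_flux(c, q, interval, offset):
        """Discharge-weighted flux estimate from sparse grab samples:
        mean(C_i * Q_i) / mean(Q_i) * sum(Q), a common ratio estimator."""
        idx = np.arange(offset, len(c), interval)
        return np.mean(c[idx] * q[idx]) / np.mean(q[idx]) * np.sum(q)

    # Monte Carlo over sampling start days for a roughly monthly strategy.
    errors = [100 * (subsample_flux(c, q, 30, o) - true_flux) / true_flux
              for o in range(30)]
    print(f"monthly sampling bias/spread: {np.mean(errors):.1f}% +/- {np.std(errors):.1f}%")
    ```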

  11. A link-adding strategy for transport efficiency of complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Han, Weizhan; Guo, Qing; Wang, Zhenyong; Zhang, Shuai

    2016-12-01

    The transport efficiency is one of the critical parameters used to evaluate the performance of a network. In this paper, we propose an improved efficient (IE) strategy to enhance the transport efficiency of complex networks by adding a fraction of links to an existing network based on each node's local degree centrality and the shortest path length. Simulation results show that the proposed strategy yields better traffic capacity and a shorter average shortest path length than the low-degree-first (LDF) strategy under the shortest-path routing protocol. The proposed strategy is thus beneficial to the overall traffic handling and delivering ability of the network. This study can help alleviate congestion in networks and is useful for designing and optimizing realistic networks.
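
    The abstract names two ingredients for scoring candidate links, local degree centrality and shortest path length, without giving the exact IE formula. The sketch below (Python/networkx) shows one plausible greedy reading that prefers long-path, low-degree pairs; the scoring function is our assumption, not the authors' equation.

    ```python
    import itertools
    import networkx as nx

    def add_links(G, n_links):
        """Greedy link addition: among all unconnected pairs, repeatedly add the
        one scoring highest on (shortest-path length) / (degree product) -- one
        guess at combining the two criteria named in the abstract."""
        H = G.copy()
        for _ in range(n_links):
            best, best_score = None, -1.0
            for u, v in itertools.combinations(H, 2):
                if H.has_edge(u, v):
                    continue
                score = nx.shortest_path_length(H, u, v) / (H.degree(u) * H.degree(v))
                if score > best_score:
                    best, best_score = (u, v), score
            H.add_edge(*best)
        return H

    G = nx.barabasi_albert_graph(100, 2, seed=0)
    H = add_links(G, n_links=5)
    print(nx.average_shortest_path_length(G), nx.average_shortest_path_length(H))
    ```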

  12. Strategies for monitoring the emerging polar organic contaminants in water with emphasis on integrative passive sampling.

    PubMed

    Söderström, Hanna; Lindberg, Richard H; Fick, Jerker

    2009-01-16

    Although polar organic contaminants (POCs) such as pharmaceuticals are considered among today's most important emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, the growing concern among the public and researchers, together with new legislation within the European Union, the Registration, Evaluation and Authorisation of Chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we review the advantages and shortcomings of traditional and novel sampling techniques available for monitoring emerging POCs in water. The benefits and drawbacks of active and biological sampling are discussed and the principles of organic passive samplers (PS) presented. A detailed overview of the types of polar organic PS available, their classes of target compounds and fields of application is given, and the considerations involved in using them, such as environmental effects and quality control, are discussed. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered the only available, yet nevertheless efficient, alternative to active water sampling owing to their simplicity, low cost, lack of need for a power supply or maintenance, and ability to collect time-integrative samples with a single sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.

  13. Integrating feeding behavior, ecological data, and DNA barcoding to identify developmental differences in invertebrate foraging strategies in wild white-faced capuchins (Cebus capucinus).

    PubMed

    Mallott, Elizabeth K; Garber, Paul A; Malhi, Ripan S

    2017-02-01

    Invertebrate foraging strategies in nonhuman primates often require complex extractive foraging or prey detection techniques. As these skills take time to master, juveniles may have reduced foraging efficiency or concentrate their foraging efforts on easier to acquire prey than adults. We use DNA barcoding, behavioral observations, and ecological data to assess age-based differences in invertebrate prey foraging strategies in a group of white-faced capuchins (Cebus capucinus) in northeastern Costa Rica. Invertebrate availability was monitored using canopy traps and sweep netting. Fecal samples were collected from adult female, adult male, and juvenile white-faced capuchins (n = 225). COI mtDNA sequences were compared with known sequences in GenBank and the Barcode of Life Database. Frequencies of Lepidoptera and Hymenoptera consumption were higher in juveniles than in adults. A significantly smaller proportion of juvenile fecal samples contained Gryllidae and Cercopidae sequences, compared with adults (0% and 4.2% vs. 4.6% and 12.5%), and a significantly larger proportion contained Tenthredinidae, Culicidae, and Crambidae (5.6%, 9.7%, and 5.6% vs. 1.3%, 0.7%, and 1.3%). Juveniles spent significantly more time feeding and foraging than adults, and focused their foraging efforts on prey that require different skills to capture or extract. Arthropod availability was not correlated with foraging efficiency, and the rate of consumption of specific orders of invertebrates was not correlated with the availability of those same taxa. Our data support the hypothesis that juveniles are concentrating their foraging efforts on different prey than adults, potentially focusing their foraging efforts on more easily acquired types of prey. © 2016 Wiley Periodicals, Inc.

  14. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    PubMed Central

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimal choice when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
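
    The quantitative optimization the authors describe, picking the cheapest kernel and training-proportion pair that meets a minimum accuracy requirement, can be mimicked with scikit-learn. The snippet below uses synthetic features as a stand-in for the four fingerprint feature groups, so the configurations it prints will not reproduce the paper's results.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for the extracted fingerprint features.
    X, y = make_classification(n_samples=4000, n_features=8, class_sep=2.0,
                               random_state=0)

    def accuracy(kernel, train_frac):
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, train_size=train_frac, random_state=0, stratify=y)
        return SVC(kernel=kernel).fit(Xtr, ytr).score(Xte, yte)

    # Report the smallest training proportion (cheapest configuration) meeting
    # a minimum accuracy, echoing the paper's accuracy/efficiency trade-off.
    min_acc = 0.95
    for kernel in ("poly", "rbf"):
        for frac in (0.05, 0.10, 0.15, 0.20):
            if accuracy(kernel, frac) >= min_acc:
                print(kernel, frac)
                break
    ```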

  15. Single-Drop Raman Imaging Exposes the Trace Contaminants in Milk.

    PubMed

    Tan, Zong; Lou, Ting-Ting; Huang, Zhi-Xuan; Zong, Jing; Xu, Ke-Xin; Li, Qi-Feng; Chen, Da

    2017-08-02

    Better milk safety control can offer important means to promote public health. However, few technologies can detect different types of contaminants in milk simultaneously. In this regard, the present work proposes a single-drop Raman imaging (SDRI) strategy for semiquantitation of multiple hazardous factors in milk solutions. By developing an SDRI strategy that incorporates the coffee-ring effect (a natural phenomenon in which solutes concentrate in a ring pattern after a drop evaporates) for sample pretreatment and the discrete wavelet transform for spectral processing, the method serves well to expose typical hazardous molecular species in milk products, such as melamine, sodium thiocyanate and lincomycin hydrochloride, with little sample preparation. The detection sensitivities for melamine, sodium thiocyanate, and lincomycin hydrochloride are 0.1 mg/kg, 1 mg/kg, and 0.1 mg/kg, respectively. We establish that SDRI represents a novel and environmentally friendly method for efficient milk-safety screening, which could be well extended to the inspection of other foods.

  16. Exome sequencing of extreme phenotypes identifies DCTN4 as a modifier of chronic Pseudomonas aeruginosa infection in cystic fibrosis.

    PubMed

    Emond, Mary J; Louie, Tin; Emerson, Julia; Zhao, Wei; Mathias, Rasika A; Knowles, Michael R; Wright, Fred A; Rieder, Mark J; Tabor, Holly K; Nickerson, Deborah A; Barnes, Kathleen C; Gibson, Ronald L; Bamshad, Michael J

    2012-07-08

    Exome sequencing has become a powerful and effective strategy for the discovery of genes underlying Mendelian disorders. However, use of exome sequencing to identify variants associated with complex traits has been more challenging, partly because the sample sizes needed for adequate power may be very large. One strategy to increase efficiency is to sequence individuals who are at both ends of a phenotype distribution (those with extreme phenotypes). Because alleles that contribute to the trait are enriched in frequency in one or both phenotype extremes, a modest sample size can potentially be used to identify novel candidate genes and/or alleles. As part of the National Heart, Lung, and Blood Institute (NHLBI) Exome Sequencing Project (ESP), we used an extreme phenotype study design to discover that variants in DCTN4, encoding a dynactin protein, are associated with time to first P. aeruginosa airway infection, chronic P. aeruginosa infection and mucoid P. aeruginosa in individuals with cystic fibrosis.

  17. Mesoporous structured MIPs@CDs fluorescence sensor for highly sensitive detection of TNT.

    PubMed

    Xu, Shoufang; Lu, Hongzhi

    2016-11-15

    A facile strategy was developed to prepare a mesoporous structured molecularly imprinted polymer-capped carbon dots (M-MIPs@CDs) fluorescence sensor for highly sensitive and selective determination of TNT. Using amino-CDs directly as the "functional monomer" for imprinting simplifies the imprinting process and provides good accessibility to the recognition sites. The as-prepared M-MIPs@CDs sensor, using periodic mesoporous silica as the imprinting matrix and amino-CDs directly as the "functional monomer", exhibited excellent selectivity and sensitivity toward TNT with a detection limit of 17 nM. The sensor could be recycled 10 times without an obvious decrease in efficiency. The feasibility of the developed method for real samples was successfully evaluated through the analysis of TNT in soil and water samples, with satisfactory recoveries of 88.6-95.7%. The method proposed in this work proved to be a convenient and practical way to prepare highly sensitive and selective fluorescence MIPs@CDs sensors. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Anaerobic digestion of grass: the effect of temperature applied during the storage of substrate on the methane production.

    PubMed

    Míchal, Pavel; Švehla, Pavel; Plachý, Vladimír; Tlustoš, Pavel

    2017-07-01

    In this research, biogas production, the proportion of methane in biogas, and volatile solids (VS) removal efficiency were compared using batch tests performed with samples of intensively and extensively planted grasses originating from public areas. Before the batch tests, the samples were stored at different temperatures achievable at biogas plants applying a trigeneration strategy (-18°C, +3°C, +18°C and +35°C). Specific methane production from intensively planted grasses was relatively high (0.33-0.41 m3/kg VS) compared to extensively planted grasses (0.20-0.33 m3/kg VS). VS removal efficiency reached 59.8-68.8% for intensively planted grasses and 34.6-56.5% for extensively planted grasses. Freezing the intensively planted grasses at -18°C proved to be an effective thermal pretreatment leading to high biogas production (0.61 m3/kg total solids (TS)), a high proportion of methane (64.0%) in the biogas, and good VS removal efficiency (68.8%). The results of this research suggest that grass from public areas or sport parks is an available, cheap, and at the same time very effective feedstock for biogas production.

  19. High-efficiency wavefunction updates for large scale Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Kent, Paul; McDaniel, Tyler; Li, Ying Wai; D'Azevedo, Ed

    Within ab initio Quantum Monte Carlo (QMC) simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunctions. The evaluation of each Monte Carlo move requires finding the determinant of a dense matrix, which is traditionally evaluated iteratively using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. For calculations with thousands of electrons, this operation dominates the execution profile. We propose a novel rank-k delayed update scheme. This strategy enables probability evaluation for multiple successive Monte Carlo moves, with application of accepted moves to the matrices delayed until after a predetermined number of moves, k. Accepted events grouped in this manner are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency. This procedure does not change the underlying Monte Carlo sampling or the sampling efficiency. For large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order-of-magnitude speedups can be obtained on both multi-core CPUs and GPUs, making this algorithm highly advantageous for current petascale and future exascale computations.
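
    For context, the rank-1 Sherman-Morrison step that the delayed rank-k scheme batches looks as follows. This is a generic numpy sketch of the textbook update after a single-row replacement, not the authors' optimized kernels.

    ```python
    import numpy as np

    def sm_row_update(Ainv, r, new_row, old_row):
        """Sherman-Morrison update of A^{-1} after replacing row r of A.
        Writing the replacement as A' = A + e_r w^T with w = new_row - old_row:
          det(A')/det(A) = 1 + w^T A^{-1} e_r   (the Monte Carlo acceptance ratio)
          A'^{-1} = A^{-1} - (A^{-1} e_r)(w^T A^{-1}) / (1 + w^T A^{-1} e_r).
        A delayed rank-k scheme accumulates k such (A^{-1} e_r, w^T A^{-1}) pairs
        and applies them en bloc as one matrix-matrix product."""
        w = new_row - old_row
        u = Ainv[:, r].copy()          # A^{-1} e_r
        vt = w @ Ainv                  # w^T A^{-1}
        ratio = 1.0 + vt[r]            # w^T A^{-1} e_r
        Ainv -= np.outer(u, vt) / ratio
        return ratio

    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))
    Ainv = np.linalg.inv(A)
    new_row = rng.normal(size=6)
    ratio = sm_row_update(Ainv, 2, new_row, A[2].copy())
    A[2] = new_row
    print(np.allclose(Ainv, np.linalg.inv(A)))   # True: update matches direct inversion
    ```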

  20. Designing long-term fish community assessments in connecting channels: Lessons from the Saint Marys River

    USGS Publications Warehouse

    Schaeffer, Jeff; Rogers, Mark W.; Fielder, David G.; Godby, Neal; Bowen, Anjanette K.; O'Connor, Lisa; Parrish, Josh; Greenwood, Susan; Chong, Stephen; Wright, Greg

    2014-01-01

    Long-term surveys are useful in understanding trends in connecting-channel fish communities; a gill net assessment in the Saint Marys River, performed periodically since 1975, is the most comprehensive connecting-channel sampling program within the Laurentian Great Lakes. We assessed the efficiency of that survey, with the intent of informing the development of assessments at other connecting channels. We evaluated trends in community composition, effort versus estimates of species richness, the ability to detect abundance changes for four species, and the effects of subsampling yellow perch catches on size- and age-structure metrics. Efficiency analysis revealed low power to detect changes in species abundance, whereas reduced effort could be considered to index species richness. Subsampling simulations indicated that subsampling would have allowed reliable estimates of yellow perch (Perca flavescens) population structure while greatly reducing the number of fish that were assigned ages. Analyses of statistical power and efficiency of current sampling protocols are useful for managers collecting and using these types of data as well as for the development of new monitoring programs. Our approach provides insight into whether survey goals and objectives were being attained and can help evaluate the ability of surveys to answer novel questions that arise as management strategies are refined.

  1. Climate change adaptation: a panacea for food security in Ondo State, Nigeria

    NASA Astrophysics Data System (ADS)

    Fatuase, A. I.

    2017-08-01

    This paper examines the perceived causes of climate change, the adaptation strategies employed, and the technical inefficiency of arable crop farmers in Ondo State, Nigeria. Data were obtained from primary sources using a set of structured questionnaires assisted with an interview schedule. A multistage sampling technique was used. Data were analyzed using descriptive statistics and the stochastic frontier production function. The findings showed that the majority of the respondents (59.1%) still believed that climate change is a natural phenomenon that is beyond man's power to abate, while industrial release, improper sewage disposal, fossil fuel use, deforestation and bush burning were perceived as the main human factors influencing climate change by the category that chose human activities (40.9%) as the main cause of climate change. The main adaptation strategies employed by the farmers were mixed cropping, planting early-maturing crops, planting resistant crops, and use of agrochemicals. The arable crop farmers were relatively technically efficient, with about 53% of them having technical efficiency above the average of 0.784 for the study area. The study observed that education, adaptation, perception, climate information and farming experience were statistically significant in decreasing the inefficiency of arable crop production. Therefore, advocacy on climate change and its adaptation strategies should be intensified in the study area.

  2. Orderly arranged fluorescence dyes as a highly efficient chemiluminescence resonance energy transfer probe for peroxynitrite.

    PubMed

    Wang, Zhihua; Teng, Xu; Lu, Chao

    2015-03-17

    Chemiluminescence (CL) probes for reactive oxygen species (ROS) are commonly based on a redox reaction between a CL reagent and ROS, leading to poor selectivity toward a specific ROS. Exploiting the energy-matching rules of the chemiluminescence resonance energy transfer (CRET) process between a specific ROS donor and a suitable fluorescence dye acceptor is a promising route to the selective detection of ROS. Nevertheless, higher concentrations of fluorescence dyes can lead to the intractable aggregation-caused quenching effect, decreasing the CRET efficiency. In this report, we fabricated an orderly arranged structure of calcein-sodium dodecyl sulfate (SDS) molecules to improve the CRET efficiency between the ONOOH* donor and the calcein acceptor. Such orderly arranged calcein-SDS composites can distinguish peroxynitrite (ONOO(-)) from a variety of other ROS owing to the energy matching in the CRET process between the ONOOH* donor and the calcein acceptor. Under the optimal experimental conditions, ONOO(-) could be assayed in the range of 1.0-20.0 μM, and the detection limit for ONOO(-) [signal-to-noise ratio (S/N) = 3] was 0.3 μM. The proposed strategy has been successfully applied both in detecting ONOO(-) in cancer mouse plasma samples and in monitoring the generation of ONOO(-) from 3-morpholinosydnonimine (SIN-1). Recoveries from cancer mouse plasma samples were in the range of 96-105%. The success of this work provides a unique opportunity to develop a CL tool to monitor ONOO(-) with high selectivity in a specific manner. Improving the selectivity and sensitivity of CL probes in this way holds great promise as a strategy for developing a wide range of probes for various ROS by tuning the types of fluorescence dyes.

  3. Real-time label-free quantitative fluorescence microscopy-based detection of ATP using a tunable fluorescent nano-aptasensor platform.

    PubMed

    Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung

    2015-12-14

    Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.

  4. Green and Mild Oxidation: An Efficient Strategy toward Water-Dispersible Graphene.

    PubMed

    You, Xiaofei; Yang, Siwei; Li, Jipeng; Deng, Yuan; Dai, Lianqi; Peng, Xiong; Huang, Haoguang; Sun, Jing; Wang, Gang; He, Peng; Ding, Guqiao; Xie, Xiaoming

    2017-01-25

    Scalable fabrication of water-dispersible graphene (W-Gr) is highly desirable yet technically challenging for most practical applications of graphene. Herein, a green and mild oxidation strategy to prepare bulk W-Gr (dispersion, slurry, and powder) with high yield was proposed by fully exploiting structure defects of thermally reduced graphene oxide (TRGO) and oxidizing radicals generated from hydrogen peroxide (H2O2). Owing to the increased carboxyl group from the mild oxidation process, the obtained W-Gr can be redispersed in low-boiling solvents with a reasonable concentration. Benefiting from the modified surface chemistry, macroscopic samples processed from the W-Gr show good hydrophilicity (water contact angle of 55.7°) and excellent biocompatibility, which is expected to be an alternative biomaterial for bone, vessel, and skin regeneration. In addition, the green and mild oxidation strategy is also proven to be effective for dispersing other carbon nanomaterials in a water system.

  5. Forensic Application of Microbiological Culture Analysis To Identify Mail Intentionally Contaminated with Bacillus anthracis Spores†

    PubMed Central

    Beecher, Douglas J.

    2006-01-01

    The discovery of a letter intentionally filled with dried Bacillus anthracis spores in the office of a United States senator prompted the collection and quarantine of all mail in congressional buildings. This mail was subsequently searched for additional intentionally contaminated letters. A microbiological sampling strategy was used to locate heavy contamination within the 642 separate plastic bags containing the mail. Swab sampling identified 20 bags for manual and visual examination. Air sampling within the 20 bags indicated that one bag was orders of magnitude more contaminated than all the others. This bag contained a letter addressed to Senator Patrick Leahy that had been loaded with dried B. anthracis spores. Microbiological sampling of compartmentalized batches of mail proved to be efficient and relatively safe. Efficiency was increased by inoculating culture media in the hot zone rather than transferring swab samples to a laboratory for inoculation. All mail sampling was complete within 4 days with minimal contamination of the sampling environment or personnel. However, physically handling the intentionally contaminated letter proved to be exceptionally hazardous, as did sorting of cross-contaminated mail, which resulted in generation of hazardous aerosol and extensive contamination of protective clothing. Nearly 8 × 10^6 CFU was removed from the most highly cross-contaminated piece of mail found. Tracking data indicated that this and other heavily contaminated envelopes had been processed through the same mail sorting equipment as, and within 1 s of, two intentionally contaminated letters. PMID:16885280

  6. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
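
    The reweighting idea can be illustrated generically: replace the non-convex gradient penalty by a sequence of weighted quadratic surrogates whose weights shrink as gradients grow, so large edges are penalized less on each pass. The sketch below is a plain iteratively reweighted least squares (IRLS) scheme for an l_p (p < 1) penalty on a 1-D discrete gradient; FNCR's actual updates, data-fidelity constraints, and continuation strategy differ.

    ```python
    import numpy as np

    def irls_nonconvex_tv(y, lam=1.0, p=0.5, eps=1e-3, iters=20):
        """IRLS for a non-convex (l_p, p < 1) penalty on the discrete gradient:
        each iteration solves the convex surrogate (I + lam * D^T W D) x = y,
        with weights w_i = (|Dx|_i + eps)^(p-2) recomputed from the current x."""
        n = len(y)
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward-difference operator
        x = y.copy()
        for _ in range(iters):
            g = D @ x
            w = (np.abs(g) + eps) ** (p - 2)          # small gradients penalized harder
            x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
        return x

    rng = np.random.default_rng(0)
    truth = np.repeat([0.0, 1.0, 0.3], 50)            # piecewise-constant signal
    y = truth + rng.normal(0, 0.1, truth.size)
    x = irls_nonconvex_tv(y)
    print(np.abs(x - truth).mean() < np.abs(y - truth).mean())   # denoising check
    ```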

  7. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.

  8. SPR based immunosensor for detection of Legionella pneumophila in water samples

    NASA Astrophysics Data System (ADS)

    De Lorenzis, Enrico; Manera, Maria G.; Montagna, Giovanni; Cimaglia, Fabio; Chiesa, Maurizio; Poltronieri, Palmiro; Santino, Angelo; Rella, Roberto

    2013-05-01

    Detection of legionellae by water sampling is an important factor in epidemiological investigations of Legionnaires' disease and its prevention. To avoid the labor-intensive problems of conventional methods, an alternative, highly sensitive and simple method is proposed for detecting L. pneumophila in aqueous samples. A compact Surface Plasmon Resonance (SPR) instrumentation prototype, provided with proper microfluidic tools, is built. The developed immunosensor is capable of dynamically following the binding between antigens and the corresponding antibody molecules immobilized on the SPR sensor surface. An immobilization strategy is used in this work that includes an efficient step aimed at orienting the antibodies on the sensor surface. The feasibility of integrating SPR-based biosensing setups with microfluidic technologies, resulting in a low-cost and portable biosensor, is demonstrated.

  9. Microfluidic Air Sampler for Highly Efficient Bacterial Aerosol Collection and Identification.

    PubMed

    Bian, Xiaojun; Lan, Ying; Wang, Bing; Zhang, Yu Shrike; Liu, Baohong; Yang, Pengyuan; Zhang, Weijia; Qiao, Liang

    2016-12-06

    The early warning capability for the presence of biological aerosol threats is an urgent demand in ensuring civilian and military safety. Efficient and rapid air sample collection in relevant indoor or outdoor environments is a key step for subsequent analysis of airborne microorganisms. Herein, we report a portable battery-powered sampler that is capable of highly efficient bioaerosol collection. The essential module of the sampler is a polydimethylsiloxane (PDMS) microfluidic chip, which consists of a 3-loop double-spiral microchannel featuring embedded herringbone and sawtooth wave-shaped structures. Vibrio parahemolyticus (V. parahemolyticus), as a model microorganism, was initially employed to validate the bioaerosol collection performance of the device. Results showed that the sampling efficacy reached as high as >99.9%. The microfluidic sampler showed greatly improved capture efficiency compared with traditional plate sedimentation methods. The high performance of our device was attributed to the horizontal inertial centrifugal force and the vertical turbulence applied to the airflow during sampling. The centrifugal field and turbulence were generated by the specially designed herringbone structures as air circulated in the double-spiral microchannel. The sawtooth wave-shaped microstructure created a larger specific surface area for accommodating more aerosols. Furthermore, a mixture of bacterial aerosols formed by V. parahemolyticus, Listeria monocytogenes, and Escherichia coli was extracted by the microfluidic sampler. Subsequent integration with mass spectrometry conveniently identified the multiple bacterial species captured by the sampler. Our stand-alone and cable-free sampler shows clear advantages compared with conventional strategies, including portability, ease of use, and low cost, indicating great potential in future field applications.

  10. An efficient sampling strategy for selection of biobank samples using risk scores.

    PubMed

    Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna

    2017-07-01

    The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ⩾18 kg/m2 (n = 545) at four years of age, and controls as children with a BMI ⩽17 kg/m2 (n = 4472 available). The risk of being overweight was modelled using logistic regression based on available covariates from the health examination and prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (⩾14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p = 0.001), birth weight (p < 0.001), BMI of both parents (p < 0.001 for both), type of residence (p = 0.04) and economic situation (p = 0.12), yielded an area under the receiver operating characteristic curve of 67% (n = 3945 with complete data). The case group (n = 416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve possibilities to study heterogeneity of exposure effects in biobank-based studies.
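
    The selection logic, fitting a risk model on pre-biobank covariates, binning subjects into risk strata, and then sampling controls per stratum, can be sketched in a few lines of Python. The covariates, effect sizes, and data below are simulated stand-ins (the study's model also included residence type and economic situation), and sex matching is omitted.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "smoking": rng.binomial(1, 0.15, n),
        "birth_weight": rng.normal(3500, 500, n),
        "parent_bmi": rng.normal(25, 4, n),
    })
    # Simulated case status with an assumed (illustrative) risk model.
    logit = (-3 + 0.8 * df.smoking + 0.001 * (df.birth_weight - 3500)
             + 0.15 * (df.parent_bmi - 25))
    df["case"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Risk score from covariates available before any biobank analysis.
    cols = ["smoking", "birth_weight", "parent_bmi"]
    model = LogisticRegression(max_iter=1000).fit(df[cols], df["case"])
    df["risk"] = model.predict_proba(df[cols])[:, 1]
    df["stratum"] = pd.cut(df["risk"], [0, 0.05, 0.13, 1.0],
                           labels=["low", "medium", "high"])

    # Select two controls per case within each risk stratum.
    selected = []
    for _, grp in df.groupby("stratum", observed=True):
        n_cases = int(grp["case"].sum())
        controls = grp[grp["case"] == 0]
        selected.append(controls.sample(min(2 * n_cases, len(controls)),
                                        random_state=0))
    print(pd.concat(selected)["stratum"].value_counts())
    ```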

  11. Ultra-facile fabrication of phosphorus doped egg-like hierarchic porous carbon with superior supercapacitance performance by microwave irradiation combining with self-activation strategy

    NASA Astrophysics Data System (ADS)

    Zhang, Deyi; Han, Mei; Li, Yubing; He, Jingjing; Wang, Bing; Wang, Kunjie; Feng, Huixia

    2017-12-01

    Herein, we report an ultra-facile fabrication method for a phosphorus doped egg-like hierarchic porous carbon by microwave irradiation combined with a self-activation strategy under an air atmosphere. Compared with the traditional pyrolytic carbonization method, the reported method exhibits distinct merits, such as high energy efficiency, ultra-fast processing, and a fabrication process that requires no inert-atmosphere protection. Morphology and graphitization degree similar to those of a sample fabricated by the traditional pyrolytic carbonization method under inert-atmosphere protection for 2 h can easily be achieved by the reported microwave irradiation method in just 3 min under ambient atmosphere. The samples fabricated by the reported method display a unique phosphorus doped egg-like hierarchic porous structure, a high specific surface area (1642 m2 g-1) and a large pore volume (2.04 cm3 g-1). The specific capacitance of the samples fabricated by the reported method reaches up to 209 F g-1, and over 96.2% of the initial capacitance remains as the current density increases from 0.5 to 20 A g-1, indicating the superior capacitance performance of the fabricated samples. The hierarchic porous structure, open microporosity, additional pseudocapacitance, high electrolyte-accessible surface area and good conductivity make essential contributions to this superior capacitance performance.

  12. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

    The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    NASA Astrophysics Data System (ADS)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies for organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical effort for low-frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations in contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace-level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate constants (Rs). Each constant links the mass of a target contaminant accumulated in the sampler to its concentration in water. At the end of the field application, the Rs are used to calculate the TWA concentration of each target contaminant from the final mass of the contaminants accumulated in the sampler. Stir Bar Sorptive Extraction (SBSE) is a solvent-free sample preparation technique dedicated to the analysis of moderately hydrophobic to hydrophobic compounds in liquid and gas samples. It is composed of a magnet enclosed in a glass tube coated with a thick film of polydimethylsiloxane (PDMS). We recently developed the in situ application of SBSE as a passive sampling technique (herein named "passive SBSE") for the monitoring of agricultural pesticides. The aim of this study is to perform the calibration of passive SBSE in the laboratory, and to apply and compare this technique with active sampling strategies for the monitoring of 16 relatively hydrophobic to hydrophobic pesticides in streams during two 1-month sampling campaigns. Time-weighted average concentrations of the target pesticides obtained from passive SBSE were compared to the target pesticide concentrations of grab samples, and of time-related and flow-dependent samples, of the streams. Results showed that passive SBSE is an efficient alternative to conventional active sampling strategies.
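
    Under the assumption that the sampler remains in the linear (integrative) uptake regime, the TWA concentration follows directly from the accumulated mass and the calibrated sampling rate as C_TWA = m / (Rs x t). A minimal sketch with purely hypothetical numbers:

    ```python
    def twa_concentration(mass_ng, rs_l_per_day, days):
        """Time-weighted average water concentration from a passive sampler,
        assuming linear (integrative) uptake throughout the deployment:
        C_TWA = m / (Rs * t)."""
        return mass_ng / (rs_l_per_day * days)

    # Hypothetical values: 120 ng accumulated over 14 days with Rs = 0.05 L/day.
    print(twa_concentration(120, 0.05, 14), "ng/L")   # ~171 ng/L
    ```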

  14. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.

  15. Instructional Strategy: Administration of Injury Scripts

    ERIC Educational Resources Information Center

    Schilling, Jim

    2016-01-01

    Context: Learning how to form accurate and efficient clinical examinations is a critical factor in becoming a competent athletic training practitioner, and instructional strategies differ for this complex task. Objective: To introduce an instructional strategy consistent with complex learning to encourage improved efficiency by minimizing…

  16. Multivariate Bayesian analysis of Gaussian, right censored Gaussian, ordered categorical and binary traits using Gibbs sampling

    PubMed Central

    Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just

    2003-01-01

    A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
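
    The augmented-data step for a binary trait can be made concrete: given the current linear predictor, each latent liability is drawn from a unit-variance normal truncated at the threshold. A minimal Python/scipy illustration of just this step (the full Gibbs sampler of course also updates location parameters and covariance matrices):

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    def sample_liabilities(y, mu, rng):
        """Data-augmentation step for a binary trait on the liability scale:
        draw each latent liability from N(mu_i, 1) truncated at the threshold 0,
        to the right for y_i = 1 and to the left for y_i = 0."""
        lo = np.where(y == 1, -mu, -np.inf)   # truncnorm bounds in standard units
        hi = np.where(y == 1, np.inf, -mu)
        return truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)

    rng = np.random.default_rng(0)
    mu = rng.normal(0, 1, 10)                        # current linear-predictor values
    y = (mu + rng.normal(0, 1, 10) > 0).astype(int)  # observed binary trait
    liab = sample_liabilities(y, mu, rng)
    print(np.all((liab > 0) == (y == 1)))            # liabilities respect the threshold
    ```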

  17. Role of Sample Processing Strategies at the European Union National Reference Laboratories (NRLs) Concerning the Analysis of Pesticide Residues.

    PubMed

    Hajeb, Parvaneh; Herrmann, Susan S; Poulsen, Mette E

    2017-07-19

    The guidance document SANTE 11945/2015 recommends that cereal samples be milled to a particle size preferably smaller than 1.0 mm and that extensive heating of the samples should be avoided. The aim of the present study was therefore to investigate the differences in milling procedures, obtained particle size distributions, and the resulting pesticide residue recovery when cereal samples were milled at the European Union National Reference Laboratories (NRLs) with their routine milling procedures. A total of 23 NRLs participated in the study. The oat and rye samples milled by each NRL were sent to the European Union Reference Laboratory on Cereals and Feedingstuff (EURL) for the determination of the particle size distribution and pesticide residue recovery. The results showed that the NRLs used several different brands and types of mills. Large variations in the particle size distributions and pesticide extraction efficiencies were observed even between samples milled by the same type of mill.

  18. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning.

    PubMed

    Gao, Wei-feng; Liu, San-yang; Huang, Ling-ling

    2013-06-01

    The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has a shortcoming in its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, in which a modified search equation is applied to generate candidate solutions and improve the search ability of ABC. Furthermore, we use orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's ability to sample a small number of well-representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. Comparisons with other ABC variants and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions.
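
    For orientation, the standard ABC search equation and its global-best-guided (GABC) variant are easy to state in code. The sketch below implements only the greedy candidate-update step on a toy problem, omitting the employed/onlooker/scout phases, and does not reproduce CABC's modified equation or the OL strategy.

    ```python
    import numpy as np

    def abc_candidate(X, i, best, rng, use_gbest=True):
        """Generate a candidate food source for bee i.
        Standard ABC perturbs one dimension j toward/away from a random peer k:
            v_ij = x_ij + phi * (x_ij - x_kj),  phi ~ U(-1, 1).
        GABC adds a global-best-guided term psi * (best_j - x_ij), psi ~ U(0, 1.5)."""
        n, dim = X.shape
        j = rng.integers(dim)
        k = rng.choice([m for m in range(n) if m != i])
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        if use_gbest:
            v[j] += rng.uniform(0, 1.5) * (best[j] - X[i, j])
        return v

    # Toy usage on the sphere function.
    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(20, 4))
    f = lambda x: np.sum(x ** 2)
    for _ in range(500):
        best = X[np.argmin([f(x) for x in X])]
        i = rng.integers(len(X))
        v = abc_candidate(X, i, best, rng)
        if f(v) < f(X[i]):                 # greedy selection, as in ABC
            X[i] = v
    print(min(f(x) for x in X))
    ```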

  19. Using extreme phenotype sampling to identify the rare causal variants of quantitative traits in association studies.

    PubMed

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David

    2011-12-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, in which multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
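
    The sampling scheme itself is simple to express: keep individuals in the two phenotype tails, optionally discarding the most extreme observations to guard against measurement error or phenotypic heterogeneity. A minimal sketch, where the quantile cutoffs are illustrative choices rather than the paper's recommendations:

    ```python
    import numpy as np

    def extreme_phenotype_sample(y, lower_q=0.10, upper_q=0.90, trim_q=0.005):
        """Indices of individuals in the two phenotype tails, discarding the
        very extremes (beyond trim_q) as a guard against measurement error."""
        lo, hi = np.quantile(y, [lower_q, upper_q])
        lo_trim, hi_trim = np.quantile(y, [trim_q, 1 - trim_q])
        keep = ((y <= lo) | (y >= hi)) & (y > lo_trim) & (y < hi_trim)
        return np.flatnonzero(keep)

    rng = np.random.default_rng(0)
    y = rng.normal(size=10000)                      # quantitative trait
    idx = extreme_phenotype_sample(y)
    print(len(idx), y[idx].min(), y[idx].max())
    ```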

  1. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
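
    As a rough illustration of coding an image into limited spatial-frequency bands, the sketch below builds a small Laplacian-style pyramid in Python (numpy assumed). It is a generic band-pass decomposition under simplifying assumptions (a 3-tap blur, octave downsampling), not the Perceptual Components Architecture itself, which also separates color, orientation, and temporal frequency.

    import numpy as np

    def blur(img):
        """Simple separable 3-tap blur used as a stand-in low-pass filter."""
        k = np.array([0.25, 0.5, 0.25])
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

    def laplacian_pyramid(img, levels=3):
        """Decompose an image into band-pass levels plus a low-pass residual."""
        bands = []
        for _ in range(levels):
            low = blur(img)
            bands.append(img - low)        # band-pass detail at this scale
            img = low[::2, ::2]            # downsample for the next octave
        bands.append(img)                  # coarse low-pass residual
        return bands

    img = np.random.default_rng(0).random((64, 64))
    for i, b in enumerate(laplacian_pyramid(img)):
        print(f"level {i}: shape={b.shape}, energy={np.sum(b**2):.2f}")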

  2. Info-gap theory and robust design of surveillance for invasive species: the case study of Barrow Island.

    PubMed

    Davidovitch, Lior; Stoklosa, Richard; Majer, Jonathan; Nietrzeba, Alex; Whittle, Peter; Mengersen, Kerrie; Ben-Haim, Yakov

    2009-06-01

    Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Because of the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (not maximizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
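
    The satisficing logic of info-gap robustness can be sketched numerically: fix a required detection probability, deflate the expert-elicited detection parameters by a growing uncertainty horizon h, and report the largest h under which the requirement still holds. The simple detection model and all parameter values below are hypothetical stand-ins, not those of the Barrow Island study.

    import numpy as np

    def system_detection(p_struct, p_incidental, n_struct, n_workers):
        """Probability that at least one structured sample or incidental
        observation detects the incursion (independence assumed)."""
        miss = (1 - p_struct) ** n_struct * (1 - p_incidental) ** n_workers
        return 1 - miss

    def robustness(pc, p_struct=0.3, p_incidental=0.02, n_struct=40, n_workers=200):
        """Largest uncertainty horizon h such that, even when both nominal
        detection probabilities are deflated by a factor (1 - h), the system
        detection probability still satisfices the requirement pc."""
        for h in np.linspace(0, 1, 1001):
            worst = system_detection(p_struct * (1 - h), p_incidental * (1 - h),
                                     n_struct, n_workers)
            if worst < pc:
                return max(h - 0.001, 0.0)
        return 1.0

    # The trade-off: demanding a higher detection probability leaves less
    # robustness to uncertainty in the expert-elicited parameters.
    for pc in (0.80, 0.90, 0.95, 0.99):
        print(f"required Pd = {pc:.2f} -> robustness h = {robustness(pc):.3f}")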

  3. Large-scale identification of core-fucosylated glycopeptide sites in pancreatic cancer serum using mass spectrometry.

    PubMed

    Tan, Zhijing; Yin, Haidi; Nie, Song; Lin, Zhenxin; Zhu, Jianhui; Ruffin, Mack T; Anderson, Michelle A; Simeone, Diane M; Lubman, David M

    2015-04-03

    Glycosylation has significant effects on protein function and cell metastasis, which are important in cancer progression. It is of great interest to identify site-specific glycosylation in search of potential cancer biomarkers. However, the abundance of glycopeptides is low compared to that of nonglycopeptides after trypsin digestion of serum samples, and the mass spectrometric signals of glycopeptides are often masked by coeluting nonglycopeptides due to low ionization efficiency. Selective enrichment of glycopeptides from complex serum samples is essential for mass spectrometry (MS)-based analysis. Herein, a strategy has been optimized using LCA enrichment to improve the identification of core-fucosylation (CF) sites in serum of pancreatic cancer patients. The optimized strategy was then applied to analyze CF glycopeptide sites in 13 sets of serum samples from pancreatic cancer, chronic pancreatitis, healthy controls, and a standard reference. In total, 630 core-fucosylation sites were identified from 322 CF proteins in pancreatic cancer patient serum using an Orbitrap Elite mass spectrometer. Further data analysis revealed that 8 CF peptides exhibited a significant difference between pancreatic cancer and other controls, which may be potential diagnostic biomarkers for pancreatic cancer.

  4. Time-integrated sampling of fluvial suspended sediment: a simple methodology for small catchments

    NASA Astrophysics Data System (ADS)

    Phillips, J. M.; Russell, M. A.; Walling, D. E.

    2000-10-01

    Fine-grained (<62.5 µm) suspended sediment transport is a key component of the geochemical flux in most fluvial systems. The highly episodic nature of suspended sediment transport imposes a significant constraint on the design of sampling strategies aimed at characterizing the biogeochemical properties of such sediment. A simple sediment sampler, utilizing ambient flow to induce sedimentation by settling, is described. The sampler can be deployed unattended in small streams to collect time-integrated suspended sediment samples. In laboratory tests involving chemically dispersed sediment, the sampler collected a maximum of 71% of the input sample mass. However, under natural conditions, the existence of composite particles or flocs can be expected to increase the trapping efficiency significantly. Field trials confirmed that the particle size composition and total carbon content of the sediment collected by the sampler were statistically representative of the ambient suspended sediment.

  5. Search for Directed Networks by Different Random Walk Strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed from several typical complex networks. Because differences in the search efficiency of the strategies are rooted in network clustering, the clustering coefficient as seen by a random walker on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, the Watts-Strogatz model, the Barabási-Albert model, and a clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the behavior on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
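
    A minimal sketch of the comparison, using only the Python standard library: an unrestricted walker versus a self-avoiding walker that prefers unvisited successors, each searching a small directed Erdős-Rényi-style graph for a target node. The graph size, wiring probability, and step cap are illustrative.

    import random

    def random_search(succ, start, target, avoid_revisits, max_steps=10**5):
        """Walk on a directed graph until the target is found; a self-avoiding
        walker prefers unvisited successors when any are available."""
        node, visited, steps = start, {start}, 0
        while node != target and steps < max_steps:
            out = succ[node]
            if not out:
                return max_steps                    # dead end
            fresh = [v for v in out if v not in visited]
            node = random.choice(fresh if (avoid_revisits and fresh) else out)
            visited.add(node)
            steps += 1
        return steps

    # Toy directed ER-style graph (hypothetical parameters).
    random.seed(7)
    n, p = 200, 0.03
    succ = {u: [v for v in range(n) if v != u and random.random() < p] for u in range(n)}

    for label, avoid in (("unrestricted ", False), ("self-avoiding", True)):
        trials = [random_search(succ, 0, n - 1, avoid) for _ in range(200)]
        print(label, "mean steps:", sum(trials) / len(trials))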

  6. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as "multistate" algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
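
    The "linear combination of bonding topologies" idea can be illustrated with the smallest possible case: two diabatic states coupled in a 2x2 Hamiltonian whose lowest eigenvalue gives the reactive ground-state energy. This EVB-flavored sketch (numpy assumed) uses hypothetical harmonic diabats and coupling; it does not reproduce the paper's actual contribution, the efficient long-range electrostatics and parallelization.

    import numpy as np

    def twostate_energy(e1, e2, coupling):
        """Ground-state energy of a two-configuration (EVB-like) Hamiltonian:
        the lowest eigenvalue of [[E1, V], [V, E2]]."""
        h = np.array([[e1, coupling], [coupling, e2]])
        w, v = np.linalg.eigh(h)
        return w[0], v[:, 0] ** 2    # energy and configuration weights c_i^2

    # Two harmonic diabatic states along a proton-transfer coordinate q
    # (all parameters hypothetical).
    for qi in np.linspace(-1.5, 1.5, 7):
        e1 = 50 * (qi + 0.5) ** 2            # reactant bonding topology
        e2 = 50 * (qi - 0.5) ** 2 + 2.0      # product bonding topology
        e, w = twostate_energy(e1, e2, coupling=-5.0)
        print(f"q={qi:+.2f}  E0={e:7.2f}  weights={w.round(2)}")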

  7. The Army Communications Objectives Measurement System (ACOMS): Survey Methods

    DTIC Science & Technology

    1988-07-01

    advertising strategy efficiencies; (3) management of the advertising program; and (4) planning and development of new marketing strategies and ... scientific methodology. ACOMS is being used for Army (1) assessments of advertising program effectiveness; (2) assessments of advertising strategy efficiencies; ... (1) To support Army assessments of advertising program effectiveness in a timely fashion; (2) To support Army assessments of advertising strategy in an integrated framework; and (3) To support ...

  8. A note on the efficiencies of sampling strategies in two-stage Bayesian regional fine mapping of a quantitative trait.

    PubMed

    Chen, Zhijian; Craiu, Radu V; Bull, Shelley B

    2014-11-01

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.

  10. Harnessing social networks along with consumer-driven electronic communication technologies to identify and engage members of 'hard-to-reach' populations: a methodological case report.

    PubMed

    Rock, Melanie J

    2010-01-20

    Sampling in the absence of accurate or comprehensive information routinely poses logistical, ethical, and resource allocation challenges in social science, clinical, epidemiological, health service and population health research. These challenges are compounded if few members of a target population know each other or regularly interact. This paper reports on the sampling methods adopted in ethnographic case study research with a 'hard-to-reach' population. To identify and engage a small yet diverse sample of people who met an unusual set of criteria (i.e., pet owners who had been treating cats or dogs for diabetes), four sampling strategies were used. First, copies of a recruitment letter were posted in pet-friendly places. Second, information about the study was diffused throughout the study period via word of mouth. Third, the lead investigator personally sent the recruitment letter via email to a pet owner, who then circulated the information to others, and so on. Fourth, veterinarians were enlisted to refer people who had diabetic pets. The second, third and fourth strategies rely on social networks and represent forms of chain referral sampling. Chain referral sampling via email proved to be the most efficient and effective, yielding a small yet diverse group of respondents within one month, and at negligible cost. The widespread popularity of electronic communication technologies offers new methodological opportunities for researchers seeking to recruit from hard-to-reach populations.

  11. Evaluation of 1H NMR metabolic profiling using biofluid mixture design.

    PubMed

    Athersuch, Toby J; Malik, Shahid; Weljie, Aalim; Newton, Jack; Keun, Hector C

    2013-07-16

    A strategy for evaluating the performance of quantitative spectral analysis tools in conditions that better approximate background variation in a metabonomics experiment is presented. Three different urine samples were mixed in known proportions according to a {3, 3} simplex lattice experimental design and analyzed in triplicate by 1D 1H NMR spectroscopy. Fifty-four urinary metabolites were subsequently quantified from the sample spectra using two methods common in metabolic profiling studies: (1) targeted spectral fitting and (2) targeted spectral integration. Multivariate analysis using partial least-squares (PLS) regression showed that the latent structure of the spectral set recapitulated the experimental mixture design. The goodness-of-prediction statistic (Q2) of each metabolite variable in a PLS model was calculated as a metric for the reliability of measurement across the sample compositional space. Several metabolites were observed to have low Q2 values, largely as a consequence of their spectral resonances having low s/n or strong overlap with other sample components. This strategy has the potential to allow evaluation of spectral features obtained from metabolic profiling platforms in the context of the compositional background found in real biological sample sets, which may be subject to considerable variation. We suggest that it be incorporated into metabolic profiling studies to improve the estimation of matrix effects that confound accurate metabolite measurement. This novel method provides a rational basis for exploiting information from several samples in an efficient manner and avoids the use of multiple spike-in authentic standards, which may be difficult to obtain.
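
    For reference, a {q, m} simplex-lattice design enumerates all mixtures whose component proportions are multiples of 1/m, so the {3, 3} design used here has 10 mixture points. A small self-contained Python sketch of the enumeration:

    from itertools import product
    from fractions import Fraction

    def simplex_lattice(q=3, m=3):
        """All mixtures of q components with proportions in steps of 1/m
        summing to 1 -- the {q, m} simplex-lattice design."""
        pts = []
        for combo in product(range(m + 1), repeat=q):
            if sum(combo) == m:
                pts.append(tuple(Fraction(c, m) for c in combo))
        return pts

    design = simplex_lattice(3, 3)
    print(len(design), "mixture points")        # {3, 3} lattice -> 10 points
    for p in design:
        print([str(x) for x in p])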

  12. When continuous observations just won't do: developing accurate and efficient sampling strategies for the laying hen.

    PubMed

    Daigle, Courtney L; Siegford, Janice M

    2014-03-01

    Continuous observation is the most accurate way to determine animals' actual time budget and can provide a 'gold standard' representation of resource use, behavior frequency, and duration. Continuous observation is useful for capturing behaviors that are of short duration or occur infrequently. However, collecting continuous data is labor intensive and time consuming, making data collection for multiple individuals or over long periods difficult. Six non-cage laying hens were video recorded for 15 h, and behavioral data collected every 2 s were compared with data collected using scan sampling intervals of 5, 10, 15, 30, and 60 min and subsamples of 2-second observations performed for 10 min every 30 min, 15 min every 1 h, 30 min every 1.5 h, and 15 min every 2 h. Three statistical approaches were used to provide a comprehensive analysis of the quality of the data obtained via the different sampling methods. General linear mixed models identified how the time budgets from the sampling techniques differed from continuous observation. Correlation analysis identified how strongly results from the sampling techniques were associated with those from continuous observation. Regression analysis identified how well the results from the sampling techniques predicted those from continuous observation, changes in magnitude, and whether a sampling technique had bias. Static behaviors were well represented with scan and time sampling techniques, while dynamic behaviors were best represented with time sampling techniques. Methods for identifying an appropriate sampling strategy based upon the type of behavior of interest are outlined, and results for non-caged laying hens are presented. Copyright © 2013 Elsevier B.V. All rights reserved.
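
    The trade-off the study quantifies can be mimicked in a few lines: simulate a 15 h record at 2 s resolution containing one long-bout (static) and one short-bout (dynamic) behavior, then read off the time budget at increasing scan intervals. The bout lengths and counts below are hypothetical, not measured hen behavior (numpy assumed).

    import numpy as np

    rng = np.random.default_rng(3)

    ticks = 15 * 3600 // 2                   # 15 h record at 2 s resolution
    static = np.zeros(ticks, bool)
    dynamic = np.zeros(ticks, bool)
    for arr, bout, n_bouts in ((static, 900, 20), (dynamic, 15, 400)):
        for s in rng.integers(0, ticks - bout, n_bouts):
            arr[s:s + bout] = True           # mark one behavioral bout

    print(f"continuous: static={static.mean():.3f} dynamic={dynamic.mean():.3f}")

    for minutes in (5, 10, 15, 30, 60):
        step = minutes * 60 // 2             # scan interval in 2 s ticks
        scans = np.arange(0, ticks, step)
        print(f"scan every {minutes:>2} min: "
              f"static={static[scans].mean():.3f} "
              f"dynamic={dynamic[scans].mean():.3f}")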

  13. HADES RV Programme with HARPS-N at TNG. II. Data treatment and simulations

    NASA Astrophysics Data System (ADS)

    Perger, M.; García-Piquer, A.; Ribas, I.; Morales, J. C.; Affer, L.; Micela, G.; Damasso, M.; Suárez-Mascareño, A.; González-Hernández, J. I.; Rebolo, R.; Herrero, E.; Rosich, A.; Lafarga, M.; Bignamini, A.; Sozzetti, A.; Claudi, R.; Cosentino, R.; Molinari, E.; Maldonado, J.; Maggio, A.; Lanza, A. F.; Poretti, E.; Pagano, I.; Desidera, S.; Gratton, R.; Piotto, G.; Bonomo, A. S.; Martinez Fiorenzano, A. F.; Giacobbe, P.; Malavolta, L.; Nascimbeni, V.; Rainer, M.; Scandariato, G.

    2017-02-01

    Context. The distribution of exoplanets around low-mass stars is still not well understood. Such stars, however, present an excellent opportunity for reaching down to the rocky and habitable planet domains. The number of current detections used for statistical purposes remains relatively modest and different surveys, using both photometry and precise radial velocities, are searching for planets around M dwarfs. Aims: Our HARPS-N red dwarf exoplanet survey is aimed at the detection of new planets around a sample of 78 selected stars, together with the subsequent characterization of their activity properties. Here we investigate the survey performance and strategy. Methods: From 2700 observed spectra, we compare the radial velocity determinations of the HARPS-N DRS pipeline and the HARPS-TERRA code, calculate the mean activity jitter level, evaluate the planet detection expectations, and address the general question of how to define the strategy of spectroscopic surveys in order to be most efficient in the detection of planets. Results: We find that the HARPS-TERRA radial velocities show less scatter and we calculate a mean activity jitter of 2.3 m s-1 for our sample. For a general radial velocity survey with limited observing time, the number of observations per star is key for the detection efficiency. In the case of an early M-type target sample, we conclude that approximately 50 observations per star with exposure times of 900 s and precisions of approximately 1 m s-1 maximizes the number of planet detections. Based on observations made with the Italian Telescopio Nazionale Galileo (TNG), operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias (IAC).

  14. Nontargeted metabolomic analysis and "commercial-homophyletic" comparison-induced biomarkers verification for the systematic chemical differentiation of five different parts of Panax ginseng.

    PubMed

    Qiu, Shi; Yang, Wen-Zhi; Yao, Chang-Liang; Qiu, Zhi-Dong; Shi, Xiao-Jian; Zhang, Jing-Xian; Hou, Jin-Jun; Wang, Qiu-Rong; Wu, Wan-Ying; Guo, De-An

    2016-07-01

    A key segment in the authentication of herbal medicines is the establishment of robust biomarkers that embody the intrinsic metabolite differences independent of the growing environment or processing techniques. We present a strategy combining nontargeted metabolomics and "commercial-homophyletic" comparison-induced biomarker verification with new bioinformatic vehicles to improve the efficiency and reliability of authentication of herbal medicines. The chemical differentiation of five different parts (root, leaf, flower bud, berry, and seed) of Panax ginseng is illustrated as a case study. First, an optimized ultra-performance liquid chromatography/quadrupole time-of-flight-MS(E) (UPLC/QTOF-MS(E)) approach was established for global metabolite profiling. Second, UNIFI™ combined with a search of an in-house library was employed to automatically characterize the metabolites. Third, pattern recognition multivariate statistical analyses of the MS(E) data of different parts of commercial and homophyletic samples were separately performed to explore potential biomarkers. Fourth, potential biomarkers deduced from commercial and homophyletic root and leaf samples were cross-compared to infer robust biomarkers. Fifth, discriminating models by artificial neural network (ANN) were established to identify different parts of P. ginseng. Consequently, 164 compounds were characterized, and 11 robust biomarkers enabling the differentiation among root, leaf, flower bud, and berry were discovered by removing those that are structurally unstable or possibly processing-related. The ANN models using the robust biomarkers managed to exactly discriminate the four different parts, as well as root adulterated with leaf. In conclusion, biomarker verification using homophyletic samples contributes to the discovery of robust biomarkers. The integrated strategy facilitates authentication of herbal medicines in a more efficient and more intelligent manner. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Energy Efficiency Programs in K-12 Schools: A Guide to Developing and Implementing Greenhouse Gas Reduction Programs. Local Government Climate and Energy Strategy Series

    ERIC Educational Resources Information Center

    US Environmental Protection Agency, 2011

    2011-01-01

    Saving energy through energy efficiency improvements can cost less than generating, transmitting, and distributing energy from power plants, and provides multiple economic and environmental benefits. Local governments can promote energy efficiency in their jurisdictions by developing and implementing strategies that improve the efficiency of…

  16. Optimization of Planet Finder Observing Strategy

    NASA Astrophysics Data System (ADS)

    Sinukoff, E.

    2014-03-01

    We evaluate radial velocity observing strategies to be considered for future planet-hunting surveys with the Automated Planet Finder, a new 2.4-m telescope at Lick Observatory. Observing strategies can be optimized to mitigate stellar noise, which can mask and imitate the weak Doppler signals of low-mass planets. We estimate and compare the sensitivities of 5 different observing strategies to planets around G2-M2 dwarfs, constructing RV noise models for each stellar spectral type that account for acoustic, granulation, and magnetic activity modes. The strategies differ in exposure time, nightly and monthly cadence, and number of years. Synthetic RV time series are produced by injecting a planet signal onto the stellar noise, sampled according to each observing strategy. For each star and each observing strategy, thousands of planet injection-recovery trials are conducted to determine the detection efficiency as a function of orbital period, minimum mass, and eccentricity. We find that 4-year observing strategies of 10 nights per month are sensitive to planets ~25-40% lower in mass than the corresponding 1-year strategies of 30 nights per month. Three 5-minute exposures spaced evenly throughout each night provide a 10% gain in sensitivity over the corresponding single 15-minute exposure strategies. All strategies are sensitive to the lowest-mass planets around the modeled K7 dwarf. This study indicates that APF surveys adopting the 4-year strategies should detect Earth-mass planets on < 10-day orbits around quiet late-K dwarfs as well as > 1.6 Earth-mass planets in their habitable zones.
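
    A single injection-recovery trial reduces to injecting a sinusoid of known period and semi-amplitude into noise, sampling it according to the observing strategy, and testing whether the signal is recovered. The sketch below (numpy assumed) uses a deliberately crude detection criterion, a least-squares amplitude fit at the injected period exceeding five times a rough uncertainty, rather than the paper's pipeline; all settings are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    def trial(n_obs, k_inj, period, jitter=2.3, n_nights=4 * 365):
        """One injection-recovery trial: inject a circular-orbit RV signal,
        sample it on n_obs random nights, and call it detected when the
        fitted semi-amplitude exceeds 5 times its formal uncertainty."""
        t = np.sort(rng.choice(n_nights, size=n_obs, replace=False)).astype(float)
        phase = 2 * np.pi * t / period
        rv = k_inj * np.sin(phase + rng.uniform(0, 2 * np.pi)) + rng.normal(0, jitter, n_obs)
        # Least-squares fit of a sinusoid at the (known) injected period.
        A = np.column_stack([np.sin(phase), np.cos(phase), np.ones(n_obs)])
        coef, *_ = np.linalg.lstsq(A, rv, rcond=None)
        k_fit = np.hypot(coef[0], coef[1])
        sigma = jitter / np.sqrt(n_obs / 2)       # rough amplitude uncertainty
        return k_fit > 5 * sigma

    for n_obs in (20, 50, 100):
        eff = np.mean([trial(n_obs, k_inj=2.0, period=9.3) for _ in range(300)])
        print(f"{n_obs:3d} observations -> detection efficiency {eff:.2f}")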

  17. Tradeoffs between physical captures and PIT tag antenna array detections: A case study for the Lower Colorado River Basin population of humpback chub (Gila cypha)

    USGS Publications Warehouse

    Pearson, Kristen Nicole; Kendall, William L.; Winkelman, Dana L.; Persons, William R.

    2016-01-01

    A key component of many monitoring programs for special status species involves capture and handling of individuals as part of capture-recapture efforts for tracking population health and demography. Minimizing negative impacts from sampling, such as through reduced handling, helps prevent negative impacts of monitoring efforts on the species. Using simulation analyses, we found that long-term population monitoring techniques requiring physical capture (i.e., hoop-net sampling) can be reduced and supplemented with passive detections (i.e., PIT tag antenna array detections) without negatively affecting estimates of adult humpback chub (HBC; Gila cypha) survival (S) and skipped-spawning probabilities (γ″ = a spawner transitions to a skipped spawner; γ′ = a skipped spawner remains a skipped spawner). Based on our findings of the array's in situ detection efficiency (0.42), estimability of such demographic parameters would improve over hoop-netting alone. In addition, the array provides insight into HBC population dynamics and movement patterns outside of traditional sampling periods. However, given the current timing of sampling efforts, spawner abundance estimates were negatively biased when hoop-netting was reduced, suggesting not all spawning HBC are present during the current sampling events. Despite this, our findings demonstrate that PIT tag antenna arrays, even with moderate detectability, may allow for reduced handling of special status species while also offering potentially more efficient monitoring strategies, especially if the ideal timing of sampling can be determined.

  18. Accounting for spatially heterogeneous conditions in local-scale surveillance strategies: case study of the biosecurity insect pest, grape phylloxera (Daktulosphaira vitifoliae (Fitch)).

    PubMed

    Triska, Maggie D; Powell, Kevin S; Collins, Cassandra; Pearce, Inca; Renton, Michael

    2018-04-29

    Surveillance strategies are often standardized and completed on grid patterns to detect pest incursions quickly; however, it may be possible to improve surveillance through more targeted surveillance that accounts for landscape heterogeneity, dispersal and the habitat requirements of the invading organism. We simulated pest spread at a local-scale, using grape phylloxera (Daktulosphaira vitifoliae (Fitch)) as a case study, and assessed the influence of incorporating spatial heterogeneity into surveillance strategies compared to current, standard surveillance strategies. Time to detection, spread within and spread beyond the vineyard were reduced by conducting surveys that target sampling effort in soil that is highly suitable to the invading pest in comparison to standard surveillance strategies. However, these outcomes were dependent on the virulence level of phylloxera as phylloxera is a complex pest with multiple genotypes that influence spread and detectability. Targeting surveillance strategies based on local-scale spatial heterogeneity can decrease the time to detection without increasing the survey cost and surveillance that targets highly suitable soil is the most efficient strategy for detecting new incursions. Additionally, combining targeted surveillance strategies with buffer zones and hygiene procedures, and updating surveillance strategies as additional species information becomes available, will further decrease the risk of pest spread. This article is protected by copyright. All rights reserved.

  19. Soil sampling strategies for site assessments in petroleum-contaminated areas.

    PubMed

    Kim, Geonha; Chowdhury, Saikat; Lin, Yen-Min; Lu, Chih-Jen

    2017-04-01

    Environmental site assessments are frequently executed for monitoring and remediation performance evaluation purposes, especially in total petroleum hydrocarbon (TPH)-contaminated areas, such as gas stations. As a key issue, reproducibility of the assessment results must be ensured, especially if attempts are made to compare results between different institutions. Although it is widely known that uncertainties associated with soil sampling are much higher than those with chemical analyses, field guides or protocols to deal with these uncertainties are not stipulated in detail in the relevant regulations, causing serious errors and distortion of the reliability of environmental site assessments. In this research, uncertainties associated with soil sampling and sample reduction for chemical analysis were quantified using laboratory-scale experiments and the theory of sampling. The research results showed that the TPH mass assessed by sampling tends to be overestimated and sampling errors are high, especially for the low range of TPH concentrations. Homogenization of soil was found to be an efficient method to suppress uncertainty, but high-resolution sampling could be an essential way to minimize this.

  20. Strategies to improve energy efficiency in sewage treatment plants

    NASA Astrophysics Data System (ADS)

    Au, Mau Teng; Pasupuleti, Jagadeesh; Chua, Kok Hua

    2013-06-01

    This paper discusses strategies to improve energy efficiency in sewage treatment plants (STPs). Four types of STP (conventional activated sludge, extended aeration, oxidation ditch, and sequencing batch reactor) are presented, with strategies to reduce energy consumption based on their influent flow. Strategies to reduce energy consumption include the use of energy-saving devices, energy-efficient motors, automation/control, and modification of processes. It is envisaged that 20-30% of energy could be saved through these initiatives.

  1. Improving the existing diagnostic strategy by accounting for characteristics of the women in the diagnostic work up for postmenopausal bleeding.

    PubMed

    Opmeer, B C; van Doorn, H C; Heintz, A P M; Burger, C W; Bossuyt, P M M; Mol, B W J

    2007-01-01

    The aim of this study was to evaluate whether the efficiency of the current diagnostic work-up following postmenopausal bleeding could be improved by diagnostic strategies that take into account characteristics of the women, in addition to the currently recommended transvaginal measurement of endometrial thickness, to determine the need for subsequent histological assessment. Multicenter, prospective cohort study. A university hospital and seven teaching hospitals in the Netherlands. Consecutive women not using hormone replacement therapy, presenting with postmenopausal bleeding. Five hundred and forty women underwent transvaginal sonography, and in case of endometrial thickness (double layer) above 4 mm, subsequent endometrial sampling was performed. Presence of carcinoma was ruled out by the absence of abnormalities in the histological specimen or by an uneventful follow-up of at least 6 months. The probability of endometrial carcinoma was estimated by multivariable logistic regression models. For each diagnostic strategy, we calculated diagnostic accuracy (area under the receiver operating characteristic curve [AUC]), negative predictive value (NPV), and the number of diagnostic procedures. A strategy with transvaginal sonography alone with a fixed threshold incorrectly classified 0.7% of the women as nonmalignant (NPV: 99.3% [98.5-100%]), with 97% sensitivity and 56% specificity. A strategy integrating characteristics of the women with transvaginal sonography could result in fewer false reassurances (NPV: 99.6% [99.2-100%]) with only a marginal decrease in diagnostic procedures, or a minor increase in false reassurances (NPV: 99.0% [98.3-100%]) with a substantial reduction (15-20%) in procedures. The AUC associated with these strategies could improve from 0.76 (0.73-0.79) for transvaginal sonography alone to 0.90 (0.87-0.93) for the integrated strategy. Taking into account the characteristics of the women could increase the efficiency of the diagnostic work-up for postmenopausal bleeding.
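
    The gain from an integrated strategy can be illustrated on synthetic data: fit a multivariable logistic model on endometrial thickness plus patient characteristics, then compare its AUC, and the NPV of a fixed 4 mm rule, against thickness alone. All effect sizes and distributions below are hypothetical and scikit-learn is assumed; this is not the study's model or data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(11)

    # Synthetic cohort (all effect sizes hypothetical): endometrial thickness,
    # age and BMI, with carcinoma risk rising with all three.
    n = 5000
    thick = rng.gamma(3, 2, n)                       # mm, double layer
    age = rng.normal(62, 8, n)
    bmi = rng.normal(27, 4, n)
    logit = -7 + 0.35 * thick + 0.05 * (age - 60) + 0.04 * (bmi - 25)
    cancer = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Strategy 1: thickness-only triage at a fixed 4 mm threshold.
    referred = thick > 4
    npv_fixed = 1 - cancer[~referred].mean()

    # Strategy 2: integrated model using patient characteristics.
    X = np.column_stack([thick, age, bmi])
    model = LogisticRegression(max_iter=1000).fit(X, cancer)
    risk = model.predict_proba(X)[:, 1]

    print(f"NPV, 4 mm rule       : {npv_fixed:.3f}")
    print(f"AUC thickness alone  : {roc_auc_score(cancer, thick):.3f}")
    print(f"AUC integrated model : {roc_auc_score(cancer, risk):.3f}")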

  2. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique for hybrid electric buses (HEBs) to improve fuel economy, and is unique to the corresponding powertrain configuration. Existing control strategies seldom take battery power management into account together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel consumption to constitute the objective function, which minimizes fuel consumption at each sampled time and coordinates the power distribution in real time between the engine and battery. To validate the proposed strategy, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as a controller for hardware-in-the-loop bench testing. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy not only sustains the battery SOC within its operational range and keeps the engine operating point in the peak-efficiency region, but also improves the fuel economy of the series-parallel hybrid electric bus (SPHEB) by up to 30.73% compared with the prototype bus; relative to a rule-based strategy, the PBIO strategy reduces fuel consumption by up to 12.38%. The proposed research shows that the PBIO algorithm is real-time applicable, improves the efficiency of the SPHEB system, and suits complicated configurations well.

  3. Using Decision-Analytic Modeling to Isolate Interventions That Are Feasible, Efficient and Optimal: An Application from the Norwegian Cervical Cancer Screening Program.

    PubMed

    Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø

    2015-12-01

    Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness ratios and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and the uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although a strategy may be efficient and feasible, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before it can be considered optimal. Copyright © 2015. Published by Elsevier Inc.

  4. A new strategy for preparation of hair slurries using cryogenic grinding and water-soluble tertiary-amines medium

    NASA Astrophysics Data System (ADS)

    Kamogawa, Marcos Y.; Nogueira, Ana Rita A.; Costa, Letícia M.; Garcia, Edivaldo E.; Nóbrega, Joaquim A.

    2001-10-01

    The investigation of trace metal contents in hair can be used as an index of exposure to potentially toxic elements. Direct determination of Cd, Cu and Pb in slurries of hair samples was investigated using an atomic absorption spectrometer with Zeeman-effect background correction. The samples were pulverized in a freezer/mill for 13 min, and hair slurries with 1.0 g l-1 for the determination of Cu and Pb, and 5.0 g l-1 for the determination of Cd, respectively, were prepared in three different media: 0.1% v/v Triton X-100, 0.14 mol l-1 HNO3, and 0.1% v/v CFA-C, a mixture of tertiary amines. The easiest way to manipulate the hair samples was in CFA-C medium. The optimum pyrolysis and atomization temperatures were established with hair sample slurries spiked with 10 μg l-1 Cd2+, 30 μg l-1 Pb2+, and 10 μg l-1 Cu2+. For Cd and Pb, Pd was used as a chemical modifier, and for Cu no modifier was needed. The analyte addition technique was used for quantification of Cd, Cu, and Pb in hair sample slurries. A reference material (GBW076901) was analyzed, and a paired t-test showed that the results for all elements obtained with the proposed slurry sampling procedure were in agreement at a 95% confidence level with the certified values. Cryogenic grinding was an effective strategy to efficiently pulverize hair samples.

  5. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
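
    The variogram at the heart of VARS, gamma(h) = 0.5 E[(y(x + h e_i) - y(x))^2], can be estimated directly from cross sections through randomly placed centres, loosely echoing the star-based sampling idea. A minimal numpy sketch on a toy two-factor response surface (all sampling settings illustrative):

    import numpy as np

    rng = np.random.default_rng(13)

    def model(x):
        """Toy response surface with factors of very different 'roughness'."""
        return np.sin(6 * x[..., 0]) + 0.3 * x[..., 1]

    def directional_variogram(model, dim, hs, n_centres=50, resolution=0.02, d=2):
        """Estimate gamma(h) = 0.5 * E[(y(x + h e_dim) - y(x))^2] from cross
        sections through randomly placed centres."""
        gammas = []
        for h in hs:
            sq = []
            for _ in range(n_centres):
                centre = rng.random(d)
                grid = np.arange(0.0, 1.0 - h + 1e-9, resolution)
                a = np.tile(centre, (len(grid), 1))
                b = a.copy()
                a[:, dim] = grid
                b[:, dim] = grid + h
                sq.append(np.mean((model(b) - model(a)) ** 2))
            gammas.append(0.5 * float(np.mean(sq)))
        return gammas

    hs = [0.05, 0.1, 0.3, 0.5]
    for dim in (0, 1):
        print(f"factor {dim}: gamma(h) =",
              [f"{g:.3f}" for g in directional_variogram(model, dim, hs)])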

  6. Application of Molecular Dynamics Simulations in Molecular Property Prediction II: Diffusion Coefficient

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested for the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 ×10-5 cm2 s-1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R2 = 0.784), 4 proteins in aqueous solutions (R2 = 0.996), and 9 organic compounds in non-aqueous solutions (R2 = 0.834). The temperature-dependent behaviors of three solvents, namely TIP3P water, dimethyl sulfoxide (DMSO), and cyclohexane, have been studied. The major MD settings, such as the sizes of the simulation boxes and whether the coordinates of MD snapshots are wrapped into the primary simulation boxes, have been explored. We conclude that our sampling strategy of averaging the mean square displacement (MSD) collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. PMID:21953689
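
    The sampling strategy rests on the Einstein relation, MSD(t) = 6 D t in three dimensions, with the MSD averaged over several short runs. A Brownian-dynamics stand-in (numpy assumed; the input D and run lengths are arbitrary) shows the estimator recovering the input coefficient:

    import numpy as np

    rng = np.random.default_rng(21)

    def msd(traj):
        """Mean square displacement from the trajectory origin at each frame."""
        disp = traj - traj[0]
        return np.sum(disp ** 2, axis=1)

    # Brownian-dynamics stand-in for several short MD runs (D_true arbitrary).
    D_true, dt, n_steps, n_runs = 2.0e-5, 1.0, 2000, 10
    sigma = np.sqrt(2 * D_true * dt)
    msds = []
    for _ in range(n_runs):
        steps = rng.normal(0, sigma, size=(n_steps, 3))
        msds.append(msd(np.cumsum(steps, axis=0)))
    avg_msd = np.mean(msds, axis=0)          # average MSD over the short runs

    # Einstein relation in 3-D: MSD(t) = 6 D t; fit the slope over the run.
    t = dt * np.arange(1, n_steps + 1)
    D_est = np.polyfit(t, avg_msd, 1)[0] / 6
    print(f"D_true = {D_true:.2e}, D_est = {D_est:.2e}")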

  7. Evaluating performance of stormwater sampling approaches using a dynamic watershed model.

    PubMed

    Ackerman, Drew; Stein, Eric D; Ritter, Kerry J

    2011-09-01

    Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach to evaluating the accuracy of different stormwater monitoring methodologies, using stormflows and constituent concentrations produced by a fully validated continuous-simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity, and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median (MAD) of the relative error (precision), and (3) the percentage of storms where sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods both in terms of accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings. The most efficient method for routine stormwater monitoring, in terms of a balance between performance and cost, was volume-paced microsampling with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
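
    A "virtual sampling" of one storm is straightforward to reproduce: define a hydrograph Q(t) and pollutograph C(t), take the true EMC as the flow-weighted mean of C, and compare composites assembled at fixed time steps versus fixed volume increments. The storm shapes and pacings below are hypothetical, not Ballona Creek model output:

    import numpy as np

    # Simulated storm: flow hydrograph and a first-flush pollutograph.
    t = np.arange(0, 360)                       # minutes
    Q = np.exp(-0.5 * ((t - 90) / 45) ** 2)     # flow peaks at 90 min
    C = 100 * np.exp(-t / 60) + 10              # concentration highest early

    true_emc = np.sum(C * Q) / np.sum(Q)        # flow-weighted mean concentration

    # Time-paced composite: equal-volume aliquots at fixed time intervals.
    idx_time = np.arange(0, len(t), 30)
    emc_time = C[idx_time].mean()

    # Volume-paced composite: aliquots after every fixed increment of volume.
    cumvol = np.cumsum(Q)
    pace = cumvol[-1] / 12
    idx_vol = np.clip(np.searchsorted(cumvol, pace * np.arange(1, 13)), 0, len(t) - 1)
    emc_vol = C[idx_vol].mean()

    print(f"true EMC        : {true_emc:6.1f}")
    print(f"time-paced EMC  : {emc_time:6.1f}")
    print(f"volume-paced EMC: {emc_vol:6.1f}")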

  8. Photocatalytic Performance of a Novel MOF/BiFeO₃ Composite.

    PubMed

    Si, Yunhui; Li, Yayun; Zou, Jizhao; Xiong, Xinbo; Zeng, Xierong; Zhou, Ji

    2017-10-10

    In this study, a MOF/BiFeO₃ composite (MOF, metal-organic framework) has been synthesized successfully through a one-pot hydrothermal method. The MOF/BiFeO₃ composite samples, pure MOF samples, and BiFeO₃ samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), and UV-vis spectrophotometry. The results and analysis reveal that the MOF/BiFeO₃ composite has better photocatalytic behavior toward methylene blue (MB) than pure MOF and pure BiFeO₃. The enhanced photocatalytic performance is likely due to the introduction of the MOF changing the surface morphology of BiFeO₃, which increases the contact area with MB. This composing strategy for the MOF/BiFeO₃ composite may bring new insight into the design of highly efficient photocatalysts.

  9. Detection of genomic loci associated with environmental variables using generalized linear mixed models.

    PubMed

    Lobréaux, Stéphane; Melodelima, Christelle

    2015-02-01

    We tested the use of Generalized Linear Mixed Models to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize GLMM capacity to detect the association between markers and selective coefficients as environmental data in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we apply these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.
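
    A minimal version of the association test, assuming pandas and statsmodels: regress the environmental variable on genotype with a random intercept per population, so that population structure is absorbed by the random effect rather than inflating the genotype coefficient. All simulation parameters below are hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)

    # Synthetic data: 20 populations with their own baseline temperatures,
    # plus a locus with a true effect on the trait (effect sizes hypothetical).
    pops = np.repeat(np.arange(20), 30)
    pop_effect = rng.normal(0, 2, 20)[pops]
    genotype = rng.binomial(2, 0.3, len(pops))          # 0/1/2 allele counts
    temperature = 10 + pop_effect + 0.8 * genotype + rng.normal(0, 1, len(pops))

    df = pd.DataFrame({"temp": temperature, "geno": genotype, "pop": pops})

    # Mixed model: fixed genotype effect, random intercept per population.
    fit = smf.mixedlm("temp ~ geno", df, groups=df["pop"]).fit()
    print(fit.summary())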

  10. Improved efficient routing strategy on two-layer complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Han, Weizhan; Guo, Qing; Zhang, Shuai; Wang, Junfang; Wang, Zhihao

    2016-10-01

    The traffic dynamics of multi-layer networks has become a hot research topic since many networks are comprised of two or more layers of subnetworks. Due to its low traffic capacity, the traditional shortest path routing (SPR) protocol is susceptible to congestion on two-layer complex networks. In this paper, we propose an efficient routing strategy named improved global awareness routing (IGAR), which is based on the betweenness centrality of nodes in the two layers. With the proposed strategy, routing paths can bypass hub nodes of both layers to enhance transport efficiency. Simulation results show that the IGAR strategy offers much higher traffic capacity than the SPR and global awareness routing (GAR) strategies. Because of the significantly improved traffic performance, this study is helpful for alleviating congestion on two-layer complex networks.
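
    The hub-avoiding idea behind GAR/IGAR-style routing can be sketched with networkx (assumed available): weight each edge by the betweenness centrality of its head node so that weighted shortest paths detour around the nodes that congest first. This single-layer toy omits the two-layer coupling that IGAR exploits, and alpha and the test graph are illustrative:

    import networkx as nx

    # Directed random test network (size and wiring probability hypothetical).
    G = nx.gnp_random_graph(100, 0.06, seed=4, directed=True)
    bc = nx.betweenness_centrality(G)

    # Weight each edge by the betweenness of its head node raised to alpha,
    # so weighted shortest paths detour around the hubs that congest first.
    alpha = 1.0
    for u, v in G.edges():
        G[u][v]["w"] = 1 + bc[v] ** alpha

    def route(s, t, aware):
        return nx.shortest_path(G, s, t, weight="w" if aware else None)

    # Pick endpoints inside the largest strongly connected component so a path exists.
    scc = max(nx.strongly_connected_components(G), key=len)
    s, t = min(scc), max(scc)

    spr_path = route(s, t, aware=False)
    igar_like = route(s, t, aware=True)
    print("SPR path         :", spr_path)
    print("hub-avoiding path:", igar_like)
    print("max betweenness on path:",
          max(bc[n] for n in spr_path), "vs", max(bc[n] for n in igar_like))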

  11. Quantitative learning strategies based on word networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng

    2018-02-01

    Learning English requires considerable effort, but the way vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimization of the learning process will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies for English learning. In this paper, quantitative English learning strategies based on word networks and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, the learning weights for the unlearned words and the dynamic updating of the network are studied and analyzed. The results suggest that quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and segmented strategies outperform other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.

  12. Progress of OLED devices with high efficiency at high luminance

    NASA Astrophysics Data System (ADS)

    Nguyen, Carmen; Ingram, Grayson; Lu, Zhenghong

    2014-03-01

    Organic light emitting diodes (OLEDs) have progressed significantly over the last two decades. For years, OLEDs have been promoted as the next generation technology for flat panel displays and solid-state lighting due to their potential for high energy efficiency and dynamic range of colors. Although high efficiency can readily be obtained at low brightness levels, a significant decline at high brightness is commonly observed. In this report, we will review various strategies for achieving highly efficient phosphorescent OLED devices at high luminance. Specifically, we will provide details regarding the performance and general working principles behind each strategy. We will conclude by looking at how some of these strategies can be combined to produce high efficiency white OLEDs at high brightness.

  13. Fuel Efficient Strategies for Reducing Contrail Formations in United States Air Space

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chen, Neil Y.; Ng, Hok K.

    2010-01-01

    This paper describes a class of strategies for reducing persistent contrail formation in the United States airspace. The primary objective is to minimize potential contrail formation regions by altering the aircraft's cruising altitude in a fuel-efficient way. The results show that the contrail formations can be reduced significantly without extra fuel consumption and without adversely affecting congestion in the airspace. The contrail formations can be further reduced by using extra fuel. For the day tested, the maximal reduction strategy has a 53% contrail reduction rate. The most fuel-efficient strategy has an 8% reduction rate with 2.86% less fuel-burnt compared to the maximal reduction strategy. Using a cost function which penalizes extra fuel consumed while maximizing the amount of contrail reduction provides a flexible way to trade off between contrail reduction and fuel consumption. It can achieve a 35% contrail reduction rate with only 0.23% extra fuel consumption. The proposed fuel-efficient contrail reduction strategy provides a solution to reduce aviation-induced environmental impact on a daily basis.
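
    The trade-off can be written as a one-line objective, J = extra_fuel + alpha * contrail_minutes, evaluated over candidate cruise levels; sweeping alpha moves the chosen altitude from the fuel optimum toward contrail-free levels. A plain-Python sketch with hypothetical per-level numbers:

    # Hypothetical per-altitude costs for one flight segment: extra fuel burn
    # relative to the optimal cruise level, and minutes spent in regions where
    # persistent contrails can form.
    levels = {
        31000: {"extra_fuel": 120.0, "contrail_min": 0.0},
        33000: {"extra_fuel": 60.0,  "contrail_min": 4.0},
        35000: {"extra_fuel": 0.0,   "contrail_min": 18.0},
        37000: {"extra_fuel": 40.0,  "contrail_min": 7.0},
    }

    def best_level(alpha):
        """Pick the altitude minimizing J = extra_fuel + alpha * contrail_minutes;
        alpha trades fuel (kg) against contrail formation (minutes)."""
        return min(levels, key=lambda ft: levels[ft]["extra_fuel"]
                   + alpha * levels[ft]["contrail_min"])

    for alpha in (0.0, 2.0, 10.0, 100.0):
        ft = best_level(alpha)
        print(f"alpha={alpha:6.1f} -> FL{ft // 100}, "
              f"fuel+{levels[ft]['extra_fuel']:.0f} kg, "
              f"contrails {levels[ft]['contrail_min']:.0f} min")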

  14. [The effects of instruction about strategies for efficient calculation].

    PubMed

    Suzuki, Masayuki; Ichikawa, Shin'ichi

    2016-06-01

    Calculation problems such as "12×7÷3" can be solved rapidly and easily by using certain techniques; we call these "efficient calculation problems." However, it has been pointed out that many students do not always solve them efficiently. In the present study, we examined the effects of an intervention on 35 seventh-grade students (23 males, 12 females). The students were instructed to use an overview strategy that stated, "Think carefully about the whole expression," and were then taught three sub-strategies. The results showed that students solved similar problems efficiently after the intervention, and the effects were preserved for five months.

  15. Case studies: the impact of nonanalyte components on LC-MS/MS-based bioanalysis: strategies for identifying and overcoming matrix effects.

    PubMed

    Li, Fumin; Ewles, Matthew; Pelzer, Mary; Brus, Theodore; Ledvina, Aaron; Gray, Nicholas; Koupaei-Abyazani, Mohammad; Blackburn, Michael

    2013-10-01

    Achieving sufficient selectivity in bioanalysis is critical to ensure accurate quantitation of drugs and metabolites in biological matrices. Matrix effects most classically refer to modification of ionization efficiency of an analyte in the presence of matrix components. However, nonanalyte or matrix components present in samples can adversely impact the performance of a bioanalytical method and are broadly considered as matrix effects. For the current manuscript, we expand the scope to include matrix elements that contribute to isobaric interference and measurement bias. These three categories of matrix effects are illustrated with real examples encountered. The causes, symptoms, and suggested strategies and resolutions for each form of matrix effects are discussed. Each case is presented in the format of situation/action/result to facilitate reading.

  16. A combination strategy for extraction and isolation of multi-component natural products by systematic two-phase solvent extraction-13C nuclear magnetic resonance pattern recognition and following conical counter-current chromatography separation: Podophyllotoxins and flavonoids from Dysosma versipellis (Hance) as examples.

    PubMed

    Yang, Zhi; Wu, Youqian; Wu, Shihua

    2016-01-29

    Despite substantial developments in extraction and separation techniques, isolation of natural products from natural resources is still a challenging task. In this work, an efficient strategy for the extraction and isolation of multi-component natural products has been developed by combining systematic two-phase liquid-liquid extraction with 13C NMR pattern recognition and subsequent conical counter-current chromatography (CCC) separation. A small-scale crude sample was first distributed into 9 systematic hexane-ethyl acetate-methanol-water (HEMWat) two-phase solvent systems to determine the optimum extraction solvents and the partition coefficients of the prominent components. Then, the optimized solvent systems were used in succession to enrich the hydrophilic and lipophilic components from the large-scale crude sample. Finally, the enriched component samples were further purified by a new conical CCC. Because of the use of 13C NMR pattern recognition, the kinds and structures of the major components in the solvent extracts could be predicted; the method could therefore collect both the partition coefficients and the structural information of the components in the selected two-phase solvents. As an example, a cytotoxic extract of podophyllotoxins and flavonoids from Dysosma versipellis (Hance) was selected. After the systematic HEMWat solvent extraction and 13C NMR pattern recognition analyses, the crude extract of D. versipellis was first degreased with the upper phase of the HEMWat system (9:1:9:1, v/v), and then distributed in the two phases of the HEMWat system (2:8:2:8, v/v) to obtain a hydrophilic lower-phase extract and a lipophilic upper-phase extract, respectively. These extracts were further separated by conical CCC with the HEMWat systems (1:9:1:9 and 4:6:4:6, v/v). As a result, a total of 17 cytotoxic compounds were isolated and identified. Overall, the results suggest that the strategy is efficient for the systematic extraction and isolation of biologically active components from complex biomaterials. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. The efficiency of asset management strategies to reduce urban flood risk.

    PubMed

    ten Veldhuis, J A E; Clemens, F H L R

    2011-01-01

    In this study, three asset management strategies were compared with respect to their efficiency in reducing flood risk. Data from call centres at two municipalities were used to quantify urban flood risks associated with three causes of urban flooding: gully pot blockage, sewer pipe blockage, and sewer overloading. The efficiency of the three flood reduction strategies was assessed based on their effect on the causes contributing to flood risk. The sensitivity of the results to uncertainty in the data source, citizens' calls, was analysed by incorporating uncertainty ranges taken from the customer complaint literature. Based on the available data it could be shown that preventing gully pot blockage is the most efficient action to reduce flood risk, given data uncertainty. If differences between cause incidences are large, as in the presented case study, call data are sufficient to decide how flood risk can be most efficiently reduced. According to the results of this analysis, enlargement of sewer pipes is not an efficient strategy to reduce flood risk, because the flood risk associated with sewer overloading is small compared with that of the other failure mechanisms.

  18. Effect of postweaning handling strategies on welfare and productive traits in lambs.

    PubMed

    Pascual-Alonso, María; Miranda-de la Lama, Genaro C; Aguayo-Ulloa, Lorena; Ezquerro, Laura; Villarroel, Morris; Marín, Raúl H; Maria, Gustavo A

    2015-01-01

    Postweaning management strategies that include an element of social enrichment may reduce weaning stress and improve welfare and productive performance. We analyzed the effect of postweaning handling strategies on welfare and production traits in lambs. After weaning, 36 lambs were assigned to 3 experimental groups of 12 lambs each (control [C], fattening with gentle human female contact [H], and fattening with 2 adult ewes [E]). The average daily gain (ADG) was estimated. Blood samples were taken, and infrared thermography was used to estimate stress variables. There were significant differences among treatments (in favor of the alternative strategies) regarding production and stress variables (cortisol, glucose, and creatine kinase). The results suggest that the lambs handled gently during fattening were less reactive and better able to modulate their physiological stress. The E group adapted better to acute stress than the C group but was less efficient in modulating chronic stress. Both treatments showed higher slaughter live weights and better ADGs than the control. The use of social enrichment at weaning, especially to establish a positive human-animal bond, alleviates lamb weaning stress and improves welfare and performance.

  19. Discontinuous pH gradient-mediated separation of TiO2-enriched phosphopeptides

    PubMed Central

    Park, Sung-Soo; Maudsley, Stuart

    2010-01-01

    Global profiling of phosphoproteomes has proven a great challenge due to the relatively low stoichiometry of protein phosphorylation and poor ionization efficiency in mass spectrometers. Effective, physiologically relevant phosphoproteome research relies on efficient phosphopeptide enrichment from complex samples. Immobilized metal affinity chromatography and titanium dioxide chromatography (TOC) can greatly assist selective phosphopeptide enrichment. However, the complexity of the resultant enriched samples is often still high, suggesting that further separation of the enriched phosphopeptides is required. We have developed a pH-gradient elution technique for enhanced phosphopeptide identification in conjunction with TOC and demonstrated its superiority to traditional ‘one-pot’ strategies for differential protein identification. Our technique generated a highly specific separation of phosphopeptides by an applied pH gradient between 9.2 and 11.3. The most efficient elution range for high-resolution phosphopeptide separation was between pH 9.2 and 9.4. High-resolution separation of multiply-phosphorylated peptides was primarily achieved using elution ranges > pH 9.4. Investigation of the phosphopeptide sequences identified in each pH fraction indicated that phosphopeptides with phosphorylated residues proximal to acidic residues, including glutamic acid, aspartic acid, and other phosphorylated residues, were preferentially eluted at higher pH values. PMID:20946866

  20. Folic Acid Targeting for Efficient Isolation and Detection of Ovarian Cancer CTCs from Human Whole Blood Based on Two-Step Binding Strategy.

    PubMed

    Nie, Liju; Li, Fulai; Huang, Xiaolin; Aguilar, Zoraida P; Wang, Yongqiang Andrew; Xiong, Yonghua; Fu, Fen; Xu, Hengyi

    2018-04-25

    Studies regarding circulating tumor cells (CTCs) have great significance for cancer prognosis, treatment monitoring, and metastasis diagnosis. However, due to their extremely low concentration in peripheral blood, isolation and enrichment of CTCs are the key steps for early detection. To this end, a system targeting the folic acid receptors (FRs) on the CTC surface for capture with folic acid (FA), using a bovine serum albumin (BSA) tether for multibiotin enhancement in combination with streptavidin-coated magnetic nanoparticles (MNPs-SA), was developed for ovarian cancer CTC isolation. The streptavidin-biotin-system-mediated two-step binding strategy was shown to capture CTCs from whole blood efficiently without the need for a pretreatment process. The optimized parameters for this system exhibited an average capture efficiency of 80%, which was 25% higher than that of FA-decorated magnetic nanoparticles based on the one-step CTC separation method. Moreover, the isolated cells remained highly viable and were cultured directly without detachment from the MNPs-SA-biotin-CTC complex. Furthermore, when the system was applied to the isolation and detection of CTCs in ovarian cancer patients' peripheral blood samples, it exhibited an 80% correlation with clinical diagnostic criteria. The results indicate that FA targeting, in combination with BSA-based multibiotin-enhancement magnetic nanoparticle separation, is a promising tool for CTC enrichment and detection of early-stage ovarian cancer.

  1. Centrifugo-Magnetophoretic Purification of CD4+ Cells from Whole Blood Toward Future HIV/AIDS Point-of-Care Applications.

    PubMed

    Glynn, Macdara; Kirby, Daniel; Chung, Danielle; Kinahan, David J; Kijanka, Gregor; Ducrée, Jens

    2014-06-01

    In medical diagnostics, detection of cells exhibiting specific phenotypes constitutes a paramount challenge. Detection technology must ensure efficient isolation of (often rare) targets while eliminating nontarget background cells. Technologies exist for such investigations, but many require high levels of expertise, expense, and multistep protocols. Increasing the automation, miniaturization, and availability of such technologies is an aim of microfluidic lab-on-a-chip strategies. To this end, we present an integrated, dual-force cellular separation strategy using centrifugo-magnetophoresis. Whole blood spiked with target cells is incubated with (super-)paramagnetic microparticles that specifically bind phenotypic markers on the target cells. Under rotation, all cells sediment into a chamber located opposite a co-rotating magnet. Unbound cells follow the radial vector, but under the additional attraction of the lateral magnetic field, bead-bound target cells are deflected to a designated reservoir. This multiforce separation is continuous and low loss. We demonstrate separation efficiencies of up to 92% for cells expressing the HIV/AIDS-relevant epitope (CD4) from whole blood. Such highly selective separation systems may be deployed for accurate diagnostic cell isolation from biological samples such as blood. Furthermore, this high efficiency is delivered in a cheap and simple device, making it an attractive option for future deployment in resource-limited settings. © 2013 Society for Laboratory Automation and Screening.

  2. An assessment of thermodynamic merits for current and potential future engine operating strategies

    DOE PAGES

    Wissink, Martin L.; Splitter, Derek A.; Dempsey, Adam B.; ...

    2017-02-01

    The present work compares the fundamental thermodynamic underpinnings (i.e., working fluid properties and heat release profile) of various combustion strategies with engine measurements. The approach employs a model that separately tracks the impacts on efficiency due to differences in rate of heat addition, volume change, mass addition, and molecular weight change for a given combination of working fluid, heat release profile, and engine geometry. Comparative analysis between measured and modeled efficiencies illustrates fundamental sources of efficiency reductions or opportunities inherent to various combustion regimes. Engine operating regimes chosen for analysis include stoichiometric spark-ignited combustion and lean compression-ignited combustion including HCCI, SA-HCCI, RCCI, GCI, and CDC. Within each combustion regime, effects such as engine load, combustion duration, combustion phasing, combustion chamber geometry, fuel properties, and charge dilution are explored. Model findings illustrate that even in the absence of losses such as heat transfer or incomplete combustion, the maximum possible thermal efficiency inherent to each operating strategy varies to a significant degree. Additionally, the experimentally measured losses are observed to be unique within a given operating strategy. The findings highlight the fact that in order to create a roadmap for future directions in ICE technologies, it is important to not only compare the absolute real-world efficiency of a given combustion strategy, but to also examine the measured efficiency in context of what is thermodynamically possible with the working fluid and boundary conditions prescribed by a strategy.

  4. Enhanced genetic analysis of single human bioparticles recovered by simplified micromanipulation from forensic 'touch DNA' evidence.

    PubMed

    Farash, Katherine; Hanson, Erin K; Ballantyne, Jack

    2015-03-09

    DNA profiles can be obtained from 'touch DNA' evidence, which comprises microscopic traces of human biological material. Current methods for the recovery of trace DNA employ cotton swabs or adhesive tape to sample an area of interest. However, such a 'blind-swabbing' approach will co-sample cellular material from different individuals, even if the individuals' cells are located in geographically distinct locations on the item. Thus, some of the DNA mixtures encountered in touch DNA samples are artificially created by the swabbing itself. In some instances, a victim's DNA may be found in significant excess, masking any potential perpetrator's DNA. To circumvent the challenges of standard recovery and analysis methods, we have developed a lower-cost, 'smart analysis' method that results in enhanced genetic analysis of touch DNA evidence. We describe an optimized and efficient micromanipulation recovery strategy for the collection of bioparticles present in touch DNA samples, as well as an enhanced amplification strategy involving a one-step 5 µl microvolume lysis/STR amplification to permit the recovery of STR profiles from the bioparticle donor(s). The use of individual bioparticles, or of a few at a time (i.e., "clumps"), makes it possible to obtain single-source profiles. These procedures represent alternative enhanced techniques for the isolation and analysis of single bioparticles from forensic touch DNA evidence. While not necessary in every forensic investigation, the method could be highly beneficial for the recovery of a single-source perpetrator DNA profile in cases involving physical assault (e.g., strangulation) where this may not be possible using standard analysis techniques. Additionally, the strategies developed here offer an opportunity to obtain genetic information at the single-cell level from a variety of other, non-forensic trace biological materials.

  5. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results, whether in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
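
    Since the abstract's C++ supplement is not reproduced in this record, the sketch below illustrates the underlying checkpoint-and-recompute idea under simplifying assumptions: uniform square-root checkpoint spacing rather than the paper's optimal schedule, and plain edit distance standing in for alignment recurrences.

    ```python
    import math

    def dp_block(a, b, j_start, start_col, width):
        """Recompute edit-distance DP columns j_start..j_start+width,
        given start_col = column j_start. Returns every column in the block."""
        n = len(a)
        cols = [start_col[:]]
        for j in range(j_start + 1, j_start + width + 1):
            prev, cur = cols[-1], [j] + [0] * n
            for i in range(1, n + 1):
                sub = prev[i - 1] + (a[i - 1] != b[j - 1])
                cur[i] = min(prev[i] + 1, cur[i - 1] + 1, sub)
            cols.append(cur)
        return cols

    def checkpointed_backtrace(a, b):
        """Reduced-memory backtrace: keep ~sqrt(m) checkpoint columns in the
        forward pass, then recompute one block at a time while walking back."""
        n, m = len(a), len(b)
        k = max(1, math.isqrt(m))                  # uniform spacing (simplification)
        ckpt = {0: list(range(n + 1))}
        for j0 in range(0, m, k):
            width = min(k, m - j0)
            ckpt[j0 + width] = dp_block(a, b, j0, ckpt[j0], width)[-1]
        ops, i, j = [], n, m
        while j > 0:
            j0 = (j - 1) // k * k
            cols = dp_block(a, b, j0, ckpt[j0], j - j0)   # block recomputation
            while j > j0:
                cur, prev = cols[j - j0], cols[j - j0 - 1]
                if i > 0 and cur[i] == prev[i - 1] + (a[i - 1] != b[j - 1]):
                    ops.append('sub' if a[i - 1] != b[j - 1] else 'match'); i -= 1; j -= 1
                elif cur[i] == prev[i] + 1:
                    ops.append('ins'); j -= 1      # insert b[j-1]
                else:
                    ops.append('del'); i -= 1      # delete a[i-1]
        ops.extend(['del'] * i)                    # remaining moves in column 0
        return ckpt[m][n], ops[::-1]

    print(checkpointed_backtrace("kitten", "sitting"))  # distance 3 plus edit path
    ```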

  6. An optimal routing strategy on scale-free networks

    NASA Astrophysics Data System (ADS)

    Yang, Yibo; Zhao, Honglin; Ma, Jinlong; Qi, Zhaohui; Zhao, Yongbin

    Traffic is one of the most fundamental dynamical processes in networked systems. With the traditional shortest-path routing (SPR) protocol, traffic congestion is likely to occur at the hub nodes of scale-free networks. In this paper, we propose an improved optimal routing (IOR) strategy based on the betweenness centrality and the degree centrality of nodes in scale-free networks. With the proposed strategy, routing paths can accurately bypass hub nodes in the network to enhance transport efficiency. Simulation results show that the traffic capacity, as well as other indexes reflecting transportation efficiency, is further improved with the IOR strategy. Owing to the significantly improved traffic performance, this study should be helpful for designing more efficient routing strategies in communication or transportation systems.
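
    The record does not give the exact IOR cost function, so the following sketch (using the networkx library) shows only the general flavor: weight each node by an assumed mix of betweenness and normalized degree, push those costs onto edges, and route along cheapest paths so traffic tends to bypass hubs. The cost formula and the alpha/beta knobs are hypothetical.

    ```python
    import networkx as nx

    def centrality_weighted_paths(G, alpha=1.0, beta=1.0):
        """Route along paths that avoid high-betweenness/high-degree nodes.
        The node cost below is an illustrative assumption, not the paper's."""
        bc = nx.betweenness_centrality(G)
        n = G.number_of_nodes()
        cost = {v: 1.0 + alpha * bc[v] + beta * G.degree(v) / n for v in G}
        for u, v in G.edges():
            G[u][v]['w'] = 0.5 * (cost[u] + cost[v])  # edge cost = mean endpoint cost
        return dict(nx.all_pairs_dijkstra_path(G, weight='w'))

    G = nx.barabasi_albert_graph(200, 2, seed=1)      # scale-free test network
    paths = centrality_weighted_paths(G)
    print('%d hops from node 0 to node 199' % (len(paths[0][199]) - 1))
    ```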

  7. Efficient immunization strategies to prevent financial contagion

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi; Hasui, Kohei

    2014-01-01

    Many immunization strategies have been proposed to prevent infectious viruses from spreading through a network. In this work, we study efficient immunization strategies to prevent a default contagion that might occur in a financial network. An essential difference from previous studies on immunization strategies is that we take into account the possibility of serious side effects. Uniform immunization refers to a situation in which banks are "vaccinated" with a common low-risk asset. The riskiness of immunized banks will decrease significantly, but the level of systemic risk may increase due to the de-diversification effect. To overcome this side effect, we propose another immunization strategy, called counteractive immunization, which prevents pairs of banks from failing simultaneously. We find that counteractive immunization can efficiently reduce systemic risk without altering the riskiness of individual banks.

  8. Soil sampling strategies: evaluation of different approaches.

    PubMed

    de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-11-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  9. Method Evaluations for Adsorption Free Energy Calculations at the Solid/Water Interface through Metadynamics, Umbrella Sampling, and Jarzynski's Equality.

    PubMed

    Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning

    2018-03-19

    Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated for computing the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which makes it a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method is able to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle for discriminating between methodological and sampling issues, based on the interfacial characteristics of the system under investigation, is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Review of in situ derivatization techniques for enhanced bioanalysis using liquid chromatography with mass spectrometry.

    PubMed

    Baghdady, Yehia Z; Schug, Kevin A

    2016-01-01

    Accurate and specific analysis of target molecules in complex biological matrices remains a significant challenge, especially when ultra-trace detection limits are required. Liquid chromatography with mass spectrometry is often the method of choice for bioanalysis. Conventional sample preparation and clean-up methods prior to the analysis of biological fluids, such as liquid-liquid extraction, solid-phase extraction, or protein precipitation, are time-consuming and tedious and can negatively affect target recovery and detection sensitivity. An alternative or complementary strategy is the use of an off-line or on-line in situ derivatization technique. In situ derivatization can be used to directly derivatize target analytes in their native biological matrices, without any prior sample clean-up, to substitute for or even enhance the extraction and preconcentration efficiency of traditional sample preparation methods. Designed appropriately, it can reduce the number of sample preparation steps necessary prior to analysis. Moreover, in situ derivatization can be used to enhance the performance of liquid chromatography with mass spectrometry-based bioanalysis methods with regard to stability, chromatographic separation, selectivity, and ionization efficiency. This review presents an overview of the commonly used in situ derivatization techniques coupled to liquid chromatography with mass spectrometry-based bioanalysis, to guide and stimulate future research. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Application of immobilized synthetic anti-lipopolysaccharide peptides for the isolation and detection of bacteria.

    PubMed

    Sandetskaya, N; Engelmann, B; Brandenburg, K; Kuhlmeier, D

    2015-08-01

    The molecular detection of microorganisms in liquid samples generally requires their enrichment or isolation. The aim of our study was to evaluate the capture and pre-concentration of bacteria by a particular class of immobilized cationic antimicrobial peptides, the synthetic anti-lipopolysaccharide peptides (SALP). For the proof of concept and the screening of different SALP, the peptides were covalently immobilized on glass slides, and the binding of bacteria was confirmed by microscopic examination of the slides or, in the case of fluorescent bacterial cells, by scanning them. The most efficient SALP was then tethered to magnetic beads. The SALP beads were used for the magnetic capture of Escherichia coli in liquid samples. The efficiency of this strategy was evaluated using the polymerase chain reaction (PCR). Covalently immobilized SALP were capable of capturing bacteria in liquid samples. However, PCR was hampered by the unspecific binding of DNA to the positively charged peptide. We developed a method for DNA recovery by enzymatic digestion of the peptide, which allowed for a successful PCR, though this method had its own adverse impact on detection and thus did not allow for a reliable quantitative analysis of pathogen enrichment. Immobilized SALP can be used as capture molecules for bacteria in liquid samples and can be recommended for the design of assays or the decontamination of fluids. For accurate subsequent detection of bacteria, DNA-independent methods should be used.

  12. Release modeling and comparison of nanoarchaeosomal, nanoliposomal and pegylated nanoliposomal carriers for paclitaxel.

    PubMed

    Movahedi, Fatemeh; Ebrahimi Shahmabadi, Hasan; Alavi, Seyed Ebrahim; Koohi Moftakhari Esfahani, Maedeh

    2014-09-01

    Breast cancer is the most prevalent cancer among women. Recently, delivery by nanocarriers has led to a remarkable evolution in the treatment of numerous cancers. Lipid nanocarriers are an important class, with liposomes and archaeosomes being common examples. In this work, paclitaxel was formulated and characterized in nanoliposomal and nanoarchaeosomal form to improve efficiency. To increase stability, efficiency, and solubility, polyethylene glycol 2000 (PEG 2000) was added to some samples. An MTT assay confirmed the effectiveness of the nanocarriers on the MCF-7 cell line, and size measurements validated the nanoscale of the particles. The nanoarchaeosomal carrier demonstrated the highest encapsulation efficiency and the lowest release rate. The pegylated nanoliposomal carrier showed higher loading efficiency and less release than the plain nanoliposomal carrier, which confirms the effect of PEG on stability and efficiency. Additionally, the release pattern was modeled using an artificial neural network (ANN) and a genetic algorithm (GA). ANN modeling of release prediction resulted in R values of 0.976, 0.989, and 0.999 for nanoliposomal, pegylated nanoliposomal, and nanoarchaeosomal paclitaxel, and GA modeling led to values of 0.954, 0.951, and 0.976, respectively. ANN modeling was thus more successful than the GA strategy in predicting release.
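
    As a toy illustration of ANN-based release modeling (the paper's formulations, inputs, and network architecture are not reproduced in this record), the sketch below fits a small scikit-learn network to a synthetic first-order release profile; all data and hyperparameters are invented for the example.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0, 48, 40).reshape(-1, 1)            # time (h)
    release = 100 * (1 - np.exp(-0.08 * t)).ravel()      # synthetic cumulative release (%)
    release += rng.normal(0, 1.5, release.shape)         # measurement noise

    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    ann.fit(t, release)
    r = np.corrcoef(ann.predict(t), release)[0, 1]       # R value, as reported in the abstract
    print('R = %.3f' % r)
    ```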

  13. Isotopic variants of light and heavy L-pyroglutamic acid succinimidyl esters as the derivatization reagents for DL-amino acid chiral metabolomics identification by liquid chromatography and electrospray ionization mass spectrometry.

    PubMed

    Mochizuki, Toshiki; Todoroki, Kenichiro; Inoue, Koichi; Min, Jun Zhe; Toyo'oka, Toshimasa

    2014-02-06

    L-Pyroglutamic acid succinimidyl ester (L-PGA-OSu) and its isotopic variant (L-PGA[d5]-OSu) were newly synthesized and evaluated as chiral labeling reagents for the enantioseparation of amino acids, in terms of separation efficiency by reversed-phase chromatography and detection sensitivity by ESI-MS/MS. The enantiomers of amino acids were easily labeled with the reagents at 60°C within 10 min in an alkaline medium containing triethylamine. Although not all of the diastereomers derived from 18 proteolytic amino acids could be satisfactorily separated, the pairs of 9 amino acids were completely separated by reversed-phase chromatography using a small-particle (1.7 μm) ODS column (Rs=1.95-8.05). The characteristic daughter ions, i.e., m/z 84.04 and m/z 89.04, were detected from all the derivatives by collision-induced dissociation of the protonated molecular ions. Highly sensitive detection at the low-fmol level (0.5-3.2 fmol) was also obtained from the selected reaction monitoring (SRM) chromatograms. An isotope labeling strategy using light and heavy L-PGA-OSu for the differential analysis of DL-amino acids in different sample groups is also presented. Differential analyses of a biological sample (human serum) and a food product (yogurt) were performed to demonstrate the efficiency of the proposed method. The ratios of the DL-amino acids in human serum samples, spiked with different concentrations of D-amino acids, were determined by the procedures using L-PGA-OSu and L-PGA[d5]-OSu. The D/L ratios in the two sample groups at different concentrations of amino acids were similar to the theoretical values. Furthermore, the D/L-alanine ratios in different yogurt products were comparable to the ratios obtained using only the light reagent (i.e., L-PGA-OSu). Consequently, the proposed strategy is useful for differential analysis not only of biological samples but also of food products. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Pool, Sandra; Viviroli, Daniel; Seibert, Jan

    2017-11-01

    Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements when taken at strategic points in time for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, made the assumption that these catchments were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select the 100 best parameter sets using a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration. The hydrographs were best simulated with strategies including high runoff magnitudes, whereas the flow-duration curves were generally better estimated with strategies that captured low and mean flows. The choice of a sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among such strategies covering the full range of runoff magnitudes were small, indicating that the exact choice of a strategy might be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
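
    The study's selection logic can be sketched as follows, with a one-parameter linear reservoir standing in for HBV (which is not reproduced here): draw Monte Carlo parameter sets, score each against twelve simulated "measurements", keep the 100 best, and validate with the Nash-Sutcliffe efficiency. All numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def toy_runoff(k, rain):
        """Linear reservoir: storage drains at rate k per time step."""
        s, q = 0.0, []
        for p in rain:
            s += p
            out = k * s
            s -= out
            q.append(out)
        return np.array(q)

    rain = rng.gamma(0.6, 4.0, 365)
    q_true = toy_runoff(0.3, rain)
    obs_idx = np.sort(rng.choice(365, 12, replace=False))   # 12 strategic sampling times

    candidates = rng.uniform(0.05, 0.9, 2000)               # Monte Carlo parameter draws
    errors = [np.abs(toy_runoff(k, rain)[obs_idx] - q_true[obs_idx]).mean()
              for k in candidates]
    best = candidates[np.argsort(errors)[:100]]             # 100 'informed' parameter sets

    # Validate with the Nash-Sutcliffe efficiency of the full hydrograph.
    sim = np.mean([toy_runoff(k, rain) for k in best], axis=0)
    nse = 1 - np.sum((sim - q_true) ** 2) / np.sum((q_true - q_true.mean()) ** 2)
    print('NSE of ensemble-mean simulation: %.3f' % nse)
    ```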

  15. Mass Customization in Schools: Strategies Dutch Secondary Schools Pursue to Cope with the Diversity-Efficiency Dilemma

    ERIC Educational Resources Information Center

    Waslander, Sietske

    2007-01-01

    Faced with the diversity-efficiency dilemma, private companies apply "mass customization" strategies to add diversity without adding costs. As schools are urged to become more "customer oriented" they also face a diversity-efficiency dilemma. This article asks how Dutch secondary schools cope with this dilemma and to what…

  16. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    PubMed

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  17. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. However, in some cases the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results show that this strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
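
    A generic RANSAC skeleton conveys the approach: fit a model to a minimal random sample, count inliers within a tolerance, keep the best consensus set, then refit on all of its inliers. Line fitting stands in here for the DLT resection model (which requires six or more 3D-2D correspondences and a linear solve); the data and thresholds are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ransac(x, y, n_min, n_iter, tol):
        """Keep the model (here a line) with the largest consensus set."""
        best_inliers = np.zeros(len(x), bool)
        for _ in range(n_iter):
            idx = rng.choice(len(x), n_min, replace=False)
            coef = np.polyfit(x[idx], y[idx], 1)            # minimal-sample fit
            inliers = np.abs(np.polyval(coef, x) - y) < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return np.polyfit(x[best_inliers], y[best_inliers], 1), best_inliers

    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 200)
    y[:40] += rng.uniform(-20, 20, 40)                      # gross errors, as in the abstract
    coef, inliers = ransac(x, y, n_min=2, n_iter=200, tol=1.0)
    print('slope %.2f, intercept %.2f, %d inliers' % (coef[0], coef[1], inliers.sum()))
    ```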

  18. Global copy number profiling of cancer genomes

    Cancer.gov

    In this article, we introduce a robust and efficient strategy for deriving global and allele-specific copy number alterations (CNA) from cancer whole exome sequencing data based on Log R ratios and B-allele frequencies. Applying the approach to the analysis of over 200 skin cancer samples, we demonstrate its utility for discovering distinct CNA events and for deriving ancillary information such as tumor purity. Availability and implementation: https://github.com/xfwang/CLOSE. Contact: xuefeng.wang@stonybrook.edu or michael.krauthammer@yale.edu.
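
    For orientation, the two signals named in the abstract can be sketched from allele read counts at germline-heterozygous sites in matched tumor/normal exomes; the median normalization and the simulated counts below are illustrative assumptions, not the tool's implementation.

    ```python
    import numpy as np

    def lrr_baf(tumor_ref, tumor_alt, normal_ref, normal_alt):
        """Log R ratio tracks total copy number; B-allele frequency tracks
        allelic balance. Normalization by the median ratio is an assumption."""
        t_depth = tumor_ref + tumor_alt
        n_depth = normal_ref + normal_alt
        ratio = t_depth / n_depth
        lrr = np.log2(ratio) - np.log2(np.median(ratio))
        baf = tumor_alt / t_depth
        return lrr, baf

    rng = np.random.default_rng(1)
    nref, nalt = rng.poisson(50, 1000), rng.poisson(50, 1000)   # balanced normal
    tref, talt = rng.poisson(75, 1000), rng.poisson(25, 1000)   # e.g. one allele lost
    lrr, baf = lrr_baf(tref, talt, nref, nalt)
    print('median LRR %.2f, median BAF %.2f' % (np.median(lrr), np.median(baf)))
    ```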

  19. Partial least squares density modeling (PLS-DM) - a new class-modeling strategy applied to the authentication of olives in brine by near-infrared spectroscopy.

    PubMed

    Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia

    2014-12-03

    A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. A potential-function probability density is subsequently calculated on the PLS scores and used, jointly with the residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a good balance between sensitivity and specificity, compared with the values obtained by well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
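
    A bare-bones sketch of the PLS-DM idea, under stated simplifications: use a k-nearest-neighbor density as the PLS response, build a kernel (potential-function) density on the PLS scores of the target class, and accept new samples whose score-space density clears a quantile threshold. The Q-residual check and the Pareto tuning are omitted, and the "spectra" are simulated.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    X_target = rng.normal(0.0, 1, (120, 50))        # stand-in spectra of the class
    X_alien = rng.normal(0.8, 1, (40, 50))          # non-target samples

    def knn_density(X, k=5):
        """Distance-based density: inverse mean distance to k nearest neighbors."""
        d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
        return 1.0 / np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)

    pls = PLSRegression(n_components=3).fit(X_target, knn_density(X_target))
    kde = KernelDensity(bandwidth=0.5).fit(pls.transform(X_target))

    threshold = np.quantile(kde.score_samples(pls.transform(X_target)), 0.05)
    accepted = kde.score_samples(pls.transform(X_alien)) > threshold
    print('aliens accepted: %d / %d' % (accepted.sum(), len(accepted)))
    ```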

  20. Synthesis and application of in-situ molecularly imprinted silica monolithic in pipette-tip solid-phase microextraction for the separation and determination of gallic acid in orange juice samples.

    PubMed

    Arabi, Maryam; Ghaedi, Mehrorang; Ostovan, Abbas

    2017-03-24

    A novel strategy was presented for the synthesis and application of a functionalized silica monolith as an artificial receptor for gallic acid in a micropipette tip. A sol-gel process was used to prepare the sorbent. In this in-situ polymerization reaction, tetraethyl orthosilicate (TEOS), 3-aminopropyl trimethoxysilane (APTMS), gallic acid, and thiourea were used, respectively, as cross-linker, functional monomer, template, and precursor to produce a crack-free and non-fragile structure. This durable and inexpensive in-situ monolith was successfully employed as a tool for the highly efficient extraction of gallic acid from orange juice samples. The parameters affecting extraction recovery were investigated, and optimum conditions were obtained using experimental design methodology. Applying HPLC-UV for separation and quantification under optimal conditions, gallic acid was efficiently extracted without significant matrix interference. Good linearity for gallic acid in the range of 0.02-5.0 mg L(-1), with correlation coefficients of R(2) > 0.999, demonstrated the applicability of the method for trace analysis. Copyright © 2017. Published by Elsevier B.V.

  1. Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.

    PubMed

    Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R

    2014-01-01

    After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0 to 60% and were significantly higher from nonporous surfaces than from porous surfaces. In conclusion, this study presents a swabbing procedure evaluation and a simple BSA ELISA based on commercial components, both of which are easy to perform in a laboratory with basic facilities. The data indicate that different swabbing procedures were optimal for each of the tested swab types, and that the preferred swab depends on the surface material to be swabbed.

  2. Ligand Fishing: A Remarkable Strategy for Discovering Bioactive Compounds from Complex Mixture of Natural Products.

    PubMed

    Zhuo, Rongjie; Liu, Hao; Liu, Ningning; Wang, Yi

    2016-11-11

    Identification of active compounds from natural products is a critical and challenging task in drug discovery pipelines. Besides commonly used bio-guided screening approaches, affinity selection strategies coupled with liquid chromatography or mass spectrometry, known as ligand fishing, have been gaining increasing interest from researchers. In this review, we summarize this emerging strategy and categorize the methods as off-line or on-line according to their features. The separation principles of ligand fishing are introduced based on distinct analytical techniques, including biochromatography, capillary electrophoresis, ultrafiltration, equilibrium dialysis, microdialysis, and magnetic beads. The applications of ligand fishing approaches in the discovery of lead compounds are reviewed. Most ligand fishing methods display specificity and high efficiency and require little sample pretreatment, which makes them especially suitable for screening active compounds from complex mixtures of natural products. We also summarize the applications of ligand fishing in the modernization of Traditional Chinese Medicine (TCM) and propose some perspectives on this remarkable technique.

  3. Heterogeneous delivering capability promotes traffic efficiency in complex networks

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Bo; Guan, Xiang-Min; Zhang, Xue-Jun

    2015-12-01

    Traffic is one of the most fundamental dynamical processes in networked systems. With a homogeneous delivery capability of nodes, the global dynamic routing strategy proposed by Ling et al. [Phys. Rev. E 81, 016113 (2010)] adequately uses the dynamic information during the delivery process and can thus reach a quite high network capacity. In this paper, building on the global dynamic routing strategy, we propose a heterogeneous delivery-capability allocation strategy for nodes on scale-free networks that takes node degree into account. It is found that the network capacity, as well as other indexes reflecting transportation efficiency, is further improved. Our work may be useful for the design of more efficient routing strategies in communication or transportation systems.

  4. Ecodriving in hybrid electric vehicles--Exploring challenges for user-energy interaction.

    PubMed

    Franke, Thomas; Arend, Matthias Georg; McIlroy, Rich C; Stanton, Neville A

    2016-07-01

    Hybrid electric vehicles (HEVs) can help to reduce transport emissions; however, user behaviour has a significant effect on the energy savings actually achieved in everyday usage. The present research aimed to advance understanding of HEV drivers' ecodriving strategies, and the challenges for optimal user-energy interaction. We conducted interviews with 39 HEV drivers who achieved above-average fuel efficiencies. Regression analyses showed that technical system knowledge and ecodriving motivation were both important predictors for ecodriving efficiency. Qualitative data analyses showed that drivers used a plethora of ecodriving strategies and had diverse conceptualisations of HEV energy efficiency regarding aspects such as the efficiency of actively utilizing electric energy or the efficiency of different acceleration strategies. Drivers also reported several false beliefs regarding HEV energy efficiency that could impair ecodriving efforts. Results indicate that ecodriving support systems should facilitate anticipatory driving and help users locate and maintain drivetrain states of maximum efficiency. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. [High-sensitive detection of multiple allergenic proteins in infant food with high-resolution mass spectrometry].

    PubMed

    Wu, Ci; Chen, Xi; Liu, Jianhui; Zhang, Xiaolin; Xue, Weifeng; Liang, Zhen; Liu, Mengyao; Cui, Yan; Huang, Daliang; Zhang, Lihua

    2017-10-08

    A novel method for the simultaneous detection of multiple kinds of allergenic proteins in infant food, operating in parallel reaction monitoring (PRM) mode with liquid chromatography-tandem mass spectrometry (LC-MS/MS), was established. In this method, unique peptides with good stability and high sensitivity were used to quantify the corresponding allergenic proteins, so that multiple allergenic proteins could be monitored simultaneously with high sensitivity. The method was successfully applied to the detection of multiple allergenic proteins in infant food. Regarding sample preparation, compared with the traditional acetone precipitation strategy, both the protein extraction efficiency and the resistance to interference were higher with the in-situ filter-aided sample pretreatment (i-FASP) method. All allergenic proteins gave a good linear response with correlation coefficients (R(2)) ≥ 0.99, the concentration range spanned up to four orders of magnitude, and the lowest detection limit was 0.028 mg/L, which is better than previously reported values. Finally, the method was conveniently used to detect allergens in four real samples of imported infant food. These results demonstrate that this novel strategy provides a rapid and reliable analytical technique for allergen proteomics.

  6. Effective clinical education: strategies for teaching medical students and residents in the office.

    PubMed

    Cayley, William E

    2011-08-01

    Educating medical students and residents in the office presents the challenges of providing quality medical care, maintaining efficiency, and incorporating meaningful education for learners. Numerous teaching strategies to address these challenges have been described in the medical education literature, but only a few have been evaluated for their impact on education and office practice. Literature on the impact of office-based teaching strategies on educational outcomes and on office efficiency was selected from a PubMed search, from review of references in retrieved articles, and from the author's personal files. Two teaching strategies, the "one-minute preceptor" (OMP) and "SNAPPS," have been shown to improve educational processes and outcomes. Two additional strategies, "Aunt Minnie" pattern recognition and "activated demonstration," show promise but have not been fully evaluated. None of these strategies has been shown to improve office efficiency. OMP and SNAPPS are strategies that can be used in office precepting to improve educational processes and outcomes, while pattern recognition and activated demonstration show promise but need further assessment. Additional areas of research are also suggested.

  7. Biophysics of Euglena phototaxis

    NASA Astrophysics Data System (ADS)

    Tsang, Alan Cheng Hou; Riedel-Kruse, Ingmar H.

    Phototactic microorganisms usually respond to light stimuli via phototaxis to optimize photosynthesis and to avoid photodamage from excessive light. A unicellular phototactic microorganism such as Euglena gracilis possesses only a single photoreceptor, which greatly limits its access to light in a three-dimensional world. Nevertheless, experiments have demonstrated that Euglena responds to light stimuli sensitively and exhibits phototaxis quickly, and it is not well understood how it performs so efficiently. We propose a mathematical model of Euglena phototaxis that couples the dynamics of the cell and its phototactic response. The model shows that Euglena follows a wobbling path under weak ambient light, consistent with experimental observations. We show that this wobbling motion can enhance the sensitivity of the photoreceptor to signals of small light intensity and provides an efficient mechanism for Euglena to sample light in different directions. We further investigate the optimization of Euglena's phototaxis using different performance metrics, including reorientation time, energy consumption, and swimming efficiency. We characterize the tradeoffs among these performance metrics and the best strategy for phototaxis.

  8. Adaptive sparse grid approach for the efficient simulation of pulsed eddy current testing inspections

    NASA Astrophysics Data System (ADS)

    Miorelli, Roberto; Reboud, Christophe

    2018-04-01

    Pulsed eddy current testing (PECT) is a popular nondestructive testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry or rivet inspection in aeronautics. Its particularity is the use of a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful for optimizing experimental sensors and devices or evaluating their performance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and an adaptive sparse grid (ASG) algorithm. The ECT signal spectrum is sampled adaptively with this algorithm, the complete spectrum is then interpolated from this sparse representation, and the PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented, and the performance of the strategy is discussed by comparison with reference results.
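
    The synthesis step can be sketched in a few lines: evaluate a harmonic response on a sparse frequency subset, interpolate the full spectrum, multiply by the excitation spectrum, and inverse-FFT to the transient PECT signal. A single-pole analytic response stands in for a real eddy-current solver, and the geometric frequency grid replaces the actual sparse-grid adaptation.

    ```python
    import numpy as np

    fs, n = 1.0e6, 4096                       # sampling rate (Hz), record length
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    def solver_response(f):                   # stand-in for a time-harmonic solver
        return 1.0 / (1.0 + 1j * f / 5e4)     # single-pole low-pass response

    sparse_f = np.geomspace(1.0, fs / 2, 40)  # 'adaptive' subset (geometric here)
    h_sparse = solver_response(sparse_f)
    h_full = (np.interp(freqs, sparse_f, h_sparse.real)
              + 1j * np.interp(freqs, sparse_f, h_sparse.imag))

    excitation = np.zeros(n)
    excitation[:n // 8] = 1.0                 # rectangular pulse excitation
    spectrum = np.fft.rfft(excitation) * h_full
    pect_signal = np.fft.irfft(spectrum, n)   # transient signal via inverse FFT
    print('peak transient response: %.3f' % pect_signal.max())
    ```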

  9. Highly Efficient Computation of the Basal kon using Direct Simulation of Protein-Protein Association with Flexible Molecular Models.

    PubMed

    Saglam, Ali S; Chong, Lillian T

    2016-01-14

    An essential baseline for determining the extent to which electrostatic interactions enhance the kinetics of protein-protein association is the "basal" kon, the rate constant for association in the absence of electrostatic interactions. However, since such association events lie beyond the millisecond timescale, it has not been practical to compute the basal kon by directly simulating association with flexible models. Here, we computed the basal kon for barnase and barstar, two of the most rapidly associating proteins, using highly efficient, flexible molecular simulations. These simulations involved (a) pseudoatomic protein models that reproduce the molecular shapes, electrostatic properties, and diffusion properties of all-atom models, and (b) application of the weighted ensemble path sampling strategy, which enhanced the efficiency of generating association events by >130-fold. We also examined the extent to which the computed basal kon is affected by the inclusion of intermolecular hydrodynamic interactions in the simulations.
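
    A minimal weighted-ensemble resampling loop, with one-dimensional diffusion standing in for the coarse-grained protein dynamics, shows the bookkeeping involved: walkers carry statistical weights, get binned along a progress coordinate, and are split or merged so every occupied bin keeps a fixed walker count while total weight is conserved. Bin edges, walker counts, and the dynamics are all toy assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    edges = np.linspace(0.0, 1.0, 11)       # bins along the progress coordinate
    target = 4                              # walkers kept per occupied bin

    def resample(xs, ws):
        """Split/merge walkers so each occupied bin holds 'target' walkers,
        each carrying an equal share of that bin's total weight."""
        new_x, new_w = [], []
        for b in range(len(edges) - 1):
            idx = np.where((xs >= edges[b]) & (xs < edges[b + 1]))[0]
            if len(idx) == 0:
                continue
            w_bin = ws[idx].sum()
            keep = rng.choice(idx, target, p=ws[idx] / w_bin)
            new_x.extend(xs[keep])
            new_w.extend([w_bin / target] * target)
        return np.array(new_x), np.array(new_w)

    xs = np.zeros(16)
    ws = np.full(16, 1.0 / 16)
    for step in range(200):
        xs = np.clip(xs + rng.normal(0, 0.05, len(xs)), 0.0, 0.999)  # toy dynamics
        xs, ws = resample(xs, ws)
    print('weight near the target state: %.4f (total %.4f)'
          % (ws[xs > 0.9].sum(), ws.sum()))
    ```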

  10. Establishing an efficient way to utilize the drought resistance germplasm population in wheat.

    PubMed

    Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin

    2013-01-01

    Drought resistance breeding provides a promising way to improve the yield and quality of wheat in arid and semiarid regions, and constructing a core collection is an efficient way to evaluate and utilize drought-resistant wheat germplasm resources. In the present research, 1,683 wheat varieties were divided into five germplasm groups (high resistant, HR; resistant, R; moderate resistant, MR; susceptible, S; and high susceptible, HS). The least distance stepwise sampling (LDSS) method was adopted to select core accessions. Six commonly used genetic distances (Euclidean distance, Euclid; standardized Euclidean distance, Seuclid; Mahalanobis distance, Mahal; Manhattan distance, Manhat; cosine distance, Cosine; and correlation distance, Correlation) were used to assess genetic distances among accessions. The unweighted pair-group average (UPGMA) method was used to perform hierarchical cluster analysis. The coincidence rate of range (CR) and the variable rate of the coefficient of variation (VR) were adopted to evaluate the representativeness of the core collection. A method for selecting the ideal construction strategy is suggested, and a wheat core collection for drought resistance breeding programs was constructed using the selected strategy. Principal component analysis showed that genetic diversity was well preserved in the core collection.
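
    LDSS itself is not reproduced in this record, so the sketch below uses a common simplification of the same pipeline: UPGMA (average-linkage) clustering on trait data with SciPy, then the accession nearest each cluster centroid becomes a core entry. The trait matrix, cluster count, and the 'one per cluster' rule are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(0)
    traits = rng.normal(size=(300, 8))                        # 300 accessions, 8 traits

    Z = linkage(traits, method='average', metric='euclidean') # UPGMA clustering
    labels = fcluster(Z, t=30, criterion='maxclust')          # 30-entry core

    core = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = traits[members].mean(axis=0)
        nearest = np.argmin(np.linalg.norm(traits[members] - centroid, axis=1))
        core.append(int(members[nearest]))                    # representative accession
    print('core accessions:', sorted(core)[:10], 'and so on')
    ```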

  11. Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.

    PubMed

    Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco

    2018-06-07

    Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation for enhancing the recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published regarding their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review paper critically analyzes ATPS scale-up strategies to enhance their potential industrial adoption. In particular, large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity, and economic modeling to predict large-scale costs are discussed. Intensifying ATPS to increase the amount of sample processed per system, developing recycling strategies, and creating highly efficient predictive models are still areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity, increasing the possibilities for future industrial adoption of ATPS. This review attempts to present the areas of opportunity for increasing the attractiveness of ATPS at industrial levels. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Functionalized ZnO nanowires for microcantilever biosensors with enhanced binding capability.

    PubMed

    Stassi, Stefano; Chiadò, Alessandro; Cauda, Valentina; Palmara, Gianluca; Canavese, Giancarlo; Laurenti, Marco; Ricciardi, Carlo

    2017-04-01

    An efficient way to increase the binding capability of microcantilever biosensors is demonstrated here by growing zinc oxide nanowires (ZnO NWs) on their active surface. A comprehensive evaluation of the chemical compatibility of ZnO NWs led to the definition of an innovative functionalization method able to guarantee the proper immobilization of biomolecules on the nanostructured surface. A markedly higher amount of grafted molecules was evidenced by colorimetric assays on ZnO NW-coated devices, in comparison with functionalized and activated flat silicon samples. ZnO NWs grown on silicon microcantilever arrays and activated with the proposed immobilization strategy enhanced the sensor binding capability (and thus the dynamic range) by nearly 1 order of magnitude with respect to commonly employed flat functionalized silicon devices. Graphical Abstract: An efficient way to increase the binding capability of microcantilever biosensors is to grow zinc oxide nanowires (ZnO NWs) on the active surface; arrays activated with the innovative immobilization strategy enhanced the sensor binding capability by nearly 1 order of magnitude with respect to flat functionalized silicon devices.

  13. Missing observations in multiyear rotation sampling designs

    NASA Technical Reports Server (NTRS)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for an entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year design as parent.

  14. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    NASA Astrophysics Data System (ADS)

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A. G.; Sellergren, Börje; Reubsaet, Léon

    2017-03-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented that provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination, using progastrin-releasing peptide (ProGRP), a highly sensitive biomarker for small cell lung cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization, and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled the analysis of patient serum samples with elevated ProGRP levels. The automated extraction permitted particularly low sample volumes within a time-efficient method, demonstrating the potential of such a strategy in a clinical setting.

  15. Arbiters of Effectiveness and Efficiency: The Frames and Strategies of Management Consulting Firms in US Higher Education Reform

    ERIC Educational Resources Information Center

    McClure, Kevin R.

    2017-01-01

    A growing number of public colleges and universities in the United States have hired management consulting firms to help develop strategies aimed at increasing institutional effectiveness and efficiency. The purpose of this paper is to explore the frames and strategies of consultants in US public higher education reform efforts. Drawing upon a…

  16. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.

    PubMed

    Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas

    2010-06-25

    Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of such immunoaffinity-based approaches is the availability of the appropriate peptide-specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
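
    Since exact solvers are practical only for small protein lists, proteome-scale lists fall back on heuristics; the sketch below shows the standard greedy set-cover heuristic for this step. The input format, function name, and example epitopes are illustrative assumptions, not the paper's implementation.

        def greedy_epitope_cover(proteins_to_epitopes):
            """Greedy approximation to the minimum terminal-epitope set cover.

            proteins_to_epitopes: dict mapping each target protein to the set
            of (already filtered) terminal epitopes that would capture it.
            Returns a list of epitopes covering every coverable protein.
            """
            # Invert the mapping: epitope -> proteins it covers.
            epitope_cover = {}
            for prot, epitopes in proteins_to_epitopes.items():
                for ep in epitopes:
                    epitope_cover.setdefault(ep, set()).add(prot)

            uncovered = set(proteins_to_epitopes)
            selected = []
            while uncovered and epitope_cover:
                # Pick the epitope covering the most still-uncovered proteins.
                best = max(epitope_cover,
                           key=lambda ep: len(epitope_cover[ep] & uncovered))
                gain = epitope_cover.pop(best) & uncovered
                if not gain:
                    break  # remaining proteins have no capturing epitope
                selected.append(best)
                uncovered -= gain
            return selected

        # Example: three target proteins, three candidate terminal epitopes.
        targets = {"P1": {"SSR", "KLE"}, "P2": {"SSR"}, "P3": {"KLE", "GGA"}}
        print(greedy_epitope_cover(targets))  # two epitopes suffice here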

  17. Efficient Monte Carlo Estimation of the Expected Value of Sample Information Using Moment Matching.

    PubMed

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2018-02-01

    The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators, and it accurately estimates the EVSI. The computational time for this method is competitive with other methods. We have developed a new calculation method for the EVSI which is computationally efficient and accurate. This novel method relies on some additional simulation, so it can be expensive in models with a large computational cost.
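
    To make the moment-matching idea concrete, the sketch below works through a toy conjugate normal model in which the incremental net benefit equals the parameter itself and the posterior variance is available in closed form. The model and all numbers are invented for illustration and are far simpler than a real health economic model, where the expected posterior variance would be estimated from a small number of nested Bayesian updates.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy model: incremental net benefit theta ~ N(mu0, tau0^2); a future
        # study would observe n measurements X_i ~ N(theta, sigma^2).
        mu0, tau0, sigma, n = 1000.0, 2000.0, 5000.0, 50
        S = 100_000
        theta = rng.normal(mu0, tau0, S)  # reused PSA samples

        # Expected posterior variance of theta given the future data
        # (closed-form here; estimated by nested simulation in general).
        post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)

        # Moment matching: rescale the PSA samples so their variance matches
        # the preposterior-mean variance, Var(prior) - E[posterior variance].
        var_pre = theta.var() - post_var
        mu_X = theta.mean() + (theta - theta.mean()) * np.sqrt(var_pre / theta.var())

        # EVSI for a two-option decision where INB = theta.
        evsi = np.mean(np.maximum(mu_X, 0.0)) - max(theta.mean(), 0.0)
        print(f"EVSI estimate: {evsi:.1f}")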

  18. Financial evaluation of different vaccination strategies for controlling the bluetongue virus serotype 8 epidemic in The Netherlands in 2008.

    PubMed

    Velthuis, Annet G J; Mourits, Monique C M; Saatkamp, Helmut W; de Koeijer, Aline A; Elbers, Armin R W

    2011-05-04

    Bluetongue (BT) is a vector-borne disease of ruminants caused by bluetongue virus that is transmitted by biting midges (Culicoides spp.). In 2006, the introduction of BTV serotype 8 (BTV-8) caused a severe epidemic in Western and Central Europe. The principal effective veterinary measure in response to BT was believed to be vaccination accompanied by other measures such as movement restrictions and surveillance. As the number of vaccine doses available at the start of the vaccination campaign was rather uncertain, the Dutch Ministry of Agriculture, Nature and Food Quality and the Dutch agricultural industry wanted to evaluate several different vaccination strategies. This study aimed to rank eight vaccination strategies based on their efficiency (i.e. net costs in relation to prevented losses or benefits) for controlling the bluetongue virus serotype 8 epidemic in 2008. An economic model was developed that included the Dutch professional cattle, sheep and goat sectors together with the hobby farms. Strategies were evaluated based on the least cost - highest benefit frontier, the benefit-cost ratio and the total net returns. Strategy F, in which all adult sheep at professional farms in The Netherlands would be vaccinated, was very efficient at the lowest cost, whereas strategy D, in which all adult cattle in the four northern provinces would be vaccinated in addition to all adult sheep at professional farms, was also very efficient but at a slightly higher cost. Strategy C, in which all adult sheep and cattle at professional farms in the whole of The Netherlands would be vaccinated, was also efficient but again at a higher cost. This study demonstrates that a financial analysis differentiates between vaccination strategies and indicates important decision rules based on efficiency.

  19. Financial Evaluation of Different Vaccination Strategies for Controlling the Bluetongue Virus Serotype 8 Epidemic in the Netherlands in 2008

    PubMed Central

    Velthuis, Annet G. J.; Mourits, Monique C. M.; Saatkamp, Helmut W.; de Koeijer, Aline A.; Elbers, Armin R. W.

    2011-01-01

    Background Bluetongue (BT) is a vector-borne disease of ruminants caused by bluetongue virus that is transmitted by biting midges (Culicoides spp.). In 2006, the introduction of BTV serotype 8 (BTV-8) caused a severe epidemic in Western and Central Europe. The principal effective veterinary measure in response to BT was believed to be vaccination accompanied by other measures such as movement restrictions and surveillance. As the number of vaccine doses available at the start of the vaccination campaign was rather uncertain, the Dutch Ministry of Agriculture, Nature and Food Quality and the Dutch agricultural industry wanted to evaluate several different vaccination strategies. This study aimed to rank eight vaccination strategies based on their efficiency (i.e. net costs in relation to prevented losses or benefits) for controlling the bluetongue virus serotype 8 epidemic in 2008. Methodology/Principal Findings An economic model was developed that included the Dutch professional cattle, sheep and goat sectors together with the hobby farms. Strategies were evaluated based on the least cost - highest benefit frontier, the benefit-cost ratio and the total net returns. Strategy F, in which all adult sheep at professional farms in the Netherlands would be vaccinated, was very efficient at the lowest cost, whereas strategy D, in which all adult cattle in the four northern provinces would be vaccinated in addition to all adult sheep at professional farms, was also very efficient but at a slightly higher cost. Strategy C, in which all adult sheep and cattle at professional farms in the whole of the Netherlands would be vaccinated, was also efficient but again at a higher cost. Conclusions/Significance This study demonstrates that a financial analysis differentiates between vaccination strategies and indicates important decision rules based on efficiency. PMID:21573195

  20. SSPARR: Development of an efficient autonomous sampling strategy

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.

    2013-12-01

    The Seafloor Sounding in Polar and Remote Regions (SSPARR) effort was launched in 2004 with funding from the US National Science Foundation (Anderson et al., 2005). Experiments with a prototype were encouraging (Greenspan et al., 2012; Chayes et al., 2012) and we are proceeding toward building and testing units for deployment during the 2014 season in ice-covered parts of the Arctic Ocean. The simplest operational mode for a SSPARR buoy will be to wake and sample on a fixed time interval. A slightly more complex mode will check the distance traveled since the previous sounding and potentially return to sleep mode if it has not traveled far enough to make a significant new measurement. We are developing a mode that will use a sampling strategy based on querying an on-board copy of the best available digital terrain model (DTM), e.g., IBCAO in the Arctic, to help decide whether it is appropriate to turn on the echo sounder and make a new measurement. We anticipate that a robust strategy of this type will allow a buoy to operate substantially longer on a fixed battery size. Anderson, R., D. Chayes, et al. (2005). "Seafloor Soundings in Polar and Remote Regions - A new instrument for unattended bathymetric observations," Eos Trans. AGU 86(18): Abstract C43A-10. Greenspan, D., D. Porter, et al. (2012). "IBuoy: Expendable Echo Sounder Buoy with Satellite Telemetry." EOS Fall Meeting Supplement C13E-0660. Chayes, D. N., S. A. Goemmer, et al. (2012). "SSPARR-3: A cost-effective autonomous drifting echosounder." EOS Fall Meeting Supplement C13E-0659.
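
    A rough sketch of the DTM-query wake/sample logic described above is given below; the thresholds, coordinate format, and dtm_depth lookup are hypothetical placeholders (in the Arctic, the lookup would query an on-board IBCAO grid), not SSPARR firmware.

        import math

        MIN_DRIFT_M = 500.0        # skip sounding if the buoy has barely moved
        MIN_DEPTH_CHANGE_M = 10.0  # skip if the DTM predicts no new information

        def great_circle_m(p1, p2, r=6_371_000.0):
            (lat1, lon1), (lat2, lon2) = p1, p2
            a, b = math.radians(lat1), math.radians(lat2)
            dl = math.radians(lon2 - lon1)
            c = math.sin(a) * math.sin(b) + math.cos(a) * math.cos(b) * math.cos(dl)
            return r * math.acos(max(-1.0, min(1.0, c)))

        def should_sound(fix, last_fix, dtm_depth):
            """fix/last_fix: (lat, lon); dtm_depth: callable returning depth in m."""
            if great_circle_m(fix, last_fix) < MIN_DRIFT_M:
                return False  # sleep: not a significantly new position
            # Wake the echo sounder only where the best available terrain
            # model suggests the depth differs from the previous sounding site.
            return abs(dtm_depth(*fix) - dtm_depth(*last_fix)) >= MIN_DEPTH_CHANGE_M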

  1. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research.

    PubMed

    Palinkas, Lawrence A; Horwitz, Sarah M; Green, Carla A; Wisdom, Jennifer P; Duan, Naihua; Hoagwood, Kimberly

    2015-09-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

  2. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fit all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction, using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
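
    For linear smoothers such as ridge regression, to which the GBLUP marker model is closely related, the classic identity e_loo_i = e_i / (1 - h_ii) yields all n leave-one-out residuals from a single fit. The sketch below illustrates that identity under this ridge-regression assumption; it is not the authors' exact algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p, lam = 1_000, 100, 10.0  # observations, markers, shrinkage parameter
        X = rng.normal(size=(n, p))
        y = X @ rng.normal(size=p) * 0.1 + rng.normal(size=n)

        # One fit: hat matrix H = X (X'X + lam*I)^-1 X' maps y to fitted values.
        H = X @ np.linalg.inv(X.T @ X + lam * np.eye(p)) @ X.T
        y_hat = H @ y
        h = np.diag(H)

        # All leave-one-out residuals, with no refitting.
        e_loo = (y - y_hat) / (1.0 - h)
        print(f"PRESS from a single fit: {np.sum(e_loo**2):.2f}")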

  3. A comparative proteomics method for multiple samples based on an 18O-reference strategy and a quantitation and identification-decoupled strategy.

    PubMed

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

    We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR proposal strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate will allow for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
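
    As a reasoning aid, the sketch below implements a simplified stand-in for SGR: a Metropolis-Hastings sampler whose proposal resamples a random subset of model cells from an independent prior (true SGR resamples conditionally from a geostatistical prior). Because the proposal draws from the prior, the prior and proposal terms cancel and acceptance reduces to a likelihood ratio; the resampled fraction plays the role of the tuning parameter discussed above.

        import numpy as np

        rng = np.random.default_rng(0)

        def mh_prior_subset_resampling(log_lik, m0, draw_prior, frac=0.1, steps=5_000):
            """log_lik: log-likelihood of a model vector; m0: starting model;
            draw_prior(k): k independent prior draws; frac: resampled fraction."""
            m, ll = m0.copy(), log_lik(m0)
            k = max(1, int(frac * m0.size))
            chain = []
            for _ in range(steps):
                prop = m.copy()
                idx = rng.choice(m.size, size=k, replace=False)
                prop[idx] = draw_prior(k)  # resample a subset from the prior
                ll_prop = log_lik(prop)
                if np.log(rng.random()) < ll_prop - ll:  # likelihood-ratio accept
                    m, ll = prop, ll_prop
                chain.append(m.copy())
            return np.array(chain)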

  5. Application of molecular dynamics simulations in molecular property prediction II: diffusion coefficient.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2011-12-01

    In this work, we have evaluated how well the general assisted model building with energy refinement (AMBER) force field performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, five organic compounds in aqueous solutions, four proteins in aqueous solutions, and nine organic compounds in nonaqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned errors and the root mean square errors are 0.137 and 0.171 × 10(-5) cm(2) s(-1), respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for eight organic solvents with experimental data (R(2) = 0.784), four proteins in aqueous solutions (R(2) = 0.996), and nine organic compounds in nonaqueous solutions (R(2) = 0.834). The temperature-dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide, and cyclohexane, have been studied. The major molecular dynamics (MD) settings, such as the sizes of the simulation boxes and whether the coordinates of MD snapshots are wrapped into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. Copyright © 2011 Wiley Periodicals, Inc.
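
    The strategy of averaging mean square displacements over several short runs follows from the Einstein relation, MSD(t) = 6Dt in three dimensions. The sketch below is a schematic illustration assuming unwrapped trajectories in consistent units; it is not the authors' analysis script.

        import numpy as np

        def diffusion_from_runs(trajs, dt):
            """trajs: list of (n_frames, 3) unwrapped coordinates, one per short run;
            dt: time between frames. Returns D from the averaged MSD slope."""
            msds = []
            for xyz in trajs:
                disp = xyz - xyz[0]                 # displacement from the start
                msds.append((disp**2).sum(axis=1))  # squared displacement per frame
            msd = np.mean(msds, axis=0)             # average over the short runs
            t = dt * np.arange(len(msd))
            # Fit the slope over the middle of the curve, away from the
            # short-time ballistic regime and the noisy tail.
            lo, hi = len(t) // 4, 3 * len(t) // 4
            slope = np.polyfit(t[lo:hi], msd[lo:hi], 1)[0]
            return slope / 6.0  # Einstein relation in 3D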

  6. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in various research fields. A large number of microsatellite markers are required for whole genome surveys in the fields of molecular ecology, quantitative genetics and genomics. Therefore, it is essential to select versatile, low-cost, efficient and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library resulted in poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  7. The Army Communications Objectives Measurement System (ACOMS): Annual Report, School Year 86/87

    DTIC Science & Technology

    1988-04-01

    assessments of advertising program effectiveness, assessments of advertising strategy efficiencies, management of the advertising program, and planning...market. ACOMS is being used for Army assessments of... advertising strategy and effectiveness and to begin the construction of an integrated model of the role of the Army's advertising in the enlistment decision

  8. Using the multiphase optimization strategy (MOST) to optimize an HIV care continuum intervention for vulnerable populations: a study protocol.

    PubMed

    Gwadz, Marya Viorst; Collins, Linda M; Cleland, Charles M; Leonard, Noelle R; Wilton, Leo; Gandhi, Monica; Scott Braithwaite, R; Perlman, David C; Kutnick, Alexandra; Ritchie, Amanda S

    2017-05-04

    More than half of persons living with HIV (PLWH) in the United States are insufficiently engaged in HIV primary care and not taking antiretroviral therapy (ART), mainly African Americans/Blacks and Hispanics. In the proposed project, a potent and innovative research methodology, the multiphase optimization strategy (MOST), will be employed to develop a highly efficacious, efficient, scalable, and cost-effective intervention to increase engagement along the HIV care continuum. Whereas randomized controlled trials are valuable for evaluating the efficacy of multi-component interventions as a package, they are not designed to evaluate which specific components contribute to efficacy. MOST, a pioneering, engineering-inspired framework, addresses this problem through highly efficient randomized experimentation to assess the performance of individual intervention components and their interactions. We propose to use MOST to engineer an intervention to increase engagement along the HIV care continuum for African American/Black and Hispanic PLWH not well engaged in care and not taking ART. Further, the intervention will be optimized for cost-effectiveness. A similar set of multi-level factors impede both HIV care and ART initiation for African American/Black and Hispanic PLWH, primary among them individual- (e.g., substance use, distrust, fear), social- (e.g., stigma), and structural-level barriers (e.g., difficulties accessing ancillary services). Guided by a multi-level social cognitive theory, and using the motivational interviewing approach, the study will evaluate five distinct culturally based intervention components (i.e., counseling sessions, pre-adherence preparation, support groups, peer mentorship, and patient navigation), each designed to address a specific barrier to HIV care and ART initiation. These components are well-grounded in the empirical literature and were found acceptable, feasible, and promising with respect to efficacy in a preliminary study. Study aims are: 1) using a highly efficient fractional factorial experimental design, identify which of five intervention components contribute meaningfully to improvement in HIV viral suppression, and secondary outcomes of ART adherence and engagement in HIV primary care; 2) identify mediators and moderators of intervention component efficacy; and 3) using a mathematical modeling approach, build the most cost-effective and efficient intervention package from the efficacious components. A heterogeneous sample of African American/Black and Hispanic PLWH (with respect to age, substance use, and sexual minority status) will be recruited with a proven hybrid sampling method using targeted sampling in community settings and peer recruitment (N = 512). This is the first study to apply the MOST framework in the field of HIV prevention and treatment. This innovative study will produce a culturally based HIV care continuum intervention for the nation's most vulnerable PLWH, optimized for cost-effectiveness, and with exceptional levels of efficacy, efficiency, and scalability. ClinicalTrials.gov, NCT02801747 , Registered June 8, 2016.
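
    As an illustration of the efficiency of fractional factorial experimentation in MOST, the sketch below generates a hypothetical half-fraction (2^(5-1)) design for five on/off intervention components using the generator E = ABCD, which gives a resolution V design (main effects unconfounded with two-way interactions); the study's actual design matrix is not reproduced here.

        from itertools import product

        components = ["counseling", "pre-adherence prep", "support groups",
                      "peer mentorship", "patient navigation"]

        runs = []
        for a, b, c, d in product([-1, 1], repeat=4):
            runs.append((a, b, c, d, a * b * c * d))  # generator: E = ABCD

        print(f"{len(runs)} conditions instead of {2**5}:")
        for run in runs:
            on = [name for name, level in zip(components, run) if level == 1]
            print(run, "->", ", ".join(on) if on else "none")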

  9. Energy efficiency design strategies for buildings with grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Yimprayoon, Chanikarn

    The building sector in the United States represents more than 40% of the nation's energy consumption. Energy efficiency design strategies and renewable energy are keys to reducing building energy demand. Grid-connected photovoltaic (PV) systems installed on buildings have been the fastest growing market in the PV industry. This growth poses challenges for buildings qualified to serve in this market sector. Electricity produced from solar energy is intermittent. Matching building electricity demand with PV output can increase PV system efficiency. Through experimental methods and case studies, computer simulations were used to investigate the priorities of energy efficiency design strategies that decrease electricity demand while producing load profiles that match the output profiles of PV. Three building types (residential, commercial, and industrial) of varying sizes and use patterns located in 16 climate zones were modeled according to ASHRAE 90.1 requirements. Buildings were analyzed individually and as a group. Complying with ASHRAE energy standards can reduce annual electricity consumption by at least 13%. With energy efficiency design strategies, the reduction could reach up to 65%, making it possible for PV systems to meet the reduced demands in residential and industrial buildings. The peak electricity demand reduction could be up to 71% with the integration of strategies and PV. Reducing lighting power density was the best single strategy, with high overall performance. Combined strategies such as zero energy buildings are also recommended. Electricity consumption reductions are the sum of the reductions from strategies and PV output. However, peak electricity reductions were less than their sum because they reduced the peak at different times. The potential for grid stress reduction is significant. Investment incentives from government and utilities are necessary. PV system sizes on net metering interconnections should not be limited by the legislation existing in some states. Data from this study provide insight into the impacts of applying energy efficiency design strategies in buildings with grid-connected PV systems. With the current transition from traditional electric grids to future smart grids, this information, together with a large database of building conditions, supports the investigations that governments or utilities need in order to implement measures and policies in large-scale communities.

  10. Mining the human plasma proteome with three-dimensional strategies by high-resolution Quadrupole Orbitrap Mass Spectrometry.

    PubMed

    Zhao, Yan; Chang, Cheng; Qin, Peibin; Cao, Qichen; Tian, Fang; Jiang, Jing; Li, Xianyu; Yu, Wenfeng; Zhu, Yunping; He, Fuchu; Ying, Wantao; Qian, Xiaohong

    2016-01-21

    Human plasma is a readily available clinical sample that reflects the status of the body in normal physiological and disease states. Although the wide dynamic range and immense complexity of plasma proteins are obstacles, comprehensive proteomic analysis of human plasma is necessary for biomarker discovery and further verification. Various methods such as immunodepletion, protein equalization and hyperfractionation have been applied to reduce the influence of high-abundance proteins (HAPs) and to reduce the high level of complexity. However, the depth at which the human plasma proteome has been explored in a relatively short time frame has been limited, which impedes the transfer of proteomic techniques to clinical research. Development of an optimal strategy is expected to improve the efficiency of human plasma proteome profiling. Here, five three-dimensional strategies combining HAP depletion (the 1st dimension) and protein fractionation (the 2nd dimension), followed by LC-MS/MS analysis (the 3rd dimension), were developed and compared for human plasma proteome profiling. The pros and cons of the five strategies are discussed with respect to two issues: HAP depletion and complexity reduction. Strategies A and B used proteome equalization and tandem Seppro IgY14 immunodepletion, respectively, as the first dimension. Proteome equalization (strategy A) was biased toward the enrichment of basic and low-molecular-weight proteins and had limited ability to enrich low-abundance proteins. By tandem removal of HAPs (strategy B), the efficiency of HAP depletion was significantly increased, whereas more off-target proteins were subtracted simultaneously. In the comparison of complexity reduction, strategy D involved a deglycosylation step before high-pH RPLC separation; however, the increase in sequence coverage did not increase the protein number as expected. Strategy E introduced SDS-PAGE separation of proteins, and the results showed oversampling of HAPs and identification of fewer proteins. Strategy C combined single Seppro IgY14 immunodepletion, high-pH RPLC fractionation and LC-MS/MS analysis. It generated the largest dataset, containing 1544 plasma protein groups and 258 newly identified proteins in a 30-h-machine-time analysis, making it the optimum three-dimensional strategy in our study. Further analysis of the integrated data from the five strategies showed distribution patterns in terms of sequence features and GO functional analysis identical to those of the 1929-plasma-protein dataset, further supporting the reliability of our plasma protein identifications. The characterization of 20 cytokines in the concentration range from sub-nanograms/milliliter to micrograms/milliliter demonstrated the sensitivity of the current strategies. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Question structure impacts efficiency and performance in an interactive guessing game: implications for strategy engagement and executive functioning.

    PubMed

    Longenecker, Julia; Liu, Kristy; Chen, Eric Y H

    2012-12-30

    In an interactive guessing game, controls had higher performance and efficiency than patients with schizophrenia in correct trials. Patients' difficulties generating efficient questions suggest an increased taxation of working memory and an inability to engage an appropriate strategy, leading to impulsive behavior and reduced success. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Shifting and power sharing control of a novel dual input clutchless transmission for electric vehicles

    NASA Astrophysics Data System (ADS)

    Liang, Jiejunyi; Yang, Haitao; Wu, Jinglai; Zhang, Nong; Walker, Paul D.

    2018-05-01

    To improve the overall efficiency of electric vehicles and to guarantee driving comfort and vehicle drivability, under the concept of simplifying mechanism complexity and minimizing manufacturing cost, this paper proposes a novel clutchless power-shifting transmission system with a shifting control strategy and a power-sharing control strategy. The proposed shifting strategy takes advantage of the transmission architecture to achieve power-on shifting, which greatly improves driving comfort compared with a conventional automated manual transmission, using a bump-function-based shifting control method. To maximize overall efficiency, a real-time power-sharing control strategy is designed to solve the power distribution problem between the two motors. A detailed mathematical model is built to verify the effectiveness of the proposed methods. The results demonstrate that the proposed strategies considerably improve overall efficiency while achieving uninterrupted power-on shifting and keeping the vehicle jerk during shifting under an acceptable threshold.

  13. Pigeons trade efficiency for stability in response to level of challenge during confined flight.

    PubMed

    Williams, C David; Biewener, Andrew A

    2015-03-17

    Individuals traversing challenging obstacles are faced with a decision: they can adopt traversal strategies that minimally disrupt their normal locomotion patterns or they can adopt strategies that substantially alter their gait, conferring new advantages and disadvantages. We flew pigeons (Columba livia) through an array of vertical obstacles in a flight arena, presenting them with this choice. The pigeons selected either a strategy involving only a slight pause in the normal wing beat cycle, or a wings-folded posture granting reduced efficiency but greater stability should a misjudgment lead to collision. The more stable but less efficient flight strategy was not used to traverse easy obstacles with wide gaps for passage but came to dominate the postures used as obstacle challenge increased with narrower gaps and there was a greater chance of a collision. These results indicate that birds weigh potential obstacle negotiation strategies and estimate task difficulty during locomotor pattern selection.

  14. Pigeons trade efficiency for stability in response to level of challenge during confined flight

    PubMed Central

    Williams, C. David; Biewener, Andrew A.

    2015-01-01

    Individuals traversing challenging obstacles are faced with a decision: they can adopt traversal strategies that minimally disrupt their normal locomotion patterns or they can adopt strategies that substantially alter their gait, conferring new advantages and disadvantages. We flew pigeons (Columba livia) through an array of vertical obstacles in a flight arena, presenting them with this choice. The pigeons selected either a strategy involving only a slight pause in the normal wing beat cycle, or a wings-folded posture granting reduced efficiency but greater stability should a misjudgment lead to collision. The more stable but less efficient flight strategy was not used to traverse easy obstacles with wide gaps for passage but came to dominate the postures used as obstacle challenge increased with narrower gaps and there was a greater chance of a collision. These results indicate that birds weigh potential obstacle negotiation strategies and estimate task difficulty during locomotor pattern selection. PMID:25733863

  15. Efficiency and Cost-Effectiveness of Recruitment Methods for Male Latino Smokers

    ERIC Educational Resources Information Center

    Graham, Amanda L.; Lopez-Class, Maria; Mueller, Noel T.; Mota, Guadalupe; Mandelblatt, Jeanne

    2011-01-01

    Little is known about the most effective strategies to recruit male Latino smokers to cessation research studies. The purpose of this study was to identify efficient and cost-effective research recruitment strategies for this priority population. (Contains 4 tables.)

  16. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

    PubMed Central

    Palinkas, Lawrence A.; Horwitz, Sarah M.; Green, Carla A.; Wisdom, Jennifer P.; Duan, Naihua; Hoagwood, Kimberly

    2013-01-01

    Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research. PMID:24193818

  17. Longitudinal Relations Between Constructive and Destructive Conflict and Couples’ Sleep

    PubMed Central

    El-Sheikh, Mona; Koss, Kalsea J.; Kelly, Ryan J.; Rauer, Amy J.

    2016-01-01

    We examined longitudinal relations between interpartner constructive (negotiation) and destructive (psychological and physical aggression) conflict strategies and couples’ sleep over 1 year. Toward explicating processes of effects, we assessed the intervening role of internalizing symptoms in associations between conflict tactics and couples’ sleep. Participants were 135 cohabiting couples (M age = 37 years for women and 39 years for men). The sample included a large representation of couples exposed to economic adversity. Further, 68% were European American and the remainder were primarily African American. At Time 1 (T1), couples reported on their conflict and their mental health (depression, anxiety). At T1 and Time 2, sleep was examined objectively with actigraphs for 7 nights. Three sleep parameters were derived: efficiency, minutes, and latency. Actor–partner interdependence models indicated that husbands’ use of constructive conflict forecasted increases in their own sleep efficiency as well as their own and their wives’ sleep duration over time. Actor and partner effects emerged, and husbands’ and wives’ use of destructive conflict strategies generally predicted worsening of some sleep parameters over time. Several mediation and intervening effects were observed for destructive conflict strategies. Some of these relations reveal that destructive conflict is associated with internalizing symptoms, which in turn are associated with some sleep parameters longitudinally. These findings build on a small, albeit growing, literature linking sleep with marital functioning, and illustrate that consideration of relationship processes including constructive conflict holds promise for gaining a better understanding of factors that influence the sleep of men and women. PMID:25915089

  18. Are hospitals ready to response to disasters? Challenges, opportunities and strategies of Hospital Emergency Incident Command System (HEICS).

    PubMed

    Yarmohammadian, Mohammad Hossein; Atighechian, Golrokh; Shams, Lida; Haghshenas, Abbas

    2011-08-01

    Applying an effective management system in emergency incidents provides maximum efficiency while using minimal facilities and human resources. The Hospital Emergency Incident Command System (HEICS) is one of the most reliable emergency incident command systems for making hospitals more efficient and increasing patient safety. This research studied the requirements, barriers, and strategies of HEICS in hospitals affiliated with Isfahan University of Medical Sciences (IUMS). This was a qualitative study carried out in Isfahan Province, Iran during 2008-09. The study population included senior hospital managers of IUMS and key informants in emergency incident management across Isfahan Province. Sampling was non-random and purposeful, and the snowball technique was used. The research instrument for data collection was the semi-structured interview; collected data were analyzed by the Colaizzi technique. The findings were categorized into three general categories: requirements (organizational and sub-organizational), barriers (internal and external) to HEICS establishment, and short-, mid-, and long-term strategies. These categories are explained in detail in the main text. Regarding the existing barriers to establishment of HEICS, it is recommended that responsible authorities at different levels of the health care system prepare the necessary conditions for implementing such a system as soon as possible via encouraging and supporting systems. This paper may help health policy makers develop a reasonable framework and a comprehensive view for establishing HEICS in hospitals. It is necessary to consider the requirements and viewpoints of stakeholders before any health policy making or planning.

  19. A diversity-oriented synthesis strategy enabling the combinatorial-type variation of macrocyclic peptidomimetic scaffolds.

    PubMed

    Isidro-Llobet, Albert; Hadje Georgiou, Kathy; Galloway, Warren R J D; Giacomini, Elisa; Hansen, Mette R; Méndez-Abt, Gabriela; Tan, Yaw Sing; Carro, Laura; Sore, Hannah F; Spring, David R

    2015-04-21

    Macrocyclic peptidomimetics are associated with a broad range of biological activities. However, despite such potentially valuable properties, the macrocyclic peptidomimetic structural class is generally considered as being poorly explored within drug discovery. This has been attributed to the lack of general methods for producing collections of macrocyclic peptidomimetics with high levels of structural, and thus shape, diversity. In particular, there is a lack of scaffold diversity in current macrocyclic peptidomimetic libraries; indeed, the efficient construction of diverse molecular scaffolds presents a formidable general challenge to the synthetic chemist. Herein we describe a new, advanced strategy for the diversity-oriented synthesis (DOS) of macrocyclic peptidomimetics that enables the combinatorial variation of molecular scaffolds (core macrocyclic ring architectures). The generality and robustness of this DOS strategy is demonstrated by the step-efficient synthesis of a structurally diverse library of over 200 macrocyclic peptidomimetic compounds, each based around a distinct molecular scaffold and isolated in milligram quantities, from readily available building-blocks. To the best of our knowledge this represents an unprecedented level of scaffold diversity in a synthetically derived library of macrocyclic peptidomimetics. Cheminformatic analysis indicated that the library compounds access regions of chemical space that are distinct from those addressed by top-selling brand-name drugs and macrocyclic natural products, illustrating the value of our DOS approach to sample regions of chemical space underexploited in current drug discovery efforts. An analysis of three-dimensional molecular shapes illustrated that the DOS library has a relatively high level of shape diversity.

  20. Analysis of food polyphenols by ultra high-performance liquid chromatography coupled to mass spectrometry: an overview.

    PubMed

    Motilva, Maria-José; Serra, Aida; Macià, Alba

    2013-05-31

    Phenolic compounds, which are widely distributed in plant-derived foods, have recently attracted much attention due to their health benefits, so their determination in food samples is a topic of increasing interest. In the last few years, the development of chromatographic columns packed with sub-2 μm particles and modern high resolution mass spectrometry (MS) have opened up new possibilities for improving the analytical methods for complex sample matrices, such as ingredients, foods and biological samples. In addition, ultra-high performance liquid chromatography (UHPLC) coupled to MS has emerged as an ideal tool for profiling complex samples due to its speed, efficiency, sensitivity and selectivity. The present review addresses the use of UHPLC coupled to MS or tandem MS (MS/MS) as the detection system for the determination of phenolic compounds in food samples. Additionally, the different strategies to extract and quantify the phenolic compounds and to reduce the matrix effect (%ME) are also reviewed. Finally, future trends in UHPLC-MS methods are briefly outlined. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. A simple, rapid and novel method based on salting-out assisted liquid-liquid extraction for ochratoxin A determination in beer samples prior to ultra-high performance liquid chromatography coupled to tandem mass spectrometry.

    PubMed

    Mariño-Repizo, Leonardo; Goicoechea, Hector; Raba, Julio; Cerutti, Soledad

    2018-06-07

    A novel, simple, easy and cheap sample treatment strategy based on salting-out assisted liquid-liquid extraction (SALLE) for ochratoxin A (OTA) ultra-trace analysis in beer samples using ultra-high performance liquid chromatography-tandem mass spectrometry determination was developed. The factors involved in the efficiency of the pretreatment were studied employing a factorial design in the screening phase, and the optimal conditions of the significant variables on the analytical response were evaluated using a central composite face-centred design (CCF). Consequently, the amount of salt ((NH4)2SO4), the volumes of sample, hydrophilic (acetone) and nonpolar (toluene) solvents, and the times of vortexing and centrifugation were optimized. Under optimized conditions, the limits of detection (LOD) and quantification (LOQ) were 0.02 µg L(-1) and 0.08 µg L(-1), respectively. OTA extraction recovery by SALLE was approximately 90% (0.2 µg L(-1)). Furthermore, the methodology was in agreement with EU Directive requirements and was successfully applied for the analysis of beer samples.
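
    A face-centred central composite design of the kind used here is straightforward to construct: a two-level factorial core, axial points on the cube faces, and replicated centre points. The sketch below builds one for three coded factors; the factor names are purely illustrative.

        import numpy as np
        from itertools import product

        factors = ["salt_amount", "sample_volume", "vortex_time"]  # illustrative

        corners = np.array(list(product([-1, 1], repeat=3)))  # 2^3 factorial core
        stars = np.vstack([v * row for v in (-1, 1)
                           for row in np.eye(3)])              # face-centred axials
        center = np.zeros((3, 3))                              # centre replicates
        design = np.vstack([corners, stars, center])

        print("coded levels for", factors)
        print(design)  # 8 + 6 + 3 = 17 runs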

  2. Accelerating root system phenotyping of seedlings through a computer-assisted processing pipeline.

    PubMed

    Dupuy, Lionel X; Wright, Gladys; Thompson, Jacqueline A; Taylor, Anna; Dekeyser, Sebastien; White, Christopher P; Thomas, William T B; Nightingale, Mark; Hammond, John P; Graham, Neil S; Thomas, Catherine L; Broadley, Martin R; White, Philip J

    2017-01-01

    There are numerous systems and techniques to measure the growth of plant roots. However, phenotyping large numbers of plant roots for breeding and genetic analyses remains challenging. One major difficulty is to achieve high throughput and resolution at a reasonable cost per plant sample. Here we describe a cost-effective root phenotyping pipeline, on which we perform time and accuracy benchmarking to identify bottlenecks in such pipelines and strategies for their acceleration. Our root phenotyping pipeline was assembled with custom software and low-cost material and equipment. Results show that sample preparation and the handling of samples during screening are the most time-consuming tasks in root phenotyping. Algorithms can be used to speed up the extraction of root traits from image data, but when applied to large numbers of images, there is a trade-off between the time taken to process the data and the errors contained in the database. Scaling up root phenotyping to large numbers of genotypes will require not only automation of sample preparation and sample handling, but also efficient algorithms for error detection for more reliable replacement of manual interventions.

  3. Pigment and Binder Concentrations in Modern Paint Samples Determined by IR and Raman Spectroscopy.

    PubMed

    Wiesinger, Rita; Pagnin, Laura; Anghelone, Marta; Moretto, Ligia M; Orsega, Emilio F; Schreiner, Manfred

    2018-06-18

    Knowledge of the techniques employed by artists, such as the composition of the paints, colour palette, and painting style, is of crucial importance not only to attribute works of art to the workshop or artist but also to develop strategies and measures for the conservation and restoration of the art. While much research has been devoted to investigating the composition of an artist's materials from a qualitative point of view, little effort has been made in terms of quantitative analyses. This study aims to quantify the relative concentrations of binders (acrylic and alkyd) and inorganic pigments in different paint samples by IR and Raman spectroscopies. To perform this quantitative evaluation, reference samples of known concentrations were prepared to obtain calibration plots. In a further step, the quantification method was verified by additional test samples and commercially available paint tubes. The results obtained confirm that the quantitative method developed for IR and Raman spectroscopy is able to efficiently determine different pigment and binder concentrations of paint samples with high accuracy. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
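
    The calibration-plot approach amounts to fitting a line through reference samples of known concentration and inverting it for unknowns; the band-ratio numbers below are invented stand-ins for whatever spectral response was actually calibrated.

        import numpy as np

        conc = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # known pigment fractions
        ratio = np.array([0.21, 0.39, 0.83, 1.18, 1.62])  # measured band ratios

        slope, intercept = np.polyfit(conc, ratio, 1)     # calibration line

        # Inverse prediction for an unknown paint sample.
        unknown_ratio = 0.95
        estimated_conc = (unknown_ratio - intercept) / slope
        print(f"estimated pigment fraction: {estimated_conc:.3f}")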

  4. Coupling detergent lysis/clean-up methodology with intact protein fractionation for enhanced proteome characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Ritin; Dill, Brian; Chourey, Karuna

    2012-01-01

    The expanding use of surfactants for proteome sample preparations has prompted the need to systematically optimize the application and removal of these MS-deleterious agents prior to proteome measurements. Here we compare four different detergent clean-up methods (trichloroacetic acid (TCA) precipitation, chloroform/methanol/water (CMW) extraction, a commercial detergent removal spin column method (DRS), and filter-aided sample preparation (FASP)) with respect to varying amounts of protein biomass in the samples, and provide efficiency benchmarks with respect to protein, peptide, and spectral identifications for each method. Our results show that for protein-limited samples, FASP outperforms the other three clean-up methods, while at high protein amounts all the methods are comparable. This information was used in a dual strategy of comparing molecular-weight-based fractionated and unfractionated lysates from three increasingly complex samples (Escherichia coli, a five microbial isolate mixture, and a natural microbial community groundwater sample), which were all lysed with SDS and cleaned up using FASP. The two approaches complemented each other by enhancing the number of protein identifications by 8%-25% across the three samples and provided broad pathway coverage.

  5. Peptidylation for the determination of low-molecular-weight compounds by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Tang, Feng; Cen, Si-Ying; He, Huan; Liu, Yi; Yuan, Bi-Feng; Feng, Yu-Qi

    2016-05-23

    Determination of low-molecular-weight compounds by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has been a great challenge in the analytical research field. Here we developed a universal peptide-based derivatization (peptidylation) strategy for the sensitive analysis of low-molecular-weight compounds by MALDI-TOF-MS. Upon peptidylation, the molecular weights of target analytes increase, thus avoiding serious matrix ion interference in the low-molecular-weight region in MALDI-TOF-MS. Since peptides typically exhibit good signal response during MALDI-TOF-MS analysis, peptidylation confers high detection sensitivity on low-molecular-weight analytes. As a proof of concept, we analyzed the low-molecular-weight compound classes of aldehydes and thiols with the developed peptidylation strategy. Our results showed that aldehydes and thiols can be readily determined upon peptidylation, thus realizing the sensitive and efficient determination of low-molecular-weight compounds by MALDI-TOF-MS. Moreover, target analytes can also be unambiguously detected in biological samples using the peptidylation strategy. The established peptidylation strategy is universal and can be extended to the sensitive analysis of various low-molecular-weight compounds by MALDI-TOF-MS, which may potentially be used in areas such as metabolomics.

  6. Improving labeling efficiency in automatic quality control of MRSI data.

    PubMed

    Pedrosa de Barros, Nuno; McKinley, Richard; Wiest, Roland; Slotboom, Johannes

    2017-12-01

    To improve the efficiency of the labeling task in automatic quality control of MR spectroscopy imaging data. 28,432 short and long echo time (TE) spectra (1.5 tesla; point resolved spectroscopy (PRESS); repetition time (TR) = 1,500 ms) from 18 different brain tumor patients were labeled by two experts as either accept or reject, depending on their quality. For each spectrum, 47 signal features were extracted. The data was then used to run several simulations and test an active learning approach using uncertainty sampling. The performance of the classifiers was evaluated as a function of the number of patients in the training set, the number of spectra in the training set, and a parameter α used to control the level of classification uncertainty required for a new spectrum to be selected for labeling. The results showed that the proposed strategy allows reductions of up to 72.97% for short TE and 62.09% for long TE in the amount of data that needs to be labeled, without significant impact on classification accuracy. Further reductions are possible with a significant but minimal impact on performance. Active learning using uncertainty sampling is an effective way to increase the labeling efficiency for training automatic quality control classifiers. Magn Reson Med 78:2399-2405, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
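
    Uncertainty sampling of the kind described can be sketched in a few lines: fit a classifier on the labeled spectra, score the unlabeled pool, and queue for expert labeling only spectra whose predicted accept probability lies near 0.5. The classifier choice and parameter names below are assumptions, not the paper's setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def select_for_labeling(X, y_labeled, idx_labeled, idx_pool, alpha=0.1, batch=50):
            """X: (n_spectra, 47) feature matrix; alpha acts as the uncertainty
            threshold: only spectra with |P(accept) - 0.5| < alpha are queued."""
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X[idx_labeled], y_labeled)
            p_accept = clf.predict_proba(X[idx_pool])[:, 1]
            uncertainty = np.abs(p_accept - 0.5)
            mask = uncertainty < alpha
            candidates = np.asarray(idx_pool)[mask]
            # Most uncertain first, capped at one labeling batch.
            return candidates[np.argsort(uncertainty[mask])][:batch]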

  7. An optimized magnetite microparticle-based phosphopeptide enrichment strategy for identifying multiple phosphorylation sites in an immunoprecipitated protein.

    PubMed

    Huang, Yi; Shi, Qihui; Tsung, Chia-Kuang; Gunawardena, Harsha P; Xie, Ling; Yu, Yanbao; Liang, Hongjun; Yang, Pengyuan; Stucky, Galen D; Chen, Xian

    2011-01-01

    To further improve the selectivity and throughput of phosphopeptide analysis for samples from real-time cell lysates, here we demonstrate a highly efficient method for phosphopeptide enrichment via newly synthesized magnetite microparticles and concurrent mass spectrometric analysis. The magnetite microparticles show excellent magnetic responsivity and redispersibility for the quick enrichment of phosphopeptides in solution. The selectivity and sensitivity of the magnetite microparticles in phosphopeptide enrichment are first evaluated using a known mixture containing both phosphorylated and nonphosphorylated proteins. Compared with commercially available titanium dioxide-coated magnetic beads, our magnetite microparticles show better specificity toward phosphopeptides. The selectively enriched phosphopeptides from tryptic digests of β-casein can be detected down to 0.4 fmol μl⁻¹, whereas the recovery efficiency is approximately 90% for monophosphopeptides. This magnetite microparticle-based affinity technology with optimized enrichment conditions is then immediately applied to identify all possible phosphorylation sites on a signal protein isolated in real time from a stress-stimulated mammalian cell culture. A large fraction of the peptides eluted from the magnetic particle enrichment step were identified and characterized as either single- or multiphosphorylated species by tandem mass spectrometry. With their high efficiency and utility for phosphopeptide enrichment, the magnetite microparticles hold great potential for phosphoproteomic studies on real-time samples from cell lysates. Published by Elsevier Inc.

  8. A new strategy toward Internet of Things: structural health monitoring using a combined fiber optic and acoustic emission wireless sensor platform

    NASA Astrophysics Data System (ADS)

    Nguyen, A. D.; Page, C.; Wilson, C. L.

    2016-04-01

    This paper investigates a new low-power structural health monitoring (SHM) strategy in which fiber Bragg grating (FBG) rosettes are used to continuously monitor for changes in a host structure's principal strain direction, suggesting damage and thus enabling the immediate triggering of a higher-power acoustic emission (AE) sensor to provide better characterization of the damage. Unlike traditional "always on" AE platforms, this strategy has the potential for low power, while the wireless communication between different sensor types supports the Internet of Things (IoT) approach. A combination of fiber-optic sensor rosettes for strain monitoring and a fiber-optic sensor for acoustic emission monitoring was attached to a sample and used to monitor crack initiation. The results suggest that passive principal strain direction monitoring could be used as a damage initiation trigger for other active sensing elements such as acoustic emission. In future work, additional AE sensors can be added to provide damage location, and a strategy can be incorporated whereby these sensors are powered on periodically to further establish reliability while preserving an energy-efficient scheme.
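
    The trigger rests on the standard rectangular-rosette relations: with gauges at 0, 45, and 90 degrees, gamma_xy = 2*e45 - e0 - e90 and the principal direction is theta_p = 0.5*atan2(gamma_xy, e0 - e90). The strain readings and the 5-degree shift threshold below are invented for illustration.

        import numpy as np

        def principal_strain_direction(e0, e45, e90):
            """Principal strain angle (rad) from a 0/45/90-degree rosette."""
            gamma_xy = 2.0 * e45 - e0 - e90
            return 0.5 * np.arctan2(gamma_xy, e0 - e90)

        baseline = principal_strain_direction(210e-6, 160e-6, 80e-6)
        current = principal_strain_direction(230e-6, 240e-6, 70e-6)
        if abs(np.degrees(current - baseline)) > 5.0:  # hypothetical threshold
            print("principal direction shift -> power on the AE sensor")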

  9. Strategies for Efficient Charge Separation and Transfer in Artificial Photosynthesis of Solar Fuels.

    PubMed

    Xu, Yuxing; Li, Ailong; Yao, Tingting; Ma, Changtong; Zhang, Xianwen; Shah, Jafar Hussain; Han, Hongxian

    2017-11-23

    Converting sunlight to solar fuels by artificial photosynthesis is an innovative science and technology for renewable energy. Light harvesting, photogenerated charge separation and transfer (CST), and catalytic reactions are the three primary steps in the processes involved in the conversion of solar energy to chemical energy (SE-CE). Among the processes, CST is the key "energy pump and delivery" step in determining the overall solar-energy conversion efficiency. Efficient CST is always high priority in designing and assembling artificial photosynthesis systems for solar-fuel production. This Review not only introduces the fundamental strategies for CST but also the combinatory application of these strategies to five types of the most-investigated semiconductor-based artificial photosynthesis systems: particulate, Z-scheme, hybrid, photoelectrochemical, and photovoltaics-assisted systems. We show that artificial photosynthesis systems with high SE-CE efficiency can be rationally designed and constructed through combinatory application of these strategies, setting a promising blueprint for the future of solar fuels. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Automated storm water sampling on small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; King, K.W.; Slade, R.M.

    2003-01-01

    Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
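
    The recommended combination of a low minimum flow threshold with flow-interval composite sampling reduces to simple trigger logic; the thresholds and units below are placeholders to be set per project.

        MIN_FLOW = 0.05        # m^3/s; below this, treat as baseflow and skip
        FLOW_INTERVAL = 100.0  # m^3 of cumulative flow between aliquots

        def composite_sample_times(flow_series, dt):
            """flow_series: discharge readings (m^3/s); dt: seconds per reading.
            Returns the times (s) at which an aliquot joins the composite."""
            volume = 0.0
            times = []
            for i, q in enumerate(flow_series):
                if q < MIN_FLOW:
                    continue                 # ignore baseflow
                volume += q * dt             # accumulate storm flow volume
                if volume >= FLOW_INTERVAL:
                    times.append(i * dt)     # pull one aliquot into the bottle
                    volume = 0.0
            return times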

  11. Integrated Sampling Strategy (ISS) Guide

    Treesearch

    Robert E. Keane; Duncan C. Lutes

    2006-01-01

    What is an Integrated Sampling Strategy? Simply put, it is the strategy that guides how plots are put on the landscape. FIREMON’s Integrated Sampling Strategy assists fire managers as they design their fire monitoring project by answering questions such as: What statistical approach is appropriate for my sample design? How many plots can I afford? How many plots do I...

  12. Solid-phase microextraction followed by gas chromatography-mass spectrometry for the determination of ink photo-initiators in packed milk.

    PubMed

    Negreira, N; Rodríguez, I; Rubí, E; Cela, R

    2010-06-30

    A novel, single-step method for the determination of seven ink photo-initiators in carton-packed milk samples is described. Solid-phase microextraction (SPME) and gas chromatography (GC), combined with mass spectrometry (MS), were used as the sample preparation and determination techniques, respectively. Parameters affecting the performance of the microextraction process were thoroughly evaluated using uni- and multivariate optimization strategies based on experimental factorial designs. The coating of the SPME fibre, together with the sampling mode and the temperature, were the factors with the greatest influence on the efficiency of the extraction. Under the final conditions, 1.5 mL of milk and 8.5 mL of ultrapure water were poured into a glass vessel, which was closed and immersed in a boiling water bath. A poly(dimethylsiloxane)-divinylbenzene (PDMS-DVB) coated fibre was exposed directly to the diluted sample for 40 min. After that, the fibre was desorbed in the injector of the GC-MS system for 3 min. The optimized method provided limits of quantification (LOQs) between 0.2 and 1 μg L⁻¹ and good linearity in the range between 1 and 250 μg L⁻¹. The inter-day precision remained below 15% for all compounds in spiked whole milk. The efficiency of the extraction changed for whole, semi-skimmed and skimmed milk; however, no differences were noticed among the relative recoveries achieved for milk samples from different brands with the same fat content. Copyright 2010 Elsevier B.V. All rights reserved.
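
    For readers unfamiliar with the factorial screening used here, the following sketch generates a two-level full factorial design for three factors and estimates each main effect (numpy; the factor labels and response values are invented for illustration, not the study's data).

        import itertools
        import numpy as np

        factors = ["fibre_coating", "sampling_mode", "temperature"]  # illustrative labels
        design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

        # one measured extraction efficiency per run, in design order (invented numbers)
        y = np.array([42.0, 55.0, 40.0, 58.0, 61.0, 75.0, 63.0, 80.0])

        # main effect = mean response at the high level minus mean at the low level
        for j, name in enumerate(factors):
            effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
            print(f"{name}: main effect = {effect:+.1f}")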

  13. Measuring efficiency among US federal hospitals.

    PubMed

    Harrison, Jeffrey P; Meyer, Sean

    2014-01-01

    This study evaluates the efficiency of federal hospitals, specifically those hospitals administered by the US Department of Veterans Affairs and the US Department of Defense. Hospital executives, health care policymakers, taxpayers, and federal hospital beneficiaries benefit from studies that improve hospital efficiency. This study uses data envelopment analysis to evaluate a panel of 165 federal hospitals in 2007 and 157 of the same hospitals again in 2011. Results indicate that overall efficiency in federal hospitals improved from 81% in 2007 to 86% in 2011. The number of federal hospitals operating on the efficiency frontier decreased slightly from 25 in 2007 to 21 in 2011. The higher efficiency score clearly documents that federal hospitals are becoming more efficient in the management of resources. From a policy perspective, this study highlights the economic importance of encouraging increased efficiency throughout the health care industry. This research examines benchmarking strategies to improve the efficiency of hospital services to federal beneficiaries. Through the use of strategies such as integrated information systems, consolidation of services, transaction-cost economics, and focusing on preventative health care, these organizations have been able to provide quality service while maintaining fiscal responsibility. In addition, the research documented the characteristics of those federal hospitals that were found to be on the efficiency frontier. These hospitals serve as benchmarks for less efficient federal hospitals as they develop strategies for improvement.
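
    A minimal sketch of the input-oriented CCR data envelopment analysis model commonly used in such studies (scipy linear programming; the hospital inputs and outputs are toy numbers, not the study's data): each unit's score is the smallest uniform input contraction that a nonnegative combination of peer units could still match.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input(X, Y):
            """Input-oriented CCR efficiency score (0 < theta <= 1) for each DMU.
            X: (n_dmu, n_inputs) inputs, Y: (n_dmu, n_outputs) outputs."""
            n, m = X.shape
            s = Y.shape[1]
            scores = np.empty(n)
            for o in range(n):
                c = np.zeros(1 + n)
                c[0] = 1.0                                  # minimize theta
                A_ub, b_ub = [], []
                for i in range(m):                          # sum_j lam_j x_ij <= theta x_io
                    A_ub.append(np.r_[-X[o, i], X[:, i]])
                    b_ub.append(0.0)
                for r in range(s):                          # sum_j lam_j y_rj >= y_ro
                    A_ub.append(np.r_[0.0, -Y[:, r]])
                    b_ub.append(-Y[o, r])
                res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                              bounds=[(0, None)] * (1 + n))
                scores[o] = res.x[0]
            return scores

        # toy data: five hospitals, two inputs (beds, staff), one output (admissions)
        X = np.array([[100, 300], [120, 280], [90, 350], [150, 400], [80, 260]], float)
        Y = np.array([[5000], [5200], [4100], [5600], [4800]], float)
        print(dea_ccr_input(X, Y).round(3))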

  14. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    PubMed

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second to minute time-scales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.

  15. Nebula: reconstruction and visualization of scattering data in reciprocal space

    PubMed Central

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H.

    2015-01-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space within second to minute time-scales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware. PMID:25844083
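
    The core reconstruction step such software performs, mapping detector pixels onto the Ewald sphere and binning intensity into a reciprocal-space volume, can be sketched as follows (numpy; the beam direction, grid limits, and function names are simplifying assumptions, not Nebula's actual code).

        import numpy as np

        def pixels_to_q(pixel_xyz_m, wavelength_m, beam_dir=(0.0, 0.0, 1.0)):
            """Map detector pixel positions (relative to the sample, metres) to
            scattering vectors q = k_out - k_in on the Ewald sphere."""
            k = 2.0 * np.pi / wavelength_m
            k_in = k * np.asarray(beam_dir, float)
            u = pixel_xyz_m / np.linalg.norm(pixel_xyz_m, axis=-1, keepdims=True)
            return k * u - k_in                           # shape (..., 3)

        def accumulate_3d(qs, intensities, bins=64, q_max=4e10):
            """Histogram scattered intensity into a regular 3-D reciprocal-space grid."""
            edges = [np.linspace(-q_max, q_max, bins + 1)] * 3
            grid, _ = np.histogramdd(qs.reshape(-1, 3), bins=edges,
                                     weights=intensities.ravel())
            return grid

        # example: two pixel positions (metres, sample at the origin), 1 A radiation
        pix = np.array([[0.01, 0.00, 0.10], [0.00, 0.01, 0.10]])
        print(pixels_to_q(pix, wavelength_m=1.0e-10))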

  16. A new strategy for accelerated extraction of target compounds using molecularly imprinted polymer particles embedded in a paper-based disk.

    PubMed

    Zarejousheghani, Mashaalah; Schrader, Steffi; Möder, Monika; Schmidt, Matthias; Borsdorf, Helko

    2018-03-01

    In this study, a simple and inexpensive general method is introduced for the preparation of a paper-based selective disk-type solid-phase extraction (SPE) technique appropriate for fast, high-throughput monitoring of target compounds. An ion-exchange molecularly imprinted polymer (MIP) was synthesized for the extraction and analysis of acesulfame, an anthropogenic water-quality marker. Acesulfame imprinting was used as an example to demonstrate the benefits of nanosized, swellable MIP extraction sorbents integrated in an on-site-compatible concept for water-quality monitoring. Compared with an 8 mL standard SPE cartridge, the paper-based MIP disk (47 mm ø) format allowed (1) high sample flow rates of up to 30 mL·min⁻¹ without loss of extraction efficiency, (2) extraction of sample volumes up to 500 mL in much shorter times than with standard SPE, (3) reuse of the disks (up to 3 times more than the SPE cartridge) owing to their high robustness and efficient post-cleaning, and (4) a reduction of the sampling time for a 50 mL water sample from 100 minutes (standard SPE format) to about 2 minutes (MIP paper disk). Different parameters such as the cellulose fiber/polymer ratio, sample volume, sample flow rate, and washing and elution conditions were evaluated and optimized. Using the developed extraction technique with high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS-MS) analysis, a new protocol was established that provides detection and quantification limits of 0.015 μg·L⁻¹ and 0.05 μg·L⁻¹, respectively. The developed paper disks were used in the field for the selective extraction of target compounds and transferred to the laboratory for further analysis. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Deficit irrigation effects on yield and yield components of grain sorghum

    USDA-ARS?s Scientific Manuscript database

    Development of sustainable and efficient irrigation strategies is a priority for producers faced with water shortages. A promising management strategy for improving water use efficiency (WUE) is managed deficit irrigation (MDI), which attempts to optimize yield and WUE by synchronizing crop water u...

  18. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE PAGES

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai; ...

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.

  19. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Tyler; D’Azevedo, Ed F.; Li, Ying Wai

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is therefore formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core CPUs and GPUs.

  20. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo.

    PubMed

    McDaniel, T; D'Azevedo, E F; Li, Y W; Wong, K; Kent, P R C

    2017-11-07

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.

  1. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
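
    A compact numpy sketch of the delayed-update idea described in the records above (variable names are ours, and the acceptance rule is a toy stand-in for real QMC sampling): acceptance ratios are evaluated against a stale inverse corrected for the pending rank-1 factors via the Woodbury identity, and the inverse itself is rebuilt only every K accepted moves using matrix-matrix products.

        import numpy as np

        rng = np.random.default_rng(0)

        def run_delayed_updates(A, proposals, K=8):
            """Single-row replacements with rank-K delayed inverse updates."""
            A_inv = np.linalg.inv(A)           # inverse of the matrix at the last flush
            rows, vs = [], []                  # pending accepted row replacements

            def corrected_column(r):
                """Column r of the *current* inverse via the Woodbury identity."""
                x = A_inv[:, r]
                if not rows:
                    return x
                W = A_inv[:, rows]                       # A_base^{-1} U
                V = np.column_stack(vs)
                S = np.eye(len(rows)) + V.T @ W          # small capacitance matrix
                return x - W @ np.linalg.solve(S, V.T @ x)

            def flush():
                nonlocal A_inv, rows, vs
                if rows:
                    W = A_inv[:, rows]
                    V = np.column_stack(vs)
                    S = np.eye(len(rows)) + V.T @ W
                    A_inv = A_inv - W @ np.linalg.solve(S, V.T @ A_inv)  # en-bloc GEMM
                    rows, vs = [], []

            for r, u in proposals:             # u is the proposed new row r
                v = u - A[r]
                ratio = 1.0 + v @ corrected_column(r)    # determinant ratio of the move
                if min(1.0, abs(ratio)) > rng.random():  # toy rule, not real QMC weights
                    A[r] = u
                    rows.append(r)
                    vs.append(v)
                    if len(rows) == K:
                        flush()
            flush()
            return A, A_inv

        # sanity check: the inverse stays consistent after 200 random proposals
        n = 64
        A0 = rng.standard_normal((n, n))
        props = [(int(rng.integers(n)), rng.standard_normal(n)) for _ in range(200)]
        A1, A1_inv = run_delayed_updates(A0.copy(), props)
        print(np.abs(A1 @ A1_inv - np.eye(n)).max())     # ~1e-10 expected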

  2. Costs, equity, efficiency and feasibility of identifying the poor in Ghana's National Health Insurance Scheme: empirical analysis of various strategies.

    PubMed

    Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2012-01-01

    To analyse the costs and evaluate the equity, efficiency and feasibility of four strategies to identify poor households for premium exemptions in Ghana's National Health Insurance Scheme (NHIS): means testing (MT), proxy means testing (PMT), participatory wealth ranking (PWR) and geographic targeting (GT) in urban, rural and semi-urban settings in Ghana. We conducted the study in 145-147 households per setting with MT as our gold standard strategy. We estimated total costs that included costs of household surveys and cost of premiums paid to the poor, efficiency (cost per poor person identified), equity (number of true poor excluded) and the administrative feasibility of implementation. The cost of exempting one poor individual ranged from US$15.87 to US$95.44; exclusion of the poor ranged between 0% and 73%. MT was most efficient and equitable in rural and urban settings with low-poverty incidence; GT was efficient and equitable in the semi-urban setting with high-poverty incidence. PMT and PWR were less equitable and inefficient although feasible in some settings. We recommend MT as optimal strategy in low-poverty urban and rural settings and GT as optimal strategy in high-poverty semi-urban setting. The study is relevant to other social and developmental programmes that require identification and exemptions of the poor in low-income countries. © 2011 Blackwell Publishing Ltd.

  3. A Comparison of Erosion and Water Pollution Control Strategies for an Agricultural Watershed

    NASA Astrophysics Data System (ADS)

    Prato, Tony; Shi, Hongqi

    1990-02-01

    The effectiveness and efficiency of two erosion control strategies and one water pollution control (riparian) strategy are compared for Idaho's Tom Beall watershed. Erosion control strategies maximize annualized net returns per hectare on each field and restrict field erosion rates to no more than 11.2 or 16.8 tons per hectare. The riparian strategy uses good vegetative cover on all fields adjacent to the creek and in noncropland areas and the resource management system that maximizes annualized net returns per hectare on remaining fields. The Agricultural Nonpoint Source Pollution model is used to simulate the levels and concentrations of sediment, nitrogen, phosphorus, and chemical oxygen demand at the outlet of the watershed. Erosion control strategies generate less total erosion and water pollution but are less efficient than the riparian strategy. The riparian strategy is less equitable for farmers than the erosion control strategies.

  4. Influence of various water quality sampling strategies on load estimates for small streams

    USGS Publications Warehouse

    Robertson, Dale M.; Roerish, Eric D.

    1999-01-01

    Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water‐quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression of daily average streamflow was imprecise regardless of the sampling strategy used; for the most effective strategy, median absolute errors were ∼30% relative to loads estimated with an integration method from all available data. The most effective sampling strategy depends on the length of the study. For 1‐year studies, fixed‐period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed‐period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high‐flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high‐flow events.
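
    The regression approach referred to here is essentially a log-log rating curve. A minimal sketch (numpy; the Ferguson-type +s²/2 back-transform bias correction and the synthetic data are assumptions for illustration):

        import numpy as np

        def annual_load_kg(sample_flow_m3s, sample_conc_mgL, daily_flow_m3s):
            """Annual load (kg) from a log-log rating curve fitted to sampled
            concentration-flow pairs and applied to daily average streamflow.
            Since mg/L = g/m^3, C * Q * 86400 is grams per day."""
            b1, b0 = np.polyfit(np.log(sample_flow_m3s), np.log(sample_conc_mgL), 1)
            resid = np.log(sample_conc_mgL) - (b0 + b1 * np.log(sample_flow_m3s))
            s2 = resid.var(ddof=2)                       # residual variance of the fit
            # Ferguson-type bias correction for the back-transform from log space
            conc = np.exp(b0 + b1 * np.log(daily_flow_m3s) + 0.5 * s2)
            return float((conc * daily_flow_m3s * 86400.0).sum() * 1e-3)

        # synthetic example: 24 grab samples against a 365-day flow record
        rng = np.random.default_rng(3)
        q = np.exp(rng.normal(1.0, 0.6, 365))            # daily flows, m^3/s
        idx = rng.choice(365, 24, replace=False)
        c = 5.0 * q[idx] ** 0.4 * rng.lognormal(0.0, 0.1, 24)   # sampled conc, mg/L
        print(round(annual_load_kg(q[idx], c, q), 1))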

  5. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    PubMed

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  6. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    NASA Astrophysics Data System (ADS)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms to model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that if the number of cores used is not a multiple of the number of cores per cluster node, there are allocation strategies that provide more efficient calculations.

  7. Efficiency, equity and feasibility of strategies to identify the poor: an application to premium exemptions under National Health Insurance in Ghana.

    PubMed

    Jehu-Appiah, Caroline; Aryeetey, Genevieve; Spaan, Ernst; Agyepong, Irene; Baltussen, Rob

    2010-05-01

    This paper outlines the potential strategies to identify the poor, and assesses their feasibility, efficiency and equity. Analyses are illustrated for the case of premium exemptions under National Health Insurance (NHI) in Ghana. A literature search in Medline was performed to identify strategies for identifying the poor. Models were developed incorporating information on demography and poverty, and the costs and errors of inclusion and exclusion of these strategies in two regions of Ghana. Proxy means testing (PMT), participatory welfare ranking (PWR), and geographic targeting (GT) are potentially useful strategies to identify the poor, and vary in terms of their efficiency, equity and feasibility. Costs to exempt one poor individual range between US$11.63 and US$66.67, and strategies may exclude up to 25% of the poor. The feasibility of the strategies depends on their aptness in rural/urban settings and on the administrative capacity to implement them. A decision framework summarizes the above information to guide policy making. We recommend PMT as an optimal strategy in urbanized settings with relatively low poverty incidence, PWR as an optimal strategy in rural settings with relatively low poverty incidence, and GT as an optimal strategy in settings with high poverty incidence. This paper holds important lessons not only for NHI in Ghana but also for other countries implementing exemption policies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  8. Strategies for efficiently selecting PHA producing mixed microbial cultures using complex feedstocks: Feast and famine regime and uncoupled carbon and nitrogen availabilities.

    PubMed

    Oliveira, Catarina S S; Silva, Carlos E; Carvalho, Gilda; Reis, Maria A

    2017-07-25

    Production of polyhydroxyalkanoates (PHAs) by open mixed microbial cultures (MMCs) has been attracting increasing interest as an alternative to PHA production by pure cultures, due to the potential for lower costs associated with the use of open systems (eliminating the requirement for sterile conditions) and the utilisation of cheap feedstock (industrial and agricultural wastes). Such technology relies on the efficient selection of an MMC enriched in PHA-accumulating organisms. Fermented cheese whey, a protein-rich complex feedstock, has been used previously to produce PHA using the feast and famine regime for selection of PHA-accumulating cultures. While this selection strategy was found to be efficient when operated at a relatively low organic loading rate (OLR, 2 g-COD L⁻¹ d⁻¹), great instability and low selection efficiency of PHA-accumulating organisms were observed when a higher OLR (ca. 6 g-COD L⁻¹ d⁻¹) was applied. A high organic loading is desirable as a means to enhance PHA productivity. In the present study, a new selection strategy, based on uncoupling the carbon and nitrogen supply, was tested with the aim of improving selection at high OLR; it was implemented and compared with the conventional feast and famine strategy. For this, two selection reactors were fed with fermented cheese whey at an OLR of ca. 8.5 g-COD L⁻¹ d⁻¹ (with 3.8 g-COD L⁻¹ d⁻¹ resulting from organic acids and ethanol), and operated in parallel under similar conditions, except for the timing of nitrogen supplementation. Whereas in the conventional strategy nitrogen and carbon substrates were added simultaneously at the beginning of the cycle, in the uncoupled-substrates strategy nitrogen addition was delayed to the end of the feast phase (i.e. after exogenous carbon was exhausted). The two strategies selected different PHA-storing microbial communities, dominated by Corynebacterium and a member of the Xanthomonadaceae, respectively, with the conventional and the new approaches. The new strategy yielded a more efficient PHA-production process than the conventional one (global PHA productivity of 6.09 g-PHA L⁻¹ d⁻¹ and storage yield of 0.96, versus 2.55 g-PHA L⁻¹ d⁻¹ and 0.86, respectively). Dissociation between the feast-to-famine length ratio (F/F) and storage efficiency was shown to be possible with the new strategy, allowing selection of an efficient PHA-storing culture on complex feedstock under high organic loading rates. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Air sampling workshop: October 24-25, 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-06-01

    A two-day workshop was held in October 1978 on air sampling strategies for the occupational environment. Strategies comprise the elements of implementing an air sampling program, including deciding on the extent of sampling, selecting appropriate types of measurement, placing sampling instruments properly, and interpreting sample results correctly. All of these elements are vital in the reliable assessment of occupational exposures, yet their coverage in the industrial hygiene literature is meager. Although keyed to a few introductory topics, the agenda was sufficiently informal to accommodate extemporaneous discussion on any subject related to sampling strategies. Questions raised during the workshop mirror the status of air sampling strategy as much as the factual information that was presented. It may be concluded from the discussion and questions that air sampling strategy is in an elementary state and urgently needs concerted attention from the industrial hygiene profession.

  10. Modeling efficiency at the process level: an examination of the care planning process in nursing homes.

    PubMed

    Lee, Robert H; Bott, Marjorie J; Gajewski, Byron; Taunton, Roma Lee

    2009-02-01

    To examine the efficiency of the care planning process in nursing homes. We collected detailed primary data about the care planning process for a stratified random sample of 107 nursing homes from Kansas and Missouri. We used these data to calculate the average direct cost per care plan and used data on selected deficiencies from the Online Survey Certification and Reporting System to measure the quality of care planning. We then analyzed the efficiency of the assessment process using corrected ordinary least squares (COLS) and data envelopment analysis (DEA). Both approaches suggested that there was considerable inefficiency in the care planning process. The average COLS score was 0.43; the average DEA score was 0.48. The correlation between the two sets of scores was quite high, and there was no indication that lower costs resulted in lower quality. For-profit facilities were significantly more efficient than not-for-profit facilities. Multiple studies of nursing homes have found evidence of inefficiency, but virtually all have had measurement problems that raise questions about the results. This analysis, which focuses on a process with much simpler measurement issues, finds evidence of inefficiency that is largely consistent with earlier studies. Making nursing homes more efficient merits closer attention as a strategy for improving care. Increasing efficiency by adopting well-designed, reliable processes can simultaneously reduce costs and improve quality.
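
    For readers unfamiliar with corrected ordinary least squares, the sketch below fits a log cost frontier, shifts it through the best-practice unit, and scores each facility as frontier cost over actual cost (numpy; single input/output and made-up numbers for brevity, not the study's data).

        import numpy as np

        def cols_efficiency(cost, output):
            """Corrected OLS cost-frontier efficiency: fit ln(cost) on ln(output),
            shift the intercept down by the minimum residual so the frontier
            envelops the data from below, and score each unit as frontier/actual."""
            X = np.column_stack([np.ones_like(output), np.log(output)])
            beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
            resid = np.log(cost) - X @ beta
            frontier_ln = X @ beta + resid.min()       # corrected (shifted) frontier
            return np.exp(frontier_ln - np.log(cost))  # in (0, 1]; 1 = efficient

        costs = np.array([120.0, 150.0, 90.0, 200.0, 110.0])  # made-up costs per plan
        plans = np.array([100.0, 110.0, 95.0, 120.0, 105.0])  # made-up output volumes
        print(cols_efficiency(costs, plans).round(2))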

  11. Comprehensive Metabolite Identification Strategy Using Multiple Two-Dimensional NMR Spectra of a Complex Mixture Implemented in the COLMARm Web Server

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bingol, Kerem; Li, Da-Wei; Zhang, Bo

    Identification of metabolites in complex mixtures represents a key step in metabolomics. A new strategy is introduced, which is implemented in a new public web server, COLMARm, that permits the co-analysis of up to three 2D NMR spectra, namely ¹³C-¹H HSQC, ¹H-¹H TOCSY, and ¹³C-¹H HSQC-TOCSY, for the comprehensive, accurate, and efficient performance of this task. The highly versatile and interactive nature of COLMARm permits its application to a wide range of metabolomics samples independent of the magnetic field. Database query is performed using the HSQC spectrum and the top metabolite hits are then validated against the TOCSY-type experiment(s) by superimposing the expected cross-peaks on the mixture spectrum. In this way the user can directly accept or reject candidate metabolites by taking advantage of the complementary spectral information offered by these experiments and their different sensitivities. The power of COLMARm is demonstrated for a human serum sample uncovering the existence of 14 metabolites that hitherto were not identified by NMR.

  12. The application of an occupational therapy nutrition education programme for children who are obese.

    PubMed

    Munguba, Marilene Calderaro; Valdés, Maria Teresa Moreno; da Silva, Carlos Antonio Bruno

    2008-01-01

    The aim of this study was to evaluate an occupational therapy nutrition education programme for children who are obese with the use of two interactive games. A quasi-experimental study was carried out at a municipal school in Fortaleza, Brazil. A convenience sample of 200 children aged 8-10 years participated in the study. Data collection comprised a semi-structured interview, direct and structured observation, and a focus group, comparing two interactive games based on the food pyramid (a video game and a board game) used individually and then combined. Both play activities were efficient in mediating nutritional concepts, with a preference for the board game. Among the learning strategies, intrinsic motivation and metacognition were analysed. The attention strategy was applied most during the video game. We concluded that both games promoted the learning of nutritional concepts. We confirmed the effectiveness of the simultaneous application of interactive games in an interdisciplinary health environment. It is recommended that a larger sample should be used in evaluating the effectiveness of play and video games in teaching healthy nutrition to children in a school setting. (c) 2008 John Wiley & Sons, Ltd.

  13. Laser micro-machining strategies for transparent brittle materials using ultrashort pulsed lasers

    NASA Astrophysics Data System (ADS)

    Bernard, Benjamin; Matylitsky, Victor

    2017-02-01

    Cutting and drilling of transparent materials using short-pulsed laser systems are important industrial production processes. Applications ranging from sapphire cutting, hardened glass processing, and flat panel display cutting to diamond processing are possible. The ablation process using a Gaussian laser beam incident on the top side of a sample, with several parallel overlapping lines, leads to a V-shaped groove, which limits the structuring depth for a given kerf width. The unique possibility for transparent materials of starting the ablation process from the back side of the sample is a well-known strategy to improve the aspect ratio of the ablated features. This work compares the achievable groove depth as a function of kerf width for front-side and back-side ablation and presents the best relation between the kerf width and the number of overscans. Additionally, the influence of the number of pulses in one burst train on the ablation efficiency is investigated. The experiments were carried out using a Spirit HE laser from Spectra-Physics, featuring adjustable pulse duration from <400 fs to 10 ps, three repetition rates (100 kHz, 200 kHz and 400 kHz) and average output powers of >16 W (at 1040 nm wavelength).

  14. Multidimensionally encoded magnetic resonance imaging.

    PubMed

    Lin, Fa-Hsuan

    2013-07-01

    Magnetic resonance imaging (MRI) typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here, we propose the multidimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel radiofrequency coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. Copyright © 2012 Wiley Periodicals, Inc.
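
    A toy one-dimensional illustration of encoding with a linear plus a nonlinear SEM and least-squares reconstruction (numpy; the object, the fields, and the sampling pattern are invented and far simpler than the paper's five-dimensional, multi-coil setup):

        import numpy as np

        n = 96
        x = np.linspace(-1.0, 1.0, n)
        rho = np.exp(-(x - 0.3) ** 2 / 0.01) + 0.5 * (np.abs(x + 0.4) < 0.1)  # toy object

        fields = [x, (x - 0.25) ** 2]          # one linear SEM, one (shifted) nonlinear SEM
        ks = np.pi * np.arange(-32, 32)        # phase-encoding steps applied to each field

        # each measurement is a projection of the object onto exp(-i k f(x))
        E = np.array([np.exp(-1j * k * f) for f in fields for k in ks])
        y = E @ rho                            # simulated noise-free acquisition

        rho_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
        print(np.abs(rho_hat.real - rho).max())  # small if the encoding is well-conditioned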

  15. Soft and Robust Identification of Body Fluid Using Fourier Transform Infrared Spectroscopy and Chemometric Strategies for Forensic Analysis.

    PubMed

    Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko; Ozawa, Takeaki

    2018-05-31

    Body fluid (BF) identification is a critical part of a criminal investigation because of its ability to suggest how the crime was committed and to provide reliable origins of DNA. In contrast to current methods using serological and biochemical techniques, vibrational spectroscopic approaches offer alternative advantages for forensic BF identification, such as non-destructiveness and versatility across BF types and analytical interests. However, unexplored issues remain for its practical application to forensics; for example, a specific BF needs to be discriminated from all other suspicious materials as well as from other BFs, and the method should be applicable even to aged BF samples. Herein, we describe an innovative modeling method for discriminating the ATR FT-IR spectra of various BFs, including peripheral blood, saliva, semen, urine and sweat, to meet the practical demands described above. Spectra from unexpected non-BF samples were efficiently excluded as outliers by adopting the Q-statistics technique. The robustness of the models against aged BFs was significantly improved by using a discrimination scheme of a dichotomous classification tree with hierarchical clustering. The present study advances the use of vibrational spectroscopy and a chemometric strategy for forensic BF identification.
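
    The Q-statistic outlier rejection mentioned here can be sketched as a PCA residual test (numpy; an empirical training-set quantile stands in for the analytical Q-limit usually used in chemometrics, and all names are ours):

        import numpy as np

        def fit_pca(X_train, n_comp=5):
            """PCA model (mean and loadings) fitted on known body-fluid spectra."""
            mu = X_train.mean(axis=0)
            _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
            return mu, Vt[:n_comp]

        def q_statistic(X, mu, P):
            """Squared residual of each spectrum outside the PCA subspace."""
            R = (X - mu) - (X - mu) @ P.T @ P
            return np.sum(R ** 2, axis=1)

        def is_outlier(X_new, X_train, n_comp=5, alpha=0.99):
            """Flag spectra whose Q exceeds a training-set quantile, i.e. samples
            that do not belong to any modeled body-fluid class."""
            mu, P = fit_pca(X_train, n_comp)
            q_lim = np.quantile(q_statistic(X_train, mu, P), alpha)
            return q_statistic(X_new, mu, P) > q_lim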

  16. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
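
    The direction of this result is easy to reproduce by simulation. The sketch below (numpy; sample sizes and repetition counts are arbitrary choices) compares the mean-squared error of the two variance estimators for a platykurtic and a leptokurtic population.

        import numpy as np

        rng = np.random.default_rng(1)

        def mse_of_variance_estimators(draw, true_var, n=4, periods=50, reps=5000):
            """MSE of two estimators of the population variance:
            grab: sample variance of one analysis per period;
            comp: n * sample variance of per-period composite (pooled) means."""
            grab = np.empty(reps)
            comp = np.empty(reps)
            for i in range(reps):
                data = draw((periods, n))
                grab[i] = data[:, 0].var(ddof=1)
                comp[i] = n * data.mean(axis=1).var(ddof=1)
            return ((grab - true_var) ** 2).mean(), ((comp - true_var) ** 2).mean()

        for name, draw, v in [
            ("uniform (platykurtic)", lambda s: rng.uniform(-1, 1, s), 1 / 3),
            ("Laplace (leptokurtic)", lambda s: rng.laplace(0, 1, s), 2.0),
        ]:
            g, c = mse_of_variance_estimators(draw, v)
            print(f"{name}: MSE grab={g:.4f}  MSE composite={c:.4f}")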

  17. An efficient routing strategy for traffic dynamics on two-layer complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Wang, Huiling; Zhang, Zhuxi; Zhang, Yi; Duan, Congwen; Qi, Zhaohui; Liu, Yu

    2018-05-01

    In order to alleviate traffic congestion on multilayer networks, designing an efficient routing strategy is one of the most important approaches. In this paper, a novel routing strategy is proposed to reduce traffic congestion on two-layer networks. In the proposed strategy, the optimal paths in the physical layer are chosen by comprehensively considering the roles of nodes' degrees in the two layers. Both numerical and analytical results indicate that our routing strategy can reasonably redistribute the traffic load of the physical layer, and thus the traffic capacity of two-layer complex networks is significantly enhanced compared with the shortest path routing (SPR) and the global awareness routing (GAR) strategies. This study may shed some light on the optimization of networked traffic dynamics.
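
    The authors' exact rule is not reproduced here, but a degree-based routing cost in the same spirit is easy to sketch with networkx (the cost form and the exponents alpha and beta are illustrative assumptions): edge weights grow with endpoint degrees in both layers, steering shortest paths away from hubs.

        import networkx as nx

        def degree_weighted_paths(G_phys, G_logical, alpha=1.0, beta=0.5):
            """Routing over the physical layer with edge costs that grow with the
            endpoints' degrees in both layers, steering paths away from hubs."""
            deg_p = dict(G_phys.degree())
            deg_l = dict(G_logical.degree())
            for u, v in G_phys.edges():
                G_phys[u][v]["weight"] = (deg_p[u] * deg_p[v]) ** alpha + \
                                         (deg_l.get(u, 0) * deg_l.get(v, 0)) ** beta
            return dict(nx.all_pairs_dijkstra_path(G_phys, weight="weight"))

        # toy two-layer network: a small-world physical layer, a random logical layer
        G_phys = nx.watts_strogatz_graph(50, 4, 0.1, seed=1)
        G_log = nx.gnp_random_graph(50, 0.08, seed=2)
        paths = degree_weighted_paths(G_phys, G_log)
        print(paths[0][25])        # the hub-avoiding route from node 0 to node 25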

  18. Strategies of Production Control as Tools of Efficient Management of Production Enterprises

    NASA Astrophysics Data System (ADS)

    Budynek, Mateusz; Celińska, Elżbieta; Dybikowska, Adrianna; Kozak, Monika; Ratajczak, Joanna; Urban, Jagoda; Materne, Karolina

    2016-03-01

    The paper discusses the principal methods of production control as strategies supporting the production system and stimulating efficient solutions with respect to management in production enterprises. The article describes the MRP, ERP, JIT, KANBAN and TOC methods and focuses on their main goals, principles of functioning, and the benefits resulting from their application. The methods represent two diverse strategies of production control, i.e. pull and push strategies. Push strategies are used when plans apply to the first and principal part of production and are based on demand forecasts. Pull strategies are used when all planning decisions apply to the final stage and depend on actual demand or orders from customers.

  19. Sunlight Intensity Based Global Positioning System for Near-Surface Underwater Sensors

    PubMed Central

    Gómez, Javier V.; Sandnes, Frode E.; Fernández, Borja

    2012-01-01

    Water monitoring is important in domains including documenting climate change, weather prediction and fishing. This paper presents a simple and energy efficient localization strategy for near surface buoy based sensors. Sensors can be dropped randomly in the ocean and thus self-calibrate in terms of geographic location such that geo-tagged observations of water quality can be made without the need for costly and energy consuming GPS-hardware. The strategy is based on nodes with an accurate clock and light sensors that can regularly sample the level of light intensity. The measurements are fitted into a celestial model of the Earth's motion around the Sun. By identifying the trajectory of the Sun across the sky one can accurately determine sunrise and sunset times, and thus extract the longitude and latitude of the sensor. Unlike previous localization techniques for underwater sensors, the current approach does not rely on stationary or mobile reference points. PMID:22438746

  20. Sunlight intensity based global positioning system for near-surface underwater sensors.

    PubMed

    Gómez, Javier V; Sandnes, Frode E; Fernández, Borja

    2012-01-01

    Water monitoring is important in domains including documenting climate change, weather prediction and fishing. This paper presents a simple and energy efficient localization strategy for near surface buoy based sensors. Sensors can be dropped randomly in the ocean and thus self-calibrate in terms of geographic location such that geo-tagged observations of water quality can be made without the need for costly and energy consuming GPS-hardware. The strategy is based on nodes with an accurate clock and light sensors that can regularly sample the level of light intensity. The measurements are fitted into a celestial model of the Earth's motion around the Sun. By identifying the trajectory of the Sun across the sky one can accurately determine sunrise and sunset times, and thus extract the longitude and latitude of the sensor. Unlike previous localization techniques for underwater sensors, the current approach does not rely on stationary or mobile reference points.
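
    The geometry behind this strategy is compact enough to sketch (plain Python; refraction and the equation of time are ignored, and the day-length formula degenerates near the equinoxes when the declination approaches zero): longitude follows from local solar noon, latitude from day length.

        import math

        def position_from_sun(sunrise_utc_h, sunset_utc_h, day_of_year):
            """(lat, lon) in degrees from sunrise/sunset times in UTC hours.
            Breaks down near the equinoxes, where the declination is ~0."""
            decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365.0)
            solar_noon = 0.5 * (sunrise_utc_h + sunset_utc_h)
            lon = (12.0 - solar_noon) * 15.0          # 15 degrees of longitude per hour
            H = math.radians(0.5 * (sunset_utc_h - sunrise_utc_h) * 15.0)  # hour angle
            # day-length relation: cos(H) = -tan(lat) * tan(decl)
            lat = math.degrees(math.atan(-math.cos(H) / math.tan(decl)))
            return lat, lon

        print(position_from_sun(4.5, 19.0, day_of_year=172))   # mid-latitude, ~3.75 E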

  1. A dynamic re-partitioning strategy based on the distribution of key in Spark

    NASA Astrophysics Data System (ADS)

    Zhang, Tianyu; Lian, Xin

    2018-05-01

    Spark is a memory-based distributed data processing framework that can process massive data sets and has become a focus of Big Data research. However, the performance of the Spark shuffle depends on the distribution of the data: the naive Hash partition function of Spark cannot guarantee load balancing when the data are skewed, and job completion time is dominated by the node with the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram counts the key frequency distribution on each node, and these histograms are then merged into a global key frequency distribution. After analyzing the distribution of keys, load-balanced data partitioning is achieved. Results show that the Dynamic Re-Partitioning function is better than the default Hash partition, Fine Partition and the Balanced-Schedule strategy; it reduces task execution time and improves the efficiency of the whole cluster.
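
    One plausible reading of the re-partitioning step is a greedy longest-processing-time assignment from the merged key histogram, sketched below (plain Python; this simplification cannot split a single oversized key across partitions): each heavy key goes to the currently lightest partition.

        import heapq
        from collections import Counter

        def balanced_partitions(key_counts, n_partitions):
            """Greedy LPT assignment: heaviest keys first, each to the currently
            lightest partition. Cannot split one oversized key across partitions."""
            heap = [(0, p) for p in range(n_partitions)]   # (load, partition id)
            heapq.heapify(heap)
            assignment = {}
            for key, cnt in key_counts.most_common():      # descending frequency
                load, p = heapq.heappop(heap)
                assignment[key] = p
                heapq.heappush(heap, (load + cnt, p))
            return assignment

        # histogram merged from per-node samples of key frequencies
        hist = Counter({"a": 9000, "b": 200, "c": 150, "d": 120, "e": 100})
        print(balanced_partitions(hist, 3))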

  2. Freight efficiency strategies : a white paper series to inform the California Sustainable Freight Action Plan.

    DOT National Transportation Integrated Search

    2016-03-01

    A number of stakeholders met with the ultimate goal of identifying inefficiencies faced by the freight system and putting forward a set of : strategies to achieve a more efficient freight system. In doing so, a key first step was to provide insight a...

  3. Designing Strategies for an Efficient Language MOOC

    ERIC Educational Resources Information Center

    Perifanou, Maria

    2016-01-01

    The advent of Massive Open Online Courses (MOOCs) has dramatically changed the way people learn a language. But how can we design an efficient language learning environment for a massive number of learners? Are there any good practices that showcase successful Massive Open Online Language Course (MOOLC) design strategies? According to recent…

  4. Transfer, Informational Feedback, and Instructional Systems Development.

    ERIC Educational Resources Information Center

    Howard, Charles W.

    As part of a project to convert Army training programs into self instructional sets of materials, this study was conducted to determine the relative efficiency of five types of instructional strategies. Efficiency, measured in terms of achievement and teaching time, and development time were considered. The five strategies studied include: (1)…

  5. Oral cholera vaccine coverage in hard-to-reach fishermen communities after two mass Campaigns, Malawi, 2016.

    PubMed

    Sauvageot, Delphine; Saussier, Christel; Gobeze, Abebe; Chipeta, Sikhona; Mhango, Innocent; Kawalazira, Gift; Mengel, Martin A; Legros, Dominique; Cavailler, Philippe; M'bang'ombe, Maurice

    2017-09-12

    From December 2015 to August 2016, a large epidemic of cholera affected the fishermen of Lake Chilwa in Malawi. A first reactive Oral Cholera Vaccine (OCV) campaign was organized in February, in a 2 km radius of the lake, followed by a preemptive one conducted in November in a 25 km radius. We present the vaccine coverage reached in these hard-to-reach populations using simplified delivery strategies. We conducted two-stage random-sampling cross-sectional surveys among individuals living within a 2 km and a 25 km radius of Lake Chilwa (islands and floating homes included). Individuals aged 12 months and older from Machinga and Zomba districts were sampled: 43 clusters of 14 households were surveyed. Simplified strategies were used for those living on islands and in floating homes: self-delivery and community-supervised delivery of the second dose. Vaccine coverage (VC) for at least two doses was estimated taking into account sampling weights and design effects. A total of 1176 households were surveyed (2.7% non-response). Among the 2833 individuals living within the 2 km radius of the lake and the 2915 within the 25 km radius, 457 (16.1%) and 239 (8.2%), respectively, lived in floating homes or on islands at some point in the year. For the overall population, VC was 75.6% and 54.2%, respectively. In the 2 km radius, VC was 92.2% for those living on the lake at some point of the year; 271 (64.8%) used the simplified strategies. The main reasons for non-vaccination were absence during the campaign and vaccine shortage. Few adverse events occurring in the 24 h following vaccination were reported. We reached a high two-dose coverage of the most at-risk population using simplified delivery strategies. Because of the high mobility of fishermen, regular catch-up campaigns or another strategy specifically targeting fishermen need to be assessed for more efficient vaccine use. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Cluster Sampling with Referral to Improve the Efficiency of Estimating Unmet Needs among Pregnant and Postpartum Women after Disasters

    PubMed Central

    Horney, Jennifer; Zotti, Marianne E.; Williams, Amy; Hsia, Jason

    2015-01-01

    Introduction and Background Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. Methods From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes, Cobb/Douglas Counties, Georgia, after flooding, and Bertie County, North Carolina, after hurricane-related flooding. Results Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Conclusion and Discussion Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. PMID:22365134

  7. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
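
    The low-rank step can be sketched generically as a two-pass randomized eigendecomposition driven only by Hessian-vector products (numpy; this is a textbook randomized method, not the authors' exact pipeline):

        import numpy as np

        def lowrank_eigs(hvp, n, rank=20, oversample=10, seed=None):
            """Two-pass randomized eigendecomposition of a symmetric PSD operator
            available only through matrix-vector products hvp(v) = H @ v."""
            rng = np.random.default_rng(seed)
            k = rank + oversample
            Omega = rng.standard_normal((n, k))
            Y = np.column_stack([hvp(Omega[:, j]) for j in range(k)])  # sketch H @ Omega
            Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range
            B = np.column_stack([hvp(Q[:, j]) for j in range(k)])
            T = Q.T @ B
            T = 0.5 * (T + T.T)                          # symmetrize against round-off
            w, V = np.linalg.eigh(T)
            top = np.argsort(w)[::-1][:rank]
            return w[top], Q @ V[:, top]

        # toy stand-in for the Gauss-Newton Hessian: a rank-40 PSD matrix
        M = np.random.default_rng(2).standard_normal((200, 40))
        H = M @ M.T
        w, U = lowrank_eigs(lambda v: H @ v, n=200, rank=10)
        # a fixed RMHMC metric could then be I + U @ np.diag(w) @ U.T, whose inverse
        # and square root follow cheaply from the Woodbury identity.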

  8. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. These parameters cannot be reliably estimated from only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
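
    The SSNR protocol itself is dataset-specific, but a common stand-in for minimum-training-size estimation is to fit an inverse power-law learning curve to pilot results and invert it for a target accuracy, as sketched below (scipy; all numbers are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(n, a, b, c):
            """Inverse power law: accuracy approaches the plateau a as n grows."""
            return a - b * np.power(n, -c)

        # pilot results: accuracy at increasing training sizes (invented numbers)
        n_train = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
        acc = np.array([0.62, 0.68, 0.73, 0.76, 0.78])

        (a, b, c), _ = curve_fit(learning_curve, n_train, acc,
                                 p0=(0.8, 1.0, 0.5), maxfev=10000)

        target = 0.77                        # required classifier accuracy
        n_min = (b / (a - target)) ** (1.0 / c) if a > target else float("inf")
        print(f"estimated minimum training size for {target:.0%} accuracy: {n_min:.0f}")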

  9. Sampling and Homogenization Strategies Significantly Influence the Detection of Foodborne Pathogens in Meat.

    PubMed

    Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha

    2015-01-01

    Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogeneously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety.

  10. Sampling and Homogenization Strategies Significantly Influence the Detection of Foodborne Pathogens in Meat

    PubMed Central

    Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha

    2015-01-01

    Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogeneously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety. PMID:26539462

  11. Evaluating the efficiency of a one-square-meter quadrat sampler for riffle-dwelling fish

    USGS Publications Warehouse

    Peterson, J.T.; Rabeni, C.F.

    2001-01-01

    We evaluated the efficacy of a 1-m² quadrat sampler for collecting riffle-dwelling fishes in an Ozark stream. We used a dual-gear approach to evaluate sampler efficiency in relation to species, fish size, and habitat variables. Quasi-likelihood regression showed sampling efficiency to differ significantly (P < 0.05). Sampling efficiency was significantly influenced by physical habitat characteristics. Mean current velocity negatively influenced sampling efficiencies for Cyprinidae (P = 0.009), Cottidae (P = 0.006), and Percidae (P < 0.001), and the amount of cobble substrate negatively influenced sampling efficiencies for Cyprinidae (P = 0.025), Ictaluridae (P < 0.001), and Percidae (P < 0.001). Water temperature negatively influenced sampling efficiency for Cyprinidae (P = 0.009) and Ictaluridae (P = 0.006). Species-richness efficiency was positively influenced (P = 0.002) by the percentage of riffle sampled. Under average habitat conditions encountered in stream riffles, the 1-m² quadrat sampler was most efficient at estimating the densities of Cyprinidae (84%) and Cottidae (80%) and least efficient for Percidae (57%) and Ictaluridae (31%).
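
    An analysis of this kind can be sketched as a quasi-binomial GLM, in which a Pearson-χ² scale estimate relaxes the binomial variance assumption (statsmodels; the capture counts and habitat covariates below are invented, not the study's data):

        import numpy as np
        import statsmodels.api as sm

        # invented riffle data: captured vs. known fish counts plus habitat covariates
        captured = np.array([8, 5, 9, 3, 7, 4])
        known = np.array([10, 9, 11, 8, 9, 8])
        velocity = np.array([0.2, 0.5, 0.1, 0.7, 0.3, 0.6])    # mean current velocity, m/s
        cobble = np.array([10.0, 40.0, 5.0, 60.0, 20.0, 50.0]) # % cobble substrate

        X = sm.add_constant(np.column_stack([velocity, cobble]))
        model = sm.GLM(captured / known, X,
                       family=sm.families.Binomial(), var_weights=known)
        result = model.fit(scale="X2")       # Pearson chi^2 scale -> quasi-binomial
        print(result.summary())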

  12. An easily fabricated three-dimensional threaded lemniscate-shaped micromixer for a wide range of flow rates

    PubMed Central

    Rafeie, Mehdi; Welleweerd, Marcel; Hassanzadeh-Barforoushi, Amin; Asadnia, Mohsen; Olthuis, Wouter; Ebrahimi Warkiani, Majid

    2017-01-01

    Mixing fluid samples or reactants is a paramount function in the fields of micro total analysis system (μTAS) and microchemical processing. However, rapid and efficient fluid mixing is difficult to achieve inside microchannels because diffusive mass transfer is slow in the laminar regime typical of microfluidic flows. It is well documented that the mixing efficiency can be boosted by migrating from two-dimensional (2D) to three-dimensional (3D) geometries. Although several 3D chaotic mixers have been designed, most of them offer a high mixing efficiency only in a very limited range of Reynolds numbers (Re). In this work, we developed a 3D fine-threaded lemniscate-shaped micromixer whose maximum numerical and empirical efficiencies are around 97% and 93%, respectively, and which maintains high performance (i.e., >90%) over a wide range of 1 < Re < 1000, meeting the requirements of both the μTAS and microchemical process applications. The 3D micromixer was designed based on two distinct mixing strategies, namely, the induction of chaotic advection by Dean flow and diffusive mixing through thread-like grooves around the curved body of the mixers. First, a set of numerical simulations was performed to study the physics of the flow and to determine the essential geometrical parameters of the mixers. Second, a simple and cost-effective method was exploited to fabricate the convoluted structure of the micromixers through the removal of a 3D-printed wax structure from a block of cured polydimethylsiloxane. Finally, the fabricated mixers with different threads were tested using a fluorescence microscope, demonstrating good agreement with the results of the numerical simulation. We envisage that the strategy used in this work will expand the scope of micromixer technology by broadening the range of efficient working flow rates and providing an easy way to fabricate 3D convoluted microstructures. PMID:28798843

  13. An easily fabricated three-dimensional threaded lemniscate-shaped micromixer for a wide range of flow rates.

    PubMed

    Rafeie, Mehdi; Welleweerd, Marcel; Hassanzadeh-Barforoushi, Amin; Asadnia, Mohsen; Olthuis, Wouter; Ebrahimi Warkiani, Majid

    2017-01-01

    Mixing fluid samples or reactants is a paramount function in the fields of micro total analysis system (μTAS) and microchemical processing. However, rapid and efficient fluid mixing is difficult to achieve inside microchannels because diffusive mass transfer is slow in the laminar regime typical of microfluidic flows. It is well documented that the mixing efficiency can be boosted by migrating from two-dimensional (2D) to three-dimensional (3D) geometries. Although several 3D chaotic mixers have been designed, most of them offer a high mixing efficiency only in a very limited range of Reynolds numbers (Re). In this work, we developed a 3D fine-threaded lemniscate-shaped micromixer whose maximum numerical and empirical efficiencies are around 97% and 93%, respectively, and which maintains high performance (i.e., >90%) over a wide range of 1 < Re < 1000, meeting the requirements of both the μTAS and microchemical process applications. The 3D micromixer was designed based on two distinct mixing strategies, namely, the induction of chaotic advection by Dean flow and diffusive mixing through thread-like grooves around the curved body of the mixers. First, a set of numerical simulations was performed to study the physics of the flow and to determine the essential geometrical parameters of the mixers. Second, a simple and cost-effective method was exploited to fabricate the convoluted structure of the micromixers through the removal of a 3D-printed wax structure from a block of cured polydimethylsiloxane. Finally, the fabricated mixers with different threads were tested using a fluorescence microscope, demonstrating good agreement with the results of the numerical simulation. We envisage that the strategy used in this work will expand the scope of micromixer technology by broadening the range of efficient working flow rates and providing an easy way to fabricate 3D convoluted microstructures.

  14. Radiolytic degradation of a new diglycol-diamide ligand for actinide and lanthanide co-extraction from spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Ossola, Annalisa; Macerata, Elena; Tinonin, Dario A.; Faroldi, Federica; Giola, Marco; Mariani, Mario; Casnati, Alessandro

    2016-07-01

    Within the Partitioning and Transmutation strategies, great efforts have been devoted in the last decades to the development of lipophilic ligands able to co-extract trivalent Lanthanides (Ln) and Actinides (An) from spent nuclear fuel. Because of the harsh working conditions these ligands undergo, it is important to prove their chemical and radiolytic stability during the counter-current multi-stage extraction process. In the present work the hydrolytic and radiolytic resistance of freshly prepared and aged organic solutions containing the new ligand (2,6-bis[(N-methyl-N-dodecyl)carboxamide]-4-methoxy-tetrahydro-pyran) was investigated in order to evaluate the impact on the safety and efficiency of the process. Liquid-liquid extraction tests with spiked solutions showed that the ligand's extraction performance is strongly impaired by storing the samples at room temperature in the light. Moreover, the extraction efficiency of the irradiated samples was affected by gamma irradiation, while the selectivity remained unchanged. Preliminary mass spectrometric data showed that the degradation is mainly due to the acid-catalysed reaction of the ligand carboxamide and ether groups with the 1-octanol present in the diluent.

  15. A data acquisition protocol for a reactive wireless sensor network monitoring application.

    PubMed

    Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K

    2015-04-30

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes are favorable: on average, a communication reduction of over 50% compared with a default adaptive prediction method is achieved without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
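
    The abstract does not spell out SWIFTNET's internals, so the following Python sketch only illustrates the general idea of a reactive acquisition scheme that trades the relaxed 5 min interval against the 15 s alert interval: when a cheap predictor misses badly, the node samples faster; when it tracks well, the interval backs off. All names, thresholds and the backoff factor are invented for illustration.

    ```python
    # Hypothetical reactive sampling-interval controller in the spirit of
    # SWIFTNET: shorten the interval when readings become hard to predict
    # (e.g., a fire front approaching), relax it when the signal is quiet.

    MAX_INTERVAL = 300.0  # seconds (the relaxed 5 min rate in the abstract)
    MIN_INTERVAL = 15.0   # seconds (the reactive 15 s rate in the abstract)

    def next_interval(current, prediction, reading, tolerance=0.5):
        """Halve the interval on a bad prediction miss; otherwise back
        off gently toward the maximum interval."""
        if abs(prediction - reading) > tolerance:
            return max(MIN_INTERVAL, current / 2.0)
        return min(MAX_INTERVAL, current * 1.25)

    # Toy usage: a temperature series with a sudden jump.
    interval, last = MAX_INTERVAL, 20.0
    for reading in [20.1, 20.0, 27.5, 33.0, 33.1]:
        interval = next_interval(interval, last, reading)
        last = reading  # naive last-value predictor
        print(f"reading={reading:5.1f} -> next sample in {interval:.0f} s")
    ```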

  16. A Data Acquisition Protocol for a Reactive Wireless Sensor Network Monitoring Application

    PubMed Central

    Aderohunmu, Femi A.; Brunelli, Davide; Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in deployments for environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world testbed for wildfire monitoring as a use case. The results from our in-house deployment testbed of 15 nodes are favorable: on average, a communication reduction of over 50% compared with a default adaptive prediction method is achieved without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain. PMID:25942642

  17. Multiscale Molecular Dynamics Simulations of Beta-Amyloid Interactions with Neurons

    NASA Astrophysics Data System (ADS)

    Qiu, Liming; Vaughn, Mark; Cheng, Kelvin

    2012-10-01

    Early events of human beta-amyloid protein interactions with cholesterol-containing membranes are critical to understanding the pathogenesis of Alzheimer's disease (AD) and to exploring new therapeutic interventions for AD. Atomistic molecular dynamics (AMD) simulations have been extensively used to study protein-lipid interactions at high atomic resolution. However, traditional MD simulations are not efficient in sampling the phase space of complex lipid/protein systems with rugged free energy landscapes. Meanwhile, coarse-grained MD (CGD) simulations sample the phase space efficiently but suffer from low spatial resolution and from energy landscapes that are not identical to those of the AMD. Here, a multiscale approach was employed to simulate the protein-lipid interactions of beta-amyloid upon its proteolytic release while residing in the neuronal membranes. We utilized a forward (AMD to CGD) and reverse (CGD to AMD) strategy to explore new transmembrane and surface protein configurations and to evaluate the stabilization mechanisms by measuring residue-specific protein-lipid interactions and protein conformations. The detailed molecular interactions revealed in this multiscale MD approach will provide new insights into understanding the early molecular events leading to the pathogenesis of AD.

  18. Comparison of two temperature control techniques in a forced water heater solar system

    NASA Astrophysics Data System (ADS)

    Hernández, E.; E Guzmán, R.; Santos, A.; Cordoba, E.

    2017-12-01

    A study on the performance of a forced solar heating system is presented, in which two control strategies, the classic on-off control and PID control, are compared. The experimental results show that the two control strategies behave similarly in the forced solar heating system, with an approximate settling time of 60 min and an overshoot of 2°C for both strategies. Furthermore, the maximum temperature in the storage tank was 46°C and the maximum efficiency of the flat plate collector was 76.7%, where this efficiency is the ratio of the energy used to heat the water to the energy of the radiation incident on the collector. The efficiency obtained is credible, given that commercial flat plate collectors reach efficiencies of approximately 70%.
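
    To make the comparison above concrete, here is a minimal, self-contained Python simulation contrasting on-off and PID control of a lumped tank-temperature model. The plant constants, gains and deadband are invented and are not the parameters of the paper's experimental rig.

    ```python
    # Toy comparison of on-off and PID control for a first-order thermal
    # plant (storage tank warming against losses to a 20 C ambient).

    def simulate(controller, setpoint=46.0, t_end=7200.0, dt=10.0):
        temp, trace = 20.0, []
        for _ in range(int(t_end / dt)):
            u = controller(setpoint - temp)              # heater command, 0..1
            temp += dt * (0.04 * u - 0.001 * (temp - 20.0))
            trace.append(temp)
        return trace

    def on_off(err, deadband=1.0):
        """Memoryless on-off rule: full heating below the deadband."""
        return 1.0 if err > deadband else 0.0

    def make_pid(kp=0.5, ki=0.001, kd=5.0, dt=10.0):
        state = {"integral": 0.0, "prev": 0.0}
        def pid(err):
            state["integral"] += err * dt
            deriv = (err - state["prev"]) / dt
            state["prev"] = err
            u = kp * err + ki * state["integral"] + kd * deriv
            return min(1.0, max(0.0, u))                 # actuator limits
        return pid

    for name, ctrl in (("on-off", on_off), ("PID", make_pid())):
        trace = simulate(ctrl)
        print(f"{name:6s} final {trace[-1]:.1f} C, peak {max(trace):.1f} C")
    ```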

  19. A novel strategy for spectrophotometric simultaneous determination of amitriptyline and nortriptyline based on derivation with a quinonoid compound in serum samples

    NASA Astrophysics Data System (ADS)

    Farnoudian-Habibi, Amir; Massoumi, Bakhshali; Jaymand, Mehdi

    2016-11-01

    A novel and efficient strategy for the simultaneous determination of two tricyclic antidepressant (TCA) drugs [amitriptyline (AT) and its main metabolite (nortriptyline; NT)] in serum, via a combination of magnetic solid phase extraction (MSPE) and spectrophotometric techniques, is suggested. For this purpose, imidazolium ionic liquid (Imz)-modified Fe3O4@SiO2 nanoparticles (Fe3O4@SiO2-Imz) were employed as the adsorbent for the MSPE. Preconcentration (loading-desorption) studies were performed under optimized conditions including pH, adsorbent amount, contact time, eluent volume, and desorption time. Afterward, each drug was determined by a specific strategy. Acetaldehyde (AC) and 2,3,5,6-tetrachloro-1,4-benzoquinone (chloranil; CL) were used as chemical reagents for reaction with NT, while AT did not react with these reagents. The method is based on the condensation reaction between the secondary amine group of NT and AC to afford an enamine, which subsequently reacts with CL to produce a chlorinated quinone-substituted enamine. The final product exhibited maximum absorption at 556 nm, while AT was determined at 240 nm. The limits of detection (LODs) for NT and AT in the serum sample were 0.19 and 0.90 ng mL-1, respectively. The limits of quantification (LOQs) were 0.63 and 2.93 ng mL-1 for NT and AT, respectively. The linear range was 1 to 5 ng mL-1. Results indicated that the suggested method is applicable for the simultaneous determination of NT and AT in serum samples.
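
    The abstract reports LODs and LOQs without stating how they were derived; one conventional, calibration-based definition (the ICH form) is shown below, though the authors' actual procedure may differ.

    ```latex
    % Conventional detection/quantification limits from the standard
    % deviation of the blank or regression residuals (sigma) and the
    % calibration slope (S); assumed here, not quoted from the paper.
    \[
      \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad
      \mathrm{LOQ} = \frac{10\,\sigma}{S}
    \]
    ```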

  20. Evaluation of different approaches for identifying optimal sites to predict mean hillslope soil moisture content

    NASA Astrophysics Data System (ADS)

    Liao, Kaihua; Zhou, Zhiwen; Lai, Xiaoming; Zhu, Qing; Feng, Huihui

    2017-04-01

    The identification of representative soil moisture sampling sites is important for the validation of remotely sensed mean soil moisture in a certain area and ground-based soil moisture measurements in catchment or hillslope hydrological studies. Numerous approaches have been developed to identify optimal sites for predicting mean soil moisture. Each method has certain advantages and disadvantages, but they have rarely been evaluated and compared. In our study, surface (0-20 cm) soil moisture data from January 2013 to March 2016 (a total of 43 sampling days) were collected at 77 sampling sites on a mixed land-use (tea and bamboo) hillslope in the hilly area of Taihu Lake Basin, China. A total of 10 methods (temporal stability (TS) analyses based on 2 indices, K-means clustering based on 6 kinds of inputs and 2 random sampling strategies) were evaluated for determining optimal sampling sites for mean soil moisture estimation. They were TS analyses based on the smallest index of temporal stability (ITS, a combination of the mean relative difference and standard deviation of relative difference (SDRD)) and based on the smallest SDRD; K-means clustering based on soil properties and terrain indices (EFs), repeated soil moisture measurements (Theta), EFs plus one-time soil moisture data (EFsTheta), and the principal components derived from EFs (EFs-PCA), Theta (Theta-PCA), and EFsTheta (EFsTheta-PCA); and global and stratified random sampling strategies. Results showed that the TS analysis based on the smallest ITS was better (RMSE = 0.023 m3 m-3) than that based on the smallest SDRD (RMSE = 0.034 m3 m-3). The K-means clustering based on EFsTheta (-PCA) was better (RMSE < 0.020 m3 m-3) than those based on EFs (-PCA) and Theta (-PCA). The sampling design stratified by land use was more efficient than the global random method. Forty and 60 sampling sites are needed for stratified sampling and global sampling, respectively, to make their performances comparable to the best K-means method (EFsTheta-PCA). Overall, TS required only one site, but its accuracy was limited. The best K-means method required <8 sites and yielded high accuracy, but extra soil and terrain information is necessary when using this method. The stratified sampling strategy can only be used if no pre-knowledge about soil moisture variation is available. This information will help in selecting the optimal methods for estimating the area mean soil moisture.
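
    For readers unfamiliar with the temporal stability indices named above, the sketch below computes the relative differences, their per-site temporal mean (MRD) and standard deviation (SDRD), and one common combined index, sqrt(MRD^2 + SDRD^2); the paper's exact ITS formula may differ, and the data here are synthetic.

    ```python
    import numpy as np

    def temporal_stability(theta):
        """theta: (n_sites, n_dates) soil-moisture matrix. Returns MRD,
        SDRD and a combined index of temporal stability; the combination
        sqrt(MRD^2 + SDRD^2) is one common choice, not necessarily the
        paper's exact ITS definition."""
        field_mean = theta.mean(axis=0)                # spatial mean per date
        rel_diff = (theta - field_mean) / field_mean   # relative differences
        mrd = rel_diff.mean(axis=1)
        sdrd = rel_diff.std(axis=1, ddof=1)
        return mrd, sdrd, np.sqrt(mrd**2 + sdrd**2)

    rng = np.random.default_rng(0)
    theta = np.clip(rng.normal(0.30, 0.05, size=(77, 43)), 0.05, 0.50)
    mrd, sdrd, its = temporal_stability(theta)
    print("most representative site (smallest ITS):", int(np.argmin(its)))
    print("most time-stable site (smallest SDRD):", int(np.argmin(sdrd)))
    ```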

  1. Real time lobster posture estimation for behavior research

    NASA Astrophysics Data System (ADS)

    Yan, Sheng; Alfredsen, Jo Arve

    2017-02-01

    In animal behavior research, the main task of observing the behavior of an animal is usually done manually. The measurement of an animal's trajectory and its real-time posture description is often omitted due to the lack of automatic computer vision tools. Although many pose estimation methods have been published, few are efficient enough to run in real time or can be used without a machine learning algorithm that trains a classifier on a large number of samples. In this paper, we propose a novel strategy for real-time lobster posture estimation to overcome those difficulties. In our proposed algorithm, we use a Gaussian mixture model (GMM) for lobster segmentation. The posture estimation is then based on the distance transform and skeleton calculated from the segmentation. We tested the algorithm on a series of lobster videos with different sizes and lighting conditions. The results show that our proposed algorithm is efficient and robust under various conditions.
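
    A hypothetical sketch of the pipeline described, using OpenCV building blocks: MOG2 (a GMM background model) for segmentation, then a distance transform and a morphological skeleton as posture features. The authors' exact GMM formulation and skeletonization may differ.

    ```python
    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    detectShadows=False)

    def posture_features(frame_bgr):
        mask = subtractor.apply(frame_bgr)                  # GMM foreground
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                np.ones((3, 3), np.uint8))  # denoise
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)  # body thickness
        # Morphological skeleton: accumulate the residues of openings.
        skel = np.zeros_like(mask)
        elem = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
        work = mask.copy()
        while cv2.countNonZero(work):
            opened = cv2.morphologyEx(work, cv2.MORPH_OPEN, elem)
            skel = cv2.bitwise_or(skel, cv2.subtract(work, opened))
            work = cv2.erode(work, elem)
        return mask, dist, skel

    # Usage: feed consecutive video frames; early frames train the GMM.
    frame = np.full((240, 320, 3), 80, np.uint8)  # stand-in for a real frame
    mask, dist, skel = posture_features(frame)
    ```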

  2. Public perceptions of energy consumption and savings

    PubMed Central

    Attari, Shahzeen Z.; DeKay, Michael L.; Davidson, Cliff I.; Bruine de Bruin, Wändi

    2010-01-01

    In a national online survey, 505 participants reported their perceptions of energy consumption and savings for a variety of household, transportation, and recycling activities. When asked for the most effective strategy they could implement to conserve energy, most participants mentioned curtailment (e.g., turning off lights, driving less) rather than efficiency improvements (e.g., installing more efficient light bulbs and appliances), in contrast to experts’ recommendations. For a sample of 15 activities, participants underestimated energy use and savings by a factor of 2.8 on average, with small overestimates for low-energy activities and large underestimates for high-energy activities. Additional estimation and ranking tasks also yielded relatively flat functions for perceived energy use and savings. Across several tasks, participants with higher numeracy scores and stronger proenvironmental attitudes had more accurate perceptions. The serious deficiencies highlighted by these results suggest that well-designed efforts to improve the public's understanding of energy use and savings could pay large dividends. PMID:20713724

  3. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency must be improved to suit large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method for solving the sub-problem is utilized. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.
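
    The abstract names a gradient projection method for the L1-norm sub-problem; as an illustration of that general idea (not the authors' exact trust region model), the sketch below solves min 0.5||Ax-b||^2 subject to ||x||_1 <= tau by projected gradient descent, using the standard Euclidean projection onto the L1 ball.

    ```python
    import numpy as np

    def project_l1(v, tau):
        """Euclidean projection of v onto the L1 ball of radius tau."""
        if np.abs(v).sum() <= tau:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
        theta = (css[rho] - tau) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def projected_gradient(A, b, tau, iters=500):
        """min 0.5*||Ax-b||^2 s.t. ||x||_1 <= tau, by gradient projection."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = project_l1(x - step * (A.T @ (A @ x - b)), tau)
        return x

    # Toy sparse-recovery problem standing in for seismic restoration.
    rng = np.random.default_rng(1)
    n, m = 200, 80
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = projected_gradient(A, A @ x_true, tau=np.abs(x_true).sum())
    print("recovery error:", np.linalg.norm(x_hat - x_true))
    ```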

  4. [Analysis of productivity, quality and cost of first grade laboratories: blood biometry].

    PubMed

    Avila, L; Hernández, P; Cruz, A; Zurita, B; Terres, A M; Cruz, C

    1999-04-01

    Assessment of the productivity, quality and production costs, and determination of the efficiency, of first grade clinical laboratories in Mexico. Ten laboratories were selected from among the total number (52) existing in Mexico City, and the Donabedian model of structure, process and results was applied. Blood count was selected as a tracer. The principal problems found were: inadequate distribution of trained human resources, poor glassware, inadequate analytic processes and low productivity. These factors are reflected in the unit costs, which exceed reference laboratory costs by 200%. Only 50% of the laboratories analyzed generate reliable results, and only 20% operate efficiently. Solving the problems identified requires integral strategies at different levels. A specific recommendation for the improvement of quality and productivity is an assessment of the cost/benefit of creating a central laboratory and using the remaining sites exclusively for the collection of samples.

  5. Physical Practice and Wellness Courses Reduce Distress and Improve Wellbeing in Police Officers.

    PubMed

    Acquadro Maran, Daniela; Zedda, Massimo; Varetto, Antonella

    2018-03-23

    The aim of this work was to evaluate courses intended to reduce distress in an Italian police force. Based on the findings from the first investigations of this population, courses to improve the ability to manage distress were tailored by management. Several free courses were offered, including physical efficiency (e.g., total body conditioning) and wellness (e.g., autogenic training) classes. The goal of this research was to evaluate the courses and their impact on the perceived distress and general health of the participants, as well as their effectiveness in increasing the use of adaptive coping strategies. A descriptive investigation was conducted involving a sample of 105 police officers before (time 1) and after (time 2) they had participated in the courses. Findings confirmed that both the physical and the wellness courses reduced participants' perceived distress, thereby increasing their perception of wellbeing. Participants reported mental health benefits; their use of adaptive coping strategies increased, while maladaptive coping strategies decreased. This study confirms that such courses can effectively reduce the risk of chronic disease, a consequence of persistent exposure to distress.

  6. A strategy to load balancing for non-connectivity MapReduce job

    NASA Astrophysics Data System (ADS)

    Zhou, Huaping; Liu, Guangzong; Gui, Haixia

    2017-09-01

    MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the remaining partition indices when data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that in the next partitioning step the Partitioner can redirect partitions that would cause data skew to reducers with less load, eventually balancing the load across nodes. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets; the results show that our strategy solves the data skew problem with better stability and efficiency than the Hash and Sampling methods for non-connectivity MapReduce tasks.
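
    A toy Python sketch of the counting-and-rerouting idea (the function names and skew threshold are invented): map-side counters estimate per-partition load, and partitions whose load exceeds a threshold have their surplus redirected to the least-loaded reducer in the next partitioning step.

    ```python
    from collections import Counter

    def plan_reroutes(partition_counts, n_reducers, skew_factor=2.0):
        """Return {skewed_partition: target_reducer} so that surplus
        records are redirected to lightly loaded reducers."""
        mean_load = sum(partition_counts.values()) / n_reducers
        load = {r: partition_counts.get(r, 0) for r in range(n_reducers)}
        reroutes = {}
        for part, count in sorted(partition_counts.items(),
                                  key=lambda kv: -kv[1]):
            if count > skew_factor * mean_load:          # skewed partition
                target = min(load, key=load.get)         # least-loaded
                reroutes[part] = target
                surplus = count - int(mean_load)
                load[part] -= surplus
                load[target] += surplus
        return reroutes

    keys = ["hot"] * 900 + [f"k{i}" for i in range(300)]
    counts = Counter(hash(k) % 4 for k in keys)          # map-side counting
    print("per-partition counts:", dict(counts))
    print("reroute plan:", plan_reroutes(counts, n_reducers=4))
    ```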

  7. Entropy Beacon: A Hairpin-Free DNA Amplification Strategy for Efficient Detection of Nucleic Acids

    PubMed Central

    2015-01-01

    Here, we propose an efficient strategy for enzyme- and hairpin-free nucleic acid detection called an entropy beacon (abbreviated as Ebeacon). Different from previously reported DNA hybridization/displacement-based strategies, Ebeacon is driven forward by increases in the entropy of the system, instead of free energy released from new base-pair formation. Ebeacon shows high sensitivity, with a detection limit of 5 pM target DNA in buffer and 50 pM in cellular homogenate. Ebeacon also benefits from the hairpin-free amplification strategy and zero-background, excellent thermostability from 20 °C to 50 °C, as well as good resistance to complex environments. In particular, based on the huge difference between the breathing rate of a single base pair and two adjacent base pairs, Ebeacon also shows high selectivity toward base mutations, such as substitution, insertion, and deletion and, therefore, is an efficient nucleic acid detection method, comparable to most reported enzyme-free strategies. PMID:26505212

  8. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
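
    To make the dataflow idea concrete, here is a minimal executor (an illustration only, not the Copernicus API): any task whose declared inputs are available is run, so independent simulations proceed in parallel and a combining analysis fires once they finish.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def run_dataflow(tasks, deps):
        """tasks: name -> callable; deps: name -> list of input task names.
        Executes each task once all of its dependencies have results."""
        results, pending = {}, dict(deps)
        with ThreadPoolExecutor() as pool:
            while pending:
                ready = [t for t, d in pending.items()
                         if all(x in results for x in d)]
                futures = {t: pool.submit(tasks[t],
                                          *(results[x] for x in pending[t]))
                           for t in ready}
                for t, f in futures.items():
                    results[t] = f.result()
                    del pending[t]
        return results

    # Adaptive-sampling flavour: three short "simulations" feed one analysis.
    tasks = {
        "sim1": lambda: 1.0, "sim2": lambda: 2.0, "sim3": lambda: 4.0,
        "analysis": lambda a, b, c: (a + b + c) / 3.0,  # combine results
    }
    deps = {"sim1": [], "sim2": [], "sim3": [],
            "analysis": ["sim1", "sim2", "sim3"]}
    print(run_dataflow(tasks, deps)["analysis"])
    ```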

  9. Steroid hormones in environmental matrices: extraction method comparison.

    PubMed

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability of the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choices in environmental sample analysis.

  10. Sampling strategies for estimating brook trout effective population size

    Treesearch

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  11. Control of Plasmodium knowlesi malaria

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Hasan, Yahya Abu; Abdullah, Farah Aini

    2015-10-01

    The most significant and efficient measures against Plasmodium knowlesi outbreaks are efficient anti-malaria drugs, biological control in the form of predatory mosquitoes, and culling strategies. In this paper, optimal control theory is applied to a system of ordinary differential equations describing the disease transmission, and Pontryagin's Maximum Principle is applied to analyse the controls. To this end, three control strategies representing biological control, culling and treatment were incorporated into the disease transmission model. The simulation results show that implementing the combination strategy during the epidemic is the most cost-effective strategy against disease transmission.
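
    The abstract does not reproduce the objective functional; a generic quadratic-cost form often paired with Pontryagin's Maximum Principle in epidemic models of this kind is sketched below. The weights and the quadratic control costs are assumptions, not quoted from the paper.

    ```latex
    % Illustrative objective: u1 = biological control, u2 = culling,
    % u3 = treatment; I(t) = infected population; A, B_i = weights.
    \[
      \min_{u_1,u_2,u_3}\; J(u_1,u_2,u_3)
        = \int_0^T \Big( A\, I(t) + B_1 u_1^2(t) + B_2 u_2^2(t)
                         + B_3 u_3^2(t) \Big)\, dt
    \]
    % subject to the disease-transmission ODE system; the Maximum
    % Principle then yields adjoint equations and a Hamiltonian whose
    % pointwise minimization characterizes the optimal controls.
    ```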

  12. Studies toward brevisulcenal F via convergent strategies for marine ladder polyether synthesis.

    PubMed

    Katcher, Matthew; Jamison, Timothy F

    2018-03-15

    Shortly after the initial isolation of marine ladder polyether natural products, biomimetic epoxide-opening cascade reactions were proposed as an efficient strategy for the synthesis of these compounds. However, difficulties in assembling the cascade precursors have limited the realization of these cascades. In this report, we describe strategies that provide convergent access to cascade precursors via regioselective allylation and efficient fragment coupling. We then investigate epoxide-opening cascades promoted by strong bases for the formation of fused tetrahydropyrans. These strategies are evaluated in the context of the synthesis of rings CDEFG of brevisulcenal F.

  13. Chirosurveillance: The use of native bats to detect invasive agricultural pests.

    PubMed

    Maslo, Brooke; Valentin, Rafael; Leu, Karen; Kerwin, Kathleen; Hamilton, George C; Bevan, Amanda; Fefferman, Nina H; Fonseca, Dina M

    2017-01-01

    Invasive insect pests cost the agricultural industry billions of dollars annually in crop losses. Timely detection of pests is critical for management efficiency. Innovative pest detection strategies, such as environmental DNA (eDNA) techniques, combined with efficient predators, maximize sampling resolution across space and time and may improve surveillance. We tested the hypothesis that temperate insectivorous bats can be important sentinels of agricultural insect pest surveillance. Specifically, we used a new high-sensitivity molecular assay for invasive brown marmorated stink bugs (Halyomorpha halys) to examine the extent to which big brown bats (Eptesicus fuscus) detect agricultural pests in the landscape. We documented consistent seasonal predation of stink bugs by big brown bats. Importantly, bats detected brown marmorated stink bugs 3-4 weeks earlier than the current standard monitoring tool, blacklight traps, across all sites. We highlight here the previously unrecognized potential ecosystem service of bats as agents of pest surveillance (or chirosurveillance). Additional studies examining interactions between other bat and insect pest species, coupled with comparisons of detectability among various conventional monitoring methods, are needed to verify the patterns extracted from this study. Ultimately, robust economic analyses will be needed to assess the cost-effectiveness of chirosurveillance as a standard strategy for integrated pest management.

  14. Efficiency of a turbidity-based, real-time control strategy applied to a retention tank: a simulation study.

    PubMed

    Lacour, C; Joannis, C; Schuetze, M; Chebbo, G

    2011-01-01

    This paper compares several real-time control (RTC) strategies for a generic configuration consisting of a storage tank with two overflow facilities. Two of the strategies only make use of flow rate data, while the third also introduces turbidity data in order to exercise dynamic control between the two overflow locations. The efficiency of each strategy is compared over a wide range of system setups, described by two parameters. This assessment is performed by simulating the application of the control strategies to actual measurement time series recorded at two sites. Adding turbidity measurements into an RTC strategy leads to a significant reduction in the annual overflow pollutant load. The pollutant spills spared by such a control strategy strongly depend on the site and on the flow-rate-based strategy considered as a reference. With the datasets used in this study, values ranging from 5 to 50% were obtained.

  15. Practice makes proficient: pigeons (Columba livia) learn efficient routes on full-circuit navigational traveling salesperson problems.

    PubMed

    Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L

    2015-01-01

    Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP.
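
    The baselines above are easy to reproduce computationally; this sketch (with invented coordinates) compares chance-level route lengths, a nearest-neighbor route, and the optimal full circuit for a six-goal set.

    ```python
    import itertools, math, random

    def tour_length(points, order):
        route = list(order) + [order[0]]          # full circuit: return home
        return sum(math.dist(points[a], points[b])
                   for a, b in zip(route, route[1:]))

    def nearest_neighbor(points, start=0):
        left, order = set(range(len(points))) - {start}, [start]
        while left:
            nxt = min(left, key=lambda j: math.dist(points[order[-1]],
                                                    points[j]))
            order.append(nxt)
            left.remove(nxt)
        return order

    random.seed(2)
    pts = [(random.random(), random.random()) for _ in range(6)]
    chance = sum(tour_length(pts, random.sample(range(6), 6))
                 for _ in range(1000)) / 1000
    nn = tour_length(pts, nearest_neighbor(pts))
    best = min(tour_length(pts, [0] + list(p))
               for p in itertools.permutations(range(1, 6)))
    print(f"chance {chance:.3f}  nearest-neighbor {nn:.3f}  optimal {best:.3f}")
    ```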

  16. A dilute-and-shoot sample preparation strategy for new and used lubricating oils for Ca, P, S and Zn determination by total reflection X-ray fluorescence

    NASA Astrophysics Data System (ADS)

    Mota, Mariana F. B.; Gama, Ednilton M.; Rodrigues, Gabrielle de C.; Rodrigues, Guilherme D.; Nascentes, Clésia C.; Costa, Letícia M.

    2018-01-01

    In this work, a dilute-and-shoot method was developed for Ca, P, S and Zn determination in new and used lubricating oil samples by total reflection X-ray fluorescence (TXRF). The oil samples were diluted with organic solvents followed by addition of yttrium as internal standard, and the TXRF measurements were performed after solvent evaporation. The method was optimized using an interlaboratory reference material. The experimental parameters evaluated were sample volume (50 or 100 μL), measurement time (250 or 500 s) and volume deposited on the quartz glass sample carrier (5 or 10 μL). All of them were evaluated and optimized using xylene, kerosene and hexane. Analytical figures of merit (accuracy, precision, limits of detection and quantification) were used to evaluate the performance of the analytical method for all solvents. The recovery rates varied from 99 to 111% and the relative standard deviation remained between 1.7% and 10% (n = 8). For all elements, the results obtained by applying the new method were in agreement with the certified value. After the validation step, the method was applied for Ca, P, S and Zn quantification in eight new and four used lubricating oil samples, for all solvents. The concentration of the elements in the samples varied in the ranges of 1620-3711 mg L-1 for Ca, 704-1277 mg L-1 for P, 2027-9147 mg L-1 for S, and 898-1593 mg L-1 for Zn. The association of TXRF with a dilute-and-shoot sample preparation strategy was efficient for Ca, P, S and Zn determination in lubricating oils, providing accurate results. Additionally, the time required for analysis is short, the reagent volumes are low, minimizing waste generation, and the technique does not require calibration curves.
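
    The quantification equation is not given in the abstract; the conventional internal-standard relation used in TXRF (textbook form, with yttrium as the added standard) is shown below, though the exact calibration procedure used in the paper may differ.

    ```latex
    % Conventional TXRF internal-standard quantification; N = net peak
    % intensity, S = relative elemental sensitivity, Y = yttrium internal
    % standard. Assumed textbook form, not quoted from the paper.
    \[
      C_i \;=\; C_{\mathrm{Y}} \cdot \frac{N_i}{N_{\mathrm{Y}}} \cdot
                \frac{S_{\mathrm{Y}}}{S_i}
    \]
    ```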

  17. Intermittent search strategies

    NASA Astrophysics Data System (ADS)

    Bénichou, O.; Loverdo, C.; Moreau, M.; Voituriez, R.

    2011-01-01

    This review examines intermittent target search strategies, which combine phases of slow motion, allowing the searcher to detect the target, and phases of fast motion during which targets cannot be detected. It is first shown that intermittent search strategies are actually widely observed at various scales. At the macroscopic scale, this is, for example, the case of animals looking for food; at the microscopic scale, intermittent transport patterns are involved in a reaction pathway of DNA-binding proteins as well as in intracellular transport. Second, generic stochastic models are introduced, which show that intermittent strategies are efficient strategies that enable the minimization of search time. This suggests that the intrinsic efficiency of intermittent search strategies could justify their frequent observation in nature. Last, beyond these modeling aspects, it is proposed that intermittent strategies could also be used in a broader context to design and accelerate search processes.
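
    A toy Monte Carlo of the two-phase idea helps fix intuition: a slow diffusive phase during which the target can be detected alternates with a fast ballistic phase that is blind. All parameters are illustrative and are not taken from the review's models; the printout simply shows how the mean search time shifts as the slow-phase duration is varied.

    ```python
    import random

    def search_time(L=200, tau_slow=5.0, tau_fast=3.0, v=5, rng=random):
        """Time until the searcher first sits on the target site of a ring."""
        x, target, t = 0, L // 2, 0.0
        while True:
            # Slow phase: one diffusive step per unit time, detection on.
            for _ in range(max(1, int(rng.expovariate(1.0 / tau_slow)))):
                if x == target:
                    return t
                x = (x + rng.choice((-1, 1))) % L
                t += 1.0
            # Fast phase: ballistic at speed v, detection off.
            duration = rng.expovariate(1.0 / tau_fast)
            x = (x + int(v * duration)) % L
            t += duration

    random.seed(3)
    for tau_slow in (2.0, 5.0, 20.0):
        mean = sum(search_time(tau_slow=tau_slow) for _ in range(200)) / 200
        print(f"tau_slow={tau_slow:5.1f} -> mean search time {mean:8.1f}")
    ```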

  18. Human Finger-Prick Induced Pluripotent Stem Cells Facilitate the Development of Stem Cell Banking

    PubMed Central

    Tan, Hong-Kee; Toh, Cheng-Xu Delon; Ma, Dongrui; Yang, Binxia; Liu, Tong Ming; Lu, Jun; Wong, Chee-Wai; Tan, Tze-Kai; Li, Hu; Syn, Christopher; Tan, Eng-Lee; Lim, Bing; Lim, Yoon-Pin; Cook, Stuart A.

    2014-01-01

    Induced pluripotent stem cells (iPSCs) derived from somatic cells of patients can be a good model for studying human diseases and for future therapeutic regenerative medicine. Current initiatives to establish human iPSC (hiPSC) banking face challenges in recruiting large numbers of donors with diverse disease, genetic, and phenotypic representations. In this study, we describe the efficient derivation of transgene-free hiPSCs from human finger-prick blood. Finger-prick sample collection can be performed on a “do-it-yourself” basis by donors and sent to the hiPSC facility for reprogramming. We show that single-drop volumes of finger-prick samples are sufficient for performing cellular reprogramming, DNA sequencing, and blood serotyping in parallel. Our novel strategy has the potential to facilitate the development of large-scale hiPSC banking worldwide. PMID:24646489

  19. 3D resolved mapping of optical aberrations in thick tissues

    PubMed Central

    Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine

    2012-01-01

    We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanetism region size over which aberrations can be uniformly corrected. This method and data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353
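
    The measurement principle lends itself to a compact sketch: apply a few known bias aberrations to one mode, record the image-quality metric, and fit a parabola to locate its maximum. The toy metric below stands in for a real microscope measurement; the mode, units and metric shape are illustrative only.

    ```python
    import numpy as np

    def estimate_mode(measure, biases=(-1.0, 0.0, 1.0)):
        """Return the bias maximizing a parabola fitted through the
        image-quality metric sampled at each applied bias."""
        m = np.array([measure(b) for b in biases])
        a, b, _ = np.polyfit(biases, m, 2)   # exact fit through 3 points
        return -b / (2.0 * a)                # vertex of the parabola

    # Toy metric peaking at the (unknown) best corrective bias of -0.4.
    metric = lambda bias: np.exp(-(bias + 0.4) ** 2)
    print(f"estimated optimal bias: {estimate_mode(metric):+.2f}")
    ```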

  20. Strategy for Sensitive and Specific Detection of Yersinia pestis in Skeletons of the Black Death Pandemic

    PubMed Central

    Seifert, Lisa; Harbeck, Michaela; Thomas, Astrid; Hoke, Nadja; Zöller, Lothar; Wiechmann, Ingrid; Grupe, Gisela; Scholz, Holger C.; Riehm, Julia M.

    2013-01-01

    Yersinia pestis has been identified as the causative agent of the Black Death pandemic in the 14th century. However, retrospective diagnostics in human skeletons after more than 600 years are critical. We describe a strategy following a modern diagnostic algorithm and working under strict ancient DNA regime for the identification of medieval human plague victims. An initial screening and DNA quantification assay detected the Y. pestis specific pla gene of the high copy number plasmid pPCP1. Results were confirmed by conventional PCR and sequence analysis targeting both Y. pestis specific virulence plasmids pPCP1 and pMT1. All assays were meticulously validated according to human clinical diagnostics requirements (ISO 15189) regarding efficiency, sensitivity, specificity, and limit of detection (LOD). Assay specificity was 100% tested on 41 clinically relevant bacteria and 29 Y. pseudotuberculosis strains as well as for DNA of 22 Y. pestis strains and 30 previously confirmed clinical human plague samples. The optimized LOD was down to 4 gene copies. 29 individuals from three different multiple inhumations were initially assessed as possible victims of the Black Death pandemic. 7 samples (24%) were positive in the pPCP1 specific screening assay. Confirmation through second target pMT1 specific PCR was successful for 4 of the positive individuals (14%). A maximum of 700 and 560 copies per µl aDNA were quantified in two of the samples. Those were positive in all assays including all repetitions, and are candidates for future continuative investigations such as whole genome sequencing. We discuss that all precautions taken here for the work with aDNA are sufficient to prevent external sample contamination and fulfill the criteria of authenticity. With regard to retrospective diagnostics of a human pathogen and the uniqueness of ancient material we strongly recommend using a careful strategy and validated assays as presented in our study. PMID:24069445

  1. Strategy for sensitive and specific detection of Yersinia pestis in skeletons of the black death pandemic.

    PubMed

    Seifert, Lisa; Harbeck, Michaela; Thomas, Astrid; Hoke, Nadja; Zöller, Lothar; Wiechmann, Ingrid; Grupe, Gisela; Scholz, Holger C; Riehm, Julia M

    2013-01-01

    Yersinia pestis has been identified as the causative agent of the Black Death pandemic in the 14th century. However, retrospective diagnostics in human skeletons after more than 600 years are critical. We describe a strategy following a modern diagnostic algorithm and working under strict ancient DNA regime for the identification of medieval human plague victims. An initial screening and DNA quantification assay detected the Y. pestis specific pla gene of the high copy number plasmid pPCP1. Results were confirmed by conventional PCR and sequence analysis targeting both Y. pestis specific virulence plasmids pPCP1 and pMT1. All assays were meticulously validated according to human clinical diagnostics requirements (ISO 15189) regarding efficiency, sensitivity, specificity, and limit of detection (LOD). Assay specificity was 100% tested on 41 clinically relevant bacteria and 29 Y. pseudotuberculosis strains as well as for DNA of 22 Y. pestis strains and 30 previously confirmed clinical human plague samples. The optimized LOD was down to 4 gene copies. 29 individuals from three different multiple inhumations were initially assessed as possible victims of the Black Death pandemic. 7 samples (24%) were positive in the pPCP1 specific screening assay. Confirmation through second target pMT1 specific PCR was successful for 4 of the positive individuals (14%). A maximum of 700 and 560 copies per µl aDNA were quantified in two of the samples. Those were positive in all assays including all repetitions, and are candidates for future continuative investigations such as whole genome sequencing. We discuss that all precautions taken here for the work with aDNA are sufficient to prevent external sample contamination and fulfill the criteria of authenticity. With regard to retrospective diagnostics of a human pathogen and the uniqueness of ancient material we strongly recommend using a careful strategy and validated assays as presented in our study.

  2. Enhanced Genetic Analysis of Single Human Bioparticles Recovered by Simplified Micromanipulation from Forensic ‘Touch DNA’ Evidence

    PubMed Central

    Farash, Katherine; Hanson, Erin K.; Ballantyne, Jack

    2015-01-01

    DNA profiles can be obtained from ‘touch DNA’ evidence, which comprises microscopic traces of human biological material. Current methods for the recovery of trace DNA employ cotton swabs or adhesive tape to sample an area of interest. However, such a ‘blind-swabbing’ approach will co-sample cellular material from the different individuals, even if the individuals’ cells are located in geographically distinct locations on the item. Thus, some of the DNA mixtures encountered in touch DNA samples are artificially created by the swabbing itself. In some instances, a victim’s DNA may be found in significant excess thus masking any potential perpetrator’s DNA. In order to circumvent the challenges with standard recovery and analysis methods, we have developed a lower cost, ‘smart analysis’ method that results in enhanced genetic analysis of touch DNA evidence. We describe an optimized and efficient micromanipulation recovery strategy for the collection of bio-particles present in touch DNA samples, as well as an enhanced amplification strategy involving a one-step 5 µl microvolume lysis/STR amplification to permit the recovery of STR profiles from the bio-particle donor(s). The use of individual or few (i.e., “clumps”) bioparticles results in the ability to obtain single source profiles. These procedures represent alternative enhanced techniques for the isolation and analysis of single bioparticles from forensic touch DNA evidence. While not necessary in every forensic investigation, the method could be highly beneficial for the recovery of a single source perpetrator DNA profile in cases involving physical assault (e.g., strangulation) that may not be possible using standard analysis techniques. Additionally, the strategies developed here offer an opportunity to obtain genetic information at the single cell level from a variety of other non-forensic trace biological material. PMID:25867046

  3. Virtual quantification of metabolites by capillary electrophoresis-electrospray ionization-mass spectrometry: predicting ionization efficiency without chemical standards.

    PubMed

    Chalcraft, Kenneth R; Lee, Richard; Mills, Casandra; Britz-McKibbin, Philip

    2009-04-01

    A major obstacle in metabolomics remains the identification and quantification of a large fraction of unknown metabolites in complex biological samples when purified standards are unavailable. Herein we introduce a multivariate strategy for de novo quantification of cationic/zwitterionic metabolites using capillary electrophoresis-electrospray ionization-mass spectrometry (CE-ESI-MS) based on fundamental molecular, thermodynamic, and electrokinetic properties of an ion. Multivariate calibration was used to derive a quantitative relationship between the measured relative response factor (RRF) of polar metabolites with respect to four physicochemical properties associated with ion evaporation in ESI-MS, namely, molecular volume (MV), octanol-water distribution coefficient (log D), absolute mobility (mu(o)), and effective charge (z(eff)). Our studies revealed that a limited set of intrinsic solute properties can be used to predict the RRF of various classes of metabolites (e.g., amino acids, amines, peptides, acylcarnitines, nucleosides, etc.) with reasonable accuracy and robustness provided that an appropriate training set is validated and ion responses are normalized to an internal standard(s). The applicability of the multivariate model to quantify micromolar levels of metabolites spiked in red blood cell (RBC) lysates was also examined by CE-ESI-MS without significant matrix effects caused by involatile salts and/or major co-ion interferences. This work demonstrates the feasibility for virtual quantification of low-abundance metabolites and their isomers in real-world samples using physicochemical properties estimated by computer modeling, while providing deeper insight into the wide disparity of solute responses in ESI-MS. New strategies for predicting ionization efficiency in silico allow for rapid and semiquantitative analysis of newly discovered biomarkers and/or drug metabolites in metabolomics research when chemical standards do not exist.
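
    A sketch of the multivariate calibration idea: regress measured relative response factors (RRF) on the four descriptors named in the abstract. The data below are synthetic and the linear model form is an assumption; the paper's actual calibration may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 60                                     # training metabolites
    X = np.column_stack([
        rng.uniform(80, 400, n),               # molecular volume (MV)
        rng.uniform(-5, 1, n),                 # log D
        rng.uniform(20, 60, n),                # absolute mobility mu_o
        rng.uniform(0.2, 2.0, n),              # effective charge z_eff
    ])
    beta_true = np.array([-0.002, 0.15, 0.01, 0.4])
    y = 1.0 + X @ beta_true + rng.normal(0, 0.05, n)   # log RRF + noise

    A = np.column_stack([np.ones(n), X])       # design matrix with intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict_log_rrf(mv, log_d, mu_o, z_eff):
        """Virtual quantification: predicted log RRF for a new metabolite."""
        return float(beta @ np.array([1.0, mv, log_d, mu_o, z_eff]))

    print("predicted log RRF:", predict_log_rrf(180.0, -2.1, 35.0, 1.0))
    ```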

  4. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
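
    A minimal sketch of the variogram idea behind VARS: for parameter direction i, gamma_i(h) = 0.5 E[(y(x + h e_i) - y(x))^2]; a larger gamma at small h flags a direction to which the response is more sensitive. This illustrates the concept only, not the published STAR-VARS algorithm or its star-based sampling.

    ```python
    import numpy as np

    def directional_variogram(model, dim, n_dims, hs=(0.05, 0.1, 0.2), n=500):
        rng = np.random.default_rng(5)
        base = rng.uniform(0.0, 1.0 - max(hs), size=(n, n_dims))
        gammas = {}
        for h in hs:
            shifted = base.copy()
            shifted[:, dim] += h                     # step along one axis
            diffs = model(shifted) - model(base)
            gammas[h] = 0.5 * float(np.mean(diffs ** 2))
        return gammas

    def toy_model(x):        # steep in parameter 0, gentle in parameter 1
        return np.sin(6.0 * x[:, 0]) + 0.2 * x[:, 1]

    for dim in (0, 1):
        print(f"parameter {dim}:", directional_variogram(toy_model, dim, 2))
    ```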

  5. The Ability of Analysts' Recommendations to Predict Optimistic and Pessimistic Forecasts

    PubMed Central

    Biglari, Vahid; Alfan, Ervina Binti; Ahmad, Rubi Binti; Hajian, Najmeh

    2013-01-01

    Previous research shows that buy (growth) companies conduct income-increasing earnings management in order to meet forecasts and generate positive forecast errors (FEs). This behavior, however, is not inherent in sell (non-growth) companies. Against this background, this research hypothesizes that since sell companies are pressured to avoid income-increasing earnings management, they are capable, and in fact more inclined, to pursue income-decreasing forecast management (FM) with the purpose of generating positive FEs. Using a sample of 6553 firm-years of companies listed on the NYSE between 2005 and 2010, the study determines that sell companies conduct income-decreasing FM to generate positive FEs. However, the frequency of positive FEs of sell companies does not exceed that of buy companies. From an efficiency perspective, the study suggests that even though buy and sell companies have immense motivation to avoid negative FEs, they exploit different but efficient strategies, respectively, to meet forecasts. Furthermore, the findings illuminate the complexities behind informative and opportunistic forecasts that fall under the efficiency versus opportunism theories in the literature. PMID:24146741

  6. The ability of analysts' recommendations to predict optimistic and pessimistic forecasts.

    PubMed

    Biglari, Vahid; Alfan, Ervina Binti; Ahmad, Rubi Binti; Hajian, Najmeh

    2013-01-01

    Previous research shows that buy (growth) companies conduct income-increasing earnings management in order to meet forecasts and generate positive forecast errors (FEs). This behavior, however, is not inherent in sell (non-growth) companies. Against this background, this research hypothesizes that since sell companies are pressured to avoid income-increasing earnings management, they are capable, and in fact more inclined, to pursue income-decreasing forecast management (FM) with the purpose of generating positive FEs. Using a sample of 6553 firm-years of companies listed on the NYSE between 2005 and 2010, the study determines that sell companies conduct income-decreasing FM to generate positive FEs. However, the frequency of positive FEs of sell companies does not exceed that of buy companies. From an efficiency perspective, the study suggests that even though buy and sell companies have immense motivation to avoid negative FEs, they exploit different but efficient strategies, respectively, to meet forecasts. Furthermore, the findings illuminate the complexities behind informative and opportunistic forecasts that fall under the efficiency versus opportunism theories in the literature.

  7. Breaking through the uncertainty ceiling in LA-ICP-MS U-Pb geochronology

    NASA Astrophysics Data System (ADS)

    Horstwood, M.

    2016-12-01

    Sources of systematic uncertainty associated with session-to-session bias are the dominant contributor to the 2% (2s) uncertainty ceiling that currently limits the accuracy of LA-ICP-MS U-Pb geochronology. Sources include differential downhole fractionation (LIEF), 'matrix effects' and ablation volume differences, which result in irreproducibility of the same reference material across sessions. Current mitigation methods include correcting for LIEF mathematically, using matrix-matched reference materials, annealing material to reduce or eliminate radiation damage effects and tuning for robust plasma conditions. Reducing the depth and volume of ablation can also mitigate these problems and should contribute to the reduction of the uncertainty ceiling. Reducing analysed volume leads to increased detection efficiency, reduced matrix effects, eliminates LIEF, obviates ablation rate differences and reduces the likelihood of intercepting complex growth zones with depth, thereby apparently improving material homogeneity. High detection efficiencies (% level) and low sampling volumes (20 µm box, 1-2 µm deep) can now be achieved using MC-ICP-MS such that low volume ablations should be considered part of the toolbox of methods targeted at improving the reproducibility of LA-ICP-MS U-Pb geochronology. In combination with other strategies these improvements should be feasible on any ICP platform. However, reducing the volume of analysis reduces detected counts and requires a change of analytical approach in order to mitigate this. Appropriate strategies may include the use of high efficiency cell and torch technologies and the optimisation of acquisition protocols and data handling techniques such as condensing signal peaks, using log ratios and total signal integration. The tools required to break the 2% (2s) uncertainty ceiling in LA-ICP-MS U-Pb geochronology are likely now known but require a coherent strategy and change of approach to combine their implementation and realise this goal. This study will highlight these changes and efforts towards reducing the uncertainty contribution for LA-ICP-MS U-Pb geochronology.

  8. Evaluating energy efficient strategies and product quality for distillers' dried grains with solubles (DDGS) in dry-grind ethanol plants

    NASA Astrophysics Data System (ADS)

    Lan, Tian

    The drying of distillers dried grains with solubles (DDGS), a coproduct of dry-grind corn processing to ethanol, consumes about 30% of the total energy required for the production of a liter of fuel ethanol. Therefore, improving DDGS drying energy efficiency could have a significant impact on the economics of the dry-grind corn-to-ethanol process. Drying process improvements must take into account the effects of various drying strategies on the final quality of DDGS, which is primarily utilized as a feed ingredient. Previous studies in the literature have shown that the physical and chemical properties of DDGS vary according to the ratio of the two primary feed streams, wet distillers grains (WDG) and condensed distillers solubles (CDS), which make up DDGS. Extensive research using plant-scale and bench-scale experiments has been conducted on the effect of process variables (ratios of WDG, CDS and DDGS add-back) during drying on the physical and chemical properties of DDGS. However, these investigations did not correlate the product characteristics data to drying efficiency. Additionally, it cannot be clearly determined from the literature on DDGS drying whether processes used in the industry are optimized for both product quality and energy efficiency. A bench-scale rotary drum dryer heated by an electrically powered heat gun was used to investigate the effects of WDG, CDS and add-back ratios on energy efficiency, drying performance, and DDGS physical and chemical properties. A two-stage drying process with the bench-scale rotary dryer was used to simulate DDGS drying using ICM (ICM, Inc., Colwich, KS) dry-grind process technology, which uses two rotary drum dryers in series. Effects of the drying process variables, CDS content (0, 10, 20 and 40% by mass) and percent DDGS add-back (0, 20, 40 and 60% by mass), on energy performance and product quality were determined. Sixteen different drying strategies based on drying process variable ratios were tested, and the response variables measured included energy performance (specific power consumption, energy efficiency, drying efficiency, drying rate), physical properties [particle size distribution (PSD), geometric mean particle size (dgw), bulk density, tapped bulk density, true density, color, compressibility index (CI), Hausner ratio (HR)], and chemical properties [acid detergent fiber (ADF), neutral detergent fiber (NDF), oil, crude protein, starch, ash, etc.]. The results of the bench-scale study were also compared with data from a previous plant-scale DDGS production process investigation that used similar drying strategies. Results from the experiments indicated that among all 16 drying strategies, the 10% CDS content and 60% DDGS add-back strategy achieved the lowest specific power consumption (SPC) while the 40% CDS content and 20% DDGS add-back strategy had the highest SPC. The energy efficiency and drying efficiency of the bench-scale data in both drying stage I and drying stage II presented similar trends as process parameters changed. The highest energy and drying efficiencies were achieved in strategies with 10% CDS content while the lowest were in strategies with 40% CDS content. A comparison of the energy and drying efficiencies for the bench-scale strategies conducted in this study with those of similar plant-scale strategies from a previous study showed a similar trend in the data for drying stage I, even though the actual numbers were quite different for the two experimental scales.
On average, the energy and drying efficiencies for the bench-scale study were about 40% lower than those of the corresponding plant-scale strategies. CDS content had the most influence on the energy performance during DDGS drying, while percent DDGS add-back had more impact on the SPC at a constant CDS content level. Comparing both the physical properties, in particular bulk density, which relates to logistics, and the energy performance data, the drying strategy with 20% CDS and 60% add-back performed the best. It is therefore not surprising that this is the strategy used by the ICM drying process technology for DDGS. The particle size (dgw) and particle size distribution (PSD) of DDGS varied with CDS content, while percent DDGS add-back had no effect on either PSD or dgw. Under the same drying strategy, drying stage I always had a higher drying rate than stage II. Also, the drying curves under the same CDS content showed similar shapes. As CDS content increased, the color of DDGS became darker and both DDGS bulk density and tapped bulk density increased. In addition, CI and HR values decreased, ADF and NDF contents decreased, and oil and ash contents increased with increased CDS content. Changes in percent DDGS add-back had a negligible effect on the DDGS chemical composition. Overall, the physical and chemical composition analyses of DDGS for both bench-scale and plant-scale studies followed similar trends.
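
    As a quick illustration of how energy metrics of this kind are compared across strategies, the sketch below (Python) tabulates specific power consumption for two hypothetical strategies. All numbers are invented, and SPC is taken here as energy input per unit mass of water evaporated, a common convention in the drying literature rather than necessarily the definition used in this thesis:

        # Compare drying strategies on specific energy per kg of water removed.
        # All values are hypothetical placeholders, not measured data.
        strategies = {
            # (CDS %, add-back %): (energy input kWh, water removed kg)
            (10, 60): (4.2, 6.0),
            (40, 20): (9.5, 6.0),
        }

        for (cds, add_back), (energy_kwh, water_kg) in strategies.items():
            spc = energy_kwh / water_kg  # kWh per kg water evaporated
            print(f"CDS {cds}%, add-back {add_back}%: SPC = {spc:.2f} kWh/kg")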

  9. K-targeted strategy for isolation of phenolic alkaloids of Nelumbo nucifera Gaertn by counter-current chromatography using lysine as a pH regulator.

    PubMed

    Wang, Yanyan; Zhang, Lihong; Zhou, Hui; Guo, Xiuyun; Wu, Shihua

    2017-03-24

    Counter-current chromatography (CCC) is an efficient liquid-liquid partition chromatography technique without a support matrix. Although there have been many significant advancements in the CCC separation of natural products, especially of non-ionic neutral compounds, CCC isolation of ionic compounds, including alkaloids, remains challenging when guided by classical partition coefficients (K) or distribution ratios (KC), because the partition coefficient is generally not equal to the distribution ratio under common ionic conditions. Here, taking the extract of the seed embryo of Nelumbo nucifera Gaertn as a sample, we introduced a modified K-targeted strategy for the isolation of phenolic alkaloids using lysine as a pH regulator. The results indicated that if the amount of a basic regulator such as aqueous ammonia or lysine added to the solvent system was high enough to suppress the ionization of the targeted alkaloids, the distribution ratio of targets with ionic and non-ionic molecular forms became stable and no longer changed with the concentration of the pH regulator. In this case, the distribution ratio of the target was almost equal to the partition coefficient. Thus, the targets could be isolated by K-targeted CCC separation by adding a certain amount of pH regulator to the solvent system. Further experiments also showed that the sample concentration was an important factor affecting the distribution ratio of targets. Meanwhile, CCC experiments indicated that lysine was more suitable than aqueous ammonia for the separation of phenolic alkaloids because the lysine-target complex in the CCC fractions was chemically more stable. Therefore, the preparative CCC separation was performed using 20 mM lysine as a pH regulator with more than 800 mg injection mass. After simple back-extraction with dichloromethane, the lysine in the CCC fraction was removed completely and pure isoliensinine and neferine were obtained. In summary, the results indicated that the modified K-targeted CCC strategy using lysine as the pH regulator was efficient for the isolation of phenolic alkaloids from crude plant extracts. It not only provided a practical strategy for the isolation of neferine and its analogues, but also introduced a powerful method to resolve peak skewing (leading or tailing) in CCC separation of ionic compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
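
    The pH-regulator logic can be made concrete with the Henderson-Hasselbalch relation. The sketch below (Python) assumes a hypothetical basic alkaloid whose neutral form alone partitions into the organic phase with partition coefficient K; the K and pKa values are invented for illustration and are not taken from the paper:

        import math

        def neutral_fraction(pH, pKa):
            # Fraction of a basic alkaloid in its neutral (free-base) form.
            return 1.0 / (1.0 + 10.0 ** (pKa - pH))

        def distribution_ratio(K, pH, pKa):
            # Simplification: only the neutral form partitions.
            return K * neutral_fraction(pH, pKa)

        K, pKa = 2.5, 9.0  # hypothetical values
        for pH in (7.0, 9.0, 11.0):
            print(f"pH {pH}: D = {distribution_ratio(K, pH, pKa):.3f}")

    As the basic regulator raises the pH well above the pKa, ionization is suppressed, the neutral fraction approaches 1, and D approaches K, which is the condition that makes K-targeted solvent selection work.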

  10. The Cost-Effectiveness of Cervical Self-Sampling to Improve Routine Cervical Cancer Screening: The Importance of Respondent Screening History and Compliance.

    PubMed

    Burger, Emily A; Sy, Stephen; Nygård, Mari; Kim, Jane J

    2017-01-01

    Human papillomavirus (HPV) testing allows women to self-collect cervico-vaginal cells at home (i.e., self-sampling). Using primary data from a randomized pilot study, we evaluated the long-term consequences and cost-effectiveness of using self-sampling to improve participation in routine cervical cancer screening in Norway. We compared a strategy reflecting current screening participation (using reminder letters) to strategies that involved mailing self-sampling device kits to women noncompliant with screening within a 5- or 10-year period under two scenarios: (A) self-sampling respondents had moderate under-screening histories, or (B) self-sampling respondents had moderate and severe under-screening histories. Model outcomes included quality-adjusted life-years (QALY) and lifetime costs. The "most cost-effective" strategy was identified as the strategy with an incremental cost-effectiveness ratio just below $100,000 per QALY gained. Mailing self-sampling device kits to all women noncompliant with screening within a 5- or 10-year period can be more effective and less costly than the current reminder letter policy; however, the optimal self-sampling strategy depends on the profile of self-sampling respondents. For example, "10-yearly self-sampling" is preferred ($95,500 per QALY gained) if "5-yearly self-sampling" can only attract moderate under-screeners; however, "5-yearly self-sampling" is preferred if this strategy can additionally attract severe under-screeners. Targeted self-sampling of noncompliers likely represents good value for money; however, the preferred strategy is contingent on the screening histories and compliance of respondents. The magnitude of the health benefit and the optimal self-sampling strategy depend on the profile and behavior of respondents. Health authorities should understand these factors prior to selecting and implementing a self-sampling policy. Cancer Epidemiol Biomarkers Prev; 26(1); 95-103. ©2016 American Association for Cancer Research.
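
    The decision rule described here is the standard incremental cost-effectiveness comparison. A minimal sketch (Python) with invented costs and QALYs, selecting the most effective strategy whose incremental ratio stays below the willingness-to-pay threshold:

        THRESHOLD = 100_000  # $ per QALY gained

        strategies = [  # (name, lifetime cost $, QALYs) -- hypothetical values
            ("reminder letters", 1000.0, 20.000),
            ("10-yearly self-sampling", 1150.0, 20.002),
            ("5-yearly self-sampling", 1500.0, 20.005),
        ]

        strategies.sort(key=lambda s: s[1])  # order by increasing cost
        best = strategies[0]
        for name, cost, qaly in strategies[1:]:
            d_cost, d_qaly = cost - best[1], qaly - best[2]
            if d_qaly <= 0:        # dominated: costs more, gains nothing
                continue
            if d_cost / d_qaly < THRESHOLD:
                best = (name, cost, qaly)   # adopt the costlier strategy
        print("preferred strategy:", best[0])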

  11. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) the optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with the greatest net economic benefit for ground-water management.
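
    The four-step loop lends itself to a compact sketch. The two optimization models below are hypothetical stand-ins (simple closed-form functions), not the chance-constrained or integer-programming models of the paper; the point is only the structure of the data-worth calculation:

        def management_cost(uncertainty):
            # Steps 1 and 3 stand-in: cost grows with model uncertainty.
            return 1000.0 + 5000.0 * uncertainty

        def projected_uncertainty(budget, u0=1.0):
            # Step 2 stand-in: more sampling budget, lower uncertainty.
            return u0 / (1.0 + 0.002 * budget)

        u0 = 1.0
        base_cost = management_cost(u0)
        alternatives = []
        for budget in (0, 500, 1000, 2000, 4000):   # candidate budgets
            u = projected_uncertainty(budget, u0)   # step 2
            worth = (base_cost - management_cost(u)) - budget  # step 4
            alternatives.append((worth, budget))
        print("budget with greatest net benefit:", max(alternatives)[1])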

  12. Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.

    PubMed

    Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie

    2017-01-01

    Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
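
    One plausible shape for such a classification strategy is a dimensionality-reduction-plus-classifier pipeline over binned spectra. The sketch below (Python, scikit-learn) uses synthetic "spectra" in which each species has one characteristic marker peak; it illustrates the workflow only and is not the chapter's protocol:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # 60 synthetic NMR spectra (500 bins), 3 "species", one marker each.
        X = rng.normal(0.0, 0.1, (60, 500))
        y = np.repeat([0, 1, 2], 20)
        for cls, pos in enumerate((100, 250, 400)):
            X[y == cls, pos] += 1.0

        clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(3))
        print("cross-validated accuracy:",
              cross_val_score(clf, X, y, cv=5).mean())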

  13. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments assimilating simulated observations into the bivariate Lorenz 95 model.
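
    The Schur-product construction itself is compact. A minimal sketch (Python/NumPy) for a single periodic state variable, using a generic compactly supported correlation function rather than the paper's multivariate constructions:

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_ens = 40, 10                       # state size, ensemble size
        ens = rng.normal(size=(n, n_ens))       # stand-in background ensemble

        # Sample covariance from a small ensemble is noisy at long range.
        anom = ens - ens.mean(axis=1, keepdims=True)
        P = anom @ anom.T / (n_ens - 1)

        # Distance-dependent correlation, zeroed beyond a cutoff.
        L, cutoff = 5.0, 15.0
        d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        d = np.minimum(d, n - d)                # periodic domain
        rho = np.exp(-0.5 * (d / L) ** 2) * (d <= cutoff)

        P_loc = rho * P                         # Schur (entry-wise) product
        print(abs(P[0, 20]), "->", abs(P_loc[0, 20]))  # long-range sampling
                                                       # noise is damped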

  14. Liquid chromatography with diode array detection and multivariate curve resolution for the selective and sensitive quantification of estrogens in natural waters.

    PubMed

    Pérez, Rocío L; Escandar, Graciela M

    2014-07-04

    Following the green analytical chemistry principles, an efficient strategy involving second-order data provided by liquid chromatography (LC) with diode array detection (DAD) was applied for the simultaneous determination of estriol, 17β-estradiol, 17α-ethinylestradiol and estrone in natural water samples. After a simple pre-concentration step, LC-DAD matrix data were rapidly obtained (in less than 5 min) with a chromatographic system operating isocratically. Applying a second-order calibration algorithm based on multivariate curve resolution with alternating least-squares (MCR-ALS), successful resolution was achieved in the presence of sample constituents that strongly coelute with the analytes. The flexibility of this multivariate model allowed the quantification of the four estrogens in tap, mineral, underground and river water samples. Limits of detection in the range between 3 and 13 ng L(-1), and relative prediction errors from 2 to 11% were achieved. Copyright © 2014 Elsevier B.V. All rights reserved.
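
    The bilinear model behind MCR-ALS is D ≈ C S^T, with non-negative concentration profiles C and spectra S estimated by alternating least squares. A minimal sketch (Python/NumPy) on synthetic coeluting peaks; initialization and constraints are simplified relative to chemometric practice:

        import numpy as np

        def mcr_als(D, n_comp, n_iter=200, seed=0):
            # D: (times x wavelengths). Alternate non-negative LS for C and S.
            rng = np.random.default_rng(seed)
            S = rng.random((D.shape[1], n_comp))
            for _ in range(n_iter):
                C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
                S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
            return C, S

        # Two strongly overlapping elution profiles with distinct spectra.
        t = np.linspace(0, 1, 50)[:, None]
        C_true = np.hstack([np.exp(-((t - 0.45) / 0.08) ** 2),
                            np.exp(-((t - 0.55) / 0.08) ** 2)])
        w = np.linspace(0, 1, 80)[None, :]
        S_true = np.vstack([np.exp(-((w - 0.3) / 0.05) ** 2),
                            np.exp(-((w - 0.7) / 0.05) ** 2)]).T
        D = C_true @ S_true.T

        C, S = mcr_als(D, 2)
        print("relative residual:",
              np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))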

  15. Advances in ultrasensitive mass spectrometry of organic molecules.

    PubMed

    Kandiah, Mathivathani; Urban, Pawel L

    2013-06-21

    Ultrasensitive mass spectrometric analysis of organic molecules is important for various branches of chemistry, and for other fields including physics, earth and environmental sciences, archaeology, biomedicine, and materials science. It finds applications--as an enabling tool--in systems biology, biological imaging, clinical analysis, and forensics. Although there are a number of technical obstacles associated with the analysis of samples by mass spectrometry at the ultratrace level (for example analyte losses during sample preparation, insufficient sensitivity, and ion suppression), several noteworthy developments have been made over the years. They include sensitive ion sources, loss-free interfaces, ion optics components, efficient mass analyzers and detectors, as well as "smart" sample preparation strategies. Some of the mass spectrometric methods published to date achieve sensitivity several orders of magnitude higher than that of alternative approaches. Femto- and attomole level limits of detection are nowadays common, while zepto- and yoctomole level limits of detection have also been reported. We envision that ultrasensitive mass spectrometric assays will soon contribute to new discoveries in bioscience and other areas.

  16. Strategies to improve industrial energy efficiency

    NASA Astrophysics Data System (ADS)

    O'Rielly, Kristine M.

    A lack of technical expertise, fueled by a lack of positive examples, can lead to companies opting not to implement energy reduction projects unless mandated by legislation. As a result, companies are missing out on exceptional opportunities to not only improve their environmental record but also save considerably on fuel costs. This study investigates the broad topic of energy efficiency within the context of the industrial sector by means of a thorough review of existing energy reduction strategies and a demonstration of their successful implementation. The study begins by discussing current industrial energy consumption trends around the globe and within the Canadian manufacturing sector. This is followed by a literature review which outlines three prominent energy efficiency improvement strategies currently available to companies: 1) waste heat recovery, 2) idle power loss reduction and production rate optimization, and 3) auxiliary equipment operational performance. Next, a broad overview of the resources and tools available to organizations looking to improve their industrial energy efficiency is provided. Following this, several case studies are presented which demonstrate the potential benefits available to Canadian organizations looking to improve their energy efficiency. Lastly, a discussion of a number of issues and barriers pertaining to the wide-scale implementation of industrial efficiency strategies is presented, including a lack of energy consumption monitoring and data transparency. While this topic has been well researched in the past in terms of the losses encountered in various general manufacturing process streams, practically no literature exists that provides real data from companies that have implemented energy efficiency strategies. By obtaining original data directly from companies, this thesis demonstrates the potential for companies to save money and reduce GHG (greenhouse gas) emissions through the implementation of energy efficiency projects, and publishes numbers that are otherwise almost impossible to find. By publishing success stories, it is hoped that other companies, especially SMEs (small and medium enterprises), will be able to learn from these case studies and be inspired to embark on energy efficiency projects of their own.

  17. Self-Learning Adaptive Umbrella Sampling Method for the Determination of Free Energy Landscapes in Multiple Dimensions

    PubMed Central

    Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon

    2013-01-01

    The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require a lot of computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automated umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy. PMID:23814508
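
    The feedback idea can be caricatured in a few lines: starting from one window, new umbrella windows are opened only where the current free-energy estimate stays below a cutoff, so the window set grows into low-energy regions and ignores the rest. In this sketch (Python) an analytic toy surface stands in for what WHAM would estimate from actual sampling:

        import math

        def pmf(x, y):  # toy 2D free-energy surface with two basins
            return -math.log(math.exp(-((x - 1) ** 2 + y ** 2))
                             + math.exp(-((x + 1) ** 2 + y ** 2)))

        CUTOFF, STEP = 4.0, 0.5     # open windows only below the cutoff
        start = (1.0, 0.0)
        windows, frontier = {start}, [start]
        while frontier:
            x, y = frontier.pop()
            for dx, dy in ((STEP, 0), (-STEP, 0), (0, STEP), (0, -STEP)):
                w = (round(x + dx, 3), round(y + dy, 3))
                if w not in windows and pmf(*w) < CUTOFF:
                    windows.add(w)      # worth sampling: add and expand
                    frontier.append(w)
        print(len(windows), "windows opened instead of a full grid")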

  18. Instrument for Real-Time Digital Nucleic Acid Amplification on Custom Microfluidic Devices

    PubMed Central

    Selck, David A.

    2016-01-01

    Nucleic acid amplification tests that are coupled with a digital readout enable the absolute quantification of single molecules, even at ultralow concentrations. Digital methods are robust, versatile and compatible with many amplification chemistries including isothermal amplification, making them particularly invaluable to assays that require sensitive detection, such as the quantification of viral load in occult infections or detection of sparse amounts of DNA from forensic samples. A number of microfluidic platforms are being developed for carrying out digital amplification. However, the mechanistic investigation and optimization of digital assays has been limited by the lack of real-time kinetic information about which factors affect the digital efficiency and analytical sensitivity of a reaction. Commercially available instruments that are capable of tracking digital reactions in real-time are restricted to only a small number of device types and sample-preparation strategies. Thus, most researchers who wish to develop, study, or optimize digital assays rely on the rate of the amplification reaction when performed in a bulk experiment, which is now recognized as an unreliable predictor of digital efficiency. To expand our ability to study how digital reactions proceed in real-time and enable us to optimize both the digital efficiency and analytical sensitivity of digital assays, we built a custom large-format digital real-time amplification instrument that can accommodate a wide variety of devices, amplification chemistries and sample-handling conditions. Herein, we validate this instrument, we provide detailed schematics that will enable others to build their own custom instruments, and we include a complete custom software suite to collect and analyze the data retrieved from the instrument. We believe assay optimizations enabled by this instrument will improve the current limits of nucleic acid detection and quantification, improving our fundamental understanding of single-molecule reactions and providing advancements in practical applications such as medical diagnostics, forensics and environmental sampling. PMID:27760148
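
    The "digital readout" arithmetic referred to here is the standard Poisson correction: from the fraction p of positive partitions, the mean occupancy is lambda = -ln(1 - p), which converts to an absolute concentration. A sketch (Python) with hypothetical run parameters:

        import math

        def copies_per_partition(p_positive):
            # Poisson estimate of mean molecules per partition.
            return -math.log(1.0 - p_positive)

        # Hypothetical run: 20,000 partitions of 1 nL, 3,000 positive.
        n_part, v_part_nl, n_pos = 20000, 1.0, 3000
        lam = copies_per_partition(n_pos / n_part)
        conc_per_ul = lam / (v_part_nl * 1e-3)   # copies per microliter
        print(f"{lam:.4f} copies/partition = {conc_per_ul:.1f} copies/uL")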

  19. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum by contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.
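
    Fixed Point Iteration, as used here to couple the two sub-optimizations, simply re-evaluates a map until its input and output agree. A generic sketch (Python), not the paper's structural solver:

        import math

        def fixed_point(g, x0, tol=1e-10, max_iter=200):
            x = x0
            for _ in range(max_iter):
                x_new = g(x)
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            raise RuntimeError("FPI did not converge")

        # Example: x = cos(x) has a unique fixed point near 0.739.
        print(fixed_point(math.cos, 1.0))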

  20. The Next Breakthrough for Organic Photovoltaics?

    PubMed

    Jackson, Nicholas E; Savoie, Brett M; Marks, Tobin J; Chen, Lin X; Ratner, Mark A

    2015-01-02

    While the intense focus on energy level tuning in organic photovoltaic materials has afforded large gains in device performance, we argue here that strategies based on microstructural/morphological control are at least as promising in any rational design strategy. In this work, a meta-analysis of ∼150 bulk heterojunction devices fabricated with different materials combinations is performed and reveals strong correlations between power conversion efficiency and morphology-dominated properties (short-circuit current, fill factor) and surprisingly weak correlations between efficiency and energy level positioning (open-circuit voltage, enthalpic offset at the interface, optical gap). While energy level positioning should in principle provide the theoretical maximum efficiency, the optimization landscape that must be navigated to reach this maximum is unforgiving. Thus, research aimed at developing understanding-based strategies for more efficient optimization of active layer microstructure and morphology is likely to be at least as fruitful.
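
    For reference, the quantities named here combine into the standard efficiency relation PCE = Jsc x Voc x FF / Pin. A one-function sketch (Python) with a hypothetical device under AM1.5G illumination (100 mW/cm2):

        def pce(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
            # PCE in %, with Jsc in mA/cm^2 and input power in mW/cm^2.
            return 100.0 * (jsc_ma_cm2 * voc_v * ff) / p_in_mw_cm2

        print(pce(15.0, 0.80, 0.70))  # hypothetical BHJ device: 8.4 %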

  1. A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle

    PubMed Central

    Lin, Cheng

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to apply to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention. PMID:25197697
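
    The 20% bound is a bound on the longitudinal slip ratio. The sketch below (Python) computes slip during traction and backs torque off above the bound; the proportional cut is a crude stand-in for the paper's sliding mode law, and all gains are invented:

        def slip_ratio(wheel_speed, vehicle_speed):
            # Longitudinal slip during traction: (w*r - v) / (w*r),
            # with wheel_speed taken as the linear speed at the contact.
            return (wheel_speed - vehicle_speed) / max(wheel_speed, 1e-6)

        def torque_command(t_request, wheel_speed, vehicle_speed, s_max=0.2):
            s = slip_ratio(wheel_speed, vehicle_speed)
            if s <= s_max:
                return t_request              # stable zone: honor the driver
            return t_request * max(0.0, 1.0 - 5.0 * (s - s_max))  # back off

        print(torque_command(200.0, 12.0, 10.0))  # slip ~0.17 -> full torque
        print(torque_command(200.0, 15.0, 10.0))  # slip ~0.33 -> reduced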

  2. A traction control strategy with an efficiency model in a distributed driving electric vehicle.

    PubMed

    Lin, Cheng; Cheng, Xingqun

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to apply to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention.

  3. Strongyle infections and parasitic control strategies in German horses - a risk assessment.

    PubMed

    Schneider, Stephanie; Pfister, Kurt; Becher, Anne M; Scheuerle, Miriam C

    2014-11-12

    As a consequence of the increasing levels of anthelmintic resistance in cyathostomes, new strategies for equine parasite control are being implemented. To assess the potential risks of these, the occurrence of strongyles was evaluated in a group of 1887 horses. The distribution of fecal egg counts (FECs), the frequency of anthelmintic drug use, and the deworming intervals were also analyzed. Between June 2012 and May 2013, 1887 fecal samples from either selectively or strategically dewormed horses were collected at 195 horse farms all over Germany and analyzed quantitatively with a modified McMaster technique. All samples with FEC ≥20 eggs per gram (EPG) were subjected to coproculture to generate third-stage larvae (LIII) for species differentiation. Egg counts were below the limit of detection (20 EPG) in 1046 (55.4%) samples and above it in 841 (44.6%) samples. Strongylus vulgaris larvae were identified in two of the 841 positive samples. Infections with cyathostomes were found on every farm. The most frequently applied anthelmintic was ivermectin (788/50.8%), followed by pyrantel (336/21.6%). The mean time since last treatment was 6.3 months. High-egg-shedding (>500 EPG) strategically dewormed horses (183/1357) were treated, on average, three times/year. The planned treatment date was already exceeded by 72.5% of the high egg-shedders and by 58.1% of the moderate (200-500 EPG) and low egg-shedders (20-199 EPG). S. vulgaris seems to be rare in Germany and no difference in its frequency has yet been found between selectively treated horses and horses receiving treatment in strategic intervals. However, inconsistent parasite control has been observed. Therefore, to minimize the risks for disease, consistent and efficient parasite control should be implemented.

  4. Partial least squares model and design of experiments toward the analysis of the metabolome of Jatropha gossypifolia leaves: Extraction and chromatographic fingerprint optimization.

    PubMed

    Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian

    2016-03-01

    A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to clearly complete this task in an efficient and reproducible way. In this work we proposed a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated, and an optimized separation condition on liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions of extraction using methanol and separation in 30 min between 5 and 100% water/methanol (1:1 v/v) with 0.1% of acetic acid, 20 μL sample volume, 3.0 mL min(-1) flow rate and 25°C column temperature led to 107 chromatographic peaks. After the optimization strategy using i-propanol/chloroform (1:1 v/v) for extraction, a linear gradient elution of 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% of acetic acid), 30 μL sample volume, 2.0 mL min(-1) flow rate, and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more than with the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
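
    The design-of-experiments plus PLS workflow can be sketched generically: enumerate a coded factorial design, fit a PLS model of the response (here, peak count), and read off influential factors. Everything below (factors, levels, responses) is invented for illustration; scikit-learn's PLSRegression stands in for the chemometrics software:

        import itertools
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        # Full-factorial design: 3 hypothetical factors at coded levels -1/0/+1.
        X = np.array(list(itertools.product((-1, 0, 1), repeat=3)), dtype=float)
        # Hypothetical response: number of resolved chromatographic peaks.
        y = (107 + 8 * X[:, 0] + 12 * X[:, 1] + 4 * X[:, 2]
             + rng.normal(0, 2, len(X)))

        pls = PLSRegression(n_components=2).fit(X, y)
        print("factor weights:", pls.coef_.ravel())
        print("suggested settings (coded):",
              X[np.argmax(pls.predict(X).ravel())])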

  5. Energy Efficiency and Importance of Renewable Energy Sources in Latvia

    NASA Astrophysics Data System (ADS)

    Skapare, I.; Kreslins, A.

    2007-10-01

    The main goal of Latvian energy policy is to ensure safe and environmentally friendly long-term energy supply at cost-effective prices, contributing to enhance competitiveness, and to ensure safe energy transit. The Latvian Parliament approved an Energy Efficiency Strategy in 2000. Its objective is to decrease energy consumption per unit of GDP by 25% by 2010. Awareness raising, implementation of standards and economic incentives for self financing are the main instruments to increase energy efficiency, mentioned in the strategy. Latvia, as many other European Union member states, is dependent on the import of primary energy resources. The Latvian Renewable Energy strategy is still under development. The only recent study on RES was developed in the framework of a PHARE program in year 2000: "Renewable energy resource program", where three main objectives for a future RES strategy were proposed: 1. To increase the use of wood waste and low value wood and forest residues. 2. To improve efficiency of combustion technologies and to replace outdated plants. 3. To increase the use of renewables in Combined Heat and Power plants (CHP). Through the Renewable Energy and Energy Efficiency Partnership, partners will develop a set of new shared activities, and coordinate and strengthen existing efforts in this area.

  6. Longitudinal relations between constructive and destructive conflict and couples' sleep.

    PubMed

    El-Sheikh, Mona; Kelly, Ryan J; Koss, Kalsea J; Rauer, Amy J

    2015-06-01

    We examined longitudinal relations between interpartner constructive (negotiation) and destructive (psychological and physical aggression) conflict strategies and couples' sleep over 1 year. Toward explicating processes of effects, we assessed the intervening role of internalizing symptoms in associations between conflict tactics and couples' sleep. Participants were 135 cohabiting couples (M age = 37 years for women and 39 years for men). The sample included a large representation of couples exposed to economic adversity. Further, 68% were European American and the remainder were primarily African American. At Time 1 (T1), couples reported on their conflict and their mental health (depression, anxiety). At T1 and Time 2, sleep was examined objectively with actigraphs for 7 nights. Three sleep parameters were derived: efficiency, minutes, and latency. Actor-partner interdependence models indicated that husbands' use of constructive conflict forecasted increases in their own sleep efficiency as well as their own and their wives' sleep duration over time. Actor and partner effects emerged, and husbands' and wives' use of destructive conflict strategies generally predicted worsening of some sleep parameters over time. Several mediation and intervening effects were observed for destructive conflict strategies. Some of these relations reveal that destructive conflict is associated with internalizing symptoms, which in turn are associated with some sleep parameters longitudinally. These findings build on a small, albeit growing, literature linking sleep with marital functioning, and illustrate that consideration of relationship processes including constructive conflict holds promise for gaining a better understanding of factors that influence the sleep of men and women. (c) 2015 APA, all rights reserved.

  7. Strategic search from long-term memory: an examination of semantic and autobiographical recall.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2014-01-01

    Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.

  8. Hierarchical Control Strategy for the Cooperative Braking System of Electric Vehicle.

    PubMed

    Peng, Jiankun; He, Hongwen; Liu, Wei; Guo, Hongqiang

    2015-01-01

    This paper provides a hierarchical control strategy for the cooperative braking system of an electric vehicle with separated driven axles. Two layers are defined: the top layer is used to optimize braking stability based on two sliding mode control strategies, namely, the interaxle control strategy and the single-axle control strategy; the interaxle control strategy generates the ideal braking force distribution in general braking conditions, and the single-axle control strategy ensures braking safety in emergency braking conditions. The bottom layer is used to maximize the regenerative braking energy recovery efficiency with a reallocated braking torque strategy; the reallocated braking torque strategy recovers as much braking energy as possible while respecting the battery charging power limit. The simulation results show that the proposed hierarchical control strategy is reasonable and can adapt to different typical road surfaces and load cases; vehicle braking stability and safety can be guaranteed; furthermore, the regenerative braking energy recovery efficiency can be improved.
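
    The bottom-layer allocation can be sketched as a torque split limited by the battery's charging power. A minimal stand-in (Python) with invented numbers; the actual strategy also handles the front/rear distribution and motor maps:

        def split_braking_torque(t_demand_nm, wheel_speed_rad_s,
                                 p_charge_max_w, eta=0.9):
            # Largest regen torque whose electrical power (eta * T * w)
            # stays within the battery's charging limit.
            t_regen_max = p_charge_max_w / (eta * max(wheel_speed_rad_s, 1e-3))
            t_regen = min(t_demand_nm, t_regen_max)
            return t_regen, t_demand_nm - t_regen  # (motor, friction)

        print(split_braking_torque(800.0, 60.0, 30_000.0))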

  9. Hierarchical Control Strategy for the Cooperative Braking System of Electric Vehicle

    PubMed Central

    Peng, Jiankun; He, Hongwen; Guo, Hongqiang

    2015-01-01

    This paper provides a hierarchical control strategy for the cooperative braking system of an electric vehicle with separated driven axles. Two layers are defined: the top layer is used to optimize braking stability based on two sliding mode control strategies, namely, the interaxle control strategy and the single-axle control strategy; the interaxle control strategy generates the ideal braking force distribution in general braking conditions, and the single-axle control strategy ensures braking safety in emergency braking conditions. The bottom layer is used to maximize the regenerative braking energy recovery efficiency with a reallocated braking torque strategy; the reallocated braking torque strategy recovers as much braking energy as possible while respecting the battery charging power limit. The simulation results show that the proposed hierarchical control strategy is reasonable and can adapt to different typical road surfaces and load cases; vehicle braking stability and safety can be guaranteed; furthermore, the regenerative braking energy recovery efficiency can be improved. PMID:26236772

  10. Parity modifies endocrine hormones in urine and problem-solving strategies of captive owl monkeys (Aotus spp.).

    PubMed

    Bardi, Massimo; Eckles, Meredith; Kirk, Emily; Landis, Timothy; Evans, Sian; Lambert, Kelly G

    2014-12-01

    Parental behavior modifies neural, physiologic, and behavioral characteristics of both maternal and paternal mammals. These parenting-induced modifications extend to brain regions not typically associated with parental responses themselves but that enhance ancillary responses, such as foraging efficiency and predator avoidance. Here we hypothesized that male and female owl monkeys (Aotus spp.) with reproductive experience (RE) would demonstrate more adaptive ancillary behavioral and neuroendocrine responses than those of their nonRE counterparts. To assess cognitive skills and coping flexibility, we introduced a foraging strategy task, including a set of novel objects (coin holders) marked with different symbols representing different food rewards, to the animals. To assess endocrine responses, urine samples were assayed for cortisol and dehydroepiandrosterone (DHEA) levels and their ratios to determine physiologic measures of emotional regulation in RE and nonRE owl monkeys. Compared with nonRE monkeys, experienced parents had higher DHEA:cortisol ratios after exposure to habituation training and on the first day of testing in the foraging task. Both hormones play critical roles in the stress response and coping mechanisms, and a high DHEA:cortisol ratio usually indicates increased coping skills. In addition, RE monkeys exhibited more efficient foraging responses (by 4-fold) than did the nonRE mating pairs. We conclude that RE modifies relevant behavioral and hormonal responses of both maternal and paternal owl monkeys exposed to a challenging cognitive paradigm. Corroborating previous research demonstrating adaptive modifications in foraging efficiency and emotional responses in reproductively experienced rodents, the current results extend these findings to a monogamous primate species.

  11. Parity Modifies Endocrine Hormones in Urine and Problem-Solving Strategies of Captive Owl Monkeys (Aotus spp.)

    PubMed Central

    Eckles, Meredith; Kirk, Emily; Landis, Timothy; Evans, Sian; Lambert, Kelly G

    2014-01-01

    Parental behavior modifies neural, physiologic, and behavioral characteristics of both maternal and paternal mammals. These parenting-induced modifications extend to brain regions not typically associated with parental responses themselves but that enhance ancillary responses, such as foraging efficiency and predator avoidance. Here we hypothesized that male and female owl monkeys (Aotus spp.) with reproductive experience (RE) would demonstrate more adaptive ancillary behavioral and neuroendocrine responses than those of their nonRE counterparts. To assess cognitive skills and coping flexibility, we introduced a foraging strategy task, including a set of novel objects (coin holders) marked with different symbols representing different food rewards, to the animals. To assess endocrine responses, urine samples were assayed for cortisol and dehydroepiandrosterone (DHEA) levels and their ratios to determine physiologic measures of emotional regulation in RE and nonRE owl monkeys. Compared with nonRE monkeys, experienced parents had higher DHEA:cortisol ratios after exposure to habituation training and on the first day of testing in the foraging task. Both hormones play critical roles in the stress response and coping mechanisms, and a high DHEA:cortisol ratio usually indicates increased coping skills. In addition, RE monkeys exhibited more efficient foraging responses (by 4-fold) than did the nonRE mating pairs. We conclude that RE modifies relevant behavioral and hormonal responses of both maternal and paternal owl monkeys exposed to a challenging cognitive paradigm. Corroborating previous research demonstrating adaptive modifications in foraging efficiency and emotional responses in reproductively experienced rodents, the current results extend these findings to a monogamous primate species. PMID:25527030

  12. Spatial and Temporal Distribution of Multiple Cropping Indices in the North China Plain Using a Long Remote Sensing Data Time Series.

    PubMed

    Zhao, Yan; Bai, Linyan; Feng, Jianzhong; Lin, Xiaosong; Wang, Li; Xu, Lijun; Ran, Qiyun; Wang, Kui

    2016-04-19

    Multiple cropping provides China with a very important system of intensive cultivation, and can effectively enhance the efficiency of farmland use while improving regional food production and security. A multiple cropping index (MCI), which represents the intensity of multiple cropping and reflects the effects of climate change on agricultural production and cropping systems, often serves as a useful parameter. Therefore, monitoring the dynamic changes in the MCI of farmland over a large area using remote sensing data is essential. For this purpose, nearly 30 years of MCIs related to dry land in the North China Plain (NCP) were efficiently extracted from remotely sensed leaf area index (LAI) data from the Global LAnd Surface Satellite (GLASS). Next, the characteristics of the spatial-temporal change in MCI were analyzed. First, 2162 typical arable sample sites were selected based on a gridded spatial sampling strategy, and then the LAI information was extracted from the samples. Second, the Savitzky-Golay filter was used to smooth the LAI time-series data of the samples, and then the MCIs of the samples were obtained using a second-order difference algorithm. Finally, the geo-statistical Kriging method was employed to map the spatial distribution of the MCIs and to obtain a time-series dataset of the MCIs of dry land over the NCP. The results showed that all of the MCIs in the NCP showed an increasing trend over the entire study period and increased most rapidly from 1982 to 2002. Spatially, MCIs decreased from south to north; also, high MCIs were mainly concentrated in the relatively flat areas. In addition, the partial spatial changes of MCIs had clear geographical characteristics, with the largest change in Henan Province.
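
    The per-pixel extraction step generalizes to a few lines: smooth one year of LAI samples with a Savitzky-Golay filter, then count growing-season peaks. The sketch below (Python/SciPy) uses a synthetic double-cropping profile, and a local-maximum search stands in for the paper's second-order difference algorithm:

        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        t = np.arange(46)  # one year of 8-day LAI composites
        lai = (2.5 * np.exp(-((t - 12) / 4.0) ** 2)     # winter crop
               + 3.0 * np.exp(-((t - 30) / 4.0) ** 2))  # summer crop
        lai += np.random.default_rng(0).normal(0.0, 0.05, t.size)

        smooth = savgol_filter(lai, window_length=9, polyorder=3)
        peaks, _ = find_peaks(smooth, height=1.0, distance=5)
        print("cropping cycles detected (MCI):", len(peaks))  # 2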

  13. Spatial and Temporal Distribution of Multiple Cropping Indices in the North China Plain Using a Long Remote Sensing Data Time Series

    PubMed Central

    Zhao, Yan; Bai, Linyan; Feng, Jianzhong; Lin, Xiaosong; Wang, Li; Xu, Lijun; Ran, Qiyun; Wang, Kui

    2016-01-01

    Multiple cropping provides China with a very important system of intensive cultivation, and can effectively enhance the efficiency of farmland use while improving regional food production and security. A multiple cropping index (MCI), which represents the intensity of multiple cropping and reflects the effects of climate change on agricultural production and cropping systems, often serves as a useful parameter. Therefore, monitoring the dynamic changes in the MCI of farmland over a large area using remote sensing data is essential. For this purpose, nearly 30 years of MCIs related to dry land in the North China Plain (NCP) were efficiently extracted from remotely sensed leaf area index (LAI) data from the Global LAnd Surface Satellite (GLASS). Next, the characteristics of the spatial-temporal change in MCI were analyzed. First, 2162 typical arable sample sites were selected based on a gridded spatial sampling strategy, and then the LAI information was extracted from the samples. Second, the Savitzky-Golay filter was used to smooth the LAI time-series data of the samples, and then the MCIs of the samples were obtained using a second-order difference algorithm. Finally, the geo-statistical Kriging method was employed to map the spatial distribution of the MCIs and to obtain a time-series dataset of the MCIs of dry land over the NCP. The results showed that all of the MCIs in the NCP showed an increasing trend over the entire study period and increased most rapidly from 1982 to 2002. Spatially, MCIs decreased from south to north; also, high MCIs were mainly concentrated in the relatively flat areas. In addition, the partial spatial changes of MCIs had clear geographical characteristics, with the largest change in Henan Province. PMID:27104536

  14. Extreme ultraviolet reflection efficiencies of diamond-turned aluminum, polished nickel, and evaporated gold surfaces. [for telescope mirrors

    NASA Technical Reports Server (NTRS)

    Malina, R. F.; Cash, W.

    1978-01-01

    Measured reflection efficiencies are presented for flat samples of diamond-turned aluminum, polished nickel, and evaporated gold surfaces fabricated by techniques suited for EUV telescopes. The aluminum samples were 6.2-cm-diameter disks of 6061-T6; the electroless nickel samples were formed by plating beryllium disks with 7.5 microns of Kanigen. Gold samples were produced by coating the aluminum and nickel samples with 5 strips of evaporated gold. Reflection efficiencies are given for grazing angles in the 5-75 degree range. The results indicate that for wavelengths over about 100 A, the gold-coated nickel samples yield the highest efficiencies. For shorter wavelengths, the nickel samples yield better efficiencies. 500 A is found to be the optimal gold thickness.

  15. Developing the design of a continuous national health survey for New Zealand

    PubMed Central

    2013-01-01

    Background A continuously operating survey can yield advantages in survey management, field operations, and the provision of timely information for policymakers and researchers. We describe the key features of the sample design of the New Zealand (NZ) Health Survey, which has been conducted on a continuous basis since mid-2011, and compare to a number of other national population health surveys. Methods A number of strategies to improve the NZ Health Survey are described: implementation of a targeted dual-frame sample design for better Māori, Pacific, and Asian statistics; movement from periodic to continuous operation; use of core questions with rotating topic modules to improve flexibility in survey content; and opportunities for ongoing improvements and efficiencies, including linkage to administrative datasets. Results and discussion The use of disproportionate area sampling and a dual frame design resulted in reductions of approximately 19%, 26%, and 4% to variances of Māori, Pacific and Asian statistics respectively, but at the cost of a 17% increase to all-ethnicity variances. These were broadly in line with the survey’s priorities. Respondents provided a high degree of cooperation in the first year, with an adult response rate of 79% and consent rates for data linkage above 90%. Conclusions A combination of strategies tailored to local conditions gives the best results for national health surveys. In the NZ context, data from the NZ Census of Population and Dwellings and the Electoral Roll can be used to improve the sample design. A continuously operating survey provides both administrative and statistical advantages. PMID:24364838

  16. Counterfeit analysis strategy illustrated by a case study.

    PubMed

    Dégardin, Klara; Roggo, Yves

    2016-01-01

    Medicine counterfeiting is a current problem that the whole pharmaceutical field has to deal with. In 2014, counterfeits entered the legitimate supply chain in Europe. Quick and efficient action had to be taken. The aim of this paper is to explain which analytical strategy was chosen to deal with six of the cases concerned and which criteria have to be considered to provide quick and thorough information about the counterfeits. The evaluation of the packaging was performed in a first step, based on a comparison with genuine samples and an evaluation of manipulation signs. Chemical methods were then used, consisting of near-infrared and infrared spectroscopy, capillary zone electrophoresis and ultraviolet-visible spectrophotometry, in order to authenticate the samples and provide the chemical composition of the confirmed counterfeits. Among the 20 samples analyzed, 17 were confirmed as counterfeits. The counterfeits were the result of the manipulation of genuine samples, and one contained totally counterfeited parts. Several manipulation signs were identified, such as the addition of glue on the boxes and the vials. Genuine stolen goods had been diluted with water, while in one isolated case a different active ingredient had been introduced into a vial. The analytical data generated were further investigated from a forensic intelligence perspective. Links could be revealed between the analyzed counterfeits, together with some interesting information about the modus operandi of the counterfeiters. The study was performed on a limited number of cases, and therefore encourages chemical and packaging profiling of counterfeits at a bigger scale. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Methods for estimating the amount of vernal pool habitat in the northeastern United States

    USGS Publications Warehouse

    Van Meter, R.; Bailey, L.L.; Grant, E.H.C.

    2008-01-01

    The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual-frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
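
    The dual-frame idea reduces to combining an incomplete list frame with an area-frame expansion estimate. A toy version (Python) with invented counts; the actual estimators also handle frame overlap and unequal inclusion probabilities:

        n_list = 120            # pools identified on aerial photographs
        total_plots = 400       # area frame: study area divided into plots
        sampled_plots = 40      # plots actually visited in the field
        missed_in_sample = 9    # field-detected pools absent from the list

        expansion = total_plots / sampled_plots       # inverse sampling rate
        n_missed_hat = missed_in_sample * expansion   # estimated missed pools
        print("estimated total pools:", n_list + n_missed_hat)  # 120 + 90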

  18. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.

    PubMed

    Bornstein, Marc H; Jager, Justin; Putnick, Diane L

    2013-12-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study's target population, whether they yield representative and generalizable estimates of subsamples within a study's target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce "noise" related to variation in subsamples and whether that "noise" can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting.

  19. Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards

    PubMed Central

    Bornstein, Marc H.; Jager, Justin; Putnick, Diane L.

    2014-01-01

    Sampling is a key feature of every study in developmental science. Although sampling has far-reaching implications, too little attention is paid to sampling. Here, we describe, discuss, and evaluate four prominent sampling strategies in developmental science: population-based probability sampling, convenience sampling, quota sampling, and homogeneous sampling. We then judge these sampling strategies by five criteria: whether they yield representative and generalizable estimates of a study’s target population, whether they yield representative and generalizable estimates of subsamples within a study’s target population, the recruitment efforts and costs they entail, whether they yield sufficient power to detect subsample differences, and whether they introduce “noise” related to variation in subsamples and whether that “noise” can be accounted for statistically. We use sample composition of gender, ethnicity, and socioeconomic status to illustrate and assess the four sampling strategies. Finally, we tally the use of the four sampling strategies in five prominent developmental science journals and make recommendations about best practices for sample selection and reporting. PMID:25580049

  20. Analysis of complex network performance and heuristic node removal strategies

    NASA Astrophysics Data System (ADS)

    Jahanpour, Ehsan; Chen, Xin

    2013-12-01

    Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and to the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than the other strategies.
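
    One of the heuristic families is easy to reproduce with standard tools: rank nodes by a centrality metric, remove greedily, and track how the largest connected component shrinks. The sketch below (Python/NetworkX) uses a synthetic scale-free graph rather than the hijackers' network:

        import networkx as nx

        def attack(G, centrality, k=5):
            # Greedy removal of the currently most central node, k times.
            G = G.copy()
            sizes = []
            for _ in range(k):
                scores = centrality(G)
                G.remove_node(max(scores, key=scores.get))
                sizes.append(len(max(nx.connected_components(G), key=len)))
            return sizes

        G = nx.barabasi_albert_graph(100, 2, seed=1)
        for name, c in [("degree", nx.degree_centrality),
                        ("betweenness", nx.betweenness_centrality)]:
            print(name, attack(G, c))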
