Science.gov

Sample records for define optimal sampling

  1. Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions

    NASA Astrophysics Data System (ADS)

    Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus

    2015-05-01

    Optimized sampling strategies have been recently proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfalls and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha), growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define optimal sample size and sampling design for the development of frequency maps of rockfall activity. Spatially, our results demonstrate that the sampling of only 6 representative trees per ha can be sufficient to yield a reasonable mapping of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks causing significant errors especially if nonrepresentative trees are chosen for analysis. An increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies and thus will permit production of reliable reconstructions with reasonable temporal efforts.

  2. Annotating user-defined abstractions for optimization

    SciTech Connect

    Quinlan, D; Schordan, M; Vuduc, R; Yi, Q

    2005-12-05

    This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.

  3. Metamodel defined multidimensional embedded sequential sampling criteria.

    SciTech Connect

    Turner, C. J.; Campbell, M. I.; Crawford, R. H.

    2004-01-01

    Collecting data to characterize an unknown space presents a series of challenges. Where in the space should data be collected? What regions are more valuable than others to sample? When have sufficient samples been acquired to characterize the space with some level of confidence? Sequential sampling techniques offer an approach to answering these questions by intelligently sampling an unknown space. Sampling decisions are made with criteria intended to preferentially search the space for desirable features. However, N-dimensional applications need efficient and effective criteria. This paper discusses the evolution of several such criteria, based on an understanding of the behaviors of existing criteria and on desired criteria properties. The resulting criteria are evaluated with a variety of planar functions, and preliminary results for higher dimensional applications are also presented. In addition, a set of convergence criteria, intended to evaluate the effectiveness of further sampling, is implemented. Using these sampling criteria, an effective metamodel representation of the unknown space can be generated at reasonable sampling costs. Furthermore, the use of convergence criteria allows conclusions to be drawn about the level of confidence in the metamodel, and forms the basis for evaluating the adequacy of the original sampling budget.

  4. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  5. Defining a region of optimization based on engine usage data

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.

  6. Defining Low-Dimensional Projections to Guide Protein Conformational Sampling.

    PubMed

    Novinskaya, Anastasia; Devaurs, Didier; Moll, Mark; Kavraki, Lydia E

    2017-01-01

    Exploring the conformational space of proteins is critical to characterize their functions. Numerous methods have been proposed to sample a protein's conformational space, including techniques developed in the field of robotics and known as sampling-based motion-planning algorithms (or sampling-based planners). However, these algorithms suffer from the curse of dimensionality when applied to large proteins. Many sampling-based planners attempt to mitigate this issue by keeping track of sampling density to guide conformational sampling toward unexplored regions of the conformational space. This is often done using low-dimensional projections as an indirect way to reduce the dimensionality of the exploration problem. However, how to choose an appropriate projection and how much it influences the planner's performance are still poorly understood issues. In this article, we introduce two methodologies defining low-dimensional projections that can be used by sampling-based planners for protein conformational sampling. The first method leverages information about a protein's flexibility to construct projections that can efficiently guide conformational sampling, when expert knowledge is available. The second method builds similar projections automatically, without expert intervention. We evaluate the projections produced by both methodologies on two conformational search problems involving three middle-size proteins. Our experiments demonstrate that (i) defining projections based on expert knowledge can benefit conformational sampling and (ii) automatically constructing such projections is a reasonable alternative.

  7. Defining Predictive Probability Functions for Species Sampling Models.

    PubMed

    Lee, Jaeyong; Quintana, Fernando A; Müller, Peter; Trippa, Lorenzo

    2013-01-01

    We review the class of species sampling models (SSM). In particular, we investigate the relation between the exchangeable partition probability function (EPPF) and the predictive probability function (PPF). It is straightforward to define a PPF from an EPPF, but the converse is not necessarily true. In this paper we introduce the notion of putative PPFs and show novel conditions for a putative PPF to define an EPPF. We show that all possible PPFs in a certain class have to define (unnormalized) probabilities for cluster membership that are linear in cluster size. We give a new necessary and sufficient condition for arbitrary putative PPFs to define an EPPF. Finally, we show posterior inference for a large class of SSMs with a PPF that is not linear in cluster size and discuss a numerical method to derive its PPF.
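
    For readers unfamiliar with the "linear in cluster size" condition mentioned above, the standard example is the predictive probability function of the Dirichlet process (the Chinese restaurant process); the sketch below states it in conventional notation, which is an assumption here rather than the paper's own presentation.

```latex
% Predictive probability function (PPF) of the Dirichlet process with
% concentration parameter \alpha, given clusters of sizes n_1, ..., n_k
% and n = n_1 + ... + n_k observations so far: the (unnormalized)
% probability of joining an existing cluster is linear in its size.
p(\text{join cluster } j \mid n_1,\dots,n_k) = \frac{n_j}{\alpha + n},
\qquad
p(\text{new cluster} \mid n_1,\dots,n_k) = \frac{\alpha}{\alpha + n}.
```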

  8. Initial data sampling in design optimization

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Terry H.

    2011-06-01

    Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
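
    As an illustration of the Latin hypercube construction described above, the following minimal Python sketch draws a random Latin hypercube sample in the unit hypercube. It implements only the standard random variant, not the decorrelated or space-filling-optimized designs the record alludes to.

```python
import numpy as np

def latin_hypercube(n_samples: int, n_dims: int, seed=None) -> np.ndarray:
    """Random Latin hypercube sample in [0, 1)^n_dims.

    Each dimension is split into n_samples equal bins and every bin receives
    exactly one point, generalizing the Latin square to any dimension.
    """
    rng = np.random.default_rng(seed)
    # One random point inside each of the n_samples bins, per dimension.
    u = rng.random((n_samples, n_dims))
    pts = (np.arange(n_samples)[:, None] + u) / n_samples
    # Randomly permute the bin order in each dimension so the pairing of
    # bins across dimensions is random.
    for d in range(n_dims):
        pts[:, d] = pts[rng.permutation(n_samples), d]
    return pts

if __name__ == "__main__":
    pts = latin_hypercube(8, 2, seed=42)
    # Each column of bin indices is a permutation of 0..7 (Latin property).
    print(np.sort((pts * 8).astype(int), axis=0))
```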

  9. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  10. Software defined noise radar with low sampling rate

    NASA Astrophysics Data System (ADS)

    Lukin, K.; Vyplavin, P.; Savkovich, Elena; Lukin, S.

    2011-10-01

    Preliminary results of our investigations of Software Defined Noise Radar are presented; namely, results on the design and implementation of an FPGA-based noise radar with digital generation of the random signal and coherent reception of radar returns. Parallelization of computations in the FPGA enabled a time-domain algorithm for evaluating the cross-correlations that is comparable in efficiency with the frequency-domain algorithm. Moreover, implementation of a relay-type correlator enabled a cross-correlation algorithm that can operate much faster. We compare the performance and limitations of the different designs considered. The digital correlator has been implemented on an Altera Stratix evaluation board with 1 million gates and a clock frequency of up to 300 MHz. We also realized a software defined CW noise radar on the basis of the RVI Development Board from ICTP M-LAB.

  11. SEARCH, blackbox optimization, and sample complexity

    SciTech Connect

    Kargupta, H.; Goldberg, D.E.

    1996-05-01

    The SEARCH (Search Envisioned As Relation and Class Hierarchizing) framework developed elsewhere (Kargupta, 1995; Kargupta and Goldberg, 1995) offered an alternate perspective toward blackbox optimization -- optimization in presence of little domain knowledge. The SEARCH framework investigates the conditions essential for transcending the limits of random enumerative search using a framework developed in terms of relations, classes and partial ordering. This paper presents a summary of some of the main results of that work. A closed form bound on the sample complexity in terms of the cardinality of the relation space, class space, desired quality of the solution and the reliability is presented. This also leads to the identification of the class of order-k delineable problems that can be solved in polynomial sample complexity. These results are applicable to any blackbox search algorithms, including evolutionary optimization techniques.

  12. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  13. (Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement

    NASA Astrophysics Data System (ADS)

    Lowery, C.; Fraass, A. J.

    2015-12-01

    Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
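
    The record above refers to an open-source model in R; as a rough illustration of the underlying sample-size effect only, the hedged Python sketch below draws synthetic per-individual δ18O values and shows how the spread of the sample mean shrinks with the number of specimens analyzed. The population parameters are arbitrary assumptions, not those of the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified sketch of the sample-size effect described above: each analysed
# "sample" is the mean d18O of n individual foraminifera drawn from a
# population whose spread (seasonality, depth habitat, vital effects) is
# summarised here by a single standard deviation. All numbers are
# illustrative assumptions, not the parameters of the published R model.
POP_MEAN_D18O = 0.0      # per mil, arbitrary reference value
POP_SD_D18O = 0.35       # per mil, assumed individual-to-individual scatter
N_REPLICATES = 10_000

for n in (1, 3, 5, 10, 20, 50):
    means = rng.normal(POP_MEAN_D18O, POP_SD_D18O,
                       size=(N_REPLICATES, n)).mean(axis=1)
    # The 1-sigma spread of the sample mean shrinks as POP_SD / sqrt(n).
    print(f"n = {n:3d}  empirical 1-sigma error = {means.std():.3f} per mil "
          f"(theory {POP_SD_D18O / np.sqrt(n):.3f})")
```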

  14. Defining the Mars Ascent Problem for Sample Return

    SciTech Connect

    Whitehead, J

    2008-07-31

    Lifting geology samples off of Mars is both a daunting technical problem for propulsion experts and a cultural challenge for the entire community that plans and implements planetary science missions. The vast majority of science spacecraft require propulsive maneuvers that are similar to what is done routinely with communication satellites, so most needs have been met by adapting hardware and methods from the satellite industry. While it is even possible to reach Earth from the surface of the moon using such traditional technology, ascending from the surface of Mars is beyond proven capability for either solid or liquid propellant rocket technology. Miniature rocket stages for a Mars ascent vehicle would need to be over 80 percent propellant by mass. It is argued that the planetary community faces a steep learning curve toward nontraditional propulsion expertise, in order to successfully accomplish a Mars sample return mission. A cultural shift may be needed to accommodate more technical risk acceptance during the technology development phase.

  15. Ecological and sampling constraints on defining landscape fire severity

    USGS Publications Warehouse

    Key, C.H.

    2006-01-01

    Ecological definition and detection of fire severity are influenced by factors of spatial resolution and timing. Resolution determines the aggregation of effects within a sampling unit or pixel (alpha variation), hence limiting the discernible ecological responses, and controlling the spatial patchiness of responses distributed throughout a burn (beta variation). As resolution decreases, alpha variation increases, extracting beta variation and complexity from the spatial model of the whole burn. Seasonal timing impacts the quality of radiometric data in terms of transmittance, sun angle, and potential contrast between responses within burns. Detection sensitivity can degrade toward the end of many fire seasons when low sun angles, vegetation senescence, incomplete burning, hazy conditions, or snow are common. Thus, a need exists to supersede many rapid response applications when remote sensing conditions improve. Lag timing, or time since fire, notably shapes the ecological character of severity through first-order effects that only emerge with time after fire, including delayed survivorship and mortality. Survivorship diminishes the detected magnitude of severity, as burned vegetation remains viable and resprouts, though at first it may appear completely charred or consumed above ground. Conversely, delayed mortality increases the severity estimate when apparently healthy vegetation is in fact damaged by heat to the extent that it dies over time. Both responses depend on fire behavior and various species-specific adaptations to fire that are unique to the pre-fire composition of each burned area. Both responses can lead initially to either over- or underestimating severity. Based on such implications, three sampling intervals for short-term burn severity are identified; rapid, initial, and extended assessment, sampled within about two weeks, two months, and depending on the ecotype, from three months to one year after fire, respectively. Spatial and temporal

  16. Optimal timber harvest scheduling with spatially defined sediment objectives

    Treesearch

    Jon Hof; Michael Bevers

    2000-01-01

    This note presents a simple model formulation that focuses on the spatial relationships over time between timber harvesting and sediment levels in water runoff courses throughout the watershed being managed. A hypothetical example is developed to demonstrate the formulation and show how sediment objectives can be spatially defined anywhere in the watershed. Spatial...

  17. Optimization of a chemically defined, minimal medium for Clostridium thermosaccharolyticum

    SciTech Connect

    Baskaran, S.; Hogsett, D.A.L.; Lynd, L.R.

    1995-12-31

    This article presents results from a systematic study aimed at formulating a defined, minimal medium for the growth of Clostridium thermosaccharolyticum in batch and in continuous culture. At least one vitamin appears to be essential, and there is no demonstrable requirement for trace minerals. The defined medium is shown to support growth on high substrate concentrations with scaled nutrient levels and is expected to permit complete utilization when nutrient limitation(s) are overcome. The observed elemental requirements are compared with cell mass fraction measurements and with a typical cell composition. The maximum growth rate (μmax) for batch growth of C. thermosaccharolyticum on the minimal medium is 0.27 h⁻¹ as compared with values of ≈0.4 h⁻¹ typically reported for growth on complex media. However, exponential growth terminates at an optical density of about 0.22 corresponding to about 40% of the final value attained. Greater understanding of nutrient requirements and interactions is needed to address this issue.

  18. Defining Optimal Health Range for Thyroid Function Based on the Risk of Cardiovascular Disease.

    PubMed

    Chaker, Layal; Korevaar, Tim I M; Rizopoulos, Dimitris; Collet, Tinh-Hai; Völzke, Henry; Hofman, Albert; Rodondi, Nicolas; Cappola, Anne R; Peeters, Robin P; Franco, Oscar H

    2017-08-01

    Reference ranges of thyroid-stimulating hormone (TSH) and free thyroxine (FT4) are defined by their distribution in apparently healthy populations (2.5th and 97.5th percentiles), irrespective of disease risk, and are used as cutoffs for defining and clinically managing thyroid dysfunction. To provide proof of concept in defining optimal health ranges of thyroid function based on cardiovascular disease (CVD) mortality risk. In all, 9233 participants from the Rotterdam Study (mean age, 65.0 years) were followed up (median, 8.8 years) from baseline to date of death or end of follow-up period (2012), whichever came first (689 cases of CVD mortality). We calculated 10-year absolute risks of CVD mortality (defined according to the SCORE project) using a Fine and Gray competing risk model per percentiles of TSH and FT4, modeled nonlinearly and with sex and age adjustments. Overall, FT4 level >90th percentile was associated with a predicted 10-year CVD mortality risk >7.5% (P = 0.005). In men, FT4 level >97th percentile was associated with a risk of 10.8% (P < 0.001). In participants aged ≥65 years, absolute risk estimates were <10.0% below the 30th percentile (∼14.5 pmol/L or 1.10 ng/dL) and ≥15.0% above the 97th percentile of FT4 (∼22 pmol/L or 1.70 ng/dL). We describe absolute 10-year CVD mortality risks according to thyroid function (TSH and FT4) and suggest that optimal health ranges for thyroid function can be defined according to disease risk and are possibly sex and age dependent. These results need to be replicated with sufficient samples and representative populations.

  19. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A new, novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA) is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models (with varying complexity) of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  20. Optimization of sampling parameters for standardized exhaled breath sampling.

    PubMed

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acids and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. The increase in sample volume has improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500mls sample

  1. Optimal Sampling Strategies for Oceanic Applications

    DTIC Science & Technology

    2009-01-01

    The Bluelink ocean data assimilation system (BODAS; Oke et al. 2005, 2008) that underpins BRAN is based on Ensemble Optimal Interpolation (EnOI).

  2. Sample size and optimal sample design in tuberculosis surveys

    PubMed Central

    Sánchez-Crespo, J. L.

    1967-01-01

    Tuberculosis surveys sponsored by the World Health Organization have been carried out in different communities during the last few years. Apart from the main epidemiological findings, these surveys have provided basic statistical data for use in the planning of future investigations. In this paper an attempt is made to determine the sample size desirable in future surveys that include one of the following examinations: tuberculin test, direct microscopy, and X-ray examination. The optimum cluster sizes are found to be 100-150 children under 5 years of age in the tuberculin test, at least 200 eligible persons in the examination for excretors of tubercle bacilli (direct microscopy) and at least 500 eligible persons in the examination for persons with radiological evidence of pulmonary tuberculosis (X-ray). Modifications of the optimum sample size in combined surveys are discussed. PMID:5300008

  3. Sample preparation optimization in fecal metabolic profiling.

    PubMed

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Nicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces as a matrix vary widely in their physicochemical characteristics and molecular content. However, there is still little information in the literature and no universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted by NMR spectroscopy and GC-MS or targeted by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent and the sample weight to solvent volume ratio. It was found that the 1/2 (wf/vs) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Optimal waist circumference cutoff value for defining the metabolic syndrome in postmenopausal Latin American women.

    PubMed

    Blümel, Juan E; Legorreta, Deborah; Chedraui, Peter; Ayala, Felix; Bencosme, Ascanio; Danckers, Luis; Lange, Diego; Espinoza, Maria T; Gomez, Gustavo; Grandia, Elena; Izaguirre, Humberto; Manriquez, Valentin; Martino, Mabel; Navarro, Daysi; Ojeda, Eliana; Onatra, William; Pozzo, Estela; Prada, Mariela; Royer, Monique; Saavedra, Javier M; Sayegh, Fabiana; Tserotas, Konstantinos; Vallejo, Maria S; Zuñiga, Cristina

    2012-04-01

    The aim of this study was to determine an optimal waist circumference (WC) cutoff value for defining the metabolic syndrome (METS) in postmenopausal Latin American women. A total of 3,965 postmenopausal women (age, 45-64 y), with self-reported good health, attending routine consultation at 12 gynecological centers in major Latin American cities were included in this cross-sectional study. Modified guidelines of the US National Cholesterol Education Program, Adult Treatment Panel III were used to assess METS risk factors. Receiver operator characteristic curve analysis was used to obtain an optimal WC cutoff value best predicting at least two other METS components. Optimal cutoff values were calculated by plotting the true-positive rate (sensitivity) against the false-positive rate (1 - specificity). In addition, total accuracy, distance to receiver operator characteristic curve, and the Youden Index were calculated. Of the participants, 51.6% (n = 2,047) were identified as having two or more nonadipose METS risk components (excluding a positive WC component). These women were older, had more years since menopause onset, used hormone therapy less frequently, and had higher body mass indices than women with fewer metabolic risk factors. The optimal WC cutoff value best predicting at least two other METS components was determined to be 88 cm, equal to that defined by the Adult Treatment Panel III. A WC cutoff value of 88 cm is optimal for defining METS in this postmenopausal Latin American series.

  5. The optimal sampling strategy for unfamiliar prey.

    PubMed

    Sherratt, Thomas N

    2011-07-01

    Precisely how predators solve the problem of sampling unfamiliar prey types is central to our understanding of the evolution of a variety of antipredator defenses, ranging from Müllerian mimicry to polymorphism. When predators encounter a novel prey item then they must decide whether to take a risk and attack it, thereby gaining a potential meal and valuable information, or avoid such prey altogether. Moreover, if predators initially attack the unfamiliar prey, then at some point(s) they should decide to cease sampling if evidence mounts that the type is on average unprofitable to attack. Here, I cast this problem as a "two-armed bandit," the standard metaphor for exploration-exploitation trade-offs. I assume that as predators encounter and attack unfamiliar prey they use Bayesian inference to update both their beliefs as to the likelihood that individuals of this type are chemically defended, and the probability of seeing the prey type in the future. I concurrently use dynamic programming to identify the critical informational states at which predator should cease sampling. The model explains why predators sample more unprofitable prey before complete rejection when the prey type is common and explains why predators exhibit neophobia when the unfamiliar prey type is perceived to be rare. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  6. Screening and optimization of chemically defined media and feeds with integrated and statistical approaches.

    PubMed

    Xiao, Zhihua; Sabourin, Michelle; Piras, Graziella; Gorfien, Stephen F

    2014-01-01

    The majority of therapeutic proteins are expressed in mammalian cells, predominantly in Chinese Hamster Ovary cells. While cell culture media and feed supplements are crucial to protein productivity, medium optimization can be labor intensive and time-consuming. In this chapter, we describe some basic concepts in medium development and introduce a rational and rapid workflow to screen and optimize media and feeds. The major goal of medium screening is to select a base formulation as the foundation for further optimization, but ironically, the most conventional screening method may actually rule out ideal chemically defined medium candidates. Appropriate cell adaptation is the key to identifying an optimal base medium, particularly when cells were originally cultured in serum-free medium containing recombinant proteins and/or undefined hydrolysates. The efficient workflow described herein integrates the optimization of both medium and feed simultaneously using a Design-of-Experiment (DOE) approach. The feasibility of the workflow is then demonstrated with a case study, in which chemically defined medium and feed were optimized in a single fed-batch study using a high-throughput microbioreactor system (SimCell™), which resulted in improving protein titers three- to sixfold.

  7. OpenMSI Arrayed Analysis Toolkit: Analyzing Spatially Defined Samples Using Mass Spectrometry Imaging.

    PubMed

    de Raad, Markus; de Rond, Tristan; Rübel, Oliver; Keasling, Jay D; Northen, Trent R; Bowen, Benjamin P

    2017-06-06

    Mass spectrometry imaging (MSI) has primarily been applied in localizing biomolecules within biological matrices. Although well-suited, the application of MSI for comparing thousands of spatially defined spotted samples has been limited. One reason for this is a lack of suitable and accessible data processing tools for the analysis of large arrayed MSI sample sets. The OpenMSI Arrayed Analysis Toolkit (OMAAT) is a software package that addresses the challenges of analyzing spatially defined samples in MSI data sets. OMAAT is written in Python and is integrated with OpenMSI ( http://openmsi.nersc.gov ), a platform for storing, sharing, and analyzing MSI data. By using a web-based python notebook (Jupyter), OMAAT is accessible to anyone without programming experience yet allows experienced users to leverage all features. OMAAT was evaluated by analyzing an MSI data set of a high-throughput glycoside hydrolase activity screen comprising 384 samples arrayed onto a NIMS surface at a 450 μm spacing, decreasing analysis time >100-fold while maintaining robust spot-finding. The utility of OMAAT was demonstrated for screening metabolic activities of different sized soil particles, including hydrolysis of sugars, revealing a pattern of size dependent activities. These results introduce OMAAT as an effective toolkit for analyzing spatially defined samples in MSI. OMAAT runs on all major operating systems, and the source code can be obtained from the following GitHub repository: https://github.com/biorack/omaat .

  8. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
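
    As background to the effect-size uncertainty discussed above, the short sketch below computes the power of a conventional fixed-sample two-arm design across a range of true effect sizes (normal approximation, two-sided alpha = 0.05). It is only a back-of-the-envelope illustration of why robustness across the effect-size range matters, not the optimality criterion proposed in the paper; all numbers are assumed for illustration.

```python
from scipy.stats import norm

ALPHA, SIGMA = 0.05, 1.0
z_alpha = norm.ppf(1 - ALPHA / 2)

def power(n_per_arm: int, delta: float, sigma: float = SIGMA) -> float:
    """Approximate power of a two-sample z-test with n_per_arm per group."""
    se = sigma * (2 / n_per_arm) ** 0.5
    return norm.cdf(delta / se - z_alpha)

n = 64  # sized for a true effect of 0.5 SD at roughly 80% power
for delta in (0.3, 0.4, 0.5, 0.6):
    print(f"true effect {delta:.1f} SD: power = {power(n, delta):.2f}")
```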

  9. The complexities of defining optimal sleep: empirical and theoretical considerations with a special emphasis on children.

    PubMed

    Blunden, Sarah; Galland, Barbara

    2014-10-01

    The main aim of this paper is to consider relevant theoretical and empirical factors defining optimal sleep, and assess the relative importance of each in developing a working definition for, or guidelines about, optimal sleep, particularly in children. We consider whether optimal sleep is an issue of sleep quantity or of sleep quality. Sleep quantity is discussed in terms of duration, timing, variability and dose-response relationships. Sleep quality is explored in relation to continuity, sleepiness, sleep architecture and daytime behaviour. Potential limitations of sleep research in children are discussed, specifically the loss of research precision inherent in sleep deprivation protocols involving children. We discuss which outcomes are the most important to measure. We consider the notion that insufficient sleep may be a totally subjective finding, is impacted by the age of the reporter, driven by socio-cultural patterns and sleep-wake habits, and that, in some individuals, the driver for insufficient sleep can be viewed in terms of a cost-benefit relationship, curtailing sleep in order to perform better while awake. We conclude that defining optimal sleep is complex. The only method of capturing this elusive concept may be by somnotypology, taking into account duration, quality, age, gender, race, culture, the task at hand, and an individual's position in both sleep-alert and morningness-eveningness continuums. At the experimental level, a unified approach by researchers to establish standardized protocols to evaluate optimal sleep across paediatric age groups is required.

  10. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations, in terms of their spatial distribution and number, is known as sampling design and has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed by combining optimal network segmentation and the modularity index in a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  11. The cognitive mechanisms of optimal sampling.

    PubMed

    Lea, Stephen E G; McLaren, Ian P L; Dow, Susan M; Graft, Donald A

    2012-02-01

    How can animals learn the prey densities available in an environment that changes unpredictably from day to day, and how much effort should they devote to doing so, rather than exploiting what they already know? Using a two-armed bandit situation, we simulated several processes that might explain the trade-off between exploring and exploiting. They included an optimising model, dynamic backward sampling; a dynamic version of the matching law; the Rescorla-Wagner model; a neural network model; and ɛ-greedy and rule of thumb models derived from the study of reinforcement learning in artificial intelligence. Under conditions like those used in published studies of birds' performance under two-armed bandit conditions, all models usually identified the more profitable source of reward, and did so more quickly when the reward probability differential was greater. Only the dynamic programming model switched from exploring to exploiting more quickly when available time in the situation was less. With sessions of equal length presented in blocks, a session-length effect was induced in some of the models by allowing motivational, but not memory, carry-over from one session to the next. The rule of thumb model was the most successful overall, though the neural network model also performed better than the remaining models. Copyright © 2011 Elsevier B.V. All rights reserved.
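
    Several of the candidate processes listed above are simple reinforcement-learning rules; as a point of reference, the Python sketch below implements just the ɛ-greedy rule on a two-armed bandit. The reward probabilities, exploration rate, and trial count are illustrative assumptions, not the settings used in the published simulations.

```python
import random

def epsilon_greedy_bandit(p_rewards=(0.2, 0.6), epsilon=0.1,
                          n_trials=1000, seed=1):
    """Minimal sketch of the epsilon-greedy rule on a two-armed bandit.

    Two 'patches' deliver a reward with fixed probabilities; on each trial
    the forager exploits the patch with the higher estimated reward rate,
    except on a fraction epsilon of trials where it explores at random.
    """
    rng = random.Random(seed)
    counts = [0, 0]            # visits to each patch
    estimates = [0.0, 0.0]     # running mean reward of each patch
    choices = []
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                            # explore
        else:
            arm = max(range(2), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < p_rewards[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        choices.append(arm)
    return estimates, choices

if __name__ == "__main__":
    est, choices = epsilon_greedy_bandit()
    print("estimated reward rates:", [round(e, 2) for e in est])
    print("fraction of trials on the richer patch:", sum(choices) / len(choices))
```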

  12. Realization theory and quadratic optimal controllers for systems defined over Banach and Frechet algebras

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1980-01-01

    It is noted that recent work by Kamen (1979) on the stability of half-plane digital filters shows that the problem of the existence of a feedback law also arises for other Banach algebras in applications. This situation calls for a realization theory and stabilizability criteria for systems defined over a Banach or Frechet algebra A. Such a theory is developed here, with special emphasis placed on the construction of finitely generated realizations, the existence of coprime factorizations for T(s) defined over A, and the solvability of the quadratic optimal control problem and the associated algebraic Riccati equation over A.

  13. A Source-to-Source Architecture for User-Defined Optimizations

    SciTech Connect

    Schordan, M; Quinlan, D

    2003-02-06

    The performance of object-oriented applications often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are user-defined and not part of the programming language itself, only limited information on their high-level semantics can be leveraged through program analysis by the compiler, and thus most often no appropriate high-level optimizations are performed. In this paper we outline an approach based on source-to-source transformation to allow users to define optimizations which are not performed by the compiler they use. These techniques are intended to be as easy and intuitive as possible for potential users, i.e., for designers of object-oriented libraries, who most often have only basic compiler expertise.

  14. Towards optimal sampling schedules for integral pumping tests

    NASA Astrophysics Data System (ADS)

    Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario

    2011-06-01

    Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.

  15. Towards optimal sampling schedules for integral pumping tests.

    PubMed

    Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario

    2011-06-01

    Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations C(av) and mass flow rates M(CP). Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the C(av) estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
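
    The two preceding records describe testing reduced sampling schedules against high-frequency concentration-time series; the Python sketch below reproduces only the general idea on synthetic data, using a plain time average of evenly spaced samples as a stand-in for the full Cav estimation. The synthetic series and the error metric are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic high-frequency concentration-time series from a pumped well
# (hours vs. ug/L); the trend and noise level are arbitrary assumptions.
t = np.linspace(0, 72, 721)
c_full = 50 + 15 * np.exp(-t / 30) + rng.normal(0, 2, t.size)
c_av_ref = c_full.mean()  # reference average from the full series

# Subsample with constant time intervals and compare the resulting
# average concentration against the reference value.
for n_samples in (3, 6, 12, 24):
    idx = np.linspace(0, t.size - 1, n_samples).round().astype(int)
    rel_err = abs(c_full[idx].mean() - c_av_ref) / c_av_ref * 100
    print(f"{n_samples:2d} samples -> relative error in C_av: {rel_err:4.1f} %")
```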

  16. Optimization of sampled imaging system with baseband response squeeze model

    NASA Astrophysics Data System (ADS)

    Yang, Huaidong; Chen, Kexin; Huang, Xingyue; He, Qingsheng; Jin, Guofan

    2008-03-01

    When evaluating or designing a sampled imager, a comprehensive analysis is necessary and a trade-off among the optics, the photoelectric detector, and the display technique is inevitable. A new method for sampled imaging system evaluation and optimization is developed in this paper. By extending the MTF to sampled imaging systems, inseparable detector parameters are taken into account and the relations among optics, detector, and display are revealed. To measure the artifacts of sampling, the Baseband Response Squeeze model, which imposes a penalty for undersampling, is described. Taking the squeezed baseband response and its cutoff frequency as the criterion, the method is suitable not only for evaluating but also for optimizing sampled imaging systems oriented to either a single task or multiple tasks. The method is applied to optimize a typical sampled imaging system; a sensitivity analysis of various detector parameters is performed and the resulting guidelines are given.

  17. Optimizing sparse sampling for 2D electronic spectroscopy

    NASA Astrophysics Data System (ADS)

    Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias

    2017-02-01

    We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.

  18. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    PubMed

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing allow for virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol does work not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  19. Optimal Sampling to Provide User-Specific Climate Information.

    NASA Astrophysics Data System (ADS)

    Panturat, Suwanna

    examined given information from the optimal sampling network as defined by the study.

  20. Sampling optimization for printer characterization by direct search.

    PubMed

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  1. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  2. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs excessive communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571
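
    The full joint routing/polling ILP is not reproduced in the abstract; as a hedged illustration of the polling-switch-selection part only, the toy model below chooses which switches to poll so that every flow is observed at least once at minimum polling cost. The topology, flow paths, and costs are invented, and PuLP (with its bundled CBC solver) is used as the ILP solver.

```python
# Simplified sketch of the polling-switch-selection part only (not the paper's
# full joint routing model); requires `pip install pulp`.
import pulp

# Hypothetical data: each flow traverses a fixed path of switches, and polling a
# switch has a per-switch communication cost.
flows = {
    "f1": ["s1", "s2", "s4"],
    "f2": ["s1", "s3"],
    "f3": ["s2", "s3", "s4"],
}
poll_cost = {"s1": 4.0, "s2": 2.5, "s3": 3.0, "s4": 2.0}

prob = pulp.LpProblem("polling_switch_selection", pulp.LpMinimize)
poll = {s: pulp.LpVariable(f"poll_{s}", cat="Binary") for s in poll_cost}

# Objective: minimize total polling (communication) cost.
prob += pulp.lpSum(poll_cost[s] * poll[s] for s in poll_cost)

# Constraint: every flow must be observed by at least one polled switch on its path.
for f, path in flows.items():
    prob += pulp.lpSum(poll[s] for s in path) >= 1, f"cover_{f}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("polled switches:", [s for s in poll_cost if poll[s].value() == 1])
print("total cost:", pulp.value(prob.objective))
```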

  3. In-depth analysis of sampling optimization methods

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Han, Sangjun; Kim, Myoungsoo; Habets, Boris; Buhl, Stefan; Guhlemann, Steffen; Rößiger, Martin; Bellmann, Enrico; Kim, Seop

    2016-03-01

    High-order overlay and alignment models require good coverage of overlay or alignment marks on the wafer, but dense sampling plans are not possible for throughput reasons. Sampling plan optimization has therefore become a key issue. We analyze the different methods for sampling optimization and discuss the different knobs to fine-tune the methods to the constraints of high-volume manufacturing. We propose a method to judge sampling plan quality with respect to overlay performance, run-to-run stability, and dispositioning criteria, using a number of use cases from the most advanced lithography processes.

  4. Design optimality for models defined by a system of ordinary differential equations.

    PubMed

    Rodríguez-Díaz, Juan M; Sánchez-León, Guillermo

    2014-09-01

    Many scientific processes, especially in pharmacokinetics (PK) and pharmacodynamics (PD) studies, are defined by a system of ordinary differential equations (ODE). If there are unknown parameters that need to be estimated, the optimal experimental design approach offers quality estimators for the different objectives of the practitioners. When computing optimal designs, the standard procedure uses the linearization of the analytical expression of the ODE solution, which is not feasible when this analytical form does not exist. In this work some methods to solve this problem are described and discussed. Optimal designs for two well-known example models, Iodine and Michaelis-Menten, have been computed using the proposed methods. A thorough study has been done for a specific two-parameter PK model, the biokinetic model of ciprofloxacin and ofloxacin, computing the best designs for different optimality criteria and numbers of points. The designs have been compared according to their efficiency, and the goodness of the designs for the estimation of each parameter has been checked. Although the objectives of the paper are focused on the optimal design field, the methodology can be used as well for a sensitivity analysis of ordinary differential equation systems.
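
    As a hedged sketch of the general idea (not the authors' method or their specific models), the snippet below computes a locally D-optimal two-point design for a Michaelis-Menten elimination model by building the Fisher information matrix from finite-difference sensitivities of the numerically integrated ODE solution; the nominal parameters, initial concentration, and candidate time grid are invented.

```python
# Minimal D-optimality sketch for dC/dt = -Vmax*C/(Km + C); all numbers are illustrative.
import itertools
import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.array([1.0, 2.0])          # nominal (Vmax, Km)
C0, t_max = 10.0, 12.0

def concentration(times, theta):
    vmax, km = theta
    sol = solve_ivp(lambda t, c: -vmax * c / (km + c), (0.0, t_max), [C0],
                    t_eval=times, rtol=1e-8, atol=1e-10)
    return sol.y[0]

def sensitivities(times, theta, h=1e-5):
    """Finite-difference Jacobian dC(t)/dtheta at the nominal parameters."""
    cols = []
    for j in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[j] = h * max(1.0, abs(theta[j]))
        cols.append((concentration(times, theta + dp) -
                     concentration(times, theta - dp)) / (2 * dp[j]))
    return np.column_stack(cols)

def d_criterion(times):
    J = sensitivities(np.asarray(times, dtype=float), theta0)
    return np.linalg.det(J.T @ J)      # D-optimality: maximize det of the FIM

candidates = np.linspace(0.5, t_max, 24)
best = max(itertools.combinations(candidates, 2), key=d_criterion)
print("D-optimal 2-point design (times):", np.round(best, 2))
```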

  5. Defining a new standard for IVUS optimized drug eluting stent implantation: the PRAVIO study.

    PubMed

    Gerber, R T; Latib, A; Ielasi, A; Cosgrave, J; Qasim, A; Airoldi, F; Chieffo, A; Montorfano, M; Carlino, M; Michev, I; Tobis, J; Colombo, A

    2009-08-01

    Preliminary Investigation to the Angiographic Versus IVUS Optimization Trial is a single-center prospective observational intravascular ultrasound (IVUS) guided stent implantation study assessing new criteria for optimal drug eluting stent (DES) deployment. IVUS assessment of DES often reveals underexpansion and malapposition. Optimal stent deployment is currently poorly defined and previous criteria may not be suitable in long and complex lesions. Optimization was defined as achieving ≥70% of the cross-sectional area (CSA) of the postdilation balloon. This criterion was applied in 113 complex lesions. The size of this balloon was calculated according to vessel media-to-media diameters at various sites inside the stented segment. The IVUS guided treated lesions were matched according to diabetes, vessel type, reference vessel diameter, minimum lumen diameter (MLD), and lesion length with a group of angiographic treated lesions to compare final MLD achieved. Mean minimum stent CSA according to the postdilation balloon utilized was 4.62 mm², 6.26 mm², 7.87 mm², and 9.87 mm² for 2.5 mm, 3.0 mm, 3.5 mm, and 4 mm balloons, respectively. Final MLD (mm) was significantly larger in the IVUS compared to the angiographic-guided group (3.09 ± 0.50 vs. 2.67 ± 0.54; P < 0.0001). There were no procedural complications related to IVUS use. We propose new IVUS criteria based on vessel remodeling that result in an increment in the final MLD, compared to angiographic guidance, which is much larger than in any previously published study. This criterion seems to be safely achievable. A proposed randomized study (angiographic vs. IVUS optimization trial) has been launched to test these concepts. © 2008 Wiley-Liss, Inc.

  6. Optimal Sampling Strategies for Detecting Zoonotic Disease Epidemics

    PubMed Central

    Ferguson, Jake M.; Langebrake, Jessica B.; Cannataro, Vincent L.; Garcia, Andres J.; Hamman, Elizabeth A.; Martcheva, Maia; Osenberg, Craig W.

    2014-01-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests. PMID:24968100

  7. Optimal sampling strategies for detecting zoonotic disease epidemics.

    PubMed

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  8. An optimization-based sampling scheme for phylogenetic trees.

    PubMed

    Misra, Navodit; Blelloch, Guy; Ravi, R; Schwartz, Russell

    2011-11-01

    Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for heated versions of some important special cases. We demonstrate the efficiency and versatility of the method by an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.

  9. An Optimization-Based Sampling Scheme for Phylogenetic Trees

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.

  10. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  11. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented, including a derived theoretical relationship between the pixel signal-to-noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimally allocating a fixed data bit-volume (for a specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  12. Optimized angle selection for radial sampled NMR experiments

    NASA Astrophysics Data System (ADS)

    Gledhill, John M.; Joshua Wand, A.

    2008-12-01

    Sparse sampling offers tremendous potential for overcoming the time limitations imposed by traditional Cartesian sampling of indirectly detected dimensions of multidimensional NMR data. Unfortunately, several otherwise appealing implementations are accompanied by spectral artifacts that have the potential to contaminate the spectrum with false peak intensity. In radial sampling of linked time evolution periods, the artifacts are easily identified and removed from the spectrum if a sufficient set of radial sampling angles is employed. Robust implementation of the radial sampling approach therefore requires optimization of the set of radial sampling angles collected. Here we describe several methods for such optimization. The approaches described take advantage of various aspects of the general simultaneous multidimensional Fourier transform in the analysis of multidimensional NMR data. Radially sampled data are primarily contaminated by ridges extending from authentic peaks. Numerical methods are described that definitively identify artifactual intensity and the optimal set of sampling angles necessary to eliminate it under a variety of scenarios. The algorithms are tested with both simulated and experimentally obtained triple resonance data.

  13. Optimal Food Safety Sampling Under a Budget Constraint.

    PubMed

    Powell, Mark R

    2014-01-01

    Much of the literature regarding food safety sampling plans implicitly assumes that all lots entering commerce are tested. In practice, however, only a fraction of lots may be tested due to a budget constraint. In such a case, there is a tradeoff between the number of lots tested and the number of samples per lot. To illustrate this tradeoff, a simple model is presented in which the optimal number of samples per lot depends on the prevalence of sample units that do not conform to microbiological specifications and the relative costs of sampling a lot and of drawing and testing a sample unit from a lot. The assumed objective is to maximize the number of nonconforming lots that are rejected subject to a food safety sampling budget constraint. If the ratio of the cost per lot to the cost per sample unit is substantial, the optimal number of samples per lot increases as prevalence decreases. However, if the ratio of the cost per lot to the cost per sample unit is sufficiently small, the optimal number of samples per lot reduces to one (i.e., simple random sampling), regardless of prevalence. In practice, the cost per sample unit may be large relative to the cost per lot due to the expense of laboratory testing and other factors. Designing effective compliance assurance measures depends on economic, legal, and other factors in addition to microbiology and statistics. © 2013 Society for Risk Analysis Published 2013. This article is a U.S. Government work and is in the public domain for the U.S.A.
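
    A minimal numerical sketch of the described tradeoff, assuming a zero-tolerance plan (a lot is rejected if any sampled unit is nonconforming) and invented budget, cost, and prevalence figures:

```python
import numpy as np

budget = 100_000.0       # total sampling budget
cost_per_lot = 50.0      # fixed cost of sampling a lot
cost_per_unit = 200.0    # cost of drawing and testing one sample unit
prevalence = 0.02        # fraction of sample units that are nonconforming

def expected_detections(n_per_lot):
    """Expected number of lots in which at least one nonconforming unit is found."""
    lots_tested = budget / (cost_per_lot + n_per_lot * cost_per_unit)
    p_detect = 1.0 - (1.0 - prevalence) ** n_per_lot   # >= 1 positive unit
    return lots_tested * p_detect

n_values = np.arange(1, 41)
scores = np.array([expected_detections(n) for n in n_values])
best_n = n_values[scores.argmax()]
print(f"optimal samples per lot: {best_n}")
print(f"expected lots flagged under the budget: {scores.max():.1f}")
```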

  14. Optimized method for dissolved hydrogen sampling in groundwater.

    PubMed

    Alter, Marcus D; Steiof, Martin

    2005-06-01

    Dissolved hydrogen concentrations are used to characterize redox conditions of contaminated aquifers. The currently accepted and recommended bubble strip method for hydrogen sampling (Wiedemeier et al., 1998) requires relatively long sampling times and immediate field analysis. In this study we present methods for optimized sampling and for sample storage. The bubble strip sampling method was examined for various flow rates, bubble sizes (headspace volume in the sampling bulb), and two different H2 concentrations. The results were compared to a theoretical equilibration model. Turbulent flow in the sampling bulb was optimized for gas transfer by reducing the inlet diameter. Extraction with a 5 mL headspace volume and flow rates higher than 100 mL/min resulted in 95-100% equilibrium within 10-15 min. To investigate sample storage, gas samples from the sampling bulb were kept in headspace vials for varying periods. Hydrogen samples (4.5 ppmv, corresponding to 3.5 nM in the liquid phase) could be stored up to 48 h and 72 h with recovery rates of 100.1±2.6% and 94.6±3.2%, respectively. These results are promising and prove the possibility of storage for 2-3 days before laboratory analysis. The optimized method was tested at a field site contaminated with chlorinated solvents. Duplicate gas samples were stored in headspace vials and analyzed after 24 h. Concentrations were measured in the range of 2.5-8.0 nM, corresponding to known concentrations in reduced aquifers.

  15. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
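
    spsann itself is an R package; purely as an illustration of the underlying idea, the Python sketch below runs a bare-bones spatial simulated annealing over the MSSD criterion for a toy square study area, with the perturbation distance shrinking linearly and the acceptance probability shrinking with the iteration count. All settings are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
grid = np.array([(x, y) for x in np.linspace(0, 1, 25)
                        for y in np.linspace(0, 1, 25)])   # prediction locations

def mssd(sample):
    """Mean squared shortest distance from prediction locations to sample points."""
    d2 = ((grid[:, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

n_points, n_iter = 15, 3000
sample = rng.random((n_points, 2))
energy, temp, max_shift = mssd(sample), 0.05, 0.2

for it in range(n_iter):
    frac = 1.0 - it / n_iter                       # cooling / shrinking factor
    cand = sample.copy()
    i = rng.integers(n_points)
    cand[i] = np.clip(cand[i] + rng.uniform(-1, 1, 2) * max_shift * frac, 0, 1)
    e_cand = mssd(cand)
    # Accept improvements always; accept worse states with a shrinking probability.
    if e_cand < energy or rng.random() < np.exp(-(e_cand - energy) / (temp * frac + 1e-9)):
        sample, energy = cand, e_cand

print(f"final MSSD: {energy:.5f}")
```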

  16. Systematic development and optimization of chemically defined medium supporting high cell density growth of Bacillus coagulans.

    PubMed

    Chen, Yu; Dong, Fengqing; Wang, Yonghong

    2016-09-01

    With determined components and experimental reproducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop a chemically defined medium supporting high-cell-density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining experimental techniques with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single-omission technique and single-addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by statistical methods. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed to obtain considerable biomass production of at least five B. coagulans strains, including the two model strains B. coagulans 36D1 and ATCC 7050.
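
    As a heavily hedged sketch of the in silico single-omission step only (the SBML file name, exchange-reaction identifiers, and growth thresholds are placeholders, not taken from the paper), one could iterate over the medium components of a genome-scale model with COBRApy:

```python
# Requires `pip install cobra`; the model file is hypothetical.
import cobra

model = cobra.io.read_sbml_model("b_coagulans_placeholder.xml")  # placeholder file
baseline = model.slim_optimize()
essential, stimulatory = [], []

for exchange_id in list(model.medium):           # currently open uptake reactions
    with model:                                  # changes are reverted on exit
        # Omit one medium component by closing its exchange reaction.
        model.medium = {k: v for k, v in model.medium.items() if k != exchange_id}
        growth = model.slim_optimize(error_value=0.0)
    if growth < 1e-6:
        essential.append(exchange_id)
    elif growth < 0.9 * baseline:
        stimulatory.append(exchange_id)

print("essential medium components:", essential)
print("growth-stimulatory components:", stimulatory)
```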

  17. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    NASA Astrophysics Data System (ADS)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters simultaneously with remotely sensed imaging of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for the optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: (A) sampling sites should be accessible to the crew on the ground; (B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value; and (C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category

  18. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure.

    PubMed

    Hatjimihail, Aristides T

    2009-06-09

    An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and estimation of the risk caused by the measurement error. Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. Optimal sampling time intervals can then be defined as those minimizing a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.
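
    The paper's state-transition formulation is not reproduced here; the following toy Monte Carlo, built on much simpler assumptions (a single exponential failure mode, a fixed per-event QC detection power, and failed run-time as the risk proxy), only illustrates how a residual-risk curve over candidate QC intervals can be screened against an acceptable limit. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
mtbf, p_detect, horizon = 500.0, 0.9, 1000.0   # hours; illustrative values
acceptable_risk = 2.0                           # failed run-hours per 1000 h

def residual_risk(tau, n_runs=4000):
    """Expected hours of failed operation per horizon for QC every tau hours."""
    failed_hours = 0.0
    for _ in range(n_runs):
        t_fail = rng.exponential(mtbf)
        if t_fail >= horizon:
            continue
        # QC events after the failure; the first successful one ends the exposure.
        t = (np.floor(t_fail / tau) + 1) * tau
        while t < horizon and rng.random() > p_detect:
            t += tau
        failed_hours += min(t, horizon) - t_fail
    return failed_hours / n_runs

taus = np.arange(4, 97, 4)
risks = [residual_risk(t) for t in taus]
ok = [t for t, r in zip(taus, risks) if r <= acceptable_risk]
print("longest acceptable QC interval (h):", max(ok) if ok else None)
```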

  19. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  20. Defined media optimization for in vitro culture of bovine somatic cell nuclear transfer (SCNT) embryos.

    PubMed

    Wang, Li-Jun; Xiong, Xian-Rong; Zhang, Hui; Li, Yan-Yan; Li, Qian; Wang, Yong-Sheng; Xu, Wen-Bing; Hua, Song; Zhang, Yong

    2012-12-01

    The objective was to establish an efficient defined culture medium for bovine somatic cell nuclear transfer (SCNT) embryos. In this study, modified synthetic oviductal fluid (mSOF) without bovine serum albumin (BSA) was used as the basic culture medium (BCM), whereas the control medium was BCM with BSA. In Experiment 1, adding polyvinyl alcohol (PVA) to BCM supported development of SCNT embryos to blastocyst stage, but blastocyst formation rate and blastocyst cell number were both lower (P < 0.05) compared to the undefined group (6.1 vs. 32.6% and 67.3 ± 3.4 vs. 109.3 ± 4.5, respectively). In Experiment 2, myo-inositol, a combination of insulin, transferrin and selenium (ITS), and epidermal growth factor (EGF) were added separately to PVA-supplemented BCM. The blastocyst formation rate and blastocyst cell number of those three groups were dramatically improved compared with that of PVA-supplemented group in Experiment 1 (18.5, 23.0, 24.1 vs. 6.1% and 82.7 ± 2.0, 84.3 ± 4.2, 95.3 ± 3.8 vs. 67.3 ± 3.4, respectively, P < 0.05), but were still lower compared with that of undefined group (33.7% and 113.8 ± 3.4, P < 0.05). In Experiment 3, when a combination of myo-inositol, ITS and EGF were added to PVA-supplemented BCM, blastocyst formation rate and blastocyst cell number were similar to that of undefined group (30.4 vs. 31.1% and 109.3 ± 4.4 vs. 112.0 ± 3.6, P > 0.05). In Experiment 4, when blastocysts were cryopreserved and subsequently thawed, there were no significant differences between the optimized defined group (Experiment 3) and undefined group in survival rate and 24 and 48 h hatching blastocyst rates. Furthermore, there were no significant differences in expression levels of H19, HSP70 and BAX in blastocysts derived from optimized defined medium and undefined medium, although the relative expression abundance of IGF-2 was significantly decreased in the former. In conclusion, a defined culture medium containing PVA, myo-inositol, ITS, and EGF

  1. Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.

    PubMed

    Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V

    2016-05-03

    Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a ¹H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g

  2. Investigation of Archean microfossil preservation for defining science objectives for Mars sample return missions

    NASA Astrophysics Data System (ADS)

    Lorber, K.; Czaja, A. D.

    2014-12-01

    Recent studies suggest that Mars contains more potentially life-supporting habitats (either in the present or past) than once thought. The key to finding life on Mars, whether extinct or extant, is to first understand which biomarkers and biosignatures are strictly biogenic in origin. Studying ancient habitats and fossil organisms of the early Earth can help to characterize potential Martian habitats and preserved life. This study, which focuses on the preservation of fossil microorganisms from the Archean Eon, aims to help define in part the science methods needed for a Mars sample return mission, of which the Mars 2020 rover mission is the first step. Here we report variations in the geochemical and morphological preservation of filamentous fossil microorganisms (microfossils) collected from the 2.5-billion-year-old Gamohaan Formation of the Kaapvaal Craton of South Africa. Samples of carbonaceous chert were collected from outcrop and drill core within ~1 km of each other. Specimens from each location were located within thin sections and their biologic morphologies were confirmed using confocal laser scanning microscopy. Raman spectroscopic analyses documented the carbonaceous nature of the specimens and also revealed variations in the level of geochemical preservation of the kerogen that comprises the fossils. The geochemical preservation of kerogen is principally thought to be a function of thermal alteration, but the regional geology indicates all of the specimens experienced the same thermal history. It is hypothesized that the fossils contained within the outcrop samples were altered by surface weathering, whereas the drill core samples, buried to a depth of ~250 m, were not. This differential weathering is unusual for cherts that have extremely low porosities. Through morphological and geochemical characterization of the earliest known forms of fossilized life on Earth, a greater understanding of the origin and evolution of life on Earth is gained

  3. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    PubMed

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  4. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research

    PubMed Central

    Duan, Naihua; Bhaumik, Dulal K.; Palinkas, Lawrence A.; Hoagwood, Kimberly

    2015-01-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research. PMID:25491200

  5. 'Optimal thermal range' in ectotherms: Defining criteria for tests of the temperature-size-rule.

    PubMed

    Walczyńska, Aleksandra; Kiełbasa, Anna; Sobczyk, Mateusz

    2016-08-01

    Thermal performance curves for population growth rate r (a measure of fitness) were estimated over a wide range of temperature for three species: Coleps hirtus (Protista), Lecane inermis (Rotifera) and Aeolosoma hemprichi (Oligochaeta). We measured individual body size and examined if predictions for the temperature-size rule (TSR) were valid for different temperatures. All three organisms investigated follow the TSR, but only over a specific range between minimal and optimal temperatures, while maintenance at temperatures beyond this range showed the opposite pattern in these taxa. We consider minimal and optimal temperatures to be species-specific, and moreover delineate a physiological range outside of which an ectotherm is constrained against displaying size plasticity in response to temperature. This thermal range concept has important implications for general size-temperature studies. Furthermore, the concept of 'operating thermal conditions' may provide a new approach to (i) defining criteria required for investigating and interpreting temperature effects, and (ii) providing a novel interpretation for many cases in which species do not conform to the TSR. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Optimal allocation of point-count sampling effort

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Link, W.A.

    1993-01-01

    Both unlimited and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and sampling characteristics of population size. Using information about the relationship between proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
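
    As a small worked illustration of one of the listed objectives (maximum expected total count) under the exponential detection model p(t) = 1 - exp(-λt), with invented detection rate, travel time, and total survey time:

```python
import numpy as np
from scipy.optimize import minimize_scalar

detect_rate = 0.35      # per-minute detection rate (lambda); illustrative
travel_time = 8.0       # minutes to move between count points
total_time = 300.0      # total survey time (minutes)

def expected_total_count(t):
    """Expected total count per unit abundance: points visited x p(detection)."""
    n_points = total_time / (t + travel_time)
    return n_points * (1.0 - np.exp(-detect_rate * t))

res = minimize_scalar(lambda t: -expected_total_count(t), bounds=(0.5, 60.0),
                      method="bounded")
print(f"optimal count duration: {res.x:.1f} min "
      f"({total_time / (res.x + travel_time):.0f} points)")
```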

  7. Defining the optimal sequence for the systemic treatment of metastatic breast cancer.

    PubMed

    Mestres, J A; iMolins, A B; Martínez, L C; López-Muñiz, J I C; Gil, E C; de Juan Ferré, A; Del Barco Berrón, S; Pérez, Y F; Mata, J G; Palomo, A G; Gregori, J G; Pardo, P G; Mañas, J J I; Hernández, A L; de Dueñas, E M; Jáñez, N M; Murillo, S M; Bofill, J S; Auñón, P Z; Sanchez-Rovira, P

    2017-02-01

    Metastatic breast cancer is a heterogeneous disease that presents in varying forms, and a growing number of therapeutic options makes it difficult to determine the best choice in each particular situation. When selecting a systemic treatment, it is important to consider the medication administered in the previous stages, as well as acquired resistance, type of progression, time to relapse, tumor aggressiveness, age, comorbidities, pre- and post-menopausal status, and patient preferences. Moreover, tumor genomic signatures can identify different subtypes, which can be used to create patient profiles and design specific therapies. However, there is no consensus regarding the best treatment sequence for each subgroup of patients. During the SABCC Congress of 2014, specialized breast cancer oncologists from referral hospitals in Europe met to define patient profiles and to determine specific treatment sequences for each one. Conclusions were then debated in a final meeting in which a relative degree of consensus for each treatment sequence was established. Four patient profiles were defined according to established breast cancer phenotypes: pre-menopausal patients with luminal subtype, post-menopausal patients with luminal subtype, patients with triple-negative subtype, and patients with HER2-positive subtype. A treatment sequence was then defined, consisting of hormonal therapy with tamoxifen, aromatase inhibitors, fulvestrant, and mTOR inhibitors for pre- and post-menopausal patients; a chemotherapy sequence for the first, second, and further lines for luminal and triple-negative patients; and an optimal sequence for treatment with new anti-HER2 therapies. Finally, a document detailing all treatment sequences, that had the agreement of all the oncologists, was drawn up as a guideline and advocacy tool for professionals treating patients with this disease.

  8. Defining an optimal surface chemistry for pluripotent stem cell culture in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Zonca, Michael R., Jr.

    Surface chemistry is critical for growing pluripotent stem cells in an undifferentiated state. There is great potential to engineer the surface chemistry at the nanoscale level to regulate stem cell adhesion. However, the challenge is to identify the optimal surface chemistry of the substrata for ES cell attachment and maintenance. Using a high-throughput polymerization and screening platform, a chemically defined, synthetic polymer-grafted coating that supports strong attachment and high expansion capacity of pluripotent stem cells has been discovered using mouse embryonic stem (ES) cells as a model system. This optimal substrate, N-[3-(Dimethylamino)propyl] methacrylamide (DMAPMA) grafted on a 2D synthetic poly(ether sulfone) (PES) membrane, sustains the self-renewal of ES cells (up to 7 passages). DMAPMA supports attachment of ES cells through integrin beta1 in an RGD-independent manner and is similar to another recently reported polymer surface. Next, DMAPMA was transferred to 3D by grafting onto synthetic, polymeric, PES fibrous matrices through both photo-induced and plasma-induced polymerization. These 3D modified fibers exhibited higher cell proliferation and greater expression of pluripotency markers of mouse ES cells than 2D PES membranes. Our results indicated that desirable surfaces in 2D can be scaled to 3D and that both surface chemistry and structural dimension strongly influence the growth and differentiation of pluripotent stem cells. Lastly, the feasibility of incorporating DMAPMA into a widely used natural polymer, alginate, has been tested. Novel adhesive alginate hydrogels have been successfully synthesized by either direct polymerization of DMAPMA and methacrylic acid blended with alginate, or photo-induced DMAPMA polymerization on alginate nanofibrous hydrogels. In particular, DMAPMA-coated alginate hydrogels support strong ES cell attachment, exhibiting a concentration dependency of DMAPMA. This research provides a

  9. [Optimized sample preparation for metabolome studies on Streptomyces coelicolor].

    PubMed

    Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian

    2014-04-01

    Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform to analyze low molecular weight metabolites in a given organism qualitatively and quantitatively. Compared to other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. In the present work, several sample preparation steps, including cell quenching time, cell separation method, conditions for metabolite extraction and metabolite derivatization, were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: time of low-temperature quenching 4 min, cell separation by fast filtration, time of freeze-thaw 45 s/3 min and the conditions of metabolite derivatization at 40 degrees C for 90 min. By using this optimized protocol, 103 metabolites were finally identified from a sample of S. coelicolor, which distribute in central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle), amino acid, fatty acid, and nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between primary and secondary metabolism. An optimized protocol of sample preparation was established and applied for metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of the metabolites reveal that amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to

  10. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  11. Classifier-Guided Sampling for Complex Energy System Optimization

    SciTech Connect

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
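
    The report's Bayesian-network classifier and benchmark problems are not described in detail here, so the snippet below is only a conceptual stand-in: a naive Bayes classifier (scikit-learn) is trained on an initial set of evaluated binary designs and then used to filter a large candidate batch before spending further (simulated) expensive evaluations. The objective function, thresholds, and population sizes are invented.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n_vars = 20

def expensive_objective(x):
    # Stand-in for a costly simulation: weighted ones minus a coupling penalty.
    w = np.linspace(1.0, 2.0, n_vars)
    return float(x @ w - 3.0 * abs(x[:10].sum() - x[10:].sum()))

# 1) Evaluate an initial random population; label the better half "promising".
X = rng.integers(0, 2, size=(200, n_vars))
y_obj = np.array([expensive_objective(x) for x in X])
labels = (y_obj >= np.median(y_obj)).astype(int)

# 2) Train the classifier and filter a large batch of new candidate designs.
clf = BernoulliNB().fit(X, labels)
candidates = rng.integers(0, 2, size=(2000, n_vars))
keep = candidates[clf.predict_proba(candidates)[:, 1] > 0.7]
if len(keep) == 0:                       # fall back if the filter is too strict
    keep = candidates

# 3) Spend expensive evaluations only on the filtered designs.
best = max(keep, key=expensive_objective)
print(f"evaluated {len(keep)} of {len(candidates)} candidates; "
      f"best objective {expensive_objective(best):.2f}")
```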

  12. Optimizing performance of nonparametric species richness estimators under constrained sampling.

    PubMed

    Rajakaruna, Harshana; Drake, D Andrew R; T Chan, Farrah; Bailey, Sarah A

    2016-10-01

    Understanding the functional relationship between the sample size and the performance of species richness estimators is necessary to optimize limited sampling resources against estimation error. Nonparametric estimators such as Chao and Jackknife demonstrate strong performances, but consensus is lacking as to which estimator performs better under constrained sampling. We explore a method to improve the estimators under such a scenario. The method we propose involves randomly splitting species-abundance data from a single sample into two equally sized samples, and using an appropriate incidence-based estimator to estimate richness. To test this method, we assume a lognormal species-abundance distribution (SAD) with varying coefficients of variation (CV), generate samples using MCMC simulations, and use the expected mean-squared error as the performance criterion of the estimators. We test this method for the Chao, Jackknife, ICE, and ACE estimators. Between abundance-based estimators with the single sample, and incidence-based estimators with the split-in-two samples, Chao2 performed the best when CV < 0.65, and incidence-based Jackknife performed the best when CV > 0.65, given that the ratio of sample size to observed species richness is greater than a critical value given by a power function of CV with respect to abundance of the sampled population. The proposed method increases the performance of the estimators substantially and is more effective when more rare species are in an assemblage. We also show that the splitting method works qualitatively similarly well when the SADs are log series, geometric series, and negative binomial. We demonstrate an application of the proposed method by estimating the richness of zooplankton communities in samples of ballast water. The proposed splitting method is an alternative to sampling a large number of individuals to increase the accuracy of richness estimations; therefore, it is appropriate for a wide range of resource
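
    A hedged sketch of the splitting idea: one abundance sample from a simulated lognormal community is randomly divided into two equal halves, the halves are treated as incidence samples, and bias-corrected Chao2 and first-order Jackknife estimates are compared with abundance-based Chao1. The community parameters and estimator variants are illustrative choices, not the paper's exact simulation design.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a lognormal community and draw one abundance sample from it.
true_richness = 300
rel_abund = rng.lognormal(mean=0.0, sigma=1.2, size=true_richness)
rel_abund /= rel_abund.sum()
sample = rng.choice(true_richness, size=1500, p=rel_abund)   # species IDs

def chao1(counts):
    f1, f2 = (counts == 1).sum(), (counts == 2).sum()
    return (counts > 0).sum() + f1 * (f1 - 1) / (2 * (f2 + 1))

def split_incidence_estimators(sample):
    """Split one abundance sample into two halves and apply incidence estimators."""
    shuffled = rng.permutation(sample)
    a, b = shuffled[: len(shuffled) // 2], shuffled[len(shuffled) // 2:]
    inc = np.array([[(a == s).any(), (b == s).any()]
                    for s in np.unique(sample)], dtype=int)
    q1, q2 = (inc.sum(axis=1) == 1).sum(), (inc.sum(axis=1) == 2).sum()
    s_obs, m = len(inc), 2
    chao2 = s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))
    jack1 = s_obs + q1 * (m - 1) / m
    return chao2, jack1

counts = np.bincount(sample, minlength=true_richness)
chao2, jack1 = split_incidence_estimators(sample)
print(f"observed: {(counts > 0).sum()}, Chao1: {chao1(counts):.0f}, "
      f"split Chao2: {chao2:.0f}, split Jackknife1: {jack1:.0f}, true: {true_richness}")
```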

  13. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-05

    We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis.
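
    As a toy illustration of the singular-value criterion (not the authors' operator or geometry), the sketch below builds a 1-D Fourier-type sampling matrix, inspects how many singular values stay above a threshold as the number of sampling points grows, and thereby indicates when extra samples stop adding independent information. Frequency, target extent, angular span, and threshold are invented.

```python
import numpy as np

wavelength = 0.03                    # ~10 GHz, metres; illustrative
k = 2 * np.pi / wavelength
target_extent = 0.5                  # metres (reflectivity support)
x = np.linspace(-target_extent / 2, target_extent / 2, 200)   # unknowns

def significant_dof(n_samples, threshold=1e-2):
    """Count singular values of the sampling matrix above threshold x sigma_max."""
    angles = np.linspace(-np.pi / 6, np.pi / 6, n_samples)     # observation angles
    A = np.exp(-1j * k * np.outer(np.sin(angles), x))
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > threshold * s[0]).sum())

for n in (8, 16, 32, 64, 128):
    print(f"{n:4d} samples -> {significant_dof(n)} significant singular values")
# Once extra samples stop adding significant singular values, the sampling is,
# for this toy kernel, essentially sufficient.
```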

  14. Simultaneous beam sampling and aperture shape optimization for SPORT

    SciTech Connect

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulted from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamlessly integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  15. A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics.

    PubMed

    Pikulski, M; Shiroka, T; Ott, H-R; Mesot, J

    2014-09-01

    We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs.

  16. A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics

    SciTech Connect

    Pikulski, M.; Shiroka, T.; Ott, H.-R.; Mesot, J.

    2014-09-15

    We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs.

  17. Optimizing passive acoustic sampling of bats in forests

    PubMed Central

    Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K

    2014-01-01

    Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats, which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km2 scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363

  18. Method optimization for fecal sample collection and fecal DNA extraction.

    PubMed

    Mathay, Conny; Hamot, Gael; Henry, Estelle; Georges, Laura; Bellora, Camille; Lebrun, Laura; de Witt, Brian; Ammerlaan, Wim; Buschart, Anna; Wilmes, Paul; Betsou, Fay

    2015-04-01

    This is the third in a series of publications presenting formal method validation for biospecimen processing in the context of accreditation in laboratories and biobanks. We report here optimization of a stool processing protocol validated for fitness-for-purpose in terms of downstream DNA-based analyses. Stool collection was initially optimized in terms of sample input quantity and supernatant volume using canine stool. Three DNA extraction methods (PerkinElmer MSM I®, Norgen Biotek All-In-One®, MoBio PowerMag®) and six collection container types were evaluated with human stool in terms of DNA quantity and quality, DNA yield, and its reproducibility by spectrophotometry, spectrofluorometry, and quantitative PCR, DNA purity, SPUD assay, and 16S rRNA gene sequence-based taxonomic signatures. The optimal MSM I protocol involves a 0.2 g stool sample and 1000 μL supernatant. The MSM I extraction was superior in terms of DNA quantity and quality when compared to the other two methods tested. Optimal results were obtained with plain Sarstedt tubes (without stabilizer, requiring immediate freezing and storage at -20°C or -80°C) and Genotek tubes (with stabilizer and RT storage) in terms of DNA yields (total, human, bacterial, and double-stranded) according to spectrophotometry and spectrofluorometry, with low yield variability and good DNA purity. No inhibitors were identified at 25 ng/μL. The protocol was reproducible in terms of DNA yield among different stool aliquots. We validated a stool collection method suitable for downstream DNA metagenomic analysis. DNA extraction with the MSM I method using Genotek tubes was considered optimal, with simple logistics in terms of collection and shipment and offers the possibility of automation. Laboratories and biobanks should ensure protocol conditions are systematically recorded in the scope of accreditation.

  19. Optimized robust plasma sampling for glomerular filtration rate studies.

    PubMed

    Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L

    2012-09-01

    In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement.
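
    To make the curve-fitting step concrete, here is a minimal sketch (with synthetic concentrations, an arbitrary dose, and rough starting guesses, none of which come from the study) of fitting a biexponential clearance model to the five judiciously timed samples mentioned above and deriving clearance as dose divided by the area under the curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Biexponential plasma clearance model."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Hypothetical plasma concentrations (arbitrary units) at the five suggested
# sampling times: 5, 10, 15, 90 and 180 minutes after injection.
t = np.array([5.0, 10.0, 15.0, 90.0, 180.0])
c = np.array([6.21, 4.81, 4.08, 1.95, 0.95])

p0 = (5.0, 0.2, 3.0, 0.01)                       # rough starting guesses
(a1, k1, a2, k2), _ = curve_fit(biexp, t, c, p0=p0, maxfev=10000)

auc = a1 / k1 + a2 / k2                          # area under the curve, 0 to infinity
dose = 1000.0                                    # injected activity, consistent units
print("clearance =", dose / auc)
```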

  20. Test samples for optimizing STORM super-resolution microscopy.

    PubMed

    Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E

    2013-09-06

    STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon.

  1. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    SciTech Connect

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.

  2. Optimal temperature sampling with SPOTS to improve acoustic predictions

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.; Samuels, Brian C.

    2003-10-01

    The Modular Ocean Data Assimilation System (MODAS) uses optimal interpolation to assimilate data (e.g., XBTs) and to create temperature nowcasts and associated uncertainties. When XBTs are dropped in a uniform grid (during surveys) or in random patterns spaced according to available resources, their assimilation can lead to nowcast errors in complex, littoral regions, especially when only a few measurements are available. To mitigate this, Sensor Placement for Optimal Temperature Sampling (SPOTS) [Rike and DelBalzo, Proc. IEEE Oceans (2003)] was developed to rapidly optimize the placement of a few XBTs and to maximize MODAS accuracy. This work involves high-density, in situ data assimilation into MODAS to create a ground-truth temperature field from which a ground-truth transmission loss field was computed. Optimal XBT location sets were chosen by SPOTS, based on original MODAS uncertainties, and additional sets were chosen based on subjective choices by an oceanographer. For each XBT set, a MODAS temperature nowcast and associated transmission losses were computed. This work discusses the relationship between temperature uncertainty, temperature error, and acoustic error for the objective SPOTS approach and the subjective oceanographer approach. The SPOTS approach allowed significantly more accurate acoustic calculations, especially when few XBTs were used. [Work sponsored by NAVAIR.]

  3. The ReSTAGE Collaboration: defining optimal bleeding criteria for onset of early menopausal transition.

    PubMed

    Harlow, Siobán D; Mitchell, Ellen S; Crawford, Sybil; Nan, Bin; Little, Roderick; Taffe, John

    2008-01-01

    Criteria for staging the menopausal transition are not established. This article evaluates five bleeding criteria for defining early transition and provides empirically based guidance regarding optimal criteria. Prospective menstrual calendar data from four population-based cohorts: TREMIN, Melbourne Women's Midlife Health Project (MWMHP), Seattle Midlife Women's Health Study (SMWHS), and Study of Women's Health Across the Nation (SWAN), with annual serum FSH from MWMHP and SWAN. 735 TREMIN, 279 SMWHS, 216 MWMHP, and 2270 SWAN women aged 35-57 at baseline who maintained menstrual calendars. Age at and time to menopause for: standard deviation >6 and >8 days, persistent difference in consecutive segments >6 days, irregularity, and ≥45 day segment. Serum FSH concentration. Most women experienced each of the bleeding criteria. Except for a persistent >6 day difference that occurs earlier, the criteria occur at a similar age and at approximately the same age as late transition in a large proportion of women. FSH was associated with all proposed markers. The early transition may be best described by ovarian activity consistent with the persistent >6 day difference, but further study is needed, as the other proposed criteria are consistent with later menstrual changes.

  4. The ReSTAGE Collaboration: Defining Optimal Bleeding Criteria for Onset of Early Menopausal Transition

    PubMed Central

    Harlow, Siobán D.; Mitchell, Ellen S.; Crawford, Sybil; Nan, Bin; Little, Roderick; Taffe, John

    2008-01-01

    Study objective Criteria for staging the menopausal transition are not established. This paper evaluates five bleeding criteria for defining early transition and provides empirically based guidance regarding optimal criteria. Design/Setting Prospective menstrual calendar data from four population-based cohorts: TREMIN, Melbourne Women’s Midlife Health Project (MWMHP), Seattle Midlife Women’s Health Study (SMWHS), and Study of Women’s Health Across the Nation (SWAN), with annual serum follicle stimulating hormone (FSH) from MWMHP and SWAN. Participants 735 TREMIN, 279 SMWHS, 216 MWMHP, and 2270 SWAN women aged 35-57 at baseline who maintained menstrual calendars. Main outcome measure(s) Age at and time to menopause for: standard deviation >6 and >8 days, persistent difference in consecutive segments >6 days, irregularity, and ≥45 day segment. Serum follicle stimulating hormone concentration. Results Most women experienced each of the bleeding criteria. Except for the persistent >6 day difference, which occurs earlier, the criteria occur at a similar age and at approximately the same age as late transition in a large proportion of women. FSH was associated with all proposed markers. Conclusions The early transition may be best described by ovarian activity consistent with the persistent >6 day difference, but further study is needed, as the other proposed criteria are consistent with later menstrual changes. PMID:17681300

  5. Defining Adult Experiences: Perspectives of a Diverse Sample of Young Adults

    PubMed Central

    Lowe, Sarah R.; Dillon, Colleen O.; Rhodes, Jean E.; Zwiebach, Liza

    2013-01-01

    This study explored the roles and psychological experiences identified as defining adult moments using mixed methods with a racially, ethnically, and socioeconomically diverse sample of young adults both enrolled and not enrolled in college (N = 726; ages 18-35). First, we evaluated results from a single survey item that asked participants to rate how adult they feel. Consistent with previous research, the majority of participants (56.9%) reported feeling “somewhat like an adult,” and older participants had significantly higher subjective adulthood, controlling for other demographic variables. Next, we analyzed responses from an open-ended question asking participants to describe instances in which they felt like an adult. Responses covered both traditional roles (e.g., marriage, childbearing; 36.1%) and nontraditional social roles and experiences (e.g., moving out of parent’s home, cohabitation; 55.6%). Although we found no differences by age and college status in the likelihood of citing a traditional or nontraditional role, participants who had achieved more traditional roles were more likely to cite them in their responses. In addition, responses were coded for psychological experiences, including responsibility for self (19.0%), responsibility for others (15.3%), self-regulation (31.1%), and reflected appraisals (5.1%). Older participants were significantly more likely to include self-regulation and reflected appraisals, whereas younger participants were more likely to include responsibility for self. College students were more likely than noncollege students to include self-regulation and reflected appraisals. Implications for research and practice are discussed. PMID:23554545

  6. A General Investigation of Optimized Atmospheric Sample Duration

    SciTech Connect

    Eslinger, Paul W.; Miley, Harry S.

    2012-11-28

    The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptably long delay (72 hours) between the start of sample acquisition and the shipment of final data. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a nondecaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.

  7. Optimal sampling frequency in recording of resistance training exercises.

    PubMed

    Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis

    2017-03-01

    The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
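
    A small sketch of the underlying calculation, assuming a synthetic lifting-speed trace in place of real encoder data: compute the power spectrum with the FFT and report the frequency below which 99.9% of the signal power is contained.

```python
import numpy as np

def f_999(signal, fs):
    """Frequency (Hz) below which 99.9% of the signal power is contained."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cumulative = np.cumsum(power) / np.sum(power)
    return freqs[np.searchsorted(cumulative, 0.999)]

# Synthetic trace sampled at 200 Hz: a slow lift profile plus a small fast
# component and a little measurement noise (values are illustrative only).
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
speed = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.sin(2 * np.pi * 8.0 * t)
speed += 0.01 * np.random.default_rng(0).standard_normal(t.size)
print("f99.9 =", f_999(speed, fs), "Hz")
```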

  8. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.

  9. Fixed-sample optimization using a probability density function

    SciTech Connect

    Barnett, R.N.; Sun, Zhiwei; Lester, W.A. Jr.

    1997-12-31

    We consider the problem of optimizing parameters in a trial function that is to be used in fixed-node diffusion Monte Carlo calculations. We employ a trial function with a Boys-Handy correlation function and a one-particle basis set of high quality. By employing sample points picked from a positive definite distribution, parameters that determine the nodes of the trial function can be varied without introducing singularities into the optimization. For CH as a test system, we find that a trial function of high quality is obtained and that this trial function yields an improved fixed-node energy. This result sheds light on the important question of how to improve the nodal structure and, thereby, the accuracy of diffusion Monte Carlo.

  10. Using simulated noise to define optimal QT intervals for computer analysis of ambulatory ECG.

    PubMed

    Tikkanen, P E; Sellin, L C; Kinnunen, H O; Huikuri, H V

    1999-01-01

    The ambulatory electrocardiogram (ECG) is an important medical tool, not only for the diagnosis of adverse cardiac events, but also for predicting the risk of such events occurring. The 24-hour ambulatory ECG has certain problems and drawbacks because the signal is corrupted by noise from various sources and also by several other conditions that may alter the ECG morphology. We have developed a Windows based program for the computer analysis of ambulatory ECG which attempts to address these problems. The software includes options for importing ECG data, different methods of waveform analysis, data-viewing, and exporting the extracted time series. In addition, the modular structure allows for flexible maintenance and expansion of the software. The ECG was recorded using a Holter device and oversampled to enhance the fidelity of the low sampling rate of the ambulatory ECG. The influence of different sampling rates on the interval variability was studied. The noise sensitivity of the implemented algorithm was tested with several types of simulated noise and the precision of the interval measurement was reported with SD values. Our simulations showed that, in most of the cases, defining the end of the QT interval at the maximum of the T wave gave the most precise measurement. The definition of the onset of the ventricular repolarization duration is most precisely made on the maximum or descending maximal slope of the R wave. We also analyzed some examples of time series from patients using power spectrum estimates in order to validate the low level QT interval variability.

  11. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality

    PubMed Central

    Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  12. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    PubMed

    Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  13. Validation of genetic algorithm-based optimal sampling for ocean data assimilation

    NASA Astrophysics Data System (ADS)

    Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.

    2016-10-01

    Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.
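
    A stripped-down sketch of the optimization loop under stated assumptions: the physics-based cost function is replaced by a toy "uncovered uncertainty" score on a grid, and a small genetic loop (truncation selection, one-point crossover, integer mutation) searches for five sampling locations that minimize it. Grid size, sensor footprint, and GA settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forecast-uncertainty field on a 40 x 40 grid; a sampling plan is
# scored by the uncertainty left uncovered by the sensors' footprints.
uncertainty = rng.random((40, 40))
yy, xx = np.mgrid[0:40, 0:40]

def cost(plan, radius=4):
    covered = np.zeros((40, 40), dtype=bool)
    for y, x in plan.reshape(-1, 2):
        covered |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    return float(uncertainty[~covered].sum())

def genetic_search(n_sensors=5, pop=30, gens=60, mut=2.0):
    genes = 2 * n_sensors
    population = rng.integers(0, 40, size=(pop, genes))
    for _ in range(gens):
        fitness = np.array([cost(ind) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]       # truncation selection
        mates = parents[rng.permutation(len(parents))]
        cut = rng.integers(1, genes)                                 # one-point crossover
        children = np.where(np.arange(genes) < cut, parents, mates)
        children = children + rng.normal(0, mut, children.shape).round().astype(int)
        population = np.vstack([parents, np.clip(children, 0, 39)])  # integer mutation
    fitness = np.array([cost(ind) for ind in population])
    return population[np.argmin(fitness)].reshape(-1, 2), float(fitness.min())

best_plan, best_cost = genetic_search()
print(best_plan, best_cost)
```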

  14. Optimization of a Sample Processing Protocol for Recovery of ...

    EPA Pesticide Factsheets

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps.

  15. Optimal blood sampling time windows for parameter estimation using a population approach: design of a phase II clinical trial.

    PubMed

    Chenel, Marylore; Ogungbenro, Kayode; Duval, Vincent; Laveille, Christian; Jochemsen, Roeline; Aarons, Leon

    2005-12-01

    The objective of this paper is to determine optimal blood sampling time windows for the estimation of pharmacokinetic (PK) parameters by a population approach within the clinical constraints. A population PK model was developed to describe a reference phase II PK dataset. Using this model and the parameter estimates, D-optimal sampling times were determined by optimising the determinant of the population Fisher information matrix (PFIM) using PFIM_M 1.2 and the modified Fedorov exchange algorithm. Optimal sampling time windows were then determined by allowing the D-optimal windows design to result in a specified level of efficiency when compared to the fixed-times D-optimal design. The best results were obtained when K(a) and IIV on K(a) were fixed. Windows were determined using this approach assuming a 90% level of efficiency and uniform sample distribution. Four optimal sampling time windows were determined as follows: at trough, between 22 h and the next drug administration; between 2 and 4 h after dose for all patients; and, for 1/3 of the patients only, 2 sampling time windows between 4 and 10 h after dose, equal to [4 h-5 h 05] and [9 h 10-10 h]. This work permitted the determination of an optimal design, with suitable sampling time windows, which was then evaluated by simulations. The sampling time windows will be used to define the sampling schedule in a prospective phase II study.
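
    To illustrate the D-optimality idea in a much-simplified setting, the sketch below scores candidate sampling-time sets by the log-determinant of a fixed-effects Fisher information matrix for a one-compartment oral model with additive error; the population FIM used by PFIM additionally accounts for inter-individual variability. Parameter values, error level, and candidate designs are hypothetical.

```python
import numpy as np

def conc(t, ka, ke, v, dose=100.0):
    """One-compartment oral absorption model (hypothetical parameters)."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def log_det_fim(times, theta=(1.0, 0.1, 20.0), sigma=0.1, eps=1e-5):
    """log-determinant of a fixed-effects Fisher information matrix built from
    finite-difference sensitivities, assuming additive residual error sigma."""
    times = np.asarray(times, dtype=float)
    jac = np.empty((times.size, len(theta)))
    for i in range(len(theta)):
        hi, lo = list(theta), list(theta)
        hi[i] *= 1 + eps
        lo[i] *= 1 - eps
        jac[:, i] = (conc(times, *hi) - conc(times, *lo)) / (2 * eps * theta[i])
    fim = jac.T @ jac / sigma ** 2
    return np.linalg.slogdet(fim)[1]

print("design A (0.5, 2, 4, 24 h):", log_det_fim([0.5, 2, 4, 24]))
print("design B (1, 2, 3, 4 h):  ", log_det_fim([1, 2, 3, 4]))
```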

  16. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  17. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  18. A Magnetic Resonance Imaging Study to Define Optimal Needle Length for Humeral Head IO Devices.

    PubMed

    Rush, Stephen; Bremer, Justin; Foresto, Christopher; Rubin, Aaron M; Anderson, Peter I

    2012-01-01

    Intraosseous (IO) devices have gained popularity because of TCCC. The ability to gain access to the vascular system when intravenous access is not possible, and techniques such as central lines or cut-downs are beyond the scope of battlefield providers and tactically not feasible, has led to the increased use of IO access. Since tibias are often not available sites in blast injury patients, the sternum was often used. Recently, the humeral head has gained popularity because of ease of access and placement. The optimal needle length has not been defined or studied. Fifty consecutive shoulder MRIs from patients aged 18-40 years were reviewed. Distances from the skin surface to the cortex from anterior and lateral trajectories were simulated and measured. Two different lateral trajectories were studied, described as lateral minimum and lateral maximum trajectories, correlating with seemingly less and greater soft tissue. The cortical thickness was also recorded. Mean values and ranges for the measurements were determined. The anterior trajectory represented the shortest distance. Mean anterior, mean lateral minimum, and mean lateral maximum distances were 2.3, 3.0, and 4.7 cm, with corresponding ranges of 1.1-4.1, 1.6-5.7, and 2.8-7.4 cm, respectively. The cortical thickness was 4 mm in all cases. Although this information was gathered among civilians, and many military members may have more soft tissue, these results indicate that needles generally in the 40-50 mm length range should be used via the anterior approach. Use of the standard 25 mm needle often used in the tibia would be inadequate in over half the cases and may result in undue tissue compression or distortion.

  19. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  20. Using highly efficient nonlinear experimental design methods for optimization of Lactococcus lactis fermentation in chemically defined media.

    PubMed

    Zhang, Guiying; Block, David E

    2009-01-01

    Optimization of fermentation media and processes is a difficult task due to the potential for high dimensionality and nonlinearity. Here we develop and evaluate variations on two novel and highly efficient methods for experimental fermentation optimization. The first approach is based on using a truncated genetic algorithm with a developing neural network model to choose the best experiments to run. The second approach uses information theory, along with Bayesian regularized neural network models, for experiment selection. To evaluate these methods experimentally, we used them to develop a new chemically defined medium for Lactococcus lactis IL1403, along with an optimal temperature and initial pH, to achieve maximum cell growth. The media consisted of 19 defined components or groups of components. The optimization results show that the maximum cell growth from the optimal process of each novel method is generally comparable to or higher than that achieved using a traditional statistical experimental design method, but these optima are reached in about half of the experiments (73-94 vs. 161, depending on the variants of methods). The optimal chemically defined media developed in this work are rich media that can support high cell density growth 3.5-4 times higher than the best reported synthetic medium and 72% higher than a commonly used complex medium (M17) at optimization scale. The best chemically defined medium found using the method was evaluated and compared with other defined or complex media at flask- and fermentor-scales. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.

  1. Optimization of Evans blue quantitation in limited rat tissue samples

    NASA Astrophysics Data System (ADS)

    Wang, Hwai-Lee; Lai, Ted Weita

    2014-10-01

    Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.

  2. Optimization of Evans blue quantitation in limited rat tissue samples.

    PubMed

    Wang, Hwai-Lee; Lai, Ted Weita

    2014-10-10

    Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.

  3. Optimal CCD readout by digital correlated double sampling

    NASA Astrophysics Data System (ADS)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
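
    As a toy illustration of the digital CDS principle (not the closed-form optimization derived in the paper), the sketch below averages a block of reset-level samples and a block of video-level samples from one synthetic oversampled pixel and takes their difference; the flat-top averaging weights, noise level, and sample counts are all assumptions.

```python
import numpy as np

def dcds(waveform, n_ref, n_sig, gap):
    """Digital correlated double sampling on one oversampled pixel waveform:
    average n_ref reset-level samples, skip `gap` samples for the charge dump,
    average n_sig video-level samples, and return the difference."""
    ref = waveform[:n_ref].mean()
    sig = waveform[n_ref + gap : n_ref + gap + n_sig].mean()
    return ref - sig

# Synthetic pixel: reset level 1000 ADU, video level 850 ADU (150 ADU of signal),
# with white readout noise; averaging more samples reduces the noise on the estimate.
rng = np.random.default_rng(0)
n = 64
pixel = np.concatenate([np.full(n, 1000.0), np.full(n, 850.0)]) + rng.normal(0, 5, 2 * n)
for k in (4, 16, 64):
    print(k, "samples per level:", round(dcds(pixel, k, k, gap=n - k), 2), "ADU")
```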

  4. NSECT sinogram sampling optimization by normalized mutual information

    NASA Astrophysics Data System (ADS)

    Viana, Rodrigo S.; Galarreta-Valverde, Miguel A.; Mekkaoui, Choukri; Yoriyaz, Hélio; Jackowski, Marcel P.

    2015-03-01

    Neutron Stimulated Emission Computed Tomography (NSECT) is an emerging noninvasive imaging technique that measures the distribution of isotopes from biological tissue using fast-neutron inelastic scattering reaction. As a high-energy neutron beam illuminates the sample, the excited nuclei emit gamma rays whose energies are unique to the emitting nuclei. Tomographic images of each element in the spectrum can then be reconstructed to represent the spatial distribution of elements within the sample using a first generation tomographic scan. NSECT's high radiation dose deposition, however, requires a sampling strategy that can yield maximum image quality under a reasonable radiation dose. In this work, we introduce an NSECT sinogram sampling technique based on the Normalized Mutual Information (NMI) of the reconstructed images. By applying the Radon Transform on the ground-truth image obtained from a carbon-based synthetic phantom, different NSECT sinogram configurations were simulated and compared by using the NMI as a similarity measure. The proposed methodology was also applied on NSECT images acquired using MCNP5 Monte Carlo simulations of the same phantom to validate our strategy. Results show that NMI can be used to robustly predict the quality of the reconstructed NSECT images, leading to an optimal NSECT acquisition and a minimal absorbed dose by the patient.
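
    A minimal sketch of the similarity measure used above: normalized mutual information computed from a joint grey-level histogram of two images. The random "ground truth" and the binning convention are purely illustrative and may differ from the authors' setup.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# Toy check: a reconstruction compared with its ground truth at two noise levels;
# the score drops as the reconstruction degrades.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
print(normalized_mutual_information(truth, truth + rng.normal(0, 0.05, truth.shape)))
print(normalized_mutual_information(truth, truth + rng.normal(0, 0.50, truth.shape)))
```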

  5. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude (M_i), or, for a small subset of our sample, M_i and color (NUV - i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_i and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  6. Optimal probes for withdrawal of uncontaminated fluid samples

    NASA Astrophysics Data System (ADS)

    Sherwood, J. D.

    2005-08-01

    Withdrawal of fluid by a composite probe pushed against the face z = 0 of a porous half-space z > 0 is modeled assuming incompressible Darcy flow. The probe is circular, of radius a, with an inner sampling section of radius αa and a concentric outer guard probe αa < r < a. The rock at depths z < βa is saturated with fluid 1, and the rock z > βa is saturated with fluid 2; the two fluids have the same viscosity. It is assumed that the interface between the two fluids is sharp and remains so as it moves through the rock. The pressure in the probe is lower than that of the pore fluid in the rock, so that the fluid interface is convected with the fluids towards the probe. This idealized axisymmetric problem is solved numerically, and it is shown that an analysis based on far-field spherical flow towards a point sink is a good approximation when the nondimensional depth of fluid 1 is large, i.e., β ≫1. The inner sampling probe eventually produces pure fluid 2, and this technique has been proposed for sampling pore fluids in rock surrounding an oil well [A. Hrametz, C. Gardner, M. Wais, and M. Proett, U.S. Patent No. 6,301,959 B1 (16 October 2001)]. Fluid 1 is drilling fluid filtrate, which has displaced the original pore fluid (fluid 2), a pure sample of which is required. The time required to collect an uncontaminated sample of original pore fluid can be minimized by a suitable choice of the probe geometry α [J. Sherwood, J. Fitzgerald and B. Hill, U.S. Patent No. 6,719,049 B2 (13 April 2004)]. It is shown that the optimal choice of α depends on the depth of filtrate invasion β and the volume of sample required.

  7. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.

  8. Optimizing Collocation of Instrument Measurements and Field Sampling Activities

    NASA Astrophysics Data System (ADS)

    Bromley, G. T.; Durden, D.; Ayres, E.; Barnett, D.; Krauss, R.; Luo, H.; Meier, C. L.; Metzger, S.

    2015-12-01

    The National Ecological Observatory Network (NEON) will provide data from automated instrument measurements and manual sampling activities. To reliably infer ecosystem driver-response relationships, two contradicting requirements need to be considered: Both types of observations should be representative of the same target area while minimally impacting each other. For this purpose, a simple model was created that determines an optimal area for collocating plot-based manual field sampling activities with respect to the automated measurements. The maximum and minimum distances of the collocation areas were determined from the instrument source area distribution function in combination with sampling densities and a threshold, respectively. Specifically, the maximum distance was taken as the extent from within which 90% of the value observed by an instrument is sourced. Sampling densities were then generated through virtually distributing activity-specific impact estimates across the instrument source area. The minimum distance was determined as the position closest to the instrument location where the sampling density falls below a threshold that ensures <10% impact on the source area informing the instrument measurements. At most sites, a 30m minimum distance ensured minimal impact of manual field sampling on instrument measurements, however, sensitive sites (e.g., tundra) required a larger minimum distance. To determine how the model responds to uncertainties in its inputs, a numerical sensitivity analysis was conducted based on multivariate error distributions that retain the covariance structure. In 90% of all cases, the model was shown to be robust against 10% (1 σ) deviations in its inputs, continuing to yield a minimum distance of 30 m. For the remaining 10% of all cases, preliminary results suggest a prominent dependence of the minimum distance on climate decomposition index, which we use here as a proxy for the sensitivity of an environment to disturbance.

  9. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time.
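
    A minimal sketch of the first criterion (MMSD) under spatial simulated annealing is given below, using an assumed rectangular field and an arbitrary cooling schedule rather than the MSANOS implementation: one sampling location is perturbed per iteration, and the move is accepted with the Metropolis rule whenever it does not worsen the mean of the shortest distances from a fine evaluation grid to the design.

```python
# Sketch only: spatial simulated annealing under the MMSD criterion on a synthetic
# 100 m x 100 m field; point count, cooling law and move size are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
field = np.array([100.0, 100.0])
eval_grid = np.stack(np.meshgrid(np.linspace(0, 100, 41),
                                 np.linspace(0, 100, 41)), axis=-1).reshape(-1, 2)

def mmsd(design):
    """Mean of the shortest distances from every grid node to its nearest sampling point."""
    d = np.linalg.norm(eval_grid[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).mean()

design = rng.uniform(0.0, 100.0, size=(25, 2))      # initial random scheme of 25 points
current = mmsd(design)
temperature, cooling = 5.0, 0.995

for _ in range(2000):
    cand = design.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 5.0, size=2), 0.0, field)  # perturb one location
    trial = mmsd(cand)
    if trial < current or rng.random() < np.exp((current - trial) / temperature):
        design, current = cand, trial               # Metropolis acceptance
    temperature *= cooling

print("optimised MMSD:", round(current, 2), "m")
```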

  10. Modified Direct Insertion/Cancellation Method Based Sample Rate Conversion for Software Defined Radio

    NASA Astrophysics Data System (ADS)

    Bostamam, Anas Muhamad; Sanada, Yukitoshi; Minami, Hideki

    In this paper, a new fractional sample rate conversion (SRC) scheme based on a direct insertion/cancellation scheme is proposed. This scheme is suitable for signals that are sampled at a high sample rate and converted to a lower sample rate. The direct insertion/cancellation scheme may achieve lower complexity and lower power consumption compared to other SRC techniques. However, the direct insertion/cancellation technique suffers from large aliasing and distortion. The aliasing from an adjacent channel interferes with the desired signal and degrades the performance. Therefore, a modified direct insertion/cancellation scheme is proposed in order to realize high performance resampling.
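
    The basic (unmodified) direct insertion/cancellation idea can be illustrated as simple drop/repeat resampling. The sketch below uses assumed rates and a test tone and applies no filtering, which is precisely why the basic scheme aliases and why a modified scheme is needed.

```python
# Sketch only: the drop/repeat idea behind direct insertion/cancellation resampling.
# No anti-aliasing filter is applied, which is exactly the distortion the modified scheme targets.
import numpy as np

def direct_insertion_cancellation(x, fs_in, fs_out):
    """Resample x from fs_in to fs_out by dropping (fs_out < fs_in) or repeating samples."""
    n_out = int(round(len(x) * fs_out / fs_in))
    idx = np.minimum((np.arange(n_out) * fs_in / fs_out).round().astype(int), len(x) - 1)
    return x[idx]

fs_in, fs_out = 48_000, 44_100
t = np.arange(0, 0.01, 1 / fs_in)
x = np.sin(2 * np.pi * 1_000 * t)
y = direct_insertion_cancellation(x, fs_in, fs_out)
print(len(x), "input samples ->", len(y), "output samples")
```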

  11. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    NASA Astrophysics Data System (ADS)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample that is too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
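
    The Monte-Carlo idea can be sketched as follows, with synthetic predictors standing in for the DEM and landcover derivatives and plain (non-stepwise) logistic regression for brevity: for each candidate non-event sample size, the model is refitted on repeated random draws of non-event cells, and the spread of the coefficients indicates how stable the result is.

```python
# Sketch only: synthetic event/non-event data and plain logistic regression are used to show
# how coefficient stability can be tracked across candidate non-event sample sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_events, n_pool = 300, 50_000
X_event = rng.normal(1.0, 1.0, size=(n_events, 4))      # debris-flow initiation cells (synthetic)
X_pool = rng.normal(0.0, 1.0, size=(n_pool, 4))         # candidate non-event raster cells (synthetic)

for n_nonevent in (300, 1_000, 5_000):
    coefs = []
    for _ in range(100):                                # the study used 1000 repetitions
        idx = rng.choice(n_pool, size=n_nonevent, replace=False)
        X = np.vstack([X_event, X_pool[idx]])
        y = np.r_[np.ones(n_events), np.zeros(n_nonevent)]
        coefs.append(LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel())
    spread = np.std(coefs, axis=0).mean()
    print(f"non-event sample size {n_nonevent:5d}: mean coefficient s.d. = {spread:.3f}")
```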

  12. Neurocognition in psychometrically defined college Schizotypy samples: we are not measuring the "right stuff".

    PubMed

    Chun, Charlotte A; Minor, Kyle S; Cohen, Alex S

    2013-03-01

    Although neurocognitive deficits are an integral characteristic of schizophrenia, there is inconclusive evidence as to whether they manifest across the schizophrenia-spectrum. We conducted two studies and a meta-analysis comparing neurocognitive functioning between psychometrically defined schizotypy and control groups recruited from a college population. Study One compared groups on measures of specific and global neurocognition, and subjective and objective quality of life. Study Two examined working memory and subjective cognitive complaints. Across both studies, the schizotypy group showed notably decreased subjective (d = 1.52) and objective (d = 1.02) quality of life and greater subjective cognitive complaints (d = 1.88); however, neurocognition was normal across all measures (d's < .35). Our meta-analysis of 33 studies examining neurocognition in at-risk college students revealed between-group differences in the negligible effect size range for most domains. The schizotypy group demonstrated deficits of a small effect size for working memory and set-shifting abilities. Although at-risk individuals report relatively profound neurocognitive deficits and impoverished quality of life, neurocognitive functioning assessed behaviorally is largely intact. Our data suggest that traditionally defined neurocognitive deficits do not approximate the magnitude of subjective complaints associated with psychometrically defined schizotypy.

  13. Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties

    NASA Astrophysics Data System (ADS)

    Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens

    2013-04-01

    The representation of local soil variability with acceptable accuracy and precision is dependent on the spatial sampling strategy and can vary with a soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the optimal way to predict field scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHS) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40) of samples were selected for calibration. The performance of field calibrations using spectra of soils at the three states as well as using different numbers of calibration samples was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
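
    The Kennard-Stone selection step mentioned above is a standard algorithm; a minimal version is sketched below with random matrices standing in for the measured vis-NIR spectra: the two most distant samples seed the set, and each further sample is the one whose distance to its nearest already-selected neighbour is largest.

```python
# Sketch only: generic Kennard-Stone selection on synthetic stand-in spectra.
import numpy as np

def kennard_stone(X, k):
    """Return the indices of k samples selected by the Kennard-Stone algorithm."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)              # start with the two most distant samples
    selected = [i, j]
    while len(selected) < k:
        remaining = [m for m in range(len(X)) if m not in selected]
        nearest = d[np.ix_(remaining, selected)].min(axis=1)    # distance to the nearest selected sample
        selected.append(remaining[int(np.argmax(nearest))])
    return np.array(selected)

spectra = np.random.default_rng(0).normal(size=(125, 500))      # 125 vis-NIR spectra (synthetic)
calib = kennard_stone(spectra, 50)                              # 50 samples sent to the laboratory
print(calib[:10])
```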

  14. Optimization for Peptide Sample Preparation for Urine Peptidomics

    SciTech Connect

    Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.

    2014-02-25

    when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.

  15. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique that allows full-field measurements. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation of the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
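
    The role of the spatial sampling interval can be illustrated with a toy rotation field: differentiating noisy rotations by central differences amplifies the noise, and the chosen spacing controls that amplification. The sketch below uses an assumed analytical mode shape and noise level, not the shearography measurements of the paper.

```python
# Sketch only: noise amplification of central differences versus spatial sampling interval.
import numpy as np

L, n_meas = 1.0, 1001
x = np.linspace(0.0, L, n_meas)
rotation = np.cos(np.pi * x / L) + np.random.default_rng(0).normal(0, 1e-3, n_meas)  # noisy modal rotation

for step in (1, 5, 20):                       # use every 1st, 5th, 20th measured point
    xs, ts = x[::step], rotation[::step]
    h = xs[1] - xs[0]
    curvature = (ts[2:] - ts[:-2]) / (2 * h)  # central-difference derivative of the rotation field
    exact = -np.pi / L * np.sin(np.pi * xs[1:-1] / L)
    err = np.abs(curvature - exact).mean()
    print(f"sampling interval {h:.4f} m: mean curvature error {err:.3f}")
```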

  16. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
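
    As a rough illustration of the information-theoretic selection idea (a generic greedy scheme on synthetic data, not the authors' exact procedure), the sketch below discretises water levels at candidate cross sections and adds sections one by one so that each new section contributes high entropy and low redundancy with the sections already chosen.

```python
# Sketch only: greedy "maximum information, minimum redundancy" selection on synthetic levels.
import numpy as np

rng = np.random.default_rng(0)
n_sections, n_scenarios = 40, 500
base = rng.normal(size=n_scenarios)
levels = np.array([base * (1 - i / n_sections) + rng.normal(0, 0.3, n_scenarios)
                   for i in range(n_sections)])            # synthetic water levels per cross section

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(a, b):
    return entropy(a) + entropy(b) - entropy(a * 100 + b)  # joint coded as a single label

disc = np.digitize(levels, np.quantile(levels, [0.2, 0.4, 0.6, 0.8]))  # 5 bins per section

selected = [int(np.argmax([entropy(d) for d in disc]))]    # start with the most informative section
while len(selected) < 8:
    scores = []
    for c in range(n_sections):
        if c in selected:
            scores.append(-np.inf)
            continue
        redundancy = max(mutual_info(disc[c], disc[s]) for s in selected)
        scores.append(entropy(disc[c]) - redundancy)       # information minus redundancy
    selected.append(int(np.argmax(scores)))
print("selected cross sections:", sorted(selected))
```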

  17. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    SciTech Connect

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  18. Problems associated with using filtration to define dissolved trace element concentrations in natural water samples

    USGS Publications Warehouse

    Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.

    1996-01-01

    Field and laboratory experiments indicate that a number of factors associated with filtration other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample) can produce significant variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations result from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters also may be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-µm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.

  19. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  20. Defining Optimal Brain Health in Adults: A Presidential Advisory From the American Heart Association/American Stroke Association.

    PubMed

    Gorelick, Philip B; Furie, Karen L; Iadecola, Costantino; Smith, Eric E; Waddy, Salina P; Lloyd-Jones, Donald M; Bae, Hee-Joon; Bauman, Mary Ann; Dichgans, Martin; Duncan, Pamela W; Girgus, Meighan; Howard, Virginia J; Lazar, Ronald M; Seshadri, Sudha; Testai, Fernando D; van Gaal, Stephen; Yaffe, Kristine; Wasiak, Hank; Zerna, Charlotte

    2017-10-01

    Cognitive function is an important component of aging and predicts quality of life, functional independence, and risk of institutionalization. Advances in our understanding of the role of cardiovascular risks have shown them to be closely associated with cognitive impairment and dementia. Because many cardiovascular risks are modifiable, it may be possible to maintain brain health and to prevent dementia in later life. The purpose of this American Heart Association (AHA)/American Stroke Association presidential advisory is to provide an initial definition of optimal brain health in adults and guidance on how to maintain brain health. We identify metrics to define optimal brain health in adults based on inclusion of factors that could be measured, monitored, and modified. From these practical considerations, we identified 7 metrics to define optimal brain health in adults that originated from AHA's Life's Simple 7: 4 ideal health behaviors (nonsmoking, physical activity at goal levels, healthy diet consistent with current guideline levels, and body mass index <25 kg/m²) and 3 ideal health factors (untreated blood pressure <120/<80 mm Hg, untreated total cholesterol <200 mg/dL, and fasting blood glucose <100 mg/dL). In addition, in relation to maintenance of cognitive health, we recommend following previously published guidance from the AHA/American Stroke Association, Institute of Medicine, and Alzheimer's Association that incorporates control of cardiovascular risks and suggest social engagement and other related strategies. We define optimal brain health but recognize that the truly ideal circumstance may be uncommon because there is a continuum of brain health as demonstrated by AHA's Life's Simple 7. Therefore, there is opportunity to improve brain health through primordial prevention and other interventions. Furthermore, although cardiovascular risks align well with brain health, we acknowledge that other factors differing from those related to

  1. Defining an Adequate Sample of Earlywood Vessels for Retrospective Injury Detection in Diffuse-Porous Species

    PubMed Central

    Arbellay, Estelle; Corona, Christophe; Stoffel, Markus; Fonti, Patrick; Decaulne, Armelle

    2012-01-01

    Vessels of broad-leaved trees have been analyzed to study how trees deal with various environmental factors. Cambial injury, in particular, has been reported to induce the formation of narrower conduits. Yet, little or no effort has been devoted to the elaboration of vessel sampling strategies for retrospective injury detection based on vessel lumen size reduction. To fill this methodological gap, four wounded individuals each of grey alder (Alnus incana (L.) Moench) and downy birch (Betula pubescens Ehrh.) were harvested in an avalanche path. Earlywood vessel lumina were measured and compared for each tree between the injury ring built during the growing season following wounding and the control ring laid down the previous year. Measurements were performed along a 10 mm wide radial strip, located directly next to the injury. Specifically, this study aimed at (i) investigating the intra-annual duration and local extension of vessel narrowing close to the wound margin and (ii) identifying an adequate sample of earlywood vessels (number and intra-ring location of cells) attesting to cambial injury. Based on the results of this study, we recommend analyzing at least 30 vessels in each ring. Within the 10 mm wide segment of the injury ring, wound-induced reduction in vessel lumen size did not fade with increasing radial and tangential distances, but we nevertheless advise favoring early earlywood vessels located closest to the injury. These findings, derived from two species widespread across subarctic, mountainous, and temperate regions, will assist retrospective injury detection in Alnus, Betula, and other diffuse-porous species as well as future related research on hydraulic implications after wounding. PMID:22761707

  2. Defining an adequate sample of earlywood vessels for retrospective injury detection in diffuse-porous species.

    PubMed

    Arbellay, Estelle; Corona, Christophe; Stoffel, Markus; Fonti, Patrick; Decaulne, Armelle

    2012-01-01

    Vessels of broad-leaved trees have been analyzed to study how trees deal with various environmental factors. Cambial injury, in particular, has been reported to induce the formation of narrower conduits. Yet, little or no effort has been devoted to the elaboration of vessel sampling strategies for retrospective injury detection based on vessel lumen size reduction. To fill this methodological gap, four wounded individuals each of grey alder (Alnus incana (L.) Moench) and downy birch (Betula pubescens Ehrh.) were harvested in an avalanche path. Earlywood vessel lumina were measured and compared for each tree between the injury ring built during the growing season following wounding and the control ring laid down the previous year. Measurements were performed along a 10 mm wide radial strip, located directly next to the injury. Specifically, this study aimed at (i) investigating the intra-annual duration and local extension of vessel narrowing close to the wound margin and (ii) identifying an adequate sample of earlywood vessels (number and intra-ring location of cells) attesting to cambial injury. Based on the results of this study, we recommend analyzing at least 30 vessels in each ring. Within the 10 mm wide segment of the injury ring, wound-induced reduction in vessel lumen size did not fade with increasing radial and tangential distances, but we nevertheless advise favoring early earlywood vessels located closest to the injury. These findings, derived from two species widespread across subarctic, mountainous, and temperate regions, will assist retrospective injury detection in Alnus, Betula, and other diffuse-porous species as well as future related research on hydraulic implications after wounding.

  3. Statistical geometry of lattice chain polymers with voids of defined shapes: sampling with strong constraints.

    PubMed

    Lin, Ming; Chen, Rong; Liang, Jie

    2008-02-28

    Proteins contain many voids, which are unfilled spaces enclosed in the interior. A few of them have shapes compatible to ligands and substrates and are important for protein functions. An important general question is how the need for maintaining functional voids is influenced by, and affects other aspects of, protein structures and properties (e.g., protein folding stability, kinetic accessibility, and evolution selection pressure). In this paper, we examine in detail the effects of maintaining voids of different shapes and sizes using two-dimensional lattice models. We study the propensity for conformations to form a void of specific shape, which is related to the entropic cost of void maintenance. We also study the location that voids of a specific shape and size tend to form, and the influence of compactness on the formation of such voids. As enumeration is infeasible for long chain polymers, a key development in this work is the design of a novel sequential Monte Carlo strategy for generating a large number of sample conformations under very constraining restrictions. Our method is validated by comparing results obtained from sampling and from enumeration for short polymer chains. We succeeded in accurate estimation of entropic cost of void maintenance, with and without an increasing number of restrictive conditions, such as loops forming the wall of void with fixed length, with additionally fixed starting position in the sequence. Additionally, we have identified the key structural properties of voids that are important in determining the entropic cost of void formation. We have further developed a parametric model to quantitatively predict void entropy. Our model is highly effective, and these results indicate that voids representing functional sites can be used as an improved model for studying the evolution of protein functions and how protein function relates to protein stability.

  4. Statistical geometry of lattice chain polymers with voids of defined shapes: Sampling with strong constraints

    NASA Astrophysics Data System (ADS)

    Lin, Ming; Chen, Rong; Liang, Jie

    2008-02-01

    Proteins contain many voids, which are unfilled spaces enclosed in the interior. A few of them have shapes compatible to ligands and substrates and are important for protein functions. An important general question is how the need for maintaining functional voids is influenced by, and affects other aspects of, protein structures and properties (e.g., protein folding stability, kinetic accessibility, and evolution selection pressure). In this paper, we examine in detail the effects of maintaining voids of different shapes and sizes using two-dimensional lattice models. We study the propensity for conformations to form a void of specific shape, which is related to the entropic cost of void maintenance. We also study the location that voids of a specific shape and size tend to form, and the influence of compactness on the formation of such voids. As enumeration is infeasible for long chain polymers, a key development in this work is the design of a novel sequential Monte Carlo strategy for generating a large number of sample conformations under very constraining restrictions. Our method is validated by comparing results obtained from sampling and from enumeration for short polymer chains. We succeeded in accurate estimation of entropic cost of void maintenance, with and without an increasing number of restrictive conditions, such as loops forming the wall of void with fixed length, with additionally fixed starting position in the sequence. Additionally, we have identified the key structural properties of voids that are important in determining the entropic cost of void formation. We have further developed a parametric model to quantitatively predict void entropy. Our model is highly effective, and these results indicate that voids representing functional sites can be used as an improved model for studying the evolution of protein functions and how protein function relates to protein stability.

  5. Modular Protein Expression Toolbox (MoPET), a standardized assembly system for defined expression constructs and expression optimization libraries

    PubMed Central

    Birkenfeld, Jörg; Franz, Jürgen; Gritzan, Uwe; Linden, Lars; Trautwein, Mark

    2017-01-01

    The design and generation of an optimal expression construct is the first and essential step in the characterization of a protein of interest. Besides evaluation and optimization of process parameters (e.g. selection of the best expression host or cell line and optimal induction conditions and time points), the design of the expression construct itself has a major impact. However, the path to this final expression construct is often not straightforward and includes multiple learning cycles accompanied by design variations and retesting of construct variants, since multiple, functional DNA sequences of the expression vector backbone, either coding or non-coding, can have a major impact on expression yields. To streamline the generation of defined expression constructs of otherwise difficult to express proteins, the Modular Protein Expression Toolbox (MoPET) has been developed. This cloning platform allows highly efficient DNA assembly of pre-defined, standardized functional DNA modules with a minimal cloning burden. Combining these features with a standardized cloning strategy facilitates the identification of optimized DNA expression constructs in a shorter time. The MoPET system currently consists of 53 defined DNA modules divided into eight functional classes and can be flexibly expanded. However, already with the initial set of modules, 792,000 different constructs can be rationally designed and assembled. Furthermore, this starting set was used to generate small and mid-sized combinatorial expression optimization libraries. Applying this screening approach, variants with up to 60-fold expression improvement have been identified by MoPET variant library screening. PMID:28520717

  6. Empirically Defined Subtypes of Alcohol Dependence in an Irish Family Sample

    PubMed Central

    Sintov, Nicole D.; Kendler, Kenneth S.; Young-Wolff, Kelly C.; Walsh, Dermot; Patterson, Diana G.; Prescott, Carol A.

    2009-01-01

    Alcohol dependence (AD) is clinically and etiologically heterogeneous. The goal of this study was to explore AD subtypes among a sample of 1,221 participants in the Irish Affected Sib Pair Study of Alcohol Dependence, all of whom met DSM-IV criteria for AD. Variables used to identify the subtypes included major depressive disorder, antisocial personality disorder, illicit drug dependence (cannabis, sedatives, stimulants, cocaine, opioids, and hallucinogens), nicotine dependence, the personality traits of neuroticism and novelty seeking, and early alcohol use. Using latent class analysis, a 3-class solution was identified as the most parsimonious description of the data. Individuals in a Mild class were least likely to have comorbid psychopathology, whereas a Severe class had highest probabilities of all comorbid psychopathology. The third class was characterized by high probabilities of major depression and higher neuroticism scores, but lower likelihood of other comorbid disorders than seen in the Severe class. Overall, sibling pair resemblance for class was stronger within than between classes, and was greatest for siblings within the Severe class, suggesting a stronger familial etiology for this class. These findings are consistent with the affective regulation and behavioral disinhibition subtypes of alcoholism, and are in line with prior work suggesting familial influences on subtype etiology. PMID:20022183

  7. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
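
    The Latin hypercube step described above is straightforward with SciPy; in the sketch below the parameter names and ranges for the virtual patients are illustrative assumptions, not the values used in the paper.

```python
# Sketch only: a space-filling Latin hypercube sample of virtual patient parameters.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=1000)                                  # 1000 virtual patients in [0, 1)^3
# Scale to (death rate of infected hepatocytes, virion clearance, infection rate); ranges are assumed.
params = qmc.scale(unit, l_bounds=[0.1, 2.0, 1e-8], u_bounds=[1.0, 10.0, 1e-6])

for delta, c, beta in params[:3]:
    print(f"delta={delta:.3f} /day, c={c:.2f} /day, beta={beta:.2e} mL/(virion*day)")
```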

  8. Defining the Enterovirus Diversity Landscape of a Fecal Sample: A Methodological Challenge?

    PubMed

    Faleye, Temitope Oluwasegun Cephas; Adewumi, Moses Olubusuyi; Adeniji, Johnson Adekunle

    2016-01-12

    Enteroviruses are a group of over 250 naked icosahedral virus serotypes that have been associated with clinical conditions that range from intrauterine enterovirus transmission with fatal outcome through encephalitis and meningitis, to paralysis. Classically, enterovirus detection was done by assaying for the development of the classic enterovirus-specific cytopathic effect in cell culture. Subsequently, the isolates were historically identified by a neutralization assay. More recently, identification has been done by reverse transcriptase-polymerase chain reaction (RT-PCR). However, in recent times, there is a move towards direct detection and identification of enteroviruses from clinical samples using the cell culture-independent RT semi-nested PCR (RT-snPCR) assay. This RT-snPCR procedure amplifies the VP1 gene, which is then sequenced and used for identification. However, while cell culture-based strategies tend to show a preponderance of certain enterovirus species depending on the cell lines included in the isolation protocol, the RT-snPCR strategies tilt in a different direction. Consequently, it is becoming apparent that the diversity observed in certain enterovirus species, e.g., enterovirus species B (EV-B), might not be because they are the most evolutionarily successful. Rather, it might stem from cell line-specific bias accumulated over several years of use of the cell culture-dependent isolation protocols. Furthermore, it might also be a reflection of the impact of the relative genome concentration on the result of pan-enterovirus VP1 RT-snPCR screens used during the identification of cell culture isolates. This review highlights the impact of these two processes on the current diversity landscape of enteroviruses and the need to re-assess enterovirus detection and identification algorithms in a bid to better balance our understanding of the enterovirus diversity landscape.

  9. Defining the Enterovirus Diversity Landscape of a Fecal Sample: A Methodological Challenge?

    PubMed Central

    Faleye, Temitope Oluwasegun Cephas; Adewumi, Moses Olubusuyi; Adeniji, Johnson Adekunle

    2016-01-01

    Enteroviruses are a group of over 250 naked icosahedral virus serotypes that have been associated with clinical conditions that range from intrauterine enterovirus transmission with fatal outcome through encephalitis and meningitis, to paralysis. Classically, enterovirus detection was done by assaying for the development of the classic enterovirus-specific cytopathic effect in cell culture. Subsequently, the isolates were historically identified by a neutralization assay. More recently, identification has been done by reverse transcriptase-polymerase chain reaction (RT-PCR). However, in recent times, there is a move towards direct detection and identification of enteroviruses from clinical samples using the cell culture-independent RT semi-nested PCR (RT-snPCR) assay. This RT-snPCR procedure amplifies the VP1 gene, which is then sequenced and used for identification. However, while cell culture-based strategies tend to show a preponderance of certain enterovirus species depending on the cell lines included in the isolation protocol, the RT-snPCR strategies tilt in a different direction. Consequently, it is becoming apparent that the diversity observed in certain enterovirus species, e.g., enterovirus species B (EV-B), might not be because they are the most evolutionarily successful. Rather, it might stem from cell line-specific bias accumulated over several years of use of the cell culture-dependent isolation protocols. Furthermore, it might also be a reflection of the impact of the relative genome concentration on the result of pan-enterovirus VP1 RT-snPCR screens used during the identification of cell culture isolates. This review highlights the impact of these two processes on the current diversity landscape of enteroviruses and the need to re-assess enterovirus detection and identification algorithms in a bid to better balance our understanding of the enterovirus diversity landscape. PMID:26771630

  10. Optimization of the transcranial magnetic stimulation protocol by defining a reliable estimate for corticospinal excitability.

    PubMed

    Cuypers, Koen; Thijs, Herbert; Meesen, Raf L J

    2014-01-01

    The goal of this study was to optimize the transcranial magnetic stimulation (TMS) protocol for acquiring a reliable estimate of corticospinal excitability (CSE) using single-pulse TMS. Moreover, the minimal number of stimuli required to obtain a reliable estimate of CSE was investigated. In addition, the effect of two frequently used stimulation intensities [110% relative to the resting motor threshold (rMT) and 120% rMT] and gender was evaluated. Thirty-six healthy young subjects (18 males and 18 females) participated in a double-blind crossover procedure. They received 2 blocks of 40 consecutive TMS stimuli at either 110% rMT or 120% rMT in a randomized order. Based upon our data, we advise that at least 30 consecutive stimuli are required to obtain the most reliable estimate for CSE. Stimulation intensity and gender had no significant influence on CSE estimation. In addition, our results revealed that for subjects with a higher rMT, fewer consecutive stimuli were required to reach a stable estimate of CSE. The current findings can be used to optimize the design of similar TMS experiments.

  11. Optimization of the alpha image reconstruction - an iterative CT-image reconstruction with well-defined image quality metrics.

    PubMed

    Lebedev, Sergej; Sawall, Stefan; Knaup, Michael; Kachelrieß, Marc

    2017-09-01

    Optimization of the AIR algorithm for improved convergence and performance. The AIR method is an iterative algorithm for CT image reconstruction. As a result of its linearity with respect to the basis images, the AIR algorithm possesses well-defined, regular image quality metrics, e.g. point spread function (PSF) or modulation transfer function (MTF), unlike other iterative reconstruction algorithms. The AIR algorithm computes weighting images α to blend between a set of basis images that preferably have mutually exclusive properties, e.g. high spatial resolution or low noise. The optimized algorithm uses an approach that alternates between the optimization of raw data fidelity using an OSSART-like update and regularization using gradient descent, as opposed to the initially proposed AIR using a straightforward gradient descent implementation. A regularization strength for a given task is chosen by formulating a requirement for the noise reduction and checking whether it is fulfilled for different regularization strengths, while monitoring the spatial resolution using the voxel-wise defined modulation transfer function for the AIR image. The optimized algorithm computes similar images in a shorter time compared to the initial gradient descent implementation of AIR. The result can be influenced by multiple parameters that can be narrowed down to a relatively simple framework to compute high quality images. The AIR images, for instance, can have at least a 50% lower noise level compared to the sharpest basis image, while the spatial resolution is mostly maintained. The optimization improves performance by a factor of 6, while maintaining image quality. Furthermore, it was demonstrated that the spatial resolution for AIR can be determined using regular image quality metrics, given smooth weighting images. This is not possible for other iterative reconstructions as a result of their nonlinearity. A simple set of parameters for the algorithm is discussed that provides

  12. Etiological factors of infections in diabetic foot syndrome - attempt to define optimal empirical therapy.

    PubMed

    Małecki, Rafał; Rosiński, Krzysztof; Adamiec, Rajmund

    2014-01-01

    Diabetic foot syndrome (DFS) represents one of the most frequent reasons for lower limb amputation in developed countries. In most cases, it is associated with bacterial infection, requiring optimal antibiotic therapy. The aim of this study was to identify the most frequent pathogens responsible for infections associated with DFS, establish the optimal protocol of empirical therapy, and ascertain the clinical variables that may determine the choice of the appropriate antibacterial agent. The analysis included hospital records of patients treated at the Department between 2008 and 2010. A total of 102 individuals were identified; their material was cultured and tested for antibiotic susceptibility. A total of 199 bacterial strains were isolated. There was a predominance of Gram-positive bacteria, particularly Staphylococcus aureus, Staphylococcus coagulase-negative strains, and Enterococcus faecalis. Of note was the high percentage of E. faecalis infection (16.08%). One can speculate on the potential etiological factors in the case of some bacteria, e.g. patients infected with S. aureus were characterized by higher monocytosis and lymphocytosis as compared to other patients. Analysis of drug susceptibility revealed that ciprofloxacin has the highest (but still only 44%) efficacy of all agents tested as monotherapy, and a combination of piperacillin and tazobactam or amoxicillin and clavulanate with aminoglycosides is particularly beneficial. Staphylococcus spp. predominates amongst the etiological factors of DFS infection; however, the rate of E. faecalis infection is alarmingly high. Monotherapy enables effective treatment in a minority of cases; therefore, at least two-drug protocols should be implemented from the very beginning of the therapy.

  13. Defining the optimal animal model for translational research using gene set enrichment analysis.

    PubMed

    Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert

    2016-08-01

    The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48 and 57% significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  14. Defining the "dose" of altitude training: how high to live for optimal sea level performance enhancement.

    PubMed

    Chapman, Robert F; Karlsen, Trine; Resaland, Geir K; Ge, R-L; Harber, Matthew P; Witkowski, Sarah; Stray-Gundersen, James; Levine, Benjamin D

    2014-03-15

    Chronic living at altitudes of ∼2,500 m causes consistent hematological acclimatization in most, but not all, groups of athletes; however, responses of erythropoietin (EPO) and red cell mass to a given altitude show substantial individual variability. We hypothesized that athletes living at higher altitudes would experience greater improvements in sea level performance, secondary to greater hematological acclimatization, compared with athletes living at lower altitudes. After 4 wk of group sea level training and testing, 48 collegiate distance runners (32 men, 16 women) were randomly assigned to one of four living altitudes (1,780, 2,085, 2,454, or 2,800 m). All athletes trained together daily at a common altitude from 1,250-3,000 m following a modified live high-train low model. Subjects completed hematological, metabolic, and performance measures at sea level, before and after altitude training; EPO was assessed at various time points while at altitude. On return from altitude, 3,000-m time trial performance was significantly improved in groups living at the middle two altitudes (2,085 and 2,454 m), but not in groups living at 1,780 and 2,800 m. EPO was significantly higher in all groups at 24 and 48 h, but returned to sea level baseline after 72 h in the 1,780-m group. Erythrocyte volume was significantly higher within all groups after return from altitude and was not different between groups. These data suggest that, when completing a 4-wk altitude camp following the live high-train low model, there is a target altitude between 2,000 and 2,500 m that produces an optimal acclimatization response for sea level performance.

  15. Estimating optimal sampling unit sizes for satellite surveys

    NASA Technical Reports Server (NTRS)

    Hallum, C. R.; Perry, C. R., Jr.

    1984-01-01

    This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.

  16. Defining the optimal time to the operating room may salvage early trauma deaths.

    PubMed

    Remick, Kyle N; Schwab, C William; Smith, Brian P; Monshizadeh, Amir; Kim, Patrick K; Reilly, Patrick M

    2014-05-01

    Early trauma deaths have the potential for salvage with immediate surgery. We studied time from injury to death in this group to qualify characteristics and quantify time to the operating room, yielding the greatest opportunity for salvage. The Pennsylvania Trauma Outcomes Study (PTOS) is a comprehensive registry including all Pennsylvania trauma centers. PTOS was queried for adult trauma patients from 1999 to 2010 dying within 4 hours of injury. The distribution of time to death (TD) was examined for subgroups according to mechanism of injury, hypotension (defined as systolic blood pressure ≤ 90 mm Hg), and operation required. The 5th percentile (TD5) and the 50th percentile (TD50) were calculated from the distributions and compared using the Mann-Whitney U-test. The PTOS yielded 6,547 deaths within 4 hours of injury. The overall TD5 and TD50 were 0:23 (hour:minute) and 0:59, respectively. Median penetrating injury times were significantly shorter than blunt injury times (TD5/TD50, 0:19/0:43 vs. 0:29/1:10). Median time was significantly shorter for hypotensive versus normotensive patients (TD5/TD50, 0:22/0:52 vs. 0:43/2:18). Operative subgroups had different TD5/TD50 (abdominal surgery [n = 607], 1:07/2:26; thoracic surgery [n = 756] 0:25/1:25; vascular surgery [n = 156], 0:35/2:15; and cranial surgery [n = 18], 1:20/2:42). Early trauma deaths have the potential for salvage with immediate surgery. We found TD to vary based on mechanism of injury, presence of hypotension, and type of surgery needed. With the use of TD5 and TD50 benchmarks in these subgroups, a trauma system may determine if decreased time to the operating room decreases mortality. Trauma systems can use these data to further improve prehospital and initial hospital phases of care for this subset of early death trauma patients. Epidemiologic study, level III.

  17. Optimization of ram semen cryopreservation using a chemically defined soybean lecithin-based extender.

    PubMed

    Emamverdi, M; Zhandi, M; Zare Shahneh, A; Sharafi, M; Akbari-Sharif, A

    2013-12-01

    The purpose of the present study was to investigate the effects of a chemically defined soybean lecithin-based semen extender as a substitute for egg yolk-based extenders in ram semen cryopreservation. In this study, 28 ejaculates were collected from four Zandi rams in the breeding season and then pooled together. The pooled semen was divided into six equal aliquots and diluted with six different extenders: (i) Tris-based extender (TE) containing 0.5% (w/v) soybean lecithin (SL0.5), (ii) TE containing 1% (w/v) soybean lecithin (SL1), (iii) TE containing 1.5% (w/v) soybean lecithin (SL1.5), (iv) TE containing 2% (w/v) soybean lecithin (SL2), (v) TE containing 2.5% (w/v) soybean lecithin (SL2.5) and (vi) TE containing 20% (v/v) egg yolk (EYT). After thawing, sperm motility and motion parameters, plasma membrane and acrosome integrity, apoptosis status and mitochondrial activity were evaluated. The results showed that total and progressive motility (54.43 ± 1.33% and 25.43 ± 0.96%, respectively) were significantly higher in SL1.5 when compared to other semen extenders. Sperm motion parameters (VAP, VSL, VCL, ALH and STR) were significantly higher in SL1.5 compared to the other extenders, with the exception of the SL1 extender. Plasma membrane integrity (48.86 ± 1.38%) was significantly higher in SL1.5 when compared to other semen extenders. Also, percentage of spermatozoa with intact acrosome in SL1.5 (85.35 ± 2.19%) extender was significantly higher than that in SL0.5, SL2.5 and EYT extenders. The results showed that the proportion of live post-thawed sperm was significantly increased in SL1.5 extender compared to SL0.5, SL2 and EYT extenders. In addition, SL1, SL1.5 and SL2.5 extenders resulted in significantly lower percentage of early-apoptotic sperm than that in EYT extender. There were no significant differences in different semen extenders for percentage of post-thawed necrotic and late-apoptotic spermatozoa. Also, the results indicated that there are slight

  18. Hypofractionated radiosurgery for intact or resected brain metastases: defining the optimal dose and fractionation

    PubMed Central

    2013-01-01

    Background Hypofractionated Radiosurgery (HR) is a therapeutic option for delivering partial brain radiotherapy (RT) to large brain metastases or resection cavities otherwise not amenable to single fraction radiosurgery (SRS). The use, safety and efficacy of HR for brain metastases is not well characterized and the optimal RT dose-fractionation schedule is undefined. Methods Forty-two patients treated with HR in 3-5 fractions for 20 (48%) intact and 22 (52%) resected brain metastases with a median maximum dimension of 3.9 cm (0.8-6.4 cm) between May 2008 and August 2011 were reviewed. Twenty-two patients (52%) had received prior radiation therapy. Local (LC), intracranial progression free survival (PFS) and overall survival (OS) are reported and analyzed for relationship to multiple RT variables through Cox-regression analysis. Results The most common dose-fractionation schedules were 21 Gy in 3 fractions (67%), 24 Gy in 4 fractions (14%) and 30 Gy in 5 fractions (12%). After a median follow-up time of 15 months (range 2-41), local failure occurred in 13 patients (29%) and was a first site of failure in 6 patients (14%). Kaplan-Meier estimates of 1 year LC, intracranial PFS, and OS are: 61% (95% CI 0.53 – 0.70), 55% (95% CI 0.47 – 0.63), and 73% (95% CI 0.65 – 0.79), respectively. Local tumor control was negatively associated with PTV volume (p = 0.007) and was a significant predictor of OS (HR 0.57, 95% CI 0.33 - 0.98, p = 0.04). Symptomatic radiation necrosis occurred in 3 patients (7%). Conclusions HR is well tolerated in both new and recurrent, previously irradiated intact or resected brain metastases. Local control is negatively associated with PTV volume and a significant predictor of overall survival, suggesting a need for dose escalation when using HR for large intracranial lesions. PMID:23759065

  19. Optimized linear prediction for radial sampled multidimensional NMR experiments

    NASA Astrophysics Data System (ADS)

    Gledhill, John M.; Kasinath, Vignesh; Wand, A. Joshua

    2011-09-01

    Radial sampling in multidimensional NMR experiments offers greatly decreased acquisition times while also providing an avenue for increased sensitivity. Digital resolution remains a concern and depends strongly upon the extent of sampling of individual radial angles. Truncated time domain data leads to spurious peaks (artifacts) upon FT and 2D FT. Linear prediction is commonly employed to improve resolution in Cartesian sampled NMR experiments. Here, we adapt the linear prediction method to radial sampling. Significantly more accurate estimates of linear prediction coefficients are obtained by combining quadrature frequency components from the multiple angle spectra. This approach results in significant improvement in both resolution and removal of spurious peaks as compared to traditional linear prediction methods applied to radial sampled data. The 'averaging linear prediction' (ALP) method is demonstrated as a general tool for resolution improvement in multidimensional radial sampled experiments.

  20. TOMOGRAPHIC RECONSTRUCTION OF DIFFUSION PROPAGATORS FROM DW-MRI USING OPTIMAL SAMPLING LATTICES

    PubMed Central

    Ye, Wenxing; Entezari, Alireza; Vemuri, Baba C.

    2010-01-01

    This paper exploits the power of optimal sampling lattices in tomography based reconstruction of the diffusion propagator in diffusion weighted magnetic resonance imaging (DWMRI). Optimal sampling leads to increased accuracy of the tomographic reconstruction approach introduced by Pickalov and Basser [1]. Alternatively, the optimal sampling geometry allows for further reducing the number of samples while maintaining the accuracy of reconstruction of the diffusion propagator. The optimality of the proposed sampling geometry comes from the information theoretic advantages of sphere packing lattices in sampling multidimensional signals. These advantages are in addition to those accrued from the use of the tomographic principle used here for reconstruction. We present comparative results of reconstructions of the diffusion propagator using the Cartesian and the optimal sampling geometry for synthetic and real data sets. PMID:20596298

  1. Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization

    PubMed Central

    Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso

    2013-01-01

    The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed of two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
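
    The per-allele information measure described above can be reproduced in a few lines. In the sketch below, the probability that a bulk of n diploid plants shows an allele is taken as 1 - (1 - p)^(2n), an assumption consistent with random sampling from a random-mating population; the marker information is then the entropy of the pooled presence/absence score minus its conditional entropy given the germplasm source.

```python
# Sketch only: mutual information (diversity minus noise) for one binary-scored allele in bulks.
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def allele_information(freqs, n_plants):
    freqs = np.asarray(freqs, dtype=float)
    presence = 1 - (1 - freqs) ** (2 * n_plants)      # P(allele seen in the bulk | source), assumed model
    diversity = binary_entropy(presence.mean())       # entropy of the bulk score (sources equiprobable)
    noise = binary_entropy(presence).mean()           # conditional entropy given the germplasm source
    return diversity - noise                          # mutual information, in bits

# Allele frequencies of one SSR allele in four populations (illustrative numbers), bulks of 10 plants.
print(allele_information([0.05, 0.40, 0.00, 0.90], n_plants=10))
```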

  2. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
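
    A toy Python sketch of the hit-and-run idea for sampling a near-optimal region, assuming a made-up non-convex objective, box bounds, and a simple accept/reject step along each hit line in place of the slice-sampling and null-space machinery described above.

```python
# Illustrative hit-and-run sampler over a near-optimal region {x : f(x) <= threshold}.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                    # toy non-convex objective (assumption)
    return np.sum(x**2) + 0.3 * np.sin(5 * x[0])

bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
threshold = 0.5                              # near-optimal: f(x) <= f* + tolerance

def in_region(x):
    inside = np.all(x >= bounds[:, 0]) and np.all(x <= bounds[:, 1])
    return inside and f(x) <= threshold

def hit_and_run(x0, n_samples, max_tries=100):
    x, out = np.asarray(x0, float), []
    while len(out) < n_samples:
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # random direction
        for _ in range(max_tries):           # random run length along the hit line
            cand = x + rng.uniform(-1.0, 1.0) * d
            if in_region(cand):              # crude rejection instead of slice sampling
                x = cand
                out.append(x.copy())
                break
    return np.array(out)

alternatives = hit_and_run([0.0, 0.0], n_samples=500)
print(alternatives.shape, alternatives.mean(axis=0))
```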

  3. Optimization of sample size in controlled experiments: the CLAST rule.

    PubMed

    Botella, Juan; Ximénez, Carmen; Revuelta, Javier; Suero, Manuel

    2006-02-01

    Sequential rules are explored in the context of null hypothesis significance testing. Several studies have demonstrated that the fixed-sample stopping rule, in which the sample size used by researchers is determined in advance, is less practical and less efficient than sequential stopping rules. It is proposed that a sequential stopping rule called CLAST (composite limited adaptive sequential test) is a superior variant of COAST (composite open adaptive sequential test), a sequential rule proposed by Frick (1998). Simulation studies are conducted to test the efficiency of the proposed rule in terms of sample size and power. Two statistical tests are used: the one-tailed t test of mean differences with two matched samples, and the chi-square independence test for twofold contingency tables. The results show that the CLAST rule is more efficient than the COAST rule and reflects more realistically the practice of experimental psychology researchers.
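
    A hedged Python sketch of a CLAST-style sequential stopping rule for a one-tailed matched-pairs t test; the p-value corridor, step sizes, and maximum sample size below are illustrative defaults, not the calibrated settings studied in the paper.

```python
# Sequential testing loop: keep adding matched pairs until the one-tailed p value
# leaves the (p_low, p_high) corridor or a sample-size limit is reached.
import numpy as np
from scipy import stats

def clast_t_test(draw_pair, n_init=20, n_step=5, n_max=80, p_low=0.01, p_high=0.36):
    """draw_pair() -> one matched (x, y) observation pair."""
    diffs = [x - y for x, y in (draw_pair() for _ in range(n_init))]
    while True:
        t, p = stats.ttest_1samp(diffs, 0.0)
        p_one = p / 2 if t > 0 else 1 - p / 2            # one-tailed p value
        if p_one < p_low:
            return "reject H0", len(diffs), p_one
        if p_one > p_high:
            return "retain H0", len(diffs), p_one
        if len(diffs) >= n_max:
            return "stop at n_max", len(diffs), p_one    # limit of the CLAST-style rule
        diffs += [x - y for x, y in (draw_pair() for _ in range(n_step))]

rng = np.random.default_rng(0)
draw = lambda: (rng.normal(0.4, 1.0), rng.normal(0.0, 1.0))  # true effect present
print(clast_t_test(draw))
```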

  4. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    PubMed

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations.

  5. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate.

    PubMed

    Brunelli, Davide; Caione, Carlo

    2015-07-10

    Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
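
    A self-contained Python sketch of the sub-Nyquist idea: a signal sparse in the DCT domain is observed through a few random time samples and recovered with orthogonal matching pursuit. Matrix sizes, sparsity level, and the solver are illustrative choices, not the node firmware or algorithms evaluated in the paper.

```python
# Compressive-sensing toy: random sub-Nyquist samples + OMP recovery.
import numpy as np
from scipy.fft import idct

n, m, k = 256, 64, 5                          # signal length, measurements, sparsity
rng = np.random.default_rng(3)

Psi = idct(np.eye(n), norm="ortho", axis=0)   # sparsifying (inverse DCT) basis
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
x = Psi @ coeffs                              # "physical" signal

keep = np.sort(rng.choice(n, m, replace=False))   # random sub-Nyquist sampling
y = x[keep]
A = Psi[keep, :]                              # measurement matrix in the sparse domain

def omp(A, y, k):
    """Orthogonal matching pursuit for y = A @ s with k-sparse s."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    s = np.zeros(A.shape[1])
    s[support] = sol
    return s

x_hat = Psi @ omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```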

  7. Defining Optimal Head-Tilt Position of Resuscitation in Neonates and Young Infants Using Magnetic Resonance Imaging Data

    PubMed Central

    Bhalala, Utpal S.; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A. G. M.; Allen, Robert H.; Acharya, Soumyadipta

    2016-01-01

    Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0–28 days) and young infants (age: 29 days–4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined head-tilt angle a priori as the angle between the occipito-opisthion line and the opisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with a patent airway (125.3° ± 11.9°) was significantly different from that of a blocked airway (108.2° ± 17.1°) (Mann-Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with >95% probability of a patent airway at head-tilt angles of 144–150°. PMID:27003759
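
    A small Python sketch of the angle definition used above, computing the angle at the opisthion between the occiput-opisthion and opisthion-C7 lines from hypothetical sagittal landmark coordinates (the coordinates below are made up, not MRI measurements).

```python
# Head-tilt angle as the vertex angle at the opisthion between two landmark lines.
import numpy as np

def head_tilt_angle(occiput, opisthion, c7_spinous):
    v1 = np.asarray(occiput) - np.asarray(opisthion)
    v2 = np.asarray(c7_spinous) - np.asarray(opisthion)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical (x, y) landmark positions on a mid-sagittal slice, in mm.
angle = head_tilt_angle(occiput=(33.0, 23.0), opisthion=(0.0, 0.0), c7_spinous=(0.0, -60.0))
print(f"head-tilt angle: {angle:.1f} degrees")   # ~125 degrees for these points
```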

  8. Optimal block sampling of routine, non-tumorous gallbladders.

    PubMed

    Wong, Newton Acs

    2017-03-08

    Gallbladders are common specimens in routine histopathological practice and there is, at least in the United Kingdom and Australia, national guidance on how to sample gallbladders without macroscopically evident, focal lesions/tumours (hereafter referred to as non-tumorous gallbladders).(1) Nonetheless, this author has seen considerable variation in the numbers of blocks used and the parts of the gallbladder sampled, even within one histopathology department. The recently re-issued 'Tissue pathways for gastrointestinal and pancreatobiliary pathology' from the Royal College of Pathologists (RCPath) first recommends sampling of the cystic duct margin and "at least one section each of neck, body and any focal lesion".(1) This recommendation is referenced by a textbook chapter which itself proposes that "cross-sections of the gallbladder fundus and lateral wall should be submitted, along with the sections from the neck of the gallbladder and cystic duct, including its margin".(2)

  9. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase

  10. Criteria to define a more relevant reference sample of titanium dioxide in the context of food: a multiscale approach.

    PubMed

    Dudefoi, William; Terrisse, Hélène; Richard-Plouet, Mireille; Gautron, Eric; Popa, Florin; Humbert, Bernard; Ropers, Marie-Hélène

    2017-05-01

    Titanium dioxide (TiO2) is a transition metal oxide widely used as a white pigment in various applications, including food. Because TiO2 nanoparticles have been classified by the International Agency for Research on Cancer as potentially harmful to humans by inhalation, the presence of nanoparticles in food products needed to be confirmed by a set of independent studies. Seven samples of food-grade TiO2 (E171) were extensively characterised for their size distribution, crystallinity and surface properties by the currently recommended methods. All investigated E171 samples contained a fraction of nanoparticles, however below the threshold defining the labelling of nanomaterial. On the basis of these results and a statistical analysis, E171 food-grade TiO2 totally differs from the reference material P25, confirming the few published data on this kind of particle. Therefore, the reference material P25 does not appear to be the most suitable model to study the fate of food-grade TiO2 in the gastrointestinal tract. The current criteria for obtaining a representative food-grade sample of TiO2 are the following: (1) crystalline phase anatase, (2) a powder with an isoelectric point very close to 4.1, (3) a nanoparticle fraction between 15% and 45%, and (4) a low specific surface area of around 10 m² g⁻¹.

  11. Sample of CFD optimization of a centrifugal compressor stage

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.

    2015-08-01

    An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA FINE/Turbo calculations). Prediction of the overall performance curve was modest or poor in all known cases, whereas prediction of maximum efficiency was quite satisfactory. The flow structure in the stator elements was in good agreement with known data. An intermediate-type stage (“3D impeller + vaneless diffuser + return channel”) was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a vaneless diffuser of constant width; the candidate with a symmetrically tapered inlet part (b3/b2 = 0.73) appeared to be the best. Flow separation also takes place in the crossover with the standard configuration, so an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by the maximum efficiency of the stage; the variant designed with standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize the blade velocity diagrams (incidence-free inlet, control of the diffusion factor and of the average blade load). A “geometric” principle of blade formation, with a linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the mathematical model and compared. The candidate with optimal parameters (number of blades, inlet diameter and leading-edge meridian position) is 1% more efficient than the stage of the initial design.

  12. Optimization of Initial Prostate Biopsy in Clinical Practice: Sampling, Labeling, and Specimen Processing

    PubMed Central

    Bjurlin, Marc A.; Carter, H. Ballentine; Schellhammer, Paul; Cookson, Michael S.; Gomella, Leonard G.; Troyer, Dean; Wheeler, Thomas M.; Schlossberg, Steven; Penson, David F.; Taneja, Samir S.

    2014-01-01

    Purpose An optimal prostate biopsy in clinical practice is based on a balance between adequate detection of clinically significant prostate cancers (sensitivity), assuredness regarding the accuracy of negative sampling (negative predictive value [NPV]), limited detection of clinically insignificant cancers, and good concordance with whole-gland surgical pathology results to allow accurate risk stratification and disease localization for treatment selection. Inherent within this optimization is variation of the core number, location, labeling, and processing for pathologic evaluation. To date, there is no consensus in this regard. The purpose of this review is 3-fold: 1. To define the optimal number and location of biopsy cores during primary prostate biopsy among men with suspected prostate cancer, 2. To define the optimal method of labeling prostate biopsy cores for pathologic processing that will provide relevant and necessary clinical information for all potential clinical scenarios, and 3. To determine the maximal number of prostate biopsy cores allowable within a specimen jar that would not preclude accurate histologic evaluation of the tissue. Materials and Methods A bibliographic search covering the period up to July 2012 was conducted using PubMed®. This search yielded approximately 550 articles. Articles were reviewed and categorized based on which of the three objectives of this review was addressed. Data were extracted, analyzed, and summarized. Recommendations based on this literature review and our clinical experience are provided. Results The use of 10–12-core extended-sampling protocols increases cancer detection rates (CDRs) compared to traditional sextant sampling methods and reduces the likelihood that patients will require a repeat biopsy by increasing NPV, ultimately allowing more accurate risk stratification without increasing the likelihood of detecting insignificant cancers. As the number of cores increases above 12 cores, the increase in

  13. Optimization conditions of samples saponification for tocopherol analysis.

    PubMed

    Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto

    2014-09-01

    A 2² full factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and the percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α, β and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h.
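
    A worked Python sketch of a 2² factorial effect calculation of the kind described above, using hypothetical response values rather than the paper's tocopherol data.

```python
# Main and interaction effects for a duplicated 2^2 factorial design.
import numpy as np

# Coded levels: -1 = low (60% KOH, 2 h), +1 = high (80% KOH, 4 h); duplicated runs.
koh  = np.array([-1, -1, +1, +1, -1, -1, +1, +1])
time = np.array([-1, +1, -1, +1, -1, +1, -1, +1])
resp = np.array([9.8, 9.1, 8.4, 7.4, 9.9, 9.0, 8.5, 7.5])   # hypothetical responses

def effect(factor):
    """Difference between mean response at the high and low coded levels."""
    return resp[factor == 1].mean() - resp[factor == -1].mean()

print("KOH effect:        ", effect(koh))         # negative: more KOH lowers the response
print("time effect:       ", effect(time))        # negative: longer agitation lowers it
print("interaction effect:", effect(koh * time))  # product column gives the interaction
```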

  14. Determination and optimization of spatial samples for distributed measurements.

    SciTech Connect

    Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  15. Optimized nested Markov chain Monte Carlo sampling: theory

    SciTech Connect

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
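
    A one-dimensional Python toy of the nested-chain idea (canonical rather than isothermal-isobaric ensemble, for brevity): an inner Metropolis chain samples a cheap reference potential, and the composite move is accepted against the "full" potential with the modified criterion. Both potentials and all settings are stand-ins, not the molecular-fluid models of the paper.

```python
# Nested Markov chain Monte Carlo toy: acc = min(1, exp(-beta * [dU_full - dU_ref])).
import numpy as np

rng = np.random.default_rng(7)
beta = 1.0

def u_full(x):        # stand-in for the expensive "full" energy
    return 0.5 * x**2 + 0.3 * np.sin(3 * x)

def u_ref(x):         # cheap reference approximation
    return 0.5 * x**2

def inner_chain(x0, n_steps=50, step=0.5):
    """Metropolis chain on the reference potential; returns the chain endpoint."""
    x = x0
    for _ in range(n_steps):
        cand = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-beta * (u_ref(cand) - u_ref(x))):
            x = cand
    return x

samples, x = [], 0.0
for _ in range(2000):
    x_new = inner_chain(x)                                   # composite move
    d_full = u_full(x_new) - u_full(x)
    d_ref = u_ref(x_new) - u_ref(x)
    if rng.random() < np.exp(-beta * (d_full - d_ref)):      # modified Metropolis criterion
        x = x_new
    samples.append(x)
```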

  16. Optimizing analog-to-digital converters for sampling extracellular potentials.

    PubMed

    Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan

    2012-01-01

    In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked to interpret these signals for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed-loop with an ADC. The spike detector determines whether the current input signal is part of a spike or it is part of noise to adaptively determine the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials without any significant impact on spike detection accuracy.
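
    A behavioural Python sketch of the closed loop described above: a simple threshold detector switches the sampling rate between a low "noise" rate and a high "spike" rate. The rates, threshold, detector, and signal are illustrative, not the proposed circuit.

```python
# Adaptive-rate sampling of a densely simulated extracellular trace.
import numpy as np

def adaptive_sample(signal, fs_in, fs_low, fs_high, threshold):
    """Sample `signal` (simulated at fs_in) at a rate set by a threshold spike detector."""
    t, out_t, out_v = 0.0, [], []
    duration = len(signal) / fs_in
    while t < duration:
        v = signal[int(t * fs_in)]
        out_t.append(t)
        out_v.append(v)
        rate = fs_high if abs(v) > threshold else fs_low   # detector closes the loop
        t += 1.0 / rate
    return np.array(out_t), np.array(out_v)

rng = np.random.default_rng(0)
fs = 30_000                                    # dense "analog" simulation rate, Hz
time = np.arange(0, 0.2, 1 / fs)
x = 5e-3 * rng.normal(size=time.size)          # background noise (arbitrary units)
x[2000:2030] += 80e-3 * np.hanning(30)         # one synthetic spike
times, vals = adaptive_sample(x, fs, fs_low=1_000, fs_high=25_000, threshold=30e-3)
print(f"{x.size} dense points -> {times.size} adaptive samples")
```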

  17. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    ...ranging from the income level to age and her preference order over a set of products (e.g., movies in Netflix). The ranking task is to learn a mapping... learners in RankBoost. However, in both cases, the proposed strategy selects the samples which are estimated to produce a faster convergence from the... steps in Section 5. 2. Related Work: A number of strategies have been proposed for active learning in the classification framework. Some of those center...

  18. Adequately defining tumor cell proportion in tissue samples for molecular testing improves interobserver reproducibility of its assessment.

    PubMed

    Lhermitte, Benoît; Egele, Caroline; Weingertner, Noëlle; Ambrosetti, Damien; Dadone, Bérengère; Kubiniek, Valérie; Burel-Vandenbos, Fanny; Coyne, John; Michiels, Jean-François; Chenard, Marie-Pierre; Rouleau, Etienne; Sabourin, Jean-Christophe; Bellocq, Jean-Pierre

    2017-01-01

    discrepancies were clinically relevant since the study was conducted. Although semi-quantitative estimations remain somewhat subjective, their reliability improves when tumor cellularity is adequately defined and heterogeneous tissue samples are macrodissected for molecular analysis.

  19. Optimized Sampling Strategies For Non-Proliferation Monitoring: Report

    SciTech Connect

    Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.

    2015-10-20

    Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.

  20. Including sampling and phenotyping costs into the optimization of two stage designs for genomewide association studies.

    PubMed

    Müller, Hans-Helge; Pahl, Roman; Schäfer, Helmut

    2007-12-01

    We propose optimized two-stage designs for genome-wide case-control association studies, using a hypothesis testing paradigm. To save genotyping costs, the complete marker set is genotyped in a sub-sample only (stage I). In stage II, the most promising markers are then genotyped in the remaining sub-sample. In recent publications, two-stage designs were proposed which minimize the overall genotyping costs. To achieve full design optimization, we additionally include sampling costs into both the cost function and the design optimization. The resulting optimal designs differ markedly from those optimized for genotyping costs only (partially optimized designs), and achieve considerable further cost reductions. Compared with partially optimized designs, fully optimized two-stage designs have a higher first-stage sample proportion. Furthermore, the increment of the sample size over the one-stage design, which is necessary in two-stage designs in order to compensate for the loss of power due to partial genotyping, is less pronounced for fully optimized two-stage designs. In addition, we address the scenario where the investigator wants to gain as much information as possible but is restricted by a budget; for this case, we develop two-stage designs that maximize the power under a given cost constraint.
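
    A minimal Python cost-model sketch of the idea of optimizing over both sampling/phenotyping and genotyping costs; all prices, sample sizes, and design fractions below are placeholders, not the paper's calibrated values.

```python
# Two-stage design cost: per-subject sampling costs plus per-genotype costs in both stages.
def total_cost(n_subjects, n_markers, pi_sample, pi_markers,
               cost_per_subject=50.0, cost_per_genotype=0.01):
    n_stage1 = pi_sample * n_subjects            # subjects genotyped on the full marker set
    n_stage2 = (1 - pi_sample) * n_subjects      # subjects genotyped on follow-up markers only
    sampling = cost_per_subject * n_subjects
    genotyping = cost_per_genotype * (n_markers * n_stage1
                                      + pi_markers * n_markers * n_stage2)
    return sampling + genotyping

# A fully optimized design weighs both terms; optimizing genotyping costs alone ignores `sampling`.
print(total_cost(n_subjects=6000, n_markers=500_000, pi_sample=0.4, pi_markers=0.005))
```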

  1. Optimizing fish sampling for fish - mercury bioaccumulation factors

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
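
    A short Python sketch of one plausible reading of the length-standardization step: regress log Hg on fish length, predict Hg at a standard length, and take the ratio to the water Hg concentration. The data values, standard length, and log-linear form are assumptions for illustration, not the study's protocol.

```python
# Length-standardized fish Hg and the resulting bioaccumulation factor (BAF).
import numpy as np

length_mm = np.array([310, 355, 402, 298, 460, 388, 335, 420], float)
hg_fish   = np.array([0.18, 0.22, 0.31, 0.16, 0.41, 0.27, 0.20, 0.33])  # mg/kg, hypothetical
hg_water  = 1.2e-6                                                       # mg/L, hypothetical

# Linear fit of log-Hg on length, then prediction at a common standard length.
slope, intercept = np.polyfit(length_mm, np.log(hg_fish), 1)
standard_length = 400.0
hg_standardized = np.exp(intercept + slope * standard_length)

baf = hg_standardized / hg_water
print(f"length-standardized Hg: {hg_standardized:.3f} mg/kg, log10 BAF: {np.log10(baf):.2f}")
```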

  2. Optimal sampling of visual information for lightness judgments

    PubMed Central

    Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.

    2013-01-01

    The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251

  3. Sampling technique is important for optimal isolation of pharyngeal gonorrhoea.

    PubMed

    Mitchell, M; Rane, V; Fairley, C K; Whiley, D M; Bradshaw, C S; Bissessor, M; Chen, M Y

    2013-11-01

    Culture is insensitive for the detection of pharyngeal gonorrhoea but isolation is pivotal to antimicrobial resistance surveillance. The aim of this study was to ascertain whether recommendations provided to clinicians (doctors and nurses) on pharyngeal swabbing technique could improve gonorrhoea detection rates and to determine which aspects of swabbing technique are important for optimal isolation. This study was undertaken at the Melbourne Sexual Health Centre, Australia. Detection rates among clinicians for pharyngeal gonorrhoea were compared before (June 2006-May 2009) and after (June 2009-June 2012) recommendations on swabbing technique were provided. Associations between detection rates and reported swabbing technique obtained via a clinician questionnaire were examined. The overall yield from testing before and after provision of the recommendations among 28 clinicians was 1.6% (134/8586) and 1.8% (264/15,046) respectively (p=0.17). Significantly higher detection rates were seen following the recommendations among clinicians who reported a change in their swabbing technique in response to the recommendations (2.1% vs. 1.5%; p=0.004), swabbing a larger surface area (2.0% vs. 1.5%; p=0.02), applying more swab pressure (2.5% vs. 1.5%; p<0.001) and a change in the anatomical sites they swabbed (2.2% vs. 1.5%; p=0.002). The predominant change in sites swabbed was an increase in swabbing of the oropharynx: from a median of 0% to 80% of the time. More thorough swabbing improves the isolation of pharyngeal gonorrhoea using culture. Clinicians should receive training to ensure swabbing is performed with sufficient pressure and that it covers an adequate area that includes the oropharynx.

  4. Optimal Sampling of a Reaction Coordinate in Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2005-01-01

    Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what are their probabilities and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.

  6. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function, so that progressively more accurate metamodels are obtained. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
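
    A simplified one-dimensional Python sketch of the sequential idea: fit an RBF metamodel, then at each iteration add an extremum point of the current metamodel and a point in the least-sampled region. The test function and the maximin stand-in for the density criterion are illustrative, not the paper's formulation.

```python
# Sequential RBF metamodel refinement with two kinds of added points per iteration.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

def expensive(x):                          # stand-in for the costly simulation
    return np.sin(3 * x) + 0.3 * x**2

X = np.array([[-2.0], [0.0], [2.0]])       # initial design
y = expensive(X[:, 0])

for _ in range(6):
    model = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # (1) an extremum of the current metamodel (here: its minimum over [-2, 2])
    res = minimize_scalar(lambda v: model(np.array([[v]]))[0], bounds=(-2, 2), method="bounded")
    # (2) the point where the current design is sparsest (maximin distance on a grid)
    grid = np.linspace(-2, 2, 201)
    x_fill = grid[np.argmax([np.min(np.abs(g - X[:, 0])) for g in grid])]
    for x_new in (res.x, x_fill):
        if np.min(np.abs(x_new - X[:, 0])) > 1e-6:   # avoid duplicate design sites
            X = np.vstack([X, [[x_new]]])
            y = np.append(y, expensive(x_new))

print("final design points:", np.round(np.sort(X[:, 0]), 3))
```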

  7. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.

  8. Optimal Designs of the Median Run Length Based Double Sampling X̄ Chart for Minimizing the Average Sample Size

    PubMed Central

    Teoh, Wei Lin; Khoo, Michael B. C.; Teh, Sin Yin

    2013-01-01

    Designs of the double sampling (DS) chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA and Shewhart charts demonstrate the superiority of the proposed optimal MRL-based DS chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS chart in reducing the sample size needed. PMID:23935873

  9. Balancing sample accumulation and DNA degradation rates to optimize noninvasive genetic sampling of sympatric carnivores.

    PubMed

    Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P

    2015-07-01

    Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1-112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates.
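
    A numerical Python sketch of the accumulation-versus-degradation trade-off: cost per successful sample as a function of the accumulation interval, under an assumed logistic decay of amplification success and made-up field and laboratory costs (the paper's rates and costs are not reproduced).

```python
# Find the accumulation interval that minimizes cost per successfully genotyped scat.
import numpy as np

accum_rate = 0.076               # scats per km per day (coyote-like figure)
transect_km = 50.0
field_cost_per_survey = 800.0    # hypothetical
lab_cost_per_sample = 25.0       # hypothetical

def nDNA_success(age_days):
    """Assumed logistic decline of nuclear DNA amplification success with scat age."""
    return 0.9 / (1 + np.exp(0.25 * (age_days - 18)))

def cost_per_successful_sample(interval_days):
    n_scats = accum_rate * transect_km * interval_days
    ages = np.linspace(0, interval_days, 200)          # deposition spread over the interval
    mean_success = nDNA_success(ages).mean()
    cost = field_cost_per_survey + lab_cost_per_sample * n_scats
    return cost / (n_scats * mean_success)

intervals = np.arange(3, 36)
best = intervals[np.argmin([cost_per_successful_sample(d) for d in intervals])]
print("most cost-efficient accumulation interval:", best, "days")
```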

  10. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    PubMed

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than those based on simple random sampling and that both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed to illustrate the proposed method.
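
    A brief Python sketch of the estimation idea: kernel density estimates of the healthy and diseased biomarker distributions give smooth sensitivity and specificity curves, and the cut-off maximizing the Youden index is read off a grid. Simple random samples from assumed normal distributions are used here; the ranked set sampling variant only changes how the two samples are drawn.

```python
# KDE-based estimate of the Youden-index-optimal diagnostic cut-off.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
healthy = rng.normal(0.0, 1.0, 200)
diseased = rng.normal(1.5, 1.2, 200)

kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)
grid = np.linspace(-4, 7, 1000)

spec = np.array([kde_h.integrate_box_1d(-np.inf, c) for c in grid])      # P(healthy <= c)
sens = np.array([1 - kde_d.integrate_box_1d(-np.inf, c) for c in grid])  # P(diseased > c)
youden = sens + spec - 1

c_star = grid[np.argmax(youden)]
print(f"estimated optimal cut-off: {c_star:.2f}, Youden index: {youden.max():.2f}")
```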

  11. A model for estimating the value of sampling programs and the optimal number of samples for contaminated soil

    NASA Astrophysics Data System (ADS)

    Back, Pär-Erik

    2007-04-01

    A model is presented for estimating the value of information of sampling programs for contaminated soil. The purpose is to calculate the optimal number of samples when the objective is to estimate the mean concentration. A Bayesian risk-cost-benefit decision analysis framework is applied and the approach is design-based. The model explicitly includes sample uncertainty at a complexity level that can be applied to practical contaminated land problems with limited amount of data. Prior information about the contamination level is modelled by probability density functions. The value of information is expressed in monetary terms. The most cost-effective sampling program is the one with the highest expected net value. The model was applied to a contaminated scrap yard in Göteborg, Sweden, contaminated by metals. The optimal number of samples was determined to be in the range of 16-18 for a remediation unit of 100 m². Sensitivity analysis indicates that the perspective of the decision-maker is important, and that the cost of failure and the future land use are the most important factors to consider. The model can also be applied for other sampling problems, for example, sampling and testing of wastes to meet landfill waste acceptance procedures.

  12. Optimal molecular profiling of tissue and tissue components: defining the best processing and microdissection methods for biomedical applications.

    PubMed

    Rodriguez-Canales, Jaime; Hanson, Jeffrey C; Hipp, Jason D; Balis, Ulysses J; Tangrea, Michael A; Emmert-Buck, Michael R; Bova, G Steven

    2013-01-01

    Isolation of well-preserved pure cell populations is a prerequisite for sound studies of the molecular basis of any tissue-based biological phenomenon. This updated chapter reviews current methods for obtaining anatomically specific signals from molecules isolated from tissues, a basic requirement for productive linking of phenotype and genotype. The quality of samples isolated from tissue and used for molecular analysis is often glossed over or omitted from publications, making interpretation and replication of data difficult or impossible. Fortunately, recently developed techniques allow life scientists to better document and control the quality of samples used for a given assay, creating a foundation for improvement in this area. Tissue processing for molecular studies usually involves some or all of the following steps: tissue collection, gross dissection/identification, fixation, processing/embedding, storage/archiving, sectioning, staining, microdissection/annotation, and pure analyte labeling/identification and quantification. We provide a detailed comparison of some current tissue microdissection technologies and provide detailed example protocols for tissue component handling upstream and downstream from microdissection. We also discuss some of the physical and chemical issues related to optimal tissue processing and include methods specific to cytology specimens. We encourage each laboratory to use these as a starting point for optimization of their overall process of moving from collected tissue to high-quality, appropriately anatomically tagged scientific results. Improvement in this area will significantly increase life science quality and productivity. The chapter is divided into introduction, materials, protocols, and notes subheadings. Because many protocols are covered in each of these sections, information relating to a single protocol is not contiguous. To get the greatest benefit from this chapter, readers are advised to read through the entire

  13. Defining Conditions for Optimal Inhibition of Food Intake in Rats by a Grape-Seed Derived Proanthocyanidin Extract

    PubMed Central

    Serrano, Joan; Casanova-Martí, Àngela; Blay, Mayte; Terra, Ximena; Ardévol, Anna; Pinent, Montserrat

    2016-01-01

    Food intake depends on homeostatic and non-homeostatic factors. In order to use grape seed proanthocyanidins (GSPE) as food intake limiting agents, it is important to define the key characteristics of their bioactivity within this complex function. We treated rats with acute and chronic treatments of GSPE at different doses to identify the importance of eating patterns and GSPE dose and the mechanistic aspects of GSPE. GSPE-induced food intake inhibition must be reproduced under non-stressful conditions and with a stable and synchronized feeding pattern. A minimum dose of around 350 mg GSPE/kg body weight (BW) is needed. GSPE components act by activating the Glucagon-like peptide-1 (GLP-1) receptor because their effect is blocked by Exendin 9-39. GSPE in turn acts on the hypothalamic center of food intake control probably because of increased GLP-1 production in the intestine. To conclude, GSPE inhibits food intake through GLP-1 signaling, but it needs to be dosed under optimal conditions to exert this effect. PMID:27775601

  14. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  15. Sampling of soil moisture fields and related errors: implications to the optimal sampling design

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang

    Adequate knowledge of soil moisture storage as well as evaporation and transpiration at the land surface is essential to the understanding and prediction of the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurements are ground-based, but space-based sampling is becoming available due to recent improvement of remote sensing techniques. A fundamental question regarding the soil moisture observation is to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J Atmos. Ocean Tech. 6 (1989) 985-992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for the cases of ground-based sensors and space-based sensors used both separately and together. For the study a model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35-45] is introduced and an example application is given to the Little Washita basin using the Washita '92 soil moisture data. As a result of the study we found that the ground-based sensor network is ineffective for large or continental scale observation, but should be limited to a small-scale intensive observation such as for a preliminary study.

  16. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms.

  17. Optimization of techniques for multiple platform testing in small, precious samples such as human chorionic villus sampling.

    PubMed

    Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John

    2016-11-01

    Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, is limited, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 ng versus 800 ng of RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period.

  18. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    SciTech Connect

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  19. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  20. Defining "Normophilic" and "Paraphilic" Sexual Fantasies in a Population-Based Sample: On the Importance of Considering Subgroups.

    PubMed

    Joyal, Christian C

    2015-12-01

    interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining "normophilic" and "paraphilic" sexual fantasies in a population-based sample: On the importance of considering subgroups. Sex Med 2015;3:321-330.

  1. Optimal sampling efficiency in Monte Carlo sampling with an approximate potential

    SciTech Connect

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
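    A minimal sketch of the nested-chain idea described above, written for a canonical ensemble for brevity rather than the isothermal-isobaric ensemble used in the paper; e_ref, e_full and propose are hypothetical callables standing in for the reference potential, the full potential and the trial-move generator.

        import math, random

        def composite_move(x0, n_inner, e_ref, e_full, propose, beta):
            # Inner walk: ordinary Metropolis steps on the cheap reference surface.
            x = x0
            for _ in range(n_inner):
                x_new = propose(x)
                if random.random() < min(1.0, math.exp(-beta * (e_ref(x_new) - e_ref(x)))):
                    x = x_new
            # Composite accept/reject: the modified criterion only needs the
            # full-potential energies at the two endpoints of the sub-chain.
            delta_full = e_full(x) - e_full(x0)
            delta_ref = e_ref(x) - e_ref(x0)
            if random.random() < min(1.0, math.exp(-beta * (delta_full - delta_ref))):
                return x      # composite move accepted
            return x0         # rejected: fall back to the starting configuration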

  2. An effective approach for obtaining optimal sampling windows for population pharmacokinetic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2009-01-01

    This paper describes an effective approach for optimizing sampling windows for population pharmacokinetic experiments. Sampling windows have been proposed for population pharmacokinetic experiments conducted in late-phase drug development programs, where patients are enrolled in many centers and out-patient clinic settings. Collection of samples under this uncontrolled environment at fixed times may be problematic and can result in uninformative data. A sampling-windows approach is more practicable, as it provides the opportunity to control when samples are collected by allowing some flexibility while still providing satisfactory parameter estimation. This approach uses D-optimality to specify time intervals around fixed D-optimal time points that result in a specified level of efficiency. The sampling windows have different lengths and achieve two objectives: the joint sampling-windows design attains a high specified efficiency level and also reflects the sensitivities of the plasma concentration-time profile to the parameters. It is shown that optimal sampling windows obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected.
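    As an illustration of how a candidate sampling-window design can be scored against the fixed D-optimal time points, the relative D-efficiency below is the usual determinant ratio of information matrices scaled by 1/p, shown in a simplified fixed-effects (J'J) form rather than the full population Fisher information matrix; the sensitivity matrices are assumed to be computed elsewhere from the PK model.

        import numpy as np

        def d_efficiency(jac_window, jac_optimal):
            # jac_* are (n_times x n_parameters) sensitivity (Jacobian) matrices
            # evaluated at the window design and at the D-optimal design.
            p = jac_optimal.shape[1]
            det_w = np.linalg.det(jac_window.T @ jac_window)
            det_o = np.linalg.det(jac_optimal.T @ jac_optimal)
            return (det_w / det_o) ** (1.0 / p)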

  3. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of partial weighted random sampling schemes is to bias sampling toward the high-signal-energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally can yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near-optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry.
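    The baseline weighting rule that this study improves upon can be sketched as follows: bias the random selection of k-space samples by the spectral energy of the binarized prior image. The function below is only that baseline prescription, not the near-optimal rule derived in the paper.

        import numpy as np

        def energy_weighted_mask(prior_image, n_samples, seed=None):
            # Sampling probabilities proportional to the k-space energy of the prior.
            rng = np.random.default_rng(seed)
            energy = np.abs(np.fft.fftshift(np.fft.fft2(prior_image))) ** 2
            p = (energy / energy.sum()).ravel()
            idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
            mask = np.zeros(p.size, dtype=bool)
            mask[idx] = True
            return mask.reshape(prior_image.shape)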

  4. Defining optimal DEM resolutions and point densities for modelling hydrologically sensitive areas in agricultural catchments dominated by microtopography

    NASA Astrophysics Data System (ADS)

    Thomas, I. A.; Jordan, P.; Shine, O.; Fenton, O.; Mellander, P.-E.; Dunlop, P.; Murphy, P. N. C.

    2017-02-01

    Defining critical source areas (CSAs) of diffuse pollution in agricultural catchments depends upon the accurate delineation of hydrologically sensitive areas (HSAs) at highest risk of generating surface runoff pathways. In topographically complex landscapes, this delineation is constrained by digital elevation model (DEM) resolution and the influence of microtopographic features. To address this, optimal DEM resolutions and point densities for spatially modelling HSAs were investigated, for onward use in delineating CSAs. The surface runoff framework was modelled using the Topographic Wetness Index (TWI) and maps were derived from 0.25 m LiDAR DEMs (40 bare-earth points m-2), resampled 1 m and 2 m LiDAR DEMs, and a radar generated 5 m DEM. Furthermore, the resampled 1 m and 2 m LiDAR DEMs were regenerated with reduced bare-earth point densities (5, 2, 1, 0.5, 0.25 and 0.125 points m-2) to analyse effects on elevation accuracy and important microtopographic features. Results were compared to surface runoff field observations in two 10 km2 agricultural catchments for evaluation. Analysis showed that the accuracy of modelled HSAs using different thresholds (5%, 10% and 15% of the catchment area with the highest TWI values) was much higher using LiDAR data compared to the 5 m DEM (70-100% and 10-84%, respectively). This was attributed to the DEM capturing microtopographic features such as hedgerow banks, roads, tramlines and open agricultural drains, which acted as topographic barriers or channels that diverted runoff away from the hillslope scale flow direction. Furthermore, the identification of 'breakthrough' and 'delivery' points along runoff pathways where runoff and mobilised pollutants could be potentially transported between fields or delivered to the drainage channel network was much higher using LiDAR data compared to the 5 m DEM (75-100% and 0-100%, respectively). Optimal DEM resolutions of 1-2 m were identified for modelling HSAs, which balanced the need
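    For reference, the Topographic Wetness Index used above is TWI = ln(a / tan(beta)), with a the specific catchment area and beta the local slope; the sketch below assumes both grids have already been derived from the DEM and marks HSAs as the cells above a chosen TWI percentile threshold.

        import numpy as np

        def twi(specific_catchment_area, slope_rad, eps=1e-6):
            # TWI = ln(a / tan(beta)); eps avoids division by zero on flat cells.
            return np.log((specific_catchment_area + eps) / (np.tan(slope_rad) + eps))

        def hsa_mask(twi_grid, fraction=0.10):
            # Cells in the top `fraction` of TWI values (e.g. the 10% threshold above).
            cutoff = np.nanquantile(twi_grid, 1.0 - fraction)
            return twi_grid >= cutoff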

  5. Parallel genetic algorithm with population-based sampling approach to discrete optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Subramanian, Nithya

    Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that might maximize or minimize an expected value while satisfying constraints using probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations but it also results in high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples when 'good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on 'poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to facilitate reduced wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires that the design with the minimum expected fitness value have at least 99% constraint satisfaction and have accumulated at least 10,000 samples. The average change in expected fitness values over the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of

  6. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
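    The "optimal allocation" referred to in conclusion (3) is commonly implemented as Neyman allocation, with stratum sample sizes proportional to N_h * S_h and optionally divided by the square root of the per-stratum sampling cost; the inputs below are illustrative, not taken from the study.

        import numpy as np

        def neyman_allocation(stratum_sizes, stratum_sds, n_total, unit_costs=None):
            N = np.asarray(stratum_sizes, dtype=float)
            S = np.asarray(stratum_sds, dtype=float)
            c = np.ones_like(S) if unit_costs is None else np.asarray(unit_costs, dtype=float)
            w = N * S / np.sqrt(c)
            return np.maximum(1, np.round(n_total * w / w.sum())).astype(int)

        # Example with three hypothetical soil strata:
        # neyman_allocation([120, 60, 20], [4.0, 8.0, 2.0], n_total=30)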

  8. Metagenomic Analysis of Dairy Bacteriophages: Extraction Method and Pilot Study on Whey Samples Derived from Using Undefined and Defined Mesophilic Starter Cultures.

    PubMed

    Muhammed, Musemma K; Kot, Witold; Neve, Horst; Mahony, Jennifer; Castro-Mejía, Josué L; Krych, Lukasz; Hansen, Lars H; Nielsen, Dennis S; Sørensen, Søren J; Heller, Knut J; van Sinderen, Douwe; Vogensen, Finn K

    2017-10-01

    Despite being potentially highly useful for characterizing the biodiversity of phages, metagenomic studies are currently not available for dairy bacteriophages, partly due to the lack of a standard procedure for phage extraction. We optimized an extraction method that allows the removal of the bulk protein from whey and milk samples with losses of less than 50% of spiked phages. The protocol was applied to extract phages from whey in order to test the notion that members of Lactococcus lactis 936 (now Sk1virus), P335, c2 (now C2virus) and Leuconostoc phage groups are the most frequently encountered in the dairy environment. The relative abundance and diversity of phages in eight and four whey mixtures from dairies using undefined mesophilic mixed-strain cultures containing Lactococcus lactis subsp. lactis biovar diacetylactis and Leuconostoc species (i.e., DL starter cultures) and defined cultures, respectively, were assessed. Results obtained from transmission electron microscopy and high-throughput sequence analyses revealed the dominance of Lc. lactis 936 phages (order Caudovirales, family Siphoviridae) in dairies using undefined DL starter cultures and Lc. lactis c2 phages (order Caudovirales, family Siphoviridae) in dairies using defined cultures. The 936 and Leuconostoc phages demonstrated limited diversity. Possible coinduction of temperate P335 prophages and satellite phages in one of the whey mixtures was also observed. IMPORTANCE: The method optimized in this study could provide an important basis for understanding the dynamics of the phage community (abundance, development, diversity, evolution, etc.) in dairies with different sizes, locations, and production strategies. It may also enable the discovery of previously unknown phages, which is crucial for the development of rapid molecular biology-based methods for phage burden surveillance systems. The dominance of only a few phage groups in the dairy environment signifies the depth of knowledge gained over

  9. Single Pass Albumin Dialysis-A Dose-Finding Study to Define Optimal Albumin Concentration and Dialysate Flow.

    PubMed

    Schmuck, Rosa Bianca; Nawrot, Gesa-Henrike; Fikatas, Panagiotis; Reutzel-Selke, Anja; Pratschke, Johann; Sauer, Igor Maximilian

    2017-02-01

    Several artificial liver support concepts have been evaluated both in vitro and clinically. Single pass albumin dialysis (SPAD) has been shown to be one of the simplest approaches for removing albumin-bound toxins and water-soluble substances. Being faced with acute liver failure (ALF) in everyday practice encouraged our attempt to define the optimal conditions for SPAD more precisely in a standardized experimental setup. Albumin concentration was adjusted to either 1%, 2%, 3%, or 4%, while the flow rate of the dialysate was kept constant at a speed of 700 mL/h. The flow rate of the dialysate was altered between 350, 500, 700, and 1000 mL/h, whereas the albumin concentration was continuously kept at 3%. This study revealed that the detoxification of albumin-bound substances could be improved by increasing the concentration of albumin in the dialysate with an optimum at 3%. A further increase of the albumin concentration to 4% did not lead to a significant increase in detoxification. Furthermore, we observed a gradual increase of the detoxification efficiency for albumin-bound substances, from 350 mL/h to 700 mL/h (for bilirubin) or 1000 mL/h (for bile acids) of dialysate flow. Water-soluble toxins (ammonia, creatinine, urea, uric acid) were removed almost completely, regardless of albumin concentration or flow rate. In conclusion, this study confirmed that SPAD is effective in eliminating albumin-bound as well as water-soluble toxins using a simulation of ALF. Furthermore, this project was successful in evaluating the most effective combination of albumin concentration (3%) and dialysate flow (700 mL/h-1000 mL/h) in SPAD for the first time. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  10. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  11. Sample Optimization for Five Plant-Parasitic Nematodes in an Alfalfa Field

    PubMed Central

    Goodell, P. B.; Ferris, H.

    1981-01-01

    A data base representing nematode counts and soil weight from 1,936 individual soil cores taken from a 7-ha alfalfa field was used to investigate sample optimization for five plant-parasitic nematodes: Meloidogyne arenaria, Pratylenchus minyus, Merlinius brevidens, Helicotylenchus digonicus, and Paratrichodorus minor. Sample plans were evaluated by the accuracy and reliability of their estimation of the population and by the cost of collecting, processing, and counting the samples. Interactive FORTRAN programs were constructed to simulate four collecting patterns: random; division of the field into square sub-units (cells); and division of the field into rectangular sub-units (strips) running in two directions. Depending on the pattern, sample numbers varied from 1 to 25 with each sample representing from 1 to 50 cores. Each pattern, sample, and core combination was replicated 50 times. Strip stratification north/south was the optimal sampling pattern in this field because it isolated a streak of fine-textured soil. The mathematical optimum was not found because of data range limitations. When practical economic time constraints (5 hr to collect, process, and count nematode samples) are placed on the optimization process, all species estimates deviate no more than 25% from the true mean. If accuracy constraints are placed on the process (no more than 15% deviation from true field mean), all species except Merlinius required less than 5 hr to complete the sample process. PMID:19300768

  12. Approximate Optimal Control of Affine Nonlinear Continuous-Time Systems Using Event-Sampled Neurodynamic Programming.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2017-03-01

    This paper presents an approximate optimal control of nonlinear continuous-time systems in affine form by using the adaptive dynamic programming (ADP) with event-sampled state and input vectors. The knowledge of the system dynamics is relaxed by using a neural network (NN) identifier with event-sampled inputs. The value function, which becomes an approximate solution to the Hamilton-Jacobi-Bellman equation, is generated by using an event-sampled NN approximator. Subsequently, the NN identifier and the approximated value function are utilized to obtain the optimal control policy. Both the identifier and value function approximator weights are tuned only at the event-sampled instants, leading to an aperiodic update scheme. A novel adaptive event sampling condition is designed to determine the sampling instants, such that the approximation accuracy and the stability are maintained. A positive lower bound on the minimum inter-sample time is guaranteed to avoid an accumulation point, and the dependence of the inter-sample time upon the NN weight estimates is analyzed. Local ultimate boundedness of the resulting nonlinear impulsive dynamical closed-loop system is shown. Finally, a numerical example is utilized to evaluate the performance of the near-optimal design. The net result is the design of an event-sampled ADP-based controller for nonlinear continuous-time systems.

  13. Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes.

    PubMed

    Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos

    2015-02-18

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction and a single distribution of flux values for all the reactions present which achieve this maximum value. However it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in 2 dimensions with and without the linear bias indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without

  14. GLLS for optimally sampled continuous dynamic system modeling: theory and algorithm.

    PubMed

    Feng, D; Ho, D; Lau, K K; Siu, W C

    1999-04-01

    The original generalized linear least squares (GLLS) algorithm was developed for non-uniformly sampled biomedical system parameter estimation using finely sampled instantaneous measurements (D. Feng, S.C. Huang, Z. Wang, D. Ho, An unbiased parametric imaging algorithm for non-uniformly sampled biomedical system parameter estimation, IEEE Trans. Med. Imag. 15 (1996) 512-518). This algorithm is particularly useful for image-wide generation of parametric images with positron emission tomography (PET), as it is computationally efficient and statistically reliable (D. Feng, D. Ho, Chen, K., L.C. Wu, J.K. Wang, R.S. Liu, S.H. Yeh, An evaluation of the algorithms for determining local cerebral metabolic rates of glucose using positron emission tomography dynamic data, IEEE Trans. Med. Imag. 14 (1995) 697-710). However, when dynamic PET image data are sampled according to the optimal image sampling schedule (OISS) to reduce memory and storage space (X. Li, D. Feng, K. Chen, Optimal image sampling schedule: A new effective way to reduce dynamic image storage space and functional image processing time, IEEE Trans. Med. Imag. 15 (1996) 710-718), only a few temporal image frames are recorded (e.g. only four images are recorded for the four parameter fluoro-deoxy-glucose (FDG) model). These image frames are recorded in terms of accumulated radio-activity counts and as a result, the direct application of GLLS is not reliable as instantaneous measurement samples can no longer be approximated by averaging of accumulated measurements over the sampling intervals. In this paper, we extend GLLS to OISS-GLLS which deals with the fewer accumulated measurement samples obtained from OISS dynamic systems. The theory and algorithm of this new technique are formulated and studied extensively. To investigate statistical reliability and computational efficiency of OISS-GLLS, a simulation study using dynamic PET data was performed. OISS-GLLS using 4-measurement samples was compared to the non

  15. Optimization of Sample Points for Monitoring Arable Land Quality by Simulated Annealing while Considering Spatial Variations

    PubMed Central

    Wang, Junxiao; Wang, Xiaorui; Zhou, Shenglu; Wu, Shaohua; Zhu, Yan; Lu, Chunfeng

    2016-01-01

    With China’s rapid economic development, the reduction in arable land has emerged as one of the most prominent problems in the nation. The long-term dynamic monitoring of arable land quality is important for protecting arable land resources. An efficient practice is to select optimal sample points while obtaining accurate predictions. To this end, the selection of effective points from a dense set of soil sample points is an urgent problem. In this study, data were collected from Donghai County, Jiangsu Province, China. The number and layout of soil sample points are optimized by considering the spatial variations in soil properties and by using an improved simulated annealing (SA) algorithm. The conclusions are as follows: (1) Optimization results in the retention of more sample points in the moderate- and high-variation partitions of the study area; (2) The number of optimal sample points obtained with the improved SA algorithm is markedly reduced, while the accuracy of the predicted soil properties is improved by approximately 5% compared with the raw data; (3) With regard to the monitoring of arable land quality, a dense distribution of sample points is needed to monitor the granularity. PMID:27706051
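    The core of the simulated-annealing selection described above can be sketched generically: repeatedly swap one retained sample point for an excluded one and accept worsening swaps with a temperature-dependent probability. The objective function (for example, a kriging-variance or prediction-error proxy) is a hypothetical placeholder here.

        import math, random

        def anneal_subset(points, k, objective, t0=1.0, cooling=0.995, n_iter=5000):
            # Minimize `objective` over subsets of size k drawn from `points`.
            current = random.sample(list(points), k)
            current_val = objective(current)
            best, best_val = list(current), current_val
            temperature = t0
            for _ in range(n_iter):
                candidate = list(current)
                swap_out = random.randrange(k)
                swap_in = random.choice([p for p in points if p not in current])
                candidate[swap_out] = swap_in
                val = objective(candidate)
                accept = val < current_val or \
                    random.random() < math.exp(-(val - current_val) / temperature)
                if accept:
                    current, current_val = candidate, val
                    if val < best_val:
                        best, best_val = list(candidate), val
                temperature *= cooling
            return best, best_val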

  16. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    PubMed

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  17. An algorithm for the weighting matrices in the sampled-data optimal linear regulator problem

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.; Caglayan, A. K.

    1976-01-01

    The sampled-data optimal linear regulator problem provides a means whereby a control designer can use an understanding of continuous optimal regulator design to produce a digital state variable feedback control law which satisfies continuous system performance specifications. A basic difficulty in applying the sampled-data regulator theory is the requirement that certain digital performance index weighting matrices, expressed as complicated functions of system matrices, be computed. Infinite series representations are presented for the weighting matrices of the time-invariant version of the optimal linear sampled-data regulator problem. Error bounds are given for estimating the effect of truncating the series expressions after a finite number of terms, and a method is described for their computer implementation. A numerical example is given to illustrate the results.
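    Numerically, integrals of the form Q_d = integral from 0 to T of exp(A't) Q exp(At) dt that arise for these weighting matrices can also be evaluated with Van Loan's matrix-exponential construction, a standard alternative to truncating the series; the sketch below covers only the state-weighting term and is illustrative rather than the report's algorithm.

        import numpy as np
        from scipy.linalg import expm

        def sampled_data_state_weight(A, Q, T):
            # Van Loan block matrix: expm of [[-A', Q], [0, A]] * T yields
            # exp(A*T) in the lower-right block and a block G such that
            # Q_d = exp(A*T)' @ G equals the desired integral.
            n = A.shape[0]
            M = np.block([[-A.T, Q], [np.zeros((n, n)), A]]) * T
            E = expm(M)
            Phi = E[n:, n:]          # exp(A*T)
            G = E[:n, n:]
            return Phi.T @ G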

  18. Sample size calculation for testing differences between cure rates with the optimal log-rank test.

    PubMed

    Wu, Jianrong

    2017-01-01

    In this article, sample size calculations are developed for use when the main interest is in the differences between the cure rates of two groups. Following the work of Ewell and Ibrahim, the asymptotic distribution of the weighted log-rank test is derived under the local alternative. The optimal log-rank test under the proportional distributions alternative is discussed, and sample size formulas for the optimal and standard log-rank tests are derived. Simulation results show that the proposed formulas provide adequate sample size estimation for trial designs and that the optimal log-rank test is more efficient than the standard log-rank test, particularly when both cure rates and percentages of censoring are small.

  19. Defining the Optimal Planning Target Volume in Image-Guided Stereotactic Radiosurgery of Brain Metastases: Results of a Randomized Trial

    SciTech Connect

    Kirkpatrick, John P.; Wang, Zhiheng; Sampson, John H.; McSherry, Frances; Herndon, James E.; Allen, Karen J.; Duffy, Eileen; Hoang, Jenny K.; Chang, Zheng; Yoo, David S.; Kelsey, Chris R.; Yin, Fang-Fang

    2015-01-01

    Purpose: To identify an optimal margin about the gross target volume (GTV) for stereotactic radiosurgery (SRS) of brain metastases, minimizing toxicity and local recurrence. Methods and Materials: Adult patients with 1 to 3 brain metastases less than 4 cm in greatest dimension, no previous brain radiation therapy, and Karnofsky performance status (KPS) above 70 were eligible for this institutional review board–approved trial. Individual lesions were randomized to 1- or 3-mm uniform expansion of the GTV defined on contrast-enhanced magnetic resonance imaging (MRI). The resulting planning target volume (PTV) was treated to 24, 18, or 15 Gy marginal dose for maximum PTV diameters less than 2, 2 to 2.9, and 3 to 3.9 cm, respectively, using a linear accelerator–based image-guided system. The primary endpoint was local recurrence (LR). Secondary endpoints included neurocognition (Mini-Mental State Examination, Trail Making Test Parts A and B), quality of life (Functional Assessment of Cancer Therapy-Brain), radionecrosis (RN), need for salvage radiation therapy, distant failure (DF) in the brain, and overall survival (OS). Results: Between February 2010 and November 2012, 49 patients with 80 brain metastases were treated. The median age was 61 years, the median KPS was 90, and the predominant histologies were non–small cell lung cancer (25 patients) and melanoma (8). Fifty-five, 19, and 6 lesions were treated to 24, 18, and 15 Gy, respectively. The PTV/GTV ratio, volume receiving 12 Gy or more, and minimum dose to PTV were significantly higher in the 3-mm group (all P<.01), and GTV was similar (P=.76). At a median follow-up time of 32.2 months, 11 patients were alive, with median OS 10.6 months. LR was observed in only 3 lesions (2 in the 1-mm group, P=.51), with 6.7% LR 12 months after SRS. Biopsy-proven RN alone was observed in 6 lesions (5 in the 3-mm group, P=.10). The 12-month DF rate was 45.7%. Three months after SRS, no significant change in

  20. Optimization of Proteomic Sample Preparation Procedures for Comprehensive Protein Characterization of Pathogenic Systems

    PubMed Central

    Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.

    2008-01-01

    Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792

  1. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  2. Sample Subset Optimization Techniques for Imbalanced and Ensemble Learning Problems in Bioinformatics Applications.

    PubMed

    Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y

    2014-03-01

    Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.

  3. XAFSmass: a program for calculating the optimal mass of XAFS samples

    NASA Astrophysics Data System (ADS)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  4. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater

    PubMed Central

    Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal

    2016-01-01

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016

  5. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater.

    PubMed

    Zahid, Erum; Hussain, Ijaz; Spöck, Gunter; Faisal, Muhammad; Shabbir, Javid; M AbdEl-Salam, Nasser; Hussain, Tajammal

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design.

  6. Algorithms for integration of stochastic differential equations using parallel optimized sampling in the Stratonovich calculus

    NASA Astrophysics Data System (ADS)

    Kiesewetter, Simon; Drummond, Peter D.

    2017-03-01

    A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.

  7. Optimal sample size allocation for Welch's test in one-way heteroscedastic ANOVA.

    PubMed

    Shieh, Gwowen; Jan, Show-Li

    2015-06-01

    The determination of an adequate sample size is a vital aspect in the planning stage of research studies. A prudent strategy should incorporate all of the critical factors and cost considerations into sample size calculations. This study concerns the allocation schemes of group sizes for Welch's test in a one-way heteroscedastic ANOVA. Optimal allocation approaches are presented for minimizing the total cost while maintaining adequate power and for maximizing power performance for a fixed cost. The commonly recommended ratio of sample sizes is proportional to the ratio of the population standard deviations or the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Detailed numerical investigations have shown that these usual allocation methods generally do not give the optimal solution. The suggested procedures are illustrated using an example of the cost-efficiency evaluation in multidisciplinary pain centers.
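    For reference, the commonly recommended ratios mentioned above (which the numerical results of the study show are generally not optimal) translate into the simple allocation rule sketched here, with group sizes proportional to the standard deviations and optionally divided by the square root of the unit sampling costs; all inputs are illustrative.

        import numpy as np

        def conventional_allocation(n_total, group_sds, unit_costs=None):
            s = np.asarray(group_sds, dtype=float)
            w = s if unit_costs is None else s / np.sqrt(np.asarray(unit_costs, dtype=float))
            return np.maximum(2, np.round(n_total * w / w.sum())).astype(int)

        # e.g. conventional_allocation(60, group_sds=[5.0, 10.0, 20.0])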

  8. OptSSeq: High-throughput sequencing readout of growth enrichment defines optimal gene expression elements for homoethanologenesis

    DOE PAGES

    Ghosh, Indro Neil; Landick, Robert

    2016-07-16

    The optimization of synthetic pathways is a central challenge in metabolic engineering. OptSSeq (Optimization by Selection and Sequencing) is one approach to this challenge. OptSSeq couples selection of optimal enzyme expression levels linked to cell growth rate with high-throughput sequencing to track enrichment of gene expression elements (promoters and ribosome-binding sites) from a combinatorial library. OptSSeq yields information on both optimal and suboptimal enzyme levels, and helps identify constraints that limit maximal product formation. Here we report a proof-of-concept implementation of OptSSeq using homoethanologenesis, a two-step pathway consisting of pyruvate decarboxylase (Pdc) and alcohol dehydrogenase (Adh) that converts pyruvate to ethanol and is naturally optimized in the bacterium Zymomonas mobilis. We used OptSSeq to determine optimal gene expression elements and enzyme levels for Z. mobilis Pdc, AdhA, and AdhB expressed in Escherichia coli. By varying both expression signals and gene order, we identified an optimal solution using only Pdc and AdhB. We resolved current uncertainty about the functions of the Fe2+-dependent AdhB and Zn2+-dependent AdhA by showing that AdhB is preferred over AdhA for rapid growth in both E. coli and Z. mobilis. Finally, by comparing predictions of growth-linked metabolic flux to enzyme synthesis costs, we established that optimal E. coli homoethanologenesis was achieved by our best pdc-adhB expression cassette and that the remaining constraints lie in the E. coli metabolic network or inefficient Pdc or AdhB function in E. coli. Furthermore, OptSSeq is a general tool for synthetic biology to tune enzyme levels in any pathway whose optimal function can be linked to cell growth or survival.

  9. An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.

    PubMed

    Kumar, Naresh

    2009-02-01

    This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed sampling design addresses the weaknesses of the sampling design that Kanaroglou et al. (2005) used for identifying 100 sites for capturing population exposure to NO(2) in Toronto, Canada. Their sampling design suffers from a number of weaknesses and fails to capture the spatial variability in NO(2) effectively. The demand surface they used is spatially autocorrelated and weighted by the population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available with the commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems using spatially autocorrelated data. A computer application (written in C++) that utilizes a spatial search algorithm was developed to implement the proposed sampling design. This design was implemented in three different urban environments - namely Cleveland, OH; Delhi, India; and Iowa City, IA - to identify optimal sample sites for monitoring airborne particulates.

  10. Multivariate optimization of molecularly imprinted polymer solid-phase extraction applied to parathion determination in different water samples.

    PubMed

    Alizadeh, Taher; Ganjali, Mohammad Reza; Nourozi, Parviz; Zare, Mashaalah

    2009-04-13

    In this work a parathion-selective molecularly imprinted polymer was synthesized and applied as a highly selective adsorbent material for parathion extraction and determination in aqueous samples. The method was based on the sorption of parathion in the MIP according to a simple batch procedure, followed by desorption using methanol and measurement with square wave voltammetry. Plackett-Burman and Box-Behnken designs were used to optimize the solid-phase extraction, in order to enhance the recovery and improve the pre-concentration factor. Using the screening design, the effect of six factors on the extraction recovery was investigated. These factors were: pH, stirring rate (rpm), sample volume (V(1)), eluent volume (V(2)), organic solvent content of the sample (org%) and extraction time (t). The response surface design was carried out considering the three main factors (V(2)), (V(1)) and (org%), which were found to be the main effects. The mathematical model for the recovery was obtained as a function of the mentioned main effects. Finally the main effects were adjusted according to the defined desirability function. It was found that recoveries of more than 95% could easily be obtained using the optimized method. Using the experimental conditions obtained in the optimization step, the method allowed selective parathion determination in the linear dynamic range of 0.20-467.4 microg L(-1), with a detection limit of 49.0 ng L(-1) and R.S.D. of 5.7% (n=5). The parathion content of water samples was successfully analyzed when evaluating the potential of the developed procedure.

  11. Optimization of Sampling Positions for Measuring Ventilation Rates in Naturally Ventilated Buildings Using Tracer Gas

    PubMed Central

    Shen, Xiong; Zong, Chao; Zhang, Guoqiang

    2012-01-01

    Finding out the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions to develop the model were chosen from the cross-section area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, the models in different cases of opening states and wind conditions were established and the optimum sampling position was obtained with a desirability level up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
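    A minimal sketch of fitting a third-order response-surface model of the kind mentioned above by ordinary least squares, using the full cubic basis in the two cross-section coordinates; variable names and data are placeholders rather than the study's implementation.

        import numpy as np

        def cubic_design_matrix(y, z):
            y, z = np.asarray(y, float), np.asarray(z, float)
            cols = [np.ones_like(y),
                    y, z,
                    y**2, y*z, z**2,
                    y**3, y**2*z, y*z**2, z**3]
            return np.column_stack(cols)

        def fit_response_surface(y, z, concentration):
            X = cubic_design_matrix(y, z)
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(concentration, float), rcond=None)
            return coeffs   # 10 coefficients of the third-order polynomial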

  12. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2017-09-27

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Importance Sampling in the Evaluation and Optimization of Buffered Failure Probability

    DTIC Science & Technology

    2015-07-01

    Harajli, Marwan M. (Graduate Student, Dept. of Civil and Environ..., Seattle, USA); Royset, Johannes O. (Associate Professor, Operations Research Dept., Naval Postgraduate School, Monterey, USA). ABSTRACT: Engineering design is... criterion is usually the failure probability. In this paper, we examine the buffered failure probability as an attractive alternative to the failure

  14. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    NASA Astrophysics Data System (ADS)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.

  15. A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization

    PubMed Central

    Tian, Shulin; Yang, Chenglin; Liu, Cheng

    2016-01-01

    The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally causes the omission of some failures during the selection, and this omission can pose serious risks in use because these failures will lead to serious propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model, and then we propose a new failure sample selection method on the basis of the number of SFPS. Compared with the traditional sampling plan, this method is able to improve the coverage of testing failure samples, increase the capacity for diagnosis, and decrease the risk of use. PMID:27738424

  16. Optimal sampling of antipsychotic medicines: a pharmacometric approach for clinical practice

    PubMed Central

    Perera, Vidya; Bies, Robert R; Mo, Gary; Dolton, Michael J; Carr, Vaughan J; McLachlan, Andrew J; Day, Richard O; Polasek, Thomas M; Forrest, Alan

    2014-01-01

    Aim To determine optimal sampling strategies to allow the calculation of clinical pharmacokinetic parameters for selected antipsychotic medicines using a pharmacometric approach. Methods This study utilized previous population pharmacokinetic parameters of the antipsychotic medicines aripiprazole, clozapine, olanzapine, perphenazine, quetiapine, risperidone (including 9-OH risperidone) and ziprasidone. D-optimality was utilized to identify time points which accurately predicted the pharmacokinetic parameters (and expected error) of each drug at steady-state. A standard two-stage population approach (STS) with MAP-Bayesian estimation was used to compare area under the concentration–time curves (AUC) generated from sparse optimal time points and rich extensive data. Monte Carlo Simulation (MCS) was used to simulate 1000 patients with population variability in pharmacokinetic parameters. Forward stepwise regression analysis was used to determine the most predictive time points of the AUC for each drug at steady-state. Results Three optimal sampling times were identified for each antipsychotic medicine. For aripiprazole, clozapine, olanzapine, perphenazine, risperidone, 9-OH risperidone, quetiapine and ziprasidone the CV% of the apparent clearance using optimal sampling strategies were 19.5, 8.6, 9.5, 13.5, 12.9, 10.0, 16.0 and 10.7, respectively. Using the MCS and linear regression approach to predict AUC, the recommended sampling windows were 16.5–17.5 h, 10–11 h, 23–24 h, 19–20 h, 16.5–17.5 h, 22.5–23.5 h, 5–6 h and 5.5–6.5 h, respectively. Conclusion This analysis provides important sampling information for future population pharmacokinetic studies and clinical studies investigating the pharmacokinetics of antipsychotic medicines. PMID:24773369

  17. Optimization of low-background alpha spectrometers for analysis of thick samples.

    PubMed

    Misiaszek, M; Pelczar, K; Wójcik, M; Zuzel, G; Laubenstein, M

    2013-11-01

    Results of alpha spectrometric measurements performed deep underground and above ground, with and without active veto, show that the underground measurement of thick samples is the most sensitive method due to the significant reduction of the muon-induced background. In addition, for some samples polonium diffusion requires an appropriate selection of the energy region in the registered spectrum. On the basis of computer simulations, the best counting conditions are selected for a thick lead sample in order to optimize the detection limit. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples

    NASA Astrophysics Data System (ADS)

    Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav

    2014-05-01

    Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in water with the ultra-low background liquid scintillation spectrometer Quantulus 1220, we optimized the sample/scintillant ratio, selected an appropriate scintillation cocktail by comparing efficiency, background and minimal detectable activity (MDA), and assessed the effects of chemi- and photoluminescence and of the scintillant/vial combination. The ASTM D4107-08 (2006) method had been applied successfully in our laboratory for two years. During our most recent sample preparation, however, a serious quench effect was noticed in the sample count rates, possibly caused by DMSO contamination. The goal of this paper is to describe the implementation in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which proved faster and simpler than the ASTM method while the DMSO problem in the apparatus is being resolved. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimized system, tritium levels were determined in Danube river samples and in several samples measured within an intercomparison with the Ruđer Bošković Institute (IRB).
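    The minimum detectable activity quoted above can be reproduced, at least to first order, with the widely used Currie approximation. The sketch below is illustrative only: the background rate, counting efficiency and water volume are placeholder values, not the calibration figures of the Quantulus 1220 set-up described in the abstract.

      import math

      def mda_bq_per_l(background_cpm, count_time_min, efficiency, sample_volume_l):
          """Currie-style minimum detectable activity (MDA) for an LSC measurement.

          background_cpm  : background count rate (counts per minute)
          count_time_min  : counting time (minutes)
          efficiency      : counting efficiency (counts per decay, 0..1)
          sample_volume_l : water volume in the vial (litres)
          """
          b_counts = background_cpm * count_time_min       # expected background counts
          ld_counts = 2.71 + 4.65 * math.sqrt(b_counts)    # Currie detection limit (counts)
          dpm = ld_counts / (efficiency * count_time_min)  # disintegrations per minute
          return dpm / 60.0 / sample_volume_l              # Bq per litre

      # Illustrative numbers only (not the paper's calibration):
      print(round(mda_bq_per_l(background_cpm=1.5, count_time_min=300,
                               efficiency=0.25, sample_volume_l=0.008), 2))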

  19. Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case

    PubMed Central

    Schmerling, Edward; Janson, Lucas; Pavone, Marco

    2015-01-01

    Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041

  20. Optimization of chemically defined cell culture media--replacing fetal bovine serum in mammalian in vitro methods.

    PubMed

    van der Valk, J; Brunner, D; De Smet, K; Fex Svenningsen, A; Honegger, P; Knudsen, L E; Lindl, T; Noraberg, J; Price, A; Scarino, M L; Gstraunthaler, G

    2010-06-01

    Quality assurance is becoming increasingly important. Good laboratory practice (GLP) and good manufacturing practice (GMP) are now established standards. The biomedical field aims at an increasing reliance on the use of in vitro methods. Cell and tissue culture methods are generally fast, cheap, reproducible and reduce the use of experimental animals. Good cell culture practice (GCCP) is an attempt to develop a common standard for in vitro methods. The implementation of the use of chemically defined media is part of the GCCP. This will decrease the dependence on animal serum, a supplement with an undefined and variable composition. Defined media supplements are commercially available for some cell types. However, information on the formulation by the companies is often limited and such supplements can therefore not be regarded as completely defined. The development of defined media is difficult and often takes place in isolation. A workshop was organised in 2009 in Copenhagen to discuss strategies to improve the development and use of serum-free defined media. In this report, the results from the meeting are discussed and the formulation of a basic serum-free medium is suggested. Furthermore, recommendations are provided to improve information exchange on newly developed serum-free media.

  1. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    The search for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control of the whole chain, from sampling to the measurement itself. In this paper we therefore focus on defining a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. This includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for results tied to parameters specific to PerkinElmer instruments, most of the results presented here can be extended to other counters.
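    The window optimization described above relies on maximizing a Figure of Merit; a common choice in liquid scintillation work is FOM = E^2/B (counting efficiency squared over background rate). The sketch below scans all contiguous channel windows of a toy spectrum for the window maximizing that ratio; the spectra, activity and live time are synthetic placeholders, not Tri-Carb data.

      import numpy as np

      def best_window(source_counts, background_counts, live_time_s, activity_bq):
          """Scan all [lo, hi) channel windows and return the one maximizing
          FOM = efficiency**2 / background_rate (a common LSC criterion)."""
          src_cum = np.concatenate(([0.0], np.cumsum(source_counts)))
          bkg_cum = np.concatenate(([0.0], np.cumsum(background_counts)))
          n = len(source_counts)
          best = (0.0, 0, n)
          for lo in range(n):
              for hi in range(lo + 1, n + 1):
                  bkg = bkg_cum[hi] - bkg_cum[lo]
                  sig = (src_cum[hi] - src_cum[lo]) - bkg
                  if sig <= 0 or bkg <= 0:
                      continue
                  eff = sig / (activity_bq * live_time_s)    # counting efficiency
                  fom = eff ** 2 / (bkg / live_time_s)
                  if fom > best[0]:
                      best = (fom, lo, hi)
          return best  # (FOM, first channel, last channel)

      # Toy 256-channel spectra standing in for a calibration standard and a blank:
      rng = np.random.default_rng(0)
      src = rng.poisson(5.0, 256) + rng.poisson(50.0, 256) * (np.arange(256) < 100)
      blk = rng.poisson(5.0, 256)
      print(best_window(src, blk, live_time_s=3600, activity_bq=10.0))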

  2. Use of passive diffusion sampling method for defining NO2 concentrations gradient in São Paulo, Brazil

    PubMed Central

    da Silva, Agnes Soares; Cardoso, Maria Regina; Meliefste, Kees; Brunekreef, Bert

    2006-01-01

    Background Air pollution in São Paulo is constantly being measured by the State of Sao Paulo Environmental Agency; however, there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution within different areas in São Paulo to provide information for future epidemiological studies. Methods We measured NO2 using Palmes' diffusion tubes in 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo in two one-week periods (July and August 2000). In each study period, two tubes were installed in each site, and two additional tubes were installed in 10 control sites. Results Average NO2 concentrations were related to traffic density, observed on the spot, to number of vehicles counted, and to traffic density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63 μg/m3 and 49 μg/m3 in the first and second periods, respectively. Dividing the sites by the observed traffic density, we found: heavy traffic (n = 17): 64 μg/m3 (95% CI: 59 μg/m3 – 68 μg/m3); local traffic (n = 16): 48 μg/m3 (95% CI: 44 μg/m3 – 52 μg/m3) (p < 0.001). Conclusion The differences in NO2 levels between heavy and local traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. Number of vehicles counted, traffic density observed on the spot and traffic density strata defined by the CET might be used as a proxy for traffic exposure in São Paulo when more accurate measurements are not available. PMID:16772044

  3. An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning

    PubMed Central

    Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco

    2015-01-01

    Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130

  4. Optimizing sampling device for the fecal immunochemical test increases colonoscopy yields in colorectal cancer screening.

    PubMed

    Huang, Yanqin; Li, Qilong; Ge, Weiting; Hu, Yue; Cai, Shanrong; Yuan, Ying; Zhang, Suzhan; Zheng, Shu

    2016-03-01

    The fecal immunochemical test (FIT) that quantifies hemoglobin concentration is reported to be better than qualitative FIT, but the reasons for its superiority have not been explained. To evaluate and understand the superiority of quantitative FIT, a representative randomly selected population (n=2355) in Jiashan County, China, aged 40-74 years was invited for colorectal cancer screening in 2012. Three fecal samples were collected from each participant by one optimized and two common sampling devices, and then tested by both quantitative and qualitative FITs. Colonoscopy was provided independently to all participants. The performances of five featured screening strategies were compared. A total of 1020 participants were eligible. For screening advanced neoplasia, the positive predictive value (PPV) and the specificity of the strategy that tested one sample dissolved in an optimized device by quantitative FIT [PPV=40.8%, 95% confidence interval (CI): 27.1-54.6; specificity=96.8%, 95% CI: 95.7-98.0] were significantly improved over the strategy that tested one sample dissolved in the common device by qualitative FIT (PPV=14.1%, 95% CI: 8.2-19.9; specificity=87.9%, 95% CI: 85.8-89.9), whereas the sensitivity did not differ (39.2 and 37.3%, P=0.89). A similar disparity in performance was observed between the strategies using qualitative FIT to test one sample dissolved in optimized (PPV=29.5%, 95% CI: 18.1-41.0; specificity=95.3%, 95% CI: 94.0-96.7) versus common sampling devices. High sensitivity for advanced neoplasia was observed in the strategy that tested two samples by qualitative FIT (52.9%, 95% CI: 39.2-66.6). Quantitative FIT is better than qualitative FIT for screening advanced colorectal neoplasia. However, the fecal sampling device might contribute most significantly toward the superiority of quantitative FIT.

  5. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation into field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs, and accuracy of the field interpolation. This investigation compared two different methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air-parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated into field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method yields 32.6% lower interpolation error than the grid sampling method. We derived the relationship between the interpolation error and the sampling size (the number of sampling points). According to this relationship, the sampling size has an optimal value, and the maximum sampling size can be determined from the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.
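    The comparison described above can be prototyped with a Gaussian-process regressor standing in for ordinary Kriging. The sketch below builds a synthetic temperature field, interpolates it from a regular grid of sensors and from a set of gradient-weighted sensor locations, and reports the interpolation error of each design; the field, kernel and sample counts are illustrative assumptions, not the measured office or cabin data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # Synthetic "indoor temperature" field on a 40 x 40 grid (placeholder for real data).
      x = np.linspace(0, 1, 40)
      X, Y = np.meshgrid(x, x)
      field = 22 + 2 * np.exp(-((X - 0.7) ** 2 + (Y - 0.3) ** 2) / 0.05) + 0.5 * X
      pts = np.column_stack([X.ravel(), Y.ravel()])
      vals = field.ravel()

      def interp_error(sample_idx):
          """Fit a GP (a stand-in for ordinary Kriging) to the sampled points and
          report the RMS interpolation error over the full field."""
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                        alpha=1e-6, normalize_y=True)
          gp.fit(pts[sample_idx], vals[sample_idx])
          return np.sqrt(np.mean((gp.predict(pts) - vals) ** 2))

      n_samples = 25
      # Grid sampling: a regular 5 x 5 subset of locations.
      gi = np.linspace(0, 39, 5).astype(int)
      grid_idx = (gi[:, None] * 40 + gi[None, :]).ravel()
      # Gradient-based sampling: prefer locations where the field changes fastest.
      gy, gx = np.gradient(field)
      grad_idx = np.argsort(np.hypot(gx, gy).ravel())[-n_samples:]
      print("grid RMSE:", interp_error(grid_idx), "gradient RMSE:", interp_error(grad_idx))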

  6. Sampling scheme optimization for diffuse optical tomography based on data and image space rankings

    NASA Astrophysics Data System (ADS)

    Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong

    2016-10-01

    We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image space resolution. The key idea of the work is to weight each data measurement (rows of the sensitivity matrix) and, similarly, each unknown image basis element (columns of the sensitivity matrix) according to their contribution to the rank of the sensitivity matrix. The proposed metrics offer a perspective on the data sampling and provide an efficient way of optimizing the sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality with a much reduced set of measurements.
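    The paper's exact metrics are not reproduced here, but the idea of weighting rows (measurements) and columns (image basis elements) of the sensitivity matrix by their contribution to its dominant singular subspace can be illustrated with leverage-score-style quantities computed from an SVD. The sensitivity matrix below is a random placeholder.

      import numpy as np

      def rank_contributions(J, k=None):
          """Score each measurement (row) and each image basis element (column) of a
          sensitivity matrix J by how strongly it projects onto the dominant singular
          subspace -- an illustrative leverage-score proxy for 'contribution to rank'."""
          U, s, Vt = np.linalg.svd(J, full_matrices=False)
          if k is None:
              k = int(np.sum(s > 1e-8 * s[0]))            # effective numerical rank
          row_scores = np.sum(U[:, :k] ** 2, axis=1)      # data-space leverage scores
          col_scores = np.sum(Vt[:k, :] ** 2, axis=0)     # image-space leverage scores
          return row_scores, col_scores

      # Toy sensitivity matrix: 64 source-detector pairs x 100 voxels (random placeholder).
      rng = np.random.default_rng(1)
      J = rng.standard_normal((64, 100)) * rng.uniform(0.1, 1.0, size=(64, 1))
      rows, cols = rank_contributions(J)
      keep = np.argsort(rows)[-32:]                       # retain the 32 most informative pairs
      print("kept measurements:", np.sort(keep))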

  7. Optimal Sampling-Based Motion Planning under Differential Constraints: the Drift Case with Linear Affine Dynamics

    PubMed Central

    Schmerling, Edward; Janson, Lucas; Pavone, Marco

    2015-01-01

    In this paper we provide a thorough, rigorous theoretical framework to assess optimality guarantees of sampling-based algorithms for drift control systems: systems that, loosely speaking, can not stop instantaneously due to momentum. We exploit this framework to design and analyze a sampling-based algorithm (the Differential Fast Marching Tree algorithm) that is asymptotically optimal, that is, it is guaranteed to converge, as the number of samples increases, to an optimal solution. In addition, our approach allows us to provide concrete bounds on the rate of this convergence. The focus of this paper is on mixed time/control energy cost functions and on linear affine dynamical systems, which encompass a range of models of interest to applications (e.g., double-integrators) and represent a necessary step to design, via successive linearization, sampling-based and provably-correct algorithms for non-linear drift control systems. Our analysis relies on an original perturbation analysis for two-point boundary value problems, which could be of independent interest. PMID:26997749

  8. Optimizing Spatio-Temporal Sampling Designs of Synchronous, Static, or Clustered Measurements

    NASA Astrophysics Data System (ADS)

    Helle, Kristina; Pebesma, Edzer

    2010-05-01

    When sampling spatio-temporal random variables, the cost of a measurement may differ according to the setup of the whole sampling design: static measurements, i.e. repeated measurements at the same location, synchronous measurements or clustered measurements may be cheaper per measurement than completely individual sampling. Such "grouped" measurements may, however, not be as good as individually chosen ones because of redundancy. Often, the overall cost rather than the total number of measurements is fixed. A sampling design with grouped measurements may allow for a larger number of measurements, thus outweighing the drawback of redundancy. The focus of this paper is to include the tradeoff between the number of measurements and the freedom of their location in sampling design optimisation. For simple cases, optimal sampling designs may be fully determined. To predict e.g. the mean over a spatio-temporal field having known covariance, the optimal sampling design often is a grid with density determined by the sampling costs [1, Ch. 15]. For arbitrary objective functions, sampling designs can be optimised by relocating single measurements, e.g. by Spatial Simulated Annealing [2]. However, this does not allow taking advantage of lower costs when using grouped measurements. We introduce a heuristic that optimises an arbitrary objective function of sampling designs, including static, synchronous, or clustered measurements, to obtain better results at a given sampling budget. Given the cost for a measurement, either within a group or individually, the algorithm first computes affordable sampling design configurations. The number of individual measurements as well as the kind and number of grouped measurements are determined. Random locations and dates are assigned to the measurements. Spatial Simulated Annealing is used on each of these initial sampling designs (in parallel) to improve them. In grouped measurements, either the whole group is moved or single measurements are relocated within the group.
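    A minimal sketch of the Spatial Simulated Annealing step referred to above: one measurement location is perturbed at a time and the change is accepted with a temperature-controlled Metropolis probability. The objective used here is a simple space-filling criterion standing in for the prediction-variance objectives of the paper; all settings are illustrative.

      import numpy as np

      def ssa(n_points, objective, n_iter=5000, step=0.05, t0=1.0, seed=0):
          """Spatial Simulated Annealing: perturb one location at a time and accept
          worse designs with a temperature-controlled Metropolis probability."""
          rng = np.random.default_rng(seed)
          design = rng.uniform(0, 1, size=(n_points, 2))
          cur_val = objective(design)
          best, best_val = design.copy(), cur_val
          for it in range(n_iter):
              temp = t0 * (1.0 - it / n_iter)              # linear cooling schedule
              cand = design.copy()
              i = rng.integers(n_points)
              cand[i] = np.clip(cand[i] + rng.normal(0, step, 2), 0, 1)
              val = objective(cand)
              if val < cur_val or rng.random() < np.exp(-(val - cur_val) / max(temp, 1e-9)):
                  design, cur_val = cand, val
                  if val < best_val:
                      best, best_val = cand.copy(), val
          return best, best_val

      def neg_mean_nn_distance(design):
          """Space-filling objective (to minimize): negative mean distance to the
          nearest other point; a placeholder for a kriging-variance objective."""
          d = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)
          return -d.min(axis=1).mean()

      print(ssa(20, neg_mean_nn_distance)[1])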

  9. An individual urinary proteome analysis in normal human beings to define the minimal sample number to represent the normal urinary proteome

    PubMed Central

    2012-01-01

    Background The urinary proteome has been widely used for biomarker discovery. A urinary proteome database from normal humans can provide a background for discovery proteomics and candidate proteins/peptides for targeted proteomics. Therefore, it is necessary to define the minimum number of individuals required for sampling to represent the normal urinary proteome. Methods In this study, inter-individual and inter-gender variations of the urinary proteome were taken into consideration to achieve a representative database. An individual analysis was performed on overnight urine samples from 20 normal volunteers (10 males and 10 females) by 1DLC-MS/MS. To obtain a representative result for each sample, a replicate 1DLC-MS/MS analysis was performed. The minimal sample number was estimated by statistical analysis. Results For qualitative analysis, less than 5% of new proteins/peptides were identified in a male/female normal group by adding a new sample when the sample number exceeded nine. In addition, in a normal group, the percentage of newly identified proteins/peptides was less than 5% upon adding a new sample when the sample number reached 10. Furthermore, a statistical analysis indicated that urinary proteomes from normal males and females showed different patterns. For quantitative analysis, the variation of protein abundance was defined by spectrum counting and western blotting methods, and the minimal sample number for quantitative proteomic analysis was then identified. Conclusions For qualitative analysis, when considering the inter-individual and inter-gender variations, the minimum sample number is 10 and requires a balanced number of males and females in order to obtain a representative normal human urinary proteome. For quantitative analysis, the minimal sample number is much greater than that for qualitative analysis and depends on the experimental methods used for quantification. PMID:23170922

  10. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Optimized probability sampling of study sites to improve generalizability in a multisite intervention trial.

    PubMed

    Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D

    2010-01-01

    Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
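    The enumerate-filter-draw logic described above is straightforward to script. In the sketch below the inclusion criterion is a hypothetical placeholder (a feasible combined population range); only the overall pattern of enumerating combinations, filtering them, and selecting one at random mirrors the protocol.

      import itertools
      import random

      def select_representative_sites(eligible_sites, k, meets_inclusion_criteria, seed=2010):
          """Enumerate all k-site combinations, keep those meeting the inclusion
          criteria, and draw one combination at random (unbiased among feasible sets)."""
          feasible = [c for c in itertools.combinations(eligible_sites, k)
                      if meets_inclusion_criteria(c)]
          random.seed(seed)
          return random.choice(feasible), len(feasible)

      # Hypothetical example: 30 interested health departments, pick 6 whose combined
      # population stays within a feasible recruitment range (placeholder criterion).
      sites = [{"id": i, "population": 20000 + 5000 * i} for i in range(30)]
      ok = lambda combo: 400000 <= sum(s["population"] for s in combo) <= 600000
      chosen, n_feasible = select_representative_sites(sites, 6, ok)
      print(n_feasible, [s["id"] for s in chosen])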

  12. Time optimization of (90)Sr measurements: Sequential measurement of multiple samples during ingrowth of (90)Y.

    PubMed

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-04-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less than when using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.

  13. Sample volume optimization for radon-in-water detection by liquid scintillation counting.

    PubMed

    Schubert, Michael; Kopitz, Juergen; Chałupnik, Stanisław

    2014-08-01

    Radon is used as an environmental tracer in a wide range of applications, particularly in aquatic environments. If liquid scintillation counting (LSC) is used as the detection method, the radon has to be transferred from the water sample into a scintillation cocktail. Whereas the volume of the cocktail is generally given by the size of standard LSC vials (20 ml), the water sample volume is not specified. The aim of the study was to optimize the water sample volume, i.e. to minimize it without risking a significant decrease in LSC count rate and hence in counting statistics. An equation is introduced that allows calculating the ²²²Rn concentration initially present in a water sample as a function of the volumes of the water sample, sample flask headspace and scintillation cocktail, the applicable radon partition coefficient, and the detected count-rate value. It was shown that water sample volumes exceeding about 900 ml do not result in a significant increase in count rate and hence counting statistics. On the other hand, sample volumes considerably smaller than about 500 ml lead to noticeably lower count rates (and poorer counting statistics). Thus, water sample volumes of about 500-900 ml should be chosen for LSC radon-in-water detection if 20 ml vials are applied.

  14. Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

    SciTech Connect

    BILISOLY, ROGER L.; MCKENNA, SEAN A.

    2003-01-01

    Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
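    The sum- and product-of-eigenvalues criteria mentioned above can be illustrated by conditioning a Gaussian random field on a candidate transect and examining the eigenvalues of the resulting prediction-error covariance on a reference grid. The exponential covariance model, its parameters and the two transect layouts below are assumptions for illustration, not the site data.

      import numpy as np

      def exp_cov(a, b, sill=1.0, rng_len=20.0):
          """Exponential covariance model C(h) = sill * exp(-h / range)."""
          h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
          return sill * np.exp(-h / rng_len)

      def design_scores(samples, grid, nugget=1e-6):
          """Eigenvalues of the simple-kriging prediction-error covariance on the grid:
          Sigma_err = C_gg - C_gs C_ss^{-1} C_sg.  Smaller sum / log-product = better design."""
          C_ss = exp_cov(samples, samples) + nugget * np.eye(len(samples))
          C_gs = exp_cov(grid, samples)
          C_gg = exp_cov(grid, grid)
          err = C_gg - C_gs @ np.linalg.solve(C_ss, C_gs.T)
          ev = np.clip(np.linalg.eigvalsh(err), 0, None)
          return ev.sum(), np.sum(np.log(ev + 1e-12))

      # Reference grid over a 100 m x 100 m site and two candidate transect designs.
      g = np.linspace(0, 100, 15)
      grid = np.array([[x, y] for x in g for y in g])
      straight = np.column_stack([np.full(40, 50.0), np.linspace(0, 100, 40)])
      meander = np.column_stack([50 + 25 * np.sin(np.linspace(0, 4 * np.pi, 40)),
                                 np.linspace(0, 100, 40)])
      print("straight:", design_scores(straight, grid))
      print("meander: ", design_scores(meander, grid))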

  15. Defining the immunogenicity and antigenicity of HLA epitopes is crucial for optimal epitope matching in clinical renal transplantation.

    PubMed

    Kramer, C S M; Roelen, D L; Heidt, S; Claas, F H J

    2017-07-01

    Transplantation of a human leukocyte antigen (HLA)-mismatched graft can lead to the development of donor-specific antibodies (DSA), which can result in antibody-mediated rejection and graft loss as well as complicate repeat transplantation. These DSA are induced by foreign epitopes present on the mismatched HLA antigens of the donor. However, not all epitopes appear to be equally effective in their ability to induce DSA. Understanding the characteristics of HLA epitopes is crucial for optimal epitope matching in clinical transplantation. In this review, the latest insights on HLA epitopes are described with a special focus on the definition of immunogenicity and antigenicity of HLA epitopes. Furthermore, the use of this knowledge to prevent HLA antibody formation and to select the optimal donor for sensitised transplant candidates will be discussed. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Enrichment of single neurons and defined brain regions from human brain tissue samples for subsequent proteome analysis.

    PubMed

    Molina, Mariana; Steinbach, Simone; Park, Young Mok; Yun, Su Yeong; Di Lorenzo Alho, Ana Tereza; Heinsen, Helmut; Grinberg, Lea T; Marcus, Katrin; Leite, Renata E Paraizo; May, Caroline

    2015-07-01

    Brain function in normal aging and neurological diseases has long been a subject of interest. With current technology, it is possible to go beyond descriptive analyses to characterize brain cell populations at the molecular level. However, the brain comprises over 100 billion highly specialized cells, and it is a challenge to discriminate different cell groups for analyses. Isolating intact neurons is not feasible with traditional methods, such as tissue homogenization techniques. The advent of laser microdissection techniques promises to overcome previous limitations in the isolation of specific cells. Here, we provide a detailed protocol for isolating and analyzing neurons from postmortem human brain tissue samples. We describe a workflow for successfully freezing, sectioning and staining tissue for laser microdissection. This protocol was validated by mass spectrometric analysis. Isolated neurons can also be employed for western blotting or PCR. This protocol will enable further examinations of brain cell-specific molecular pathways and aid in elucidating distinct brain functions.

  17. Additive SMILES-based optimal descriptors in QSAR modelling bee toxicity: Using rare SMILES attributes to define the applicability domain.

    PubMed

    Toropov, A A; Benfenati, E

    2008-05-01

    The additive SMILES-based optimal descriptors have been used for modelling bee toxicity. The influence of the relative prevalence of SMILES attributes in the training and test sets on the models for bee toxicity has been analysed. Avoiding the use of rare attributes improves the statistical characteristics of the model on the external test set. The possibility of using the probability of the presence of SMILES attributes in the training and test sets for a rational definition of the applicability domain is discussed.

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
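    The following sketch is not the authors' VISSA implementation, but it illustrates the weighted binary matrix sampling idea: random variable subsets are drawn with per-variable inclusion weights, sub-models are scored by cross-validation, and the weights are updated from the best-performing subsets so that the variable space shrinks. The data and model choice are synthetic placeholders.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def wbms_selection(X, y, n_submodels=200, top_frac=0.1, n_rounds=10, seed=0):
          """Illustrative weighted-binary-matrix-sampling loop: draw random variable
          subsets with per-variable inclusion weights, score sub-models by CV, and
          raise the weights of variables that appear in the best sub-models."""
          rng = np.random.default_rng(seed)
          p = X.shape[1]
          w = np.full(p, 0.5)                              # inclusion probabilities
          for _ in range(n_rounds):
              masks = rng.random((n_submodels, p)) < w     # weighted binary matrix
              masks[masks.sum(axis=1) == 0, 0] = True      # avoid empty sub-models
              scores = np.array([
                  cross_val_score(LinearRegression(), X[:, m], y, cv=5,
                                  scoring="neg_mean_squared_error").mean()
                  for m in masks])
              best = masks[np.argsort(scores)[-int(top_frac * n_submodels):]]
              w = best.mean(axis=0)                        # new inclusion frequencies
          return np.where(w > 0.5)[0]

      # Synthetic calibration data: 5 informative variables out of 40.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((100, 40))
      y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + 0.1 * rng.standard_normal(100)
      print(wbms_selection(X, y))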

  19. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples

    PubMed Central

    Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10^6 to 10^0 leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084

  20. Pharmacokinetic Modeling and Optimal Sampling Strategies for Therapeutic Drug Monitoring of Rifampin in Patients with Tuberculosis

    PubMed Central

    Sturkenboom, Marieke G. G.; Mulder, Leonie W.; de Jager, Arthur; van Altena, Richard; Aarnoutse, Rob E.; de Lange, Wiel C. M.; Proost, Johannes H.; Kosterink, Jos G. W.; van der Werf, Tjip S.

    2015-01-01

    Rifampin, together with isoniazid, has been the backbone of the current first-line treatment of tuberculosis (TB). The ratio of the area under the concentration-time curve from 0 to 24 h (AUC0–24) to the MIC is the best predictive pharmacokinetic-pharmacodynamic parameter for determinations of efficacy. The objective of this study was to develop an optimal sampling procedure based on population pharmacokinetics to predict AUC0–24 values. Patients received rifampin orally once daily as part of their anti-TB treatment. A one-compartmental pharmacokinetic population model with first-order absorption and lag time was developed using observed rifampin plasma concentrations from 55 patients. The population pharmacokinetic model was developed using an iterative two-stage Bayesian procedure and was cross-validated. Optimal sampling strategies were calculated using Monte Carlo simulation (n = 1,000). The geometric mean AUC0–24 value was 41.5 (range, 13.5 to 117) mg · h/liter. The median time to maximum concentration of drug in serum (Tmax) was 2.2 h, ranging from 0.4 to 5.7 h. This wide range indicates that obtaining a concentration level at 2 h (C2) would not capture the peak concentration in a large proportion of the population. Optimal sampling using concentrations at 1, 3, and 8 h postdosing was considered clinically suitable with an r2 value of 0.96, a root mean squared error value of 13.2%, and a prediction bias value of −0.4%. This study showed that the rifampin AUC0–24 in TB patients can be predicted with acceptable accuracy and precision using the developed population pharmacokinetic model with optimal sampling at time points 1, 3, and 8 h. PMID:26055359
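    The kind of Monte Carlo check described above can be sketched with a single-dose one-compartment model with first-order absorption and lag time: simulate a population, take the 1, 3 and 8 h concentrations, and regress the reference AUC0-24 on them. The parameter values and variabilities below are illustrative placeholders, not the population estimates reported in the study, and a steady-state model would be needed for a faithful reproduction.

      import numpy as np

      def conc(t, dose, CL, V, ka, tlag):
          """One-compartment oral model with first-order absorption and lag time
          (single dose, complete bioavailability assumed; illustration only)."""
          ke = CL / V
          tt = np.clip(t - tlag, 0.0, None)
          return (dose * ka / (V * (ka - ke))) * (np.exp(-ke * tt) - np.exp(-ka * tt))

      rng = np.random.default_rng(0)
      n, dose = 1000, 600.0                                  # e.g. a 600 mg oral dose
      # Log-normal between-subject variability around illustrative typical values.
      CL = 10.0 * np.exp(0.3 * rng.standard_normal(n))       # clearance, L/h
      V = 50.0 * np.exp(0.2 * rng.standard_normal(n))        # volume, L
      ka = 1.5 * np.exp(0.3 * rng.standard_normal(n))        # absorption rate, 1/h
      tlag = 0.5 * np.exp(0.2 * rng.standard_normal(n))      # lag time, h

      rich_t = np.linspace(0.0, 24.0, 97)                    # dense 0-24 h grid
      profiles = np.array([conc(rich_t, dose, *p) for p in zip(CL, V, ka, tlag)])
      dt = rich_t[1] - rich_t[0]
      auc_rich = 0.5 * dt * (profiles[:, :-1] + profiles[:, 1:]).sum(axis=1)  # reference AUC0-24

      sparse_t = np.array([1.0, 3.0, 8.0])                   # the study's optimal sampling times
      C_sparse = np.array([conc(sparse_t, dose, *p) for p in zip(CL, V, ka, tlag)])
      X = np.column_stack([np.ones(n), C_sparse])            # linear limited-sampling model
      coef, *_ = np.linalg.lstsq(X, auc_rich, rcond=None)
      pred = X @ coef
      r2 = 1.0 - ((auc_rich - pred) ** 2).sum() / ((auc_rich - auc_rich.mean()) ** 2).sum()
      print("limited-sampling R^2:", round(float(r2), 3))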

  1. Optimal sample sizes for Welch's test under various allocation and cost considerations.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2011-12-01

    The issue of the sample size necessary to ensure adequate statistical power has been the focus of considerable attention in scientific research. Conventional presentations of sample size determination do not consider budgetary and participant allocation scheme constraints, although there is some discussion in the literature. The introduction of additional allocation and cost concerns complicates study design, although the resulting procedure permits a practical treatment of sample size planning. This article presents exact techniques for optimizing sample size determinations in the context of Welch's (Biometrika, 29, 350-362, 1938) test of the difference between two means under various design and cost considerations. The allocation schemes include cases in which (1) the ratio of group sizes is given and (2) one sample size is specified. The cost implications suggest optimally assigning subjects (1) to attain maximum power performance for a fixed cost and (2) to meet a designated power level for the least cost. The proposed methods provide useful alternatives to the conventional procedures and can be readily implemented with the developed R and SAS programs that are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
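    The cost-constrained allocation problem described above can be illustrated with the standard noncentral-t power approximation for Welch's test: enumerate affordable group sizes for a fixed budget and keep the most powerful pair. The costs, effect size and standard deviations below are hypothetical, and the exhaustive search stands in for the exact techniques of the article.

      from math import sqrt
      from scipy import stats

      def welch_power(n1, n2, delta, sd1, sd2, alpha=0.05):
          """Approximate power of the two-sided Welch test for a mean difference delta."""
          se2 = sd1 ** 2 / n1 + sd2 ** 2 / n2
          df = se2 ** 2 / ((sd1 ** 2 / n1) ** 2 / (n1 - 1) + (sd2 ** 2 / n2) ** 2 / (n2 - 1))
          nc = delta / sqrt(se2)                             # noncentrality parameter
          tcrit = stats.t.ppf(1 - alpha / 2, df)
          return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

      def best_allocation(budget, cost1, cost2, delta, sd1, sd2):
          """Search all affordable (n1, n2) pairs and keep the most powerful one."""
          best = (0.0, None)
          for n1 in range(2, int(budget // cost1) + 1):
              n2 = int((budget - n1 * cost1) // cost2)
              if n2 >= 2:
                  p = welch_power(n1, n2, delta, sd1, sd2)
                  if p > best[0]:
                      best = (p, (n1, n2))
          return best

      # Hypothetical costs: group 1 subjects cost 3 units, group 2 subjects cost 1 unit.
      print(best_allocation(budget=300, cost1=3.0, cost2=1.0, delta=0.5, sd1=1.0, sd2=1.5))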

  2. A preliminary evaluation of the validity of binge-eating disorder defining features in a community-based sample.

    PubMed

    Klein, Kelly M; Forney, K Jean; Keel, Pamela K

    2016-05-01

    Little empirical attention has been paid to the DSM-5 definition of binge-eating disorder (BED), particularly to the associated features of binge episodes. The present study sought to determine how the associated features and undue influence of weight/shape on self-evaluation contribute to evidence of a clinically significant eating disorder. Secondary analyses were conducted on data (N = 80; 76.3% women, 76.3% Caucasian, ages 18-43) collected through an epidemiological study of eating patterns. Descriptive statistics were used to report the sample prevalence of the features, independently and in combination. Correlations and alpha reliability were employed to examine relationships among associated features, distress regarding bingeing, and clinical diagnosis. Regression models and receiver-operating characteristic (ROC) curves were used to determine the utility of the features for explaining variance in distress. Internal consistency reliability for indicators was low, and several features demonstrated low or nonsignificant associations with distress and diagnosis. Feeling disgusted/depressed/guilty was the only unique predictor of distress (p = 0.001). For the ROC curves, three features was the best threshold for predicting distress. Results support the need to refine the features to ensure better detection of clinically significant eating pathology for research inclusion and treatment of the illness. © 2015 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:524-528).

  3. A Preliminary Evaluation of the Validity of Binge Eating Disorder Defining Features in a Community-Based Sample

    PubMed Central

    Klein, Kelly M.; Forney, K. Jean; Keel, Pamela K.

    2015-01-01

    Objective Little empirical attention has been paid to the DSM-5 definition of Binge Eating Disorder (BED), particularly to the associated features of binge episodes. The present study sought to determine how the associated features and undue influence of weight/shape on self-evaluation contribute to evidence of a clinically significant eating disorder. Method Secondary analyses were conducted on data (N = 80; 76.3% women, 76.3% Caucasian, ages 18–43) collected through an epidemiological study of eating patterns. Descriptive statistics were used to report the sample prevalence of the features, independently and in combination. Correlations and alpha reliability were employed to examine relationships among associated features, distress regarding bingeing, and clinical diagnosis. Regression models and receiver-operating characteristic (ROC) curves were used to determine the utility of the features for explaining variance in distress. Results Internal consistency reliability for indicators was low, and several features demonstrated low or non-significant associations with distress and diagnosis. Feeling disgusted/depressed/guilty was the only unique predictor of distress (p = 0.001). For the ROC curves, three features was the best threshold for predicting distress. Discussion Results support the need to refine the features to ensure better detection of clinically significant eating pathology for research inclusion and treatment of the illness. PMID:26607858

  4. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365

  5. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.

  6. Optimal adaptive group sequential design with flexible timing of sample size determination.

    PubMed

    Cui, Lu; Zhang, Lanju; Yang, Bo

    2017-04-26

    Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows a one-time sample size determination either before the start of, or in the mid-course of, a clinical study. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight into re-estimation design and helps to address common confusion and misunderstanding. Issues in designing a flexible sample size trial, including design objective, performance evaluation and implementation, are touched upon with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.

  7. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    SciTech Connect

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-06-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually degrades the quality of the dose distributions in the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly degrading the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of the interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
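    A generic sketch of the sampling idea described above (not the authors' implementation) is to cluster interior voxels by their influence-matrix rows with k-means and retain a few voxels per cluster alongside all boundary voxels. The influence matrix and boundary flags below are random placeholders.

      import numpy as np
      from sklearn.cluster import KMeans

      def sample_voxels(influence, boundary_mask, n_clusters=50, per_cluster=2, seed=0):
          """Keep all boundary voxels; cluster interior voxels by their influence-matrix
          signatures and keep a few representatives per cluster."""
          rng = np.random.default_rng(seed)
          interior = np.where(~boundary_mask)[0]
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
          labels = km.fit_predict(influence[interior])
          keep = list(np.where(boundary_mask)[0])
          for c in range(n_clusters):
              members = interior[labels == c]
              keep.extend(rng.choice(members, size=min(per_cluster, len(members)),
                                     replace=False))
          return np.sort(np.array(keep))

      # Toy problem: 5000 voxels x 80 beamlets, 10% of voxels flagged as boundary.
      rng = np.random.default_rng(1)
      influence = rng.random((5000, 80))
      boundary = rng.random(5000) < 0.10
      kept = sample_voxels(influence, boundary)
      print(len(kept), "of", 5000, "voxels retained")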

  8. Optimizing the molecular diagnosis of GALNS: novel methods to define and characterize Morquio-A syndrome-associated mutations.

    PubMed

    Caciotti, Anna; Tonin, Rodolfo; Rigoldi, Miriam; Ferri, Lorenzo; Catarzi, Serena; Cavicchi, Catia; Procopio, Elena; Donati, Maria Alice; Ficcadenti, Anna; Fiumara, Agata; Barone, Rita; Garavelli, Livia; Rocco, Maja Di; Filocamo, Mirella; Antuzzi, Daniela; Scarpa, Maurizio; Mooney, Sean D; Li, Biao; Skouma, Anastasia; Bianca, Sebastiano; Concolino, Daniela; Casalone, Rosario; Monti, Elena; Pantaleo, Marilena; Giglio, Sabrina; Guerrini, Renzo; Parini, Rossella; Morrone, Amelia

    2015-03-01

    Morquio A syndrome (MPS IVA) is a systemic lysosomal storage disorder caused by the deficiency of N-acetylgalactosamine-6-sulfatase (GALNS), encoded by the GALNS gene. We studied 37 MPS IVA patients and defined genotype-phenotype correlations based on clinical data, biochemical assays, molecular analyses, and in silico structural analyses of associated mutations. We found that standard sequencing procedures, albeit identifying 14 novel small GALNS genetic lesions, failed to characterize the second disease-causing mutation in 16% of the patients' cohort. To address this drawback and uncover potential gross GALNS rearrangements, we developed molecular procedures (CNV [copy-number variation] assays and QF-PCRs [quantitative fluorescent PCRs]), endorsed by CGH arrays. Using this approach, we characterized two new large deletions and their corresponding breakpoints. Both deletions were heterozygous and included the first exon of the PIEZO1 gene, which is associated with dehydrated hereditary stomatocytosis, an autosomal-dominant syndrome. In addition, we characterized the new GALNS intronic lesion c.245-11C>G causing mRNA defects, although it lies outside the GT/AG splice pair. We estimated the occurrence of the disease in the Italian population to be approximately 1:300,000 live births and defined a molecular testing algorithm designed to help diagnose MPS IVA and foresee disease progression.

  9. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    NASA Technical Reports Server (NTRS)

    Le Bail, Karine; Gordon, David

    2010-01-01

    Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they represent a good compromise among different criteria, such as statistical stability and sky distribution, while retaining a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 are mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and the network are some of the causes of the better stability of the CRF over the last decade compared with the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
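    The Allan variance used above has a compact non-overlapping estimator: average the series in bins of m samples and take half the mean squared difference of consecutive bin averages. The sketch below applies it to a synthetic, evenly spaced coordinate time series with and without an apparent linear motion; the noise levels and cadence are illustrative assumptions.

      import numpy as np

      def allan_variance(series, m):
          """Non-overlapping Allan variance at averaging length m samples:
          half the mean squared difference of consecutive bin averages."""
          n_bins = len(series) // m
          bins = series[:n_bins * m].reshape(n_bins, m).mean(axis=1)
          return 0.5 * np.mean(np.diff(bins) ** 2)

      # Synthetic source-coordinate time series (mas): white noise vs. white noise + drift.
      rng = np.random.default_rng(0)
      t = np.arange(240)                                   # e.g. monthly positions over 20 years
      stable = 0.1 * rng.standard_normal(240)
      drifting = stable + 0.002 * t                        # apparent linear motion
      for m in (1, 4, 16, 64):
          print(m, allan_variance(stable, m), allan_variance(drifting, m))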

  10. A two-stage method to determine optimal product sampling considering dynamic potential market.

    PubMed

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in potential market based on the characteristics of independent product and presents a two-stage method to figure out the sampling level. The impact analysis of the key factors on the sampling level shows that the increase of the external coefficient or internal coefficient has a negative influence on the sampling level. And the changing rate of the potential market has no significant influence on the sampling level whereas the repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters in the case of inaccuracy of the parameters and to be able to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovational way to estimate the sampling level.

  11. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    SciTech Connect

    JR Bontha; GR Golcar; N Hannigan

    2000-08-29

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  12. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in potential market based on the characteristics of independent product and presents a two-stage method to figure out the sampling level. The impact analysis of the key factors on the sampling level shows that the increase of the external coefficient or internal coefficient has a negative influence on the sampling level. And the changing rate of the potential market has no significant influence on the sampling level whereas the repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters in the case of inaccuracy of the parameters and to be able to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovational way to estimate the sampling level. PMID:25821847

  13. Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.

    PubMed

    Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno

    2010-07-01

    The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.

  14. Optimized Sample Preparation of Endoscopic (ePFT) Collected Pancreatic Fluid for SDS-PAGE Analysis

    PubMed Central

    Paulo, Joao A.; Lee, Linda S.; Wu, Bechien; Repas, Kathryn; Banks, Peter A.; Conwell, Darwin L.; Steen, Hanno

    2011-01-01

    The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically-relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (ePFT). Using SDS-PAGE protein profiling, we investigate (1) precipitation techniques to maximize protein extraction, (2) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (3) effects of multiple freeze-thaw cycles on protein stability, and (4) the utility of protease inhibitors. Our experiments revealed that trichloroacetic acid (TCA) precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23°C and 37°C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid. PMID:20589857

  15. Sampling design optimization of a mussel watch-type monitoring program, the French Monitoring Network

    SciTech Connect

    Beliaeff, B.; Claisse, D.; Smith, P.J.

    1995-12-31

    In the French Monitoring Network, trace element and organic contaminant concentrations in biota have been measured for 15 years on a quarterly basis at over 80 sites scattered along the French coastline. A reduction in the sampling effort may be needed as a result of budget restrictions. A constant budget, however, would allow the advancement of certain research and development projects, such as the feasibility of new chemical analyses. The basic problem confronting the program sampling design optimization is finding the optimal number of sites in a given non-heterogeneous area and of sampling events within a year at each site. First, a site-specific cost function integrating analysis, personnel, and computer costs is determined. Then, within-year and between-site variance components are estimated from the results of a linear model which includes a seasonal component. These two steps provide a cost-precision optimum for each contaminant. An example is given using the data from the 4 sites of the Loire estuary. Over all sites, significant U-shaped trends are estimated for Pb, PCBs, ΣDDT and α-HCH, while PAHs show a significant inverted U-shaped curve. For most chemicals the within-year variance appears to be much higher than the between-site variance. This leads to the conclusion that, for this case, reducing the number of sites by two is preferable economically and in terms of monitoring efficiency to reducing the sampling frequency by the same factor. Further implications for the French Monitoring Network are discussed.
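
    As a back-of-the-envelope illustration of the site-versus-frequency trade-off discussed above, the sketch below computes the variance of a zone-level annual mean under a simple two-level random-effects model together with a linear cost function. The variance components and unit costs are hypothetical, not those of the French Monitoring Network.

    ```python
    def precision_and_cost(n_sites, n_visits, var_between_site, var_within_year,
                           cost_per_site, cost_per_sample):
        """Variance of the zone mean under a simple two-level random-effects model,
        plus a linear cost function (all inputs are illustrative assumptions)."""
        variance = (var_between_site / n_sites
                    + var_within_year / (n_sites * n_visits))
        cost = n_sites * cost_per_site + n_sites * n_visits * cost_per_sample
        return variance, cost

    # Hypothetical case: within-year variance much larger than between-site variance
    base = dict(var_between_site=0.05, var_within_year=1.5,
                cost_per_site=2000.0, cost_per_sample=300.0)

    for label, n_sites, n_visits in [("current (4 sites, quarterly)", 4, 4),
                                     ("halve sites", 2, 4),
                                     ("halve frequency", 4, 2)]:
        v, c = precision_and_cost(n_sites, n_visits, **base)
        print(f"{label:30s} variance={v:.3f}  cost={c:.0f}")
    ```

    With a small between-site component, halving the number of sites raises the variance by roughly the same amount as halving the sampling frequency but saves considerably more money, which mirrors the conclusion reported above.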

  16. Defining optimal laser-fiber sweeping angle for effective tissue vaporization using 180 W 532 nm lithium triborate laser.

    PubMed

    Ko, Woo Jin; Choi, Benjamin B; Kang, Hyun Wook; Rajabhandharaks, Danop; Rutman, Matthew; Osterberg, E Charles

    2012-04-01

    The goal of this study was to identify the most efficient sweeping angle (SA) during photoselective vaporization of the prostate (PVP). Experiments were conducted with a GreenLight XPS™ laser at 120 and 180 W. Ten blocks of porcine kidney were used for each SA (0, 15, 30, 45, 60, 90, and 120 degrees). Vaporization efficiency was assessed by the amount of tissue removed per unit time. The coagulation zone (CZ) thickness was also measured. The maximal vaporization rate (VR) was achieved at SAs of 15 and 30 degrees. Irrespective of power, VR increased and CZ decreased linearly with decreasing SA from 120 to 30 degrees. The CZ was thinnest at an SA of 30 degrees. Optimal vaporization occurred at SAs of 15 and 30 degrees, with the lowest CZ at 30 degrees. Contrary to a previous recommendation for a wider SA (60 degrees or greater), a narrower SA (30 degrees) achieved the maximal tissue vaporization efficiency.

  17. Defining the Optimal Surgeon Experience for Breast Cancer Sentinel Lymph Node Biopsy: A Model for Implementation of New Surgical Techniques

    PubMed Central

    McMasters, Kelly M.; Wong, Sandra L.; Chao, Celia; Woo, Claudine; Tuttle, Todd M.; Noyes, R. Dirk; Carlson, David J.; Laidley, Alison L.; McGlothin, Terre Q.; Ley, Philip B.; Brown, C. Matthew; Glaser, Rebecca L.; Pennington, Robert E.; Turk, Peter S.; Simpson, Diana; Edwards, Michael J.

    2001-01-01

    Objective To determine the optimal experience required to minimize the false-negative rate of sentinel lymph node (SLN) biopsy for breast cancer. Summary Background Data Before abandoning routine axillary dissection in favor of SLN biopsy for breast cancer, each surgeon and institution must document acceptable SLN identification and false-negative rates. Although some studies have examined the impact of individual surgeon experience on the SLN identification rate, minimal data exist to determine the optimal experience required to minimize the more crucial false-negative rate. Methods Analysis was performed of a large prospective multiinstitutional study involving 226 surgeons. SLN biopsy was performed using blue dye, radioactive colloid, or both. SLN biopsy was performed with completion axillary LN dissection in all patients. The impact of surgeon experience on the SLN identification and false-negative rates was examined. Logistic regression analysis was performed to evaluate independent factors in addition to surgeon experience associated with these outcomes. Results A total of 2,148 patients were enrolled in the study. Improvement in the SLN identification and false-negative rates was found after 20 cases had been performed. Multivariate analysis revealed that patient age, nonpalpable tumors, and injection of blue dye alone for SLN biopsy were independently associated with decreased SLN identification rates, whereas upper outer quadrant tumor location was the only factor associated with an increased false-negative rate. Conclusions Surgeons should perform at least 20 SLN cases with acceptable results before abandoning routine axillary dissection. This study provides a model for surgeon training and experience that may be applicable to the implementation of other new surgical technologies. PMID:11524582

  18. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metric uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metric performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are shown clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metric uncertainty analysis under the GLUE framework, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
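
    For readers unfamiliar with the GLUE side of this comparison, the following sketch shows the baseline workflow the ɛ-NSGAII sampler is benchmarked against: Latin hypercube sampling of parameter sets, evaluation of a model against observations, and retention of behavioral sets above a likelihood threshold. The toy two-parameter model, the bounds and the threshold are assumptions standing in for the Xinanjiang model setup; only the workflow is meant to be informative.

    ```python
    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(0)

    # Stand-in "hydrological" model: two parameters, flow over 100 time steps
    t = np.arange(100)
    def toy_model(theta):
        a, b = theta
        return a * np.exp(-b * t / 100.0)

    truth = toy_model((5.0, 2.0))
    obs = truth + rng.normal(0.0, 0.2, size=t.size)   # synthetic observations

    # Latin hypercube sample of the parameter space
    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n=2000)
    params = qmc.scale(unit, l_bounds=[0.1, 0.1], u_bounds=[10.0, 5.0])

    # Nash-Sutcliffe efficiency as an informal likelihood measure
    def nse(sim, observed):
        return 1.0 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

    scores = np.array([nse(toy_model(p), obs) for p in params])
    behavioral = params[scores > 0.7]                 # GLUE behavioral threshold (assumed)
    print(f"{len(behavioral)} behavioral parameter sets out of {len(params)}")
    ```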

  19. Optimized Ar(+)-ion milling procedure for TEM cross-section sample preparation.

    PubMed

    Dieterle, Levin; Butz, Benjamin; Müller, Erich

    2011-11-01

    High-quality samples are indispensable for every reliable transmission electron microscopy (TEM) investigation. In order to predict optimized parameters for the final Ar(+)-ion milling preparation step, topographical changes of symmetrical cross-section samples caused by the sputtering process were modeled by two-dimensional Monte-Carlo simulations. Owing to the well-known sputtering yield of Ar(+) ions on Si and its ease of mechanical preparation, Si was used as the model system. The simulations are based on a modified parameterized description of the sputtering yield of Ar(+) ions on Si summarized from the literature. The formation of a wedge-shaped profile, as commonly observed during double-sector ion milling of cross-section samples, was reproduced by the simulations, independent of the sputtering angle. Moreover, the preparation of wide, plane-parallel sample areas by alternating single-sector ion milling is predicted by the simulations. These findings were validated by a systematic ion-milling study (single-sector vs. double-sector milling at various sputtering angles) using Si cross-section samples as well as two other material-science examples. The presented systematic single-sector ion-milling procedure is applicable to most Ar(+)-ion mills which allow simultaneous milling from both sides of a TEM sample (top and bottom) in an azimuthally restricted sector perpendicular to the central epoxy line of the cross-sectional TEM sample. The procedure is based on alternating milling of the two halves of the TEM sample instead of double-sector milling of the whole sample. Furthermore, various other practical aspects are discussed, such as the dependence of the topographical quality of the final sample on parameters like epoxy thickness and incident angle.

  20. Analysis of the optimal sampling rate for state estimation in sensor networks with delays.

    PubMed

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo

    2017-03-27

    When addressing the problem of state estimation in sensor networks, the effects of communications on estimator performance are often neglected. High accuracy requires a high sampling rate, but this leads to higher channel load and longer delays, which in turn worsen estimation performance. This paper studies the problem of determining the optimal sampling rate for state estimation in sensor networks from a theoretical perspective that takes into account traffic generation, a model of network behaviour and the effect of delays. Some theoretical results about Riccati and Lyapunov equations applied to sampled systems are derived, and a solution is obtained for the ideal case of perfect sensor information. This result is also of interest for non-ideal sensors, as in some cases it acts as an upper bound on the optimisation solution.
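
    As a simplified numerical illustration of the trade-off analysed in the paper, the sketch below discretizes a continuous-time system at several sampling periods and iterates the Kalman-filter Riccati recursion to steady state, so the estimation error can be compared across periods. The double-integrator model, the noise levels and, in particular, the penalty applied to fast sampling (measurement noise growing as the period shrinks, standing in for channel load and delay) are assumptions of this sketch, not the paper's network model.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Continuous-time double integrator with process noise (illustrative)
    Ac = np.array([[0.0, 1.0], [0.0, 0.0]])
    Qc = np.diag([0.0, 0.1])
    C = np.array([[1.0, 0.0]])

    def steady_state_error(T, base_R=0.05, load_penalty=0.02):
        """Steady-state estimation error variance of the first state for period T.

        Measurement noise is assumed to grow as the sampling period shrinks,
        a crude proxy for channel congestion and delay effects."""
        A = expm(Ac * T)                       # discretized dynamics
        Q = Qc * T                             # crude discretization of process noise
        R = np.array([[base_R + load_penalty / T]])
        P = np.eye(2)
        for _ in range(2000):                  # Riccati recursion to convergence
            P_pred = A @ P @ A.T + Q
            S = C @ P_pred @ C.T + R
            K = P_pred @ C.T @ np.linalg.inv(S)
            P = (np.eye(2) - K @ C) @ P_pred
        return P[0, 0]

    for T in (0.01, 0.05, 0.1, 0.5, 1.0):
        print(f"T = {T:4.2f} s  ->  steady-state error variance {steady_state_error(T):.4f}")
    ```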

  1. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil.

    PubMed

    Silvestri, Erin E; Feldhake, David; Griffin, Dale; Lisle, John; Nichols, Tonya L; Shah, Sanjiv R; Pemberton, Adin; Schaefer, Frank W

    2016-11-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries. Copyright © 2016. Published by Elsevier B.V.

  2. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    USGS Publications Warehouse

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  3. The self-defining axis of symmetry: A new method to determine optimal symmetry and its application and limitation in craniofacial surgery.

    PubMed

    Martini, Markus; Klausing, Anne; Messing-Jünger, Martina; Lüchters, Guido

    2017-09-01

    Analysis of symmetry represents an essential aspect of plastic-reconstructive surgery. For cases in which reference points are either not fixed or are changed by corrective intervention, the determination of a symmetry axis is sometimes almost impossible, and a pre-defined symmetry axis would not always be helpful. To assess the cranial shape of surgical patients with craniosynostosis, a new algebraic approach was chosen in which deviation from the optimal symmetry axis could be quantified. Optimal symmetry was defined based on a single central point in the fronto-orbital advancement (FOA) hyperplane and a corresponding landmark pair. The forehead symmetry evaluation was based on series of 3D scans of 13 children on whom cranioplasty with FOA was performed and 15 healthy children who served as the control group. Children with plagiocephaly showed considerable improvement in symmetry postoperatively, with stable values over one year, while those with trigonocephaly and brachycephaly showed consistently good symmetry in the forehead both pre- and postoperatively. With the help of an optimally calculated symmetry axis, this new analysis method offers a solution which is independent of preset dimensions. Patients can be evaluated according to their individual needs regarding symmetry and also be compared with one another. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  4. Optimization methods for multi-scale sampling of soil moisture and snow in the Southern Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2015-12-01

    Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.

  5. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
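
    The iterative "most dissimilar site" selection described above can be approximated, in spirit, by a greedy farthest-point search over standardized environmental variables. The sketch below is a simplified stand-in: it does not reproduce the MaxEnt modelling itself, and the candidate grid and the four environmental factors are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic candidate sites described by four standardized environmental factors
    # (e.g., mean annual temperature, precipitation, elevation, vegetation score)
    candidates = rng.normal(size=(5000, 4))

    def greedy_dissimilar_sites(env, n_sites):
        """Pick sites one at a time, each time taking the candidate farthest (in
        environmental space) from the sites already selected."""
        chosen = [int(np.argmax(np.linalg.norm(env - env.mean(axis=0), axis=1)))]
        for _ in range(n_sites - 1):
            d = np.min(
                np.linalg.norm(env[:, None, :] - env[chosen][None, :, :], axis=2),
                axis=1,
            )
            chosen.append(int(np.argmax(d)))
        return chosen

    sites = greedy_dissimilar_sites(candidates, n_sites=8)
    print("selected candidate indices:", sites)
    ```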

  6. Experiments Optimized for Magic Angle Spinning and Oriented Sample Solid-State NMR of Proteins

    PubMed Central

    Das, Bibhuti B.; Lin, Eugene C.; Opella, Stanley J.

    2013-01-01

    Structure determination of proteins by solid-state NMR is rapidly advancing as a result of recent developments in samples, experimental methods, and calculations. There are a number of different solid-state NMR approaches that utilize stationary, aligned samples or magic angle spinning of unoriented 'powder' samples, and depending on the sample and the experimental method they can emphasize the measurement of distances or angles, ideally both, as sources of structural constraints. Multi-dimensional correlation spectroscopy of low-gamma nuclei such as 15N and 13C is an important step for making resonance assignments and measurements of angular restraints in membrane proteins. However, the efficiency of coherence transfer depends predominantly upon the strength of the dipole-dipole interaction, and this can vary from site to site and between sample alignments, for example, during the mixing of 13C and 15N magnetization in stationary aligned and in magic angle spinning samples. Here, we demonstrate that the efficiency of polarization transfer can be improved by using adiabatic demagnetization and remagnetization techniques on stationary aligned samples, and proton-assisted insensitive nuclei cross-polarization in magic angle spinning samples. The adiabatic cross-polarization technique provides an alternative mechanism for spin-diffusion experiments correlating 15N/15N and 15N/13C chemical shifts over large distances. Improved efficiency in cross-polarization, with 40%-100% sensitivity enhancements, is observed in proteins and single crystals, respectively. We describe solid-state NMR experimental techniques that are optimal for membrane proteins in liquid crystalline phospholipid bilayers under physiological conditions. The techniques are illustrated with data from both single crystals of peptides and membrane proteins in phospholipid bilayers. PMID:24044695

  7. Defining Blood Processing Parameters for Optimal Detection of γ-H2AX Foci: A Small Blood Volume Method.

    PubMed

    Wojewodzka, Maria; Sommer, Sylwester; Kruszewski, Marcin; Sikorska, Katarzyna; Lewicki, Maciej; Lisowska, Halina; Wegierek-Ciuk, Aneta; Kowalska, Magdalena; Lankoff, Anna

    2015-07-01

    Biodosimetric methods used to measure the effects of radiation are critical for estimating the health risks to irradiated individuals or populations. The direct measurement of radiation-induced γ-H2AX foci in peripheral blood lymphocytes is one approach that provides a useful end point for triage. Despite the documented advantages of the γ-H2AX assay, there is considerable variation among laboratories regarding foci formation in the same exposure conditions and cell lines. Taking this into account, the goal of our study was to evaluate the influence of different blood processing parameters on the frequency of γ-H2AX foci and optimize a small blood volume protocol for the γ-H2AX assay, which simulates the finger prick blood collection method. We found that the type of fixative, temperature and blood processing time markedly affect the results of the γ-H2AX assay. In addition, we propose a protocol for the γ-H2AX assay that may serve as a potential guideline in the event of large-scale radiation incidents.

  8. Well-defined hydrophilic molecularly imprinted polymer microspheres for efficient molecular recognition in real biological samples by facile RAFT coupling chemistry.

    PubMed

    Zhao, Man; Chen, Xiaojing; Zhang, Hongtao; Yan, Husheng; Zhang, Huiqi

    2014-05-12

    A facile and highly efficient new approach (namely RAFT coupling chemistry) to obtain well-defined hydrophilic molecularly imprinted polymer (MIP) microspheres with excellent specific recognition ability toward small organic analytes in the real, undiluted biological samples is described. It involves the first synthesis of "living" MIP microspheres with surface-bound vinyl and dithioester groups via RAFT precipitation polymerization (RAFTPP) and their subsequent grafting of hydrophilic polymer brushes by the simple coupling reaction of hydrophilic macro-RAFT agents (i.e., hydrophilic polymers with a dithioester end group) with vinyl groups on the "living" MIP particles in the presence of a free radical initiator. The successful grafting of hydrophilic polymer brushes onto the obtained MIP particles was confirmed by SEM, FT-IR, static contact angle and water dispersion studies, elemental analyses, and template binding experiments. Well-defined MIP particles with densely grafted hydrophilic polymer brushes (∼1.8 chains/nm(2)) of desired chemical structures and molecular weights were readily obtained, which showed significantly improved surface hydrophilicity and could thus function properly in real biological media. The origin of the high grafting densities of the polymer brushes was clarified and the general applicability of the strategy was demonstrated. In particular, the well-defined characteristics of the resulting hydrophilic MIP particles allowed the first systematic study on the effects of various structural parameters of the grafted hydrophilic polymer brushes on their water-compatibility, which is of great importance for rationally designing more advanced real biological sample-compatible MIPs.

  9. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    SciTech Connect

    Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  10. An S/H circuit with parasitics optimized for IF-sampling

    NASA Astrophysics Data System (ADS)

    Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue

    2016-06-01

    An IF-sampling S/H circuit is presented, which adopts a flip-around structure, a bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating-well technique is utilized to optimize the input switches. In addition, transistor load linearization and layout improvements are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front end of a 14-bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, and remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).

  11. Optimized analysis of DNA methylation and gene expression from small, anatomically-defined areas of the brain.

    PubMed

    Bettscheider, Marc; Kuczynska, Arleta; Almeida, Osborne; Spengler, Dietmar

    2012-07-12

    Exposure to diet, drugs and early life adversity during sensitive windows of life can lead to lasting changes in gene expression that contribute to the display of physiological and behavioural phenotypes. Such environmental programming is likely to increase the susceptibility to metabolic, cardiovascular and mental diseases. DNA methylation and histone modifications are considered key processes in the mediation of the gene-environment dialogue and appear also to underlie environmental programming. In mammals, DNA methylation typically comprises the covalent addition of a methyl group at the 5-position of cytosine within the context of CpG dinucleotides. CpG methylation occurs in a highly tissue- and cell-specific manner, making it a challenge to study discrete, small regions of the brain where cellular heterogeneity is high and tissue quantity limited. Moreover, because gene expression and methylation are closely linked events, increased value can be gained by comparing both parameters in the same sample. Here, a step-by-step protocol (Figure 1) for the investigation of epigenetic programming in the brain is presented using the 'maternal separation' paradigm of early life adversity for illustrative purposes. The protocol describes the preparation of micropunches from differentially-aged mouse brains from which DNA and RNA can be simultaneously isolated, thus allowing DNA methylation and gene expression analyses in the same sample.

  12. Optimal sampling strategies for detecting linkage of a complex trait with known genetic heterogeneity

    SciTech Connect

    Easton, D.F.; Goldgar, D.E.

    1994-09-01

    As genes underlying susceptibility to human disease are identified through linkage analysis, it is becoming increasingly clear that genetic heterogeneity is the rule rather than the exception. The focus of the present work is to examine the power and optimal sampling design for localizing a second disease gene when one disease gene has previously been identified. In particular, we examined the case when the unknown locus had lower penetrance, but higher frequency, than the known locus. Three scenarios regarding knowledge about locus 1 were examined: no linkage information (i.e. standard heterogeneity analysis), tight linkage with a known highly polymorphic marker locus, and mutation testing. Exact expected LOD scores (ELODs) were calculated for a number of two-locus genetic models under the 3 scenarios of heterogeneity for nuclear families containing 2, 3 or 4 affected children, with 0 or 1 affected parents. A cost function based upon the cost of ascertaining and genotyping sufficient samples to achieve an ELOD of 3.0 was used to evaluate the designs. As expected, the power and the optimal pedigree sampling strategy were dependent on the underlying model and the heterogeneity testing status. When the known locus had higher penetrance than the unknown locus, three affected siblings with unaffected parents proved to be optimal for all levels of heterogeneity. In general, mutation testing at the first locus provided substantially more power for detecting the second locus than linkage evidence alone. However, when both loci had relatively low penetrance, mutation testing provided little improvement in power since most families could be expected to be segregating the high risk allele at both loci.

  13. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networks (SDN), we propose a new optical access network architecture dedicated to Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. The network coding (NC) technology is also applied in this architecture to enhance the utilization of wavelength resources and reduce the cost of light sources. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce system delay and energy consumption.

  14. Defining an optimal stromal derived factor-1 presentation for effective recruitment of mesenchymal stem cells in 3D.

    PubMed

    Iannone, Maria; Ventre, Maurizio; Pagano, Gemma; Giannoni, Paolo; Quarto, Rodolfo; Netti, Paolo Antonio

    2014-11-01

    In "situ" tissue engineering is a promising approach in regenerative medicine, envisaging to potentiate the physiological tissue repair processes by recruiting the host's own cellular progenitors at the lesion site by means of bioactive materials. Despite numerous works focused the attention in characterizing novel chemoattractant molecules, only few studied the optimal way to present signal in the microenvironment, in order to recruit cells more effectively. In this work, we have analyzed the effects of gradients of stromal derived factor-1 (SDF-1) on the migratory behavior of human mesenchymal stem cells (MSCs). We have characterized the expression of the chemokine-associated receptor, CXCR4, using cytofluorimetric and real-time PCR analyses. Gradients of SDF-1 were created in 3D collagen gels in a chemotaxis chamber. Migration parameters were evaluated using different chemoattractant concentrations. Our results show that cell motion is strongly affected by the spatio-temporal features of SDF-1 gradients. In particular, we demonstrated that the presence of SDF-1 not only influences cell motility but alters the cell state in terms of SDF-1 receptor expression and productions, thus modifying the way cells perceive the signal itself. Our observations highlight the importance of a correct stimulation of MSCs by means of SDF-1 in order to implement on effective cell recruitment. Our results could be useful for the creation of a "cell instructive material" that is capable to communicate with the cells and control and direct tissue regeneration. Biotechnol. Bioeng. 2014;111: 2303-2316. © 2014 Wiley Periodicals, Inc. © 2014 Wiley Periodicals, Inc.

  15. Optimal selection of gene and ingroup taxon sampling for resolving phylogenetic relationships.

    PubMed

    Townsend, Jeffrey P; Lopez-Giraldez, Francesc

    2010-07-01

    A controversial topic that underlies much of phylogenetic experimental design is the relative utility of increased taxonomic versus character sampling. Conclusions about the relative utility of adding characters or taxa to a current phylogenetic study have subtly hinged upon the appropriateness of the rate of evolution of the characters added for resolution of the phylogeny in question. Clearly, the addition of characters evolving at optimal rates will have much greater impact upon accurate phylogenetic analysis than will the addition of characters with an inappropriate rate of evolution. Development of practical analytical predictions of the asymptotic impact of adding additional taxa would complement computational investigations of the relative utility of these two methods of expanding acquired data. Accordingly, we here formulate a measure of the phylogenetic informativeness of the additional sampling of character states from a new taxon added to the canonical phylogenetic quartet. We derive the optimal rate of evolution for characters assessed in taxa to be sampled and a metric of informativeness based on the rate of evolution of the characters assessed in the new taxon and the distance of the new taxon from the internode of interest. Calculation of the informativeness per base pair of additional character sampling for included taxa versus additional character sampling for novel taxa can be used to estimate cost-effectiveness and optimal efficiency of phylogenetic experimental design. The approach requires estimation of rates of evolution of individual sites based on an alignment of genes orthologous to those to be sequenced, which may be identified in a well-established clade of sister taxa or of related taxa diverging at a deeper phylogenetic scale. Some approximate idea of the potential phylogenetic relationships of taxa to be sequenced is also desirable, such as may be obtained from ribosomal RNA sequence alone. Application to the solution of recalcitrant

  16. Development of a population pharmacokinetic model and optimal sampling strategies for intravenous ciprofloxacin.

    PubMed

    Forrest, A; Ballow, C H; Nix, D E; Birmingham, M C; Schentag, J J

    1993-05-01

    Data obtained from 74 acutely ill patients treated in two clinical efficacy trials were used to develop a population model of the pharmacokinetics of intravenous (i.v.) ciprofloxacin. Dosage regimens ranged between 200 mg every 12 h and 400 mg every 8 h. Plasma samples (2 to 19 per patient; mean +/- standard deviation = 7 +/- 5) were obtained and assayed (by high-performance liquid chromatography) for ciprofloxacin. These data and patient covariates were modelled by iterative two-stage analysis, an approach which generates pharmacokinetic parameter values for both the population and each individual patient. The final model was used to implement a maximum a posteriori-Bayesian pharmacokinetic parameter value estimator. Optimal sampling theory was used to determine the best (maximally informative) two-, three-, four-, five-, and six-sample study designs (e.g., optimal sampling strategy 2 [OSS2] was the two-sample strategy) for identifying a patient's pharmacokinetic parameter values. These OSSs and the population model were evaluated by selecting the relatively rich data sets, those with 7 to 10 samples obtained in a single dose interval (n = 29), and comparing the parameter estimates (obtained by the maximum a posteriori-Bayesian estimator) based on each of the OSSs with those obtained by fitting all of the available data from each patient. Distributional clearance and apparent volumes were significantly related to body size (e.g., weight in kilograms or body surface area in meters squared); plasma clearance (CLT in liters per hour) was related to body size and renal function (creatinine clearance [CLCR] in milliliters per minute per 1.73 m2) by the equation CLT = (0.00145 · CLCR + 0.167) · weight. However, only 30% of the variance in CLT was explained by this relationship, and no other patient covariates were significant. Compared with previously published data, this target population had smaller distribution volumes (by 30%; P < 0.01) and CLT (by 44%; P < 0.001) than
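
    The covariate model quoted in the abstract can be applied directly; the short function below simply evaluates the reported relationship CLT = (0.00145 · CLCR + 0.167) · weight. The patients plugged in below are hypothetical, and, as the authors note, the equation explains only about 30% of the variance in clearance.

    ```python
    def ciprofloxacin_clearance(creatinine_clearance_ml_min, weight_kg):
        """Population plasma clearance (L/h) from the covariate model in the abstract:
        CLT = (0.00145 * CLCR + 0.167) * weight."""
        return (0.00145 * creatinine_clearance_ml_min + 0.167) * weight_kg

    # Hypothetical patients
    for clcr, wt in [(120, 70), (60, 70), (30, 55)]:
        print(f"CLCR={clcr:3d} mL/min, weight={wt:2d} kg -> CLT = "
              f"{ciprofloxacin_clearance(clcr, wt):.1f} L/h")
    ```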

  17. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    PubMed

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. The approach may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  18. Nucleic acid quantity and quality from paraffin blocks: defining optimal fixation, processing and DNA/RNA extraction techniques.

    PubMed

    Turashvili, Gulisa; Yang, Winnie; McKinney, Steven; Kalloger, Steve; Gale, Nadia; Ng, Ying; Chow, Katie; Bell, Lynda; Lorette, Julie; Carrier, Melinda; Luk, Margaret; Aparicio, Samuel; Huntsman, David; Yip, Stephen

    2012-02-01

    Although the extraction and analysis of nucleic acids from formalin-fixed paraffin-embedded tissues is a routine and growing part of pathology practice, no generally accepted recommendations exist to guide laboratories in their selection of tissue fixation, processing and DNA/RNA extraction techniques. The aim of this study was to determine how fixation method and length, paraffin embedding, processing conditions and nucleic acid extraction methods affect the quality and quantity of DNA and RNA, and their performance in downstream applications. Nine tissue samples were subjected to freezing, fixation in formalin for <24 h and 7 days followed by conventional processing, and fixation in molecular fixative for <24 h and 7 days followed by rapid processing. DNA and RNA were isolated using in-house extraction and commercial kits, and assessed by PCR reactions for amplicons with sizes ranging from 268 to 1327 bp and one-step RT-PCR for 621 bp and 816 bp amplicons of housekeeping genes. Molecular fixative (MF) appeared to perform well under nearly all circumstances (extraction methods, fixation lengths and longer amplicons), often performing as well as frozen samples. Formalin fixation generally performed well only for shorter amplicons and short fixation (<24 h). The WaxFree kit showed consistently higher success rates for DNA and poorer rates for RNA. The RecoverAll kit generally performed suboptimally in combination with prolonged formalin fixation. In conclusion, the molecular fixative, regardless of fixation length, and the rapid tissue processing system were able to preserve large DNA and RNA fragments in paraffin blocks, making these techniques preferable for use in downstream molecular diagnostic assays. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Challenges in defining an optimal approach to formula-based allocations of public health funds in the United States

    PubMed Central

    Buehler, James W; Holtgrave, David R

    2007-01-01

    -based versus competitive allocation methods are needed to promote the optimal use of public health funds. In the meantime, those who use formula-based strategies to allocate funds should be familiar with the nuances of this approach. PMID:17394645

  20. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
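
    The bias-variance trade-off discussed here can be reproduced in miniature by fitting overlay models of increasing polynomial order to a noisy synthetic wafer signature and tracking the residual (bias) against the wafer-to-wafer spread of the fitted coefficients (variance). The signature, noise level and number of wafers below are invented purely for illustration and do not reflect any particular process.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    x = np.linspace(-1.0, 1.0, 25)                    # normalized positions across the wafer
    true_signature = 2.0 * x - 1.2 * x ** 3           # underlying overlay signature (nm)

    orders = [1, 3, 5, 9]
    n_wafers = 200
    for order in orders:
        residuals, coeffs = [], []
        for _ in range(n_wafers):
            y = true_signature + rng.normal(0.0, 0.4, size=x.size)   # metrology noise
            c = np.polyfit(x, y, order)
            residuals.append(np.std(y - np.polyval(c, x)))
            coeffs.append(c[0])                        # highest-order term as a proxy
        print(f"order {order}: mean residual {np.mean(residuals):.3f} nm, "
              f"wafer-to-wafer spread of top coefficient {np.std(coeffs):.3f}")
    ```

    Low orders leave a systematic residual, while high orders fit each wafer closely but show a larger coefficient spread across wafers, which is the instability the advanced modeling approach aims to suppress.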

  1. Optimal approaches for inline sampling of organisms in ballast water: L-shaped vs. Straight sample probes

    NASA Astrophysics Data System (ADS)

    Wier, Timothy P.; Moser, Cameron S.; Grant, Jonathan F.; Riley, Scott C.; Robbins-Wamsley, Stephanie H.; First, Matthew R.; Drake, Lisa A.

    2017-10-01

    Both L-shaped (;L;) and straight (;Straight;) sample probes have been used to collect water samples from a main ballast line in land-based or shipboard verification testing of ballast water management systems (BWMS). A series of experiments was conducted to quantify and compare the sampling efficiencies of L and Straight sample probes. The findings from this research-that both L and Straight probes sample organisms with similar efficiencies-permit increased flexibility for positioning sample probes aboard ships.

  2. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections at target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
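
    The core of the TMS idea, sampling a target velocity at each collision and accepting the collision against a majorant of the velocity-dependent total cross section, can be illustrated with a generic rejection-sampling sketch. The bounded toy cross section, the Maxwellian target speeds and the majorant value below are simplified assumptions and have nothing to do with Serpent 2's actual cross-section data or units.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def sigma_total(relative_speed):
        """Toy total cross section, bounded above by 6.0 (arbitrary units)."""
        return 1.0 + 5.0 * np.exp(-relative_speed)

    def sample_collision(neutron_speed, target_thermal_speed, sigma_majorant):
        """Rejection sampling: draw target velocities until the candidate collision
        is accepted with probability sigma(relative speed) / sigma_majorant."""
        trials = 0
        while True:
            trials += 1
            # Target velocity components drawn from a Maxwellian (Gaussian per component)
            v_target = rng.normal(0.0, target_thermal_speed, size=3)
            v_neutron = np.array([neutron_speed, 0.0, 0.0])
            rel = np.linalg.norm(v_neutron - v_target)
            if rng.random() < sigma_total(rel) / sigma_majorant:
                return rel, trials

    # The majorant must bound sigma_total over all relative speeds (6.0 does here)
    rels, counts = zip(*(sample_collision(2.0, 1.0, sigma_majorant=6.0) for _ in range(10000)))
    print(f"mean accepted relative speed {np.mean(rels):.2f}, "
          f"mean trials per collision {np.mean(counts):.2f} (tighter majorant -> fewer trials)")
    ```

    The mean number of trials per accepted collision is what drives the overhead factors quoted above, which is why tightening the majorant (here, by raising the temperature of the basis cross section) pays off.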

  3. Mimicry among Unequally Defended Prey Should Be Mutualistic When Predators Sample Optimally.

    PubMed

    Aubier, Thomas G; Joron, Mathieu; Sherratt, Thomas N

    2017-03-01

    Understanding the conditions under which moderately defended prey evolve to resemble better-defended prey and whether this mimicry is parasitic (quasi-Batesian) or mutualistic (Müllerian) is central to our understanding of warning signals. Models of predator learning generally predict quasi-Batesian relationships. However, predators' attack decisions are based not only on learning alone but also on the potential future rewards. We identify the optimal sampling strategy of predators capable of classifying prey into different profitability categories and contrast the implications of these rules for mimicry evolution with a classical Pavlovian model based on conditioning. In both cases, the presence of moderately unprofitable mimics causes an increase in overall consumption. However, in the case of the optimal sampling strategy, this increase in consumption is typically outweighed by the increase in overall density of prey sharing the model appearance (a dilution effect), causing a decrease in mortality. It suggests that if predators forage efficiently to maximize their long-term payoff, genuine quasi-Batesian mimicry should be rare, which may explain the scarcity of evidence for it in nature. Nevertheless, we show that when moderately defended mimics are profitable to attack by hungry predators, then they can be parasitic on their models, just as classical Batesian mimics are.

  4. Optimization of multi-channel neutron focusing guides for extreme sample environments

    NASA Astrophysics Data System (ADS)

    Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.

    2014-07-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
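
    Coupling a ray-tracing figure of merit to a population-based optimizer can be prototyped with SciPy's differential evolution driver. In the sketch below the expensive Monte-Carlo ray trace is replaced by a cheap stand-in objective over hypothetical guide parameters (channel curvature, channel width, supermirror m-value), so only the optimization plumbing is meaningful; the objective's shape is invented.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def neutron_gain_surrogate(x):
        """Stand-in for a Monte-Carlo ray-tracing figure of merit.

        x = (curvature in 1/m, channel width in cm, supermirror m-value); the shape
        of this function is invented purely to exercise the optimizer."""
        curvature, width, m_value = x
        transport = np.exp(-((curvature - 0.02) ** 2) / 1e-4) * np.tanh(m_value / 3.0)
        focusing = np.exp(-((width - 1.5) ** 2) / 0.5)
        return -(transport * focusing)          # minimize the negative gain

    bounds = [(0.0, 0.1),    # curvature
              (0.5, 3.0),    # channel width
              (1.0, 6.0)]    # supermirror m-value

    result = differential_evolution(neutron_gain_surrogate, bounds, seed=3, maxiter=200)
    print("best parameters:", result.x, "surrogate gain:", -result.fun)
    ```

    In practice the surrogate would be replaced by a call into the ray-tracing code, and the comparison between particle-swarm, artificial bee colony and differential evolution then reduces to swapping this driver.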

  5. In-line e-beam inspection with optimized sampling and newly developed ADC

    NASA Astrophysics Data System (ADS)

    Ikota, Masami; Miura, Akihiro; Fukunishi, Munenori; Hiroi, Takashi; Sugimoto, Aritoshi

    2003-07-01

    An electron beam inspection is strongly required for HARI to detect contact and via defects that an optical inspection cannot detect. Conventionally, an e-beam inspection system is used as an analytical tool for checking the process margin. Due to its low throughput, it has not been used for in-line QC. Therefore, we optimized the inspection area and developed a new auto defect classification (ADC) to use with e-beam inspection as an in-line inspection tool. A 10% interval scan sampling proved able to estimate defect densities, and inspection could be completed within 1 hour. We specifically adapted the developed ADC for use with e-beam inspection because the voltage contrast images were not sufficiently clear for classifications to be made with a conventional ADC based on defect geometry. The new ADC used the off-pattern area of the defect to discriminate particles from other voltage contrast defects with an accuracy of greater than 90%. Using sampling optimization and the new ADC, we achieved inspection and auto defect review in less than one and a half hours. We implemented the system as a procedure for product defect QC and proved its effectiveness for in-line e-beam inspection.

  6. Optimization of arsenic extraction in rice samples by Plackett-Burman design and response surface methodology.

    PubMed

    Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang

    2016-08-01

    Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice by a simple solvent extraction using water as the extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. The optimal settings of the significant factors were confirmed as a shaking time of 60 min and an extraction temperature of 85 °C, balancing experimental time against extraction efficiency. The analytical performance, including linearity, method detection limits, relative standard deviation and recovery, was examined, and the data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA.
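
    To make the screening step concrete, the sketch below estimates main effects of four candidate factors from a coded two-level design with a simple linear model. The factor names, the full-factorial layout and the synthetic response are assumptions standing in for the Plackett-Burman screening described above (a true Plackett-Burman design would use fewer runs), so only the analysis pattern is meant to transfer.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(5)

    # Coded two-level settings (-1/+1) for four hypothetical factors:
    # shaking time, extraction temperature, extractant volume, sample mass
    design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

    # Synthetic extraction recoveries: time and temperature matter, the others do not
    true_effects = np.array([6.0, 9.0, 0.5, -0.3])
    response = 70.0 + design @ true_effects + rng.normal(0.0, 1.5, size=len(design))

    # Fit a first-order model y = b0 + sum(b_i * x_i) to estimate main effects
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)

    names = ["intercept", "shaking time", "temperature", "extractant volume", "sample mass"]
    for n, c in zip(names, coef):
        print(f"{n:18s} {c:6.2f}")
    ```

    Factors whose estimated effects stand clearly above the noise would then be carried forward into the central composite design for response-surface optimization.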

  7. Identification of time point to best define 'sub-optimal response' following intravitreal ranibizumab therapy for diabetic macular edema based on real-life data.

    PubMed

    Chatziralli, I; Santarelli, M; Patrao, N; Nicholson, L; Zola, M; Rajendram, R; Hykin, P; Sivaprasad, S

    2017-06-16

    Purpose: To determine the average time point at which it is best to define 'sub-optimal response' after ranibizumab treatment for diabetic macular edema (DME), based on data obtained from real-life clinical practice. Methods: In this retrospective observational study, 322 consecutive treatment-naïve eyes with DME were treated with three loading doses of intravitreal ranibizumab followed by re-treatment based on the decision of the treating physician on a case-by-case basis. The demographic data, clinic-based visual acuity measurements and central subfield thickness (CST) assessed on spectral domain optical coherence tomography (OCT) were evaluated at baseline (month 0) and at 1, 2, 3, 6, and 12 months. Results: On average, the improvement in visual acuity and CST was first seen after the loading dose. However, the maximal response in terms of the proportion of patients with improvement in visual acuity and/or CST in this cohort was observed at 12 months. Patients who presented with low visual acuity at baseline (<37 ETDRS letters) were unlikely to attain driving vision with ranibizumab therapy. Conclusions: On average, a 'sub-optimal response' after ranibizumab therapy is best defined at month 12, as patients may continue to improve with treatment. Eye advance online publication, 16 June 2017; doi:10.1038/eye.2017.111.

  8. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
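
    A stripped-down numerical analogue of the preposterior idea (marginalizing the expected posterior prediction variance over not-yet-collected data) can be written with a plain ensemble and likelihood weighting. The linear toy measurement model, noise level and candidate "designs" below are assumptions, and none of PreDIA's geostatistical machinery or model averaging is reproduced; only the marginalization pattern is illustrated.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Prior ensemble of an unknown parameter and the prediction that depends on it
    n_ens = 3000
    theta = rng.normal(0.0, 1.0, size=n_ens)
    prediction = 2.0 * theta + 0.5 * theta ** 2        # quantity we ultimately care about

    def expected_posterior_variance(h, noise_std=0.3, n_hypothetical=200):
        """Average posterior variance of the prediction if we measured y = h*theta + noise,
        marginalized over the still-unknown measurement outcome (bootstrap-filter style)."""
        post_vars = []
        for k in rng.choice(n_ens, size=n_hypothetical, replace=False):
            y_hyp = h * theta[k] + rng.normal(0.0, noise_std)   # hypothetical datum
            w = np.exp(-0.5 * ((y_hyp - h * theta) / noise_std) ** 2)
            w /= w.sum()
            mean = np.sum(w * prediction)
            post_vars.append(np.sum(w * (prediction - mean) ** 2))
        return float(np.mean(post_vars))

    print("prior prediction variance :", float(np.var(prediction)))
    for h in (0.2, 1.0, 3.0):                          # candidate measurement "designs"
        print(f"design sensitivity h={h:3.1f} -> expected posterior variance "
              f"{expected_posterior_variance(h):.3f}")
    ```

    More sensitive (informative) candidate measurements yield a lower expected posterior variance, which is the quantity an optimal design search would minimize before any data are actually collected.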

  9. Optimization of a miniaturized DBD plasma chip for mercury detection in water samples.

    PubMed

    Abdul-Majeed, Wameath S; Parada, Jaime H Lozano; Zimmerman, William B

    2011-11-01

    In this work, an optimization study was conducted to investigate the performance of a custom-designed miniaturized dielectric barrier discharge (DBD) microplasma chip to be utilized as a radiation source for mercury determination in water samples. The experimental work was implemented by using experimental design, and the results were assessed by applying statistical techniques. The proposed DBD chip was designed and fabricated in a simple way by using a few microscope glass slides aligned together and held by a Perspex chip holder, which proved useful for miniaturization purposes. Argon gas at 75-180 mL/min was used in the experiments as a discharge gas, while AC power in the range 75-175 W at 38 kHz was supplied to the load from a custom-made power source. A UV-visible spectrometer was used, and the spectroscopic parameters were optimized thoroughly and applied in the later analysis. Plasma characteristics were determined theoretically by analysing the recorded spectroscopic data. The estimated electron temperature (T(e) = 0.849 eV) was found to be higher than the excitation temperature (T(exc) = 0.55 eV) and the rotational temperature (T(rot) = 0.064 eV), which indicates that non-thermal plasma is generated in the proposed chip. Mercury cold vapour generation experiments were conducted according to the experimental plan by examining four parameters (HCl and SnCl(2) concentrations, argon flow rate, and the applied power) and considering the recorded intensity for the mercury line (253.65 nm) as the objective function. Furthermore, an optimization technique and statistical approaches were applied to investigate the individual and interaction effects of the tested parameters on the system performance. The calculated analytical figures of merit (LOD = 2.8 μg/L and RSD = 3.5%) indicate a reasonably precise system that could be adopted as the basis for a miniaturized portable device for mercury detection in water samples.

  10. Automation of sample preparation for mass cytometry barcoding in support of clinical research: protocol optimization.

    PubMed

    Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir

    2017-03-01

    Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF are able to encode and decode for different experimental conditions or samples within the same experiment, facilitating progress in producing straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding used in conjunction with mass cytometry for clinical bioanalysis samples is described; we offer results of our work with barcoding protocol optimization. In addition, we present some points to be considered in order to minimize the variability of quantitative mass cytometry measurements. For example, we discuss the importance of having multiple populations during titration of the antibodies and effect of storage and shipping of labelled samples on the stability of staining for purposes of CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped in lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.

  11. Sampling design and optimal sensor placement strategies for basin-scale SWE estimation

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Welch, S. C.; Bales, R. C.; Glaser, S. D.; Rittger, K. E.; Rice, R.

    2012-12-01

    We present a quantitative framework by which to assess the number of required samples (sensors), as well as their respective locations, to most optimally estimate spatial SWE patterns using sensor networks across the 5000 sq. km American River basin of California. To inform the selection of future sensor locations, 11 years of reconstructed, spatially dense (500 x 500 m resolution) SWE data were used to develop metrics of historical SWE distributions. The historical data were split into eight years of training and three years of validation data, clustering the data set to derive spatial regions which share similar SWE characteristics. Rank-based clustering was compared to geographically-based clustering (sub-basin delineation) to determine the existence of stationary covariance structures within the overall SWE dataset. Within each cluster, a quantitative sensor-placement algorithm, based on maximizing the metric of Mutual Information, was implemented and compared to a randomized placement approach. Gaussian process models were then trained to evaluate the efficacy of each placement approach. Rank based clusters remained stable inter-annually, suggesting that rankings of pixel-by-pixel SWE exhibit stationary features that can be exploited by a sensor-placement algorithm. Rank-based clustering yielded 200 mm average root mean square error (RMSE) for twenty randomly selected sensing locations, outperforming geographic and basin-wide placement approaches, which generated 460 mm and 290 mm RMSE, respectively. Mutual Information-based sampling provided the best placement strategy, improving RMSE between 0 and 100 mm compared to random placements. Increasing the number of rank-based clusters consistently lowered average RMSE from 400 mm for one cluster to 175 mm for eight clusters, for twenty total sensors placed. To optimize sensor placement, or to inform future sampling or surveying strategies, we recommend a strategy that couples rank-based clustering with Mutual
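
    A common building block behind such placement algorithms is greedy selection under a Gaussian-process covariance, adding at each step the site whose mutual-information gain (ratio of its variance conditioned on the selected set to its variance conditioned on the unselected sites) is largest. The sketch below illustrates that rule on an arbitrary 10 x 10 grid with an assumed exponential covariance; the kernel length scale and grid are placeholders, not the SWE covariance used in the study.

    ```python
    import numpy as np

    # Candidate sensor sites on a small grid; covariance from an assumed exponential kernel
    xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
    pts = np.column_stack([xs.ravel(), ys.ravel()])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = np.exp(-d / 0.3) + 1e-6 * np.eye(len(pts))   # placeholder SWE covariance model

    def cond_var(K, i, S):
        """Variance of site i conditioned on sites S under the GP prior."""
        if len(S) == 0:
            return K[i, i]
        Kss = K[np.ix_(S, S)]
        k = K[np.ix_([i], S)]
        return K[i, i] - float(k @ np.linalg.solve(Kss, k.T))

    def greedy_mi_placement(K, n_sensors):
        """Greedy near-optimal placement maximizing mutual information:
        pick the site with the largest ratio var(i | selected) / var(i | unselected)."""
        selected, remaining = [], list(range(len(K)))
        for _ in range(n_sensors):
            best, best_gain = None, -np.inf
            for i in remaining:
                rest = [j for j in remaining if j != i]
                gain = cond_var(K, i, selected) / cond_var(K, i, rest)
                if gain > best_gain:
                    best, best_gain = i, gain
            selected.append(best)
            remaining.remove(best)
        return selected

    print(greedy_mi_placement(K, 5))   # indices of the five selected sites
    ```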

  12. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.

    PubMed

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène

    2016-06-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air-to-liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is closer to 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances where samples cannot be transported.
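
    The emanation approach relies on radon-222 growing toward secular equilibrium with radium-226 after the sample is sealed. As a rough illustration only — ignoring the air-to-liquid partitioning that the optimal volume ratio addresses, and assuming complete emanation — the ingrowth correction looks like this:

    ```python
    import numpy as np

    LAMBDA_RN222 = np.log(2) / 3.8232   # radon-222 decay constant in 1/day (half-life ~3.82 d)

    def radon_ingrowth_fraction(t_days):
        """Fraction of the secular-equilibrium radon activity reached t_days after sealing."""
        return 1.0 - np.exp(-LAMBDA_RN222 * t_days)

    def radium_activity_from_radon(measured_radon_bq_per_l, t_days):
        """Radium-226 activity concentration that would produce the measured radon-222
        activity after t_days of accumulation in a sealed sample.
        Simplification: complete emanation, no air/liquid partitioning correction."""
        return measured_radon_bq_per_l / radon_ingrowth_fraction(t_days)

    # Example: 0.08 Bq/L of radon measured 10 days after sealing
    print(radium_activity_from_radon(0.08, 10.0))   # ~0.096 Bq/L of Ra-226
    ```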

  13. Determining the optimal forensic DNA analysis procedure following investigation of sample quality.

    PubMed

    Hedell, Ronny; Hedman, Johannes; Mostad, Petter

    2017-07-17

    Crime scene traces of various types are routinely sent to forensic laboratories for analysis, generally with the aim of addressing questions about the source of the trace. The laboratory may choose to analyse the samples in different ways depending on the type and quality of the sample, the importance of the case and the cost and performance of the available analysis methods. Theoretically well-founded guidelines for the choice of analysis method are, however, lacking in most situations. In this paper, it is shown how such guidelines can be created using Bayesian decision theory. The theory is applied to forensic DNA analysis, showing how the information from the initial qPCR analysis can be utilized. It is assumed the alternatives are to analyse the sample with a standard short tandem repeat (STR) DNA assay, to use the standard assay together with a complementary assay, or to cancel the analysis following quantification. The decision is based on information about the DNA amount and level of DNA degradation of the forensic sample, as well as case circumstances and the cost of analysis. Semi-continuous electropherogram models are used for simulation of DNA profiles and for computation of likelihood ratios. It is shown how tables and graphs, prepared beforehand, can be used to quickly find the optimal decision in forensic casework.
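
    The decision-theoretic structure can be illustrated with a toy expected-utility comparison of the three options. The success-probability model, costs, and utility values below are invented placeholders for illustration; they are not the semi-continuous electropherogram models or case-specific utilities used in the paper.

    ```python
    # Illustrative expected-utility comparison of analysis options. All numbers are
    # hypothetical placeholders, not values from the published framework.

    def p_usable_profile(dna_pg_per_ul, degradation_index, assay):
        """Crude, hypothetical model of the chance that an assay yields an interpretable profile."""
        base = {"standard": 0.7, "standard+complementary": 0.85}[assay]
        amount_factor = min(dna_pg_per_ul / 100.0, 1.0)       # more template helps, saturating
        degradation_factor = 1.0 / (1.0 + degradation_index)  # heavier degradation hurts
        return base * amount_factor * degradation_factor

    def expected_utility(dna_pg_per_ul, degradation_index, assay,
                         value_of_profile=100.0, costs=None):
        costs = costs or {"standard": 10.0, "standard+complementary": 25.0, "cancel": 0.0}
        if assay == "cancel":
            return -costs["cancel"]
        p = p_usable_profile(dna_pg_per_ul, degradation_index, assay)
        return p * value_of_profile - costs[assay]

    # A hypothetical sample described by its qPCR results
    sample = {"dna_pg_per_ul": 40.0, "degradation_index": 1.5}
    for option in ("standard", "standard+complementary", "cancel"):
        print(option, round(expected_utility(**sample, assay=option), 1))
    ```

    Tables of this kind, computed over a grid of DNA amounts and degradation levels, are what allow the optimal choice to be read off quickly in casework.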

  14. Stepped Care to Optimize Pain care Effectiveness (SCOPE) trial study design and sample characteristics.

    PubMed

    Kroenke, Kurt; Krebs, Erin; Wu, Jingwei; Bair, Matthew J; Damush, Teresa; Chumbler, Neale; York, Tish; Weitlauf, Sharon; McCalley, Stephanie; Evans, Erica; Barnd, Jeffrey; Yu, Zhangsheng

    2013-03-01

    Pain is the most common physical symptom in primary care, accounting for an enormous burden in terms of patient suffering, quality of life, work and social disability, and health care and societal costs. Although collaborative care interventions are well-established for conditions such as depression, fewer systems-based interventions have been tested for chronic pain. This paper describes the study design and baseline characteristics of the enrolled sample for the Stepped Care to Optimize Pain care Effectiveness (SCOPE) study, a randomized clinical effectiveness trial conducted in five primary care clinics. SCOPE has enrolled 250 primary care veterans with persistent (3 months or longer) musculoskeletal pain of moderate severity and randomized them to either the stepped care intervention or usual care control group. Using a telemedicine collaborative care approach, the intervention couples automated symptom monitoring with a telephone-based, nurse care manager/physician pain specialist team to treat pain. The goal is to optimize analgesic management using a stepped care approach to drug selection, symptom monitoring, dose adjustment, and switching or adding medications. All subjects undergo comprehensive outcome assessments at baseline, 1, 3, 6 and 12 months by interviewers blinded to treatment group. The primary outcome is pain severity/disability, and secondary outcomes include pain beliefs and behaviors, psychological functioning, health-related quality of life and treatment satisfaction. Innovations of SCOPE include optimized analgesic management (including a stepped care approach, opioid risk stratification, and criteria-based medication adjustment), automated monitoring, and centralized care management that can cover multiple primary care practices. Published by Elsevier Inc.

  15. A Procedure to Determine the Optimal Sensor Positions for Locating AE Sources in Rock Samples

    NASA Astrophysics Data System (ADS)

    Duca, S.; Occhiena, C.; Sambuelli, L.

    2015-03-01

    Within a research work aimed to better understand frost weathering mechanisms of rocks, laboratory tests have been designed to specifically assess a theoretical model of crack propagation due to ice segregation process in water-saturated and thermally microcracked cubic samples of Arolla gneiss. As the formation and growth of microcracks during freezing tests on rock material is accompanied by a sudden release of stored elastic energy, the propagation of elastic waves can be detected, at the laboratory scale, by acoustic emission (AE) sensors. The AE receiver array geometry is a sensitive factor influencing source location errors, for it can greatly amplify the effect of small measurement errors. Despite the large literature on the AE source location, little attention, to our knowledge, has been paid to the description of the experimental design phase. As a consequence, the criteria for sensor positioning are often not declared and not related to location accuracy. In the present paper, a tool for the identification of the optimal sensor position on a cubic shape rock specimen is presented. The optimal receiver configuration is chosen by studying the condition numbers of each of the kernel matrices, used for inverting the arrival time and finding the source location, and obtained for properly selected combinations between sensors and sources positions.
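
    A simple way to operationalize that criterion is to score each candidate receiver layout by the condition number of the linearized arrival-time kernel (the Jacobian of travel times with respect to origin time and source coordinates), evaluated over plausible source positions. The sketch below assumes a homogeneous velocity and invented sensor layouts on a 10 cm cube; it illustrates the idea rather than reproducing the authors' tool.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    V = 4500.0   # assumed P-wave velocity in the rock sample, m/s

    def traveltime_kernel(sensors, source):
        """Jacobian of arrival times with respect to (origin time, x, y, z),
        homogeneous-velocity model."""
        diff = source - sensors                     # shape (n_sensors, 3)
        r = np.linalg.norm(diff, axis=1, keepdims=True)
        return np.hstack([np.ones((len(sensors), 1)), diff / (V * r)])

    def worst_condition_number(sensors, trial_sources):
        """Largest condition number of the kernel over plausible source locations;
        lower is better (timing errors amplify less into location errors)."""
        return max(np.linalg.cond(traveltime_kernel(sensors, s)) for s in trial_sources)

    side = 0.10                                     # 10 cm cubic specimen
    trial_sources = rng.uniform(0.02, 0.08, size=(200, 3))

    # Two candidate layouts of 8 sensors: all on one face vs. spread over opposite faces
    one_face = np.column_stack([rng.uniform(0, side, (8, 2)), np.zeros(8)])
    spread = np.array([[x, y, z] for x in (0.01, 0.09)
                                 for y in (0.01, 0.09)
                                 for z in (0.0, side)])

    for name, layout in [("one face", one_face), ("spread", spread)]:
        print(name, round(worst_condition_number(layout, trial_sources), 1))
    ```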

  16. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  17. Dynamic simulation tools for the analysis and optimization of novel collection, filtration and sample preparation systems

    SciTech Connect

    Clague, D; Weisgraber, T; Rockway, J; McBride, K

    2006-02-12

    The focus of research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phases) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which among other needs, have a need for high efficiency filtration, collection and sample preparation systems. Hence, the said goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities where developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly using Digital Particle Image Velocimetry or published results.

  18. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  19. Spatially-Optimized Sequential Sampling Plan for Cabbage Aphids Brevicoryne brassicae L. (Hemiptera: Aphididae) in Canola Fields.

    PubMed

    Severtson, Dustin; Flower, Ken; Nansen, Christian

    2016-08-01

    The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact.
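
    The sequential ingredient of such plans can be illustrated with Wald's sequential probability ratio test for a binomial proportion of infested plants: sampling stops as soon as the cumulative count crosses one of two parallel decision lines. The hypothesis proportions and error rates below are illustrative choices, not the parameters of the published plan, which additionally uses spatial stratification and edge-first sampling.

    ```python
    import math

    def sprt_boundaries(p0, p1, alpha=0.10, beta=0.10):
        """Wald SPRT boundaries (intercepts and common slope) for the cumulative count of
        infested plants, testing H0: proportion = p0 against H1: proportion = p1."""
        denom = math.log(p1 / p0) + math.log((1 - p0) / (1 - p1))
        slope = math.log((1 - p0) / (1 - p1)) / denom
        lower_intercept = math.log(beta / (1 - alpha)) / denom
        upper_intercept = math.log((1 - beta) / alpha) / denom
        return lower_intercept, upper_intercept, slope

    def classify(infested_so_far, n_sampled, p0=0.10, p1=0.30):
        low, high, slope = sprt_boundaries(p0, p1)
        if infested_so_far >= high + slope * n_sampled:
            return "stop: infestation above threshold, treat"
        if infested_so_far <= low + slope * n_sampled:
            return "stop: infestation below threshold, do not treat"
        return "continue sampling"

    # Example: 4 infested plants found among the first 12 inspected
    print(classify(4, 12))
    ```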

  20. A sampling optimization analysis of soil-bugs diversity (Crustacea, Isopoda, Oniscidea).

    PubMed

    Messina, Giuseppina; Cazzolla Gatti, Roberto; Droutsa, Angeliki; Barchitta, Martina; Pezzino, Elisa; Agodi, Antonella; Lombardo, Bianca Maria

    2016-01-01

    Biological diversity analysis is among the most informative approaches to describe communities and regional species compositions. Soil ecosystems include large numbers of invertebrates, among which soil bugs (Crustacea, Isopoda, Oniscidea) play significant ecological roles. The aim of this study was to provide guidance on optimizing the sampling effort, to monitor the diversity of this taxon efficiently, to analyze its seasonal patterns of species composition, and ultimately to understand better the coexistence of so many species over a relatively small area. Terrestrial isopods were collected at the Natural Reserve "Saline di Trapani e Paceco" (Italy), using pitfall traps monitored monthly over 2 years. We analyzed parameters of α- and β-diversity and calculated a number of indexes and measures to disentangle diversity patterns. We also used various approaches to analyze changes in biodiversity over time, such as distributions of species abundances and accumulation and rarefaction curves. With respect to species richness and total abundance of individuals, spring proved to be the best season to monitor Isopoda, reducing sampling effort and saving resources without losing information, while in both years abundances peaked between summer and autumn. This suggests that evaluations of β-diversity are maximized if samples are first collected during the spring and then between summer and autumn. Sampling during these coupled seasons allows collection of a number of species close to the γ-diversity (24 species) of the area. Finally, our results show that seasonal shifts in community composition (i.e., dynamic fluctuations in species abundances during the four seasons) may minimize competitive interactions, contribute to stabilizing total abundances, and allow the coexistence of phylogenetically close species within the ecosystem.
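
    Sample-based rarefaction, one of the tools mentioned above, can be computed by repeatedly pooling the samples in random order and averaging the number of species detected; the curve flattening out is the usual signal that extra sampling adds little. The abundance matrix below is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical abundance matrix: rows = monthly pitfall-trap samples, columns = isopod species
    abundance = rng.poisson(lam=[6, 4, 2, 1, 1, 0.5, 0.3, 0.2], size=(24, 8))

    def rarefaction_curve(abundance, n_permutations=200):
        """Mean number of species detected as a function of the number of samples pooled,
        averaged over random orderings of the samples."""
        n_samples = abundance.shape[0]
        curve = np.zeros(n_samples)
        for _ in range(n_permutations):
            order = rng.permutation(n_samples)
            pooled = np.cumsum(abundance[order], axis=0)
            curve += (pooled > 0).sum(axis=1)
        return curve / n_permutations

    curve = rarefaction_curve(abundance)
    for k in (1, 6, 12, 24):
        print(f"{k:2d} samples -> {curve[k - 1]:.1f} species on average")
    ```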

  1. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
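
    As a crude illustration of why the choice of sampling matrix matters, the sketch below compares a random Gaussian matrix with a random Boolean (±1) matrix using mutual coherence as a rough proxy for recovery quality. This is not the data-driven optimization described in the paper; coherence is only a loose indicator, and the dimensions are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def mutual_coherence(phi):
        """Largest normalized inner product between distinct columns of the sampling matrix;
        lower coherence loosely favours better compressive-sensing recovery."""
        cols = phi / np.linalg.norm(phi, axis=0, keepdims=True)
        gram = np.abs(cols.T @ cols)
        np.fill_diagonal(gram, 0.0)
        return gram.max()

    n, m = 256, 64                                         # signal dimension, measurements
    phi_gauss = rng.normal(size=(m, n))                    # real-valued embedding
    phi_bool = rng.choice([-1.0, 1.0], size=(m, n))        # Boolean (+/-1) embedding: no multipliers

    print("Gaussian coherence:", round(mutual_coherence(phi_gauss), 3))
    print("Boolean coherence :", round(mutual_coherence(phi_bool), 3))
    ```

    The hardware argument is that a ±1 (or 0/1) matrix replaces multiplications with additions and sign flips during acquisition, which is where the reported energy and area savings come from.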

  2. Bayesian inversion of geophysical data using combined particle swarm optimization and Metropolis sampling

    NASA Astrophysics Data System (ADS)

    Danek, T.; Wojdyla, M.; Farquharson, C.

    2012-04-01

    Bayesian inversion of geophysical data has many advantages over the classic deterministic approach. Basins of attraction of global solutions are more likely to be found, solution uncertainty can be quantified, and proper parametrizations consistent with the resolving power of the data can be chosen using Bayesian information criteria. Large sets of results, sometimes for different parametrizations, provide additional information which can help address interpretational problems with respect to noise level, changes in parametrization, equivalence or influence of other factors like multidimensional structures or anisotropy. Additionally, a large set of results is an interpretational tool itself. If some external geological information can be included, the interpreter is able to choose the best result from the database of accepted results. Sometimes a non-optimal solution that nevertheless has an acceptable misfit can be in better agreement with the geological model than the global best result. The main disadvantage of the Bayesian approach is its computational intensity and its sometimes slow convergence. These problems can be overcome by combining typical stochastic sampling methods (e.g. Metropolis algorithm) with powerful metaheuristics (e.g. particle swarm optimization) and by multilevel parallelization of computations. Due to the natural parallelism of many geophysical forward solvers (first level of parallelization) and metaheuristic optimization engines (second level of parallelization), the proposed hybrid method was implemented for use in massively parallel environments such as PC or GPU clusters. The results to be presented are focused on 1D magnetotelluric (MT) inversion for both real and synthetic data. However, examples of the application of the proposed approach to other geophysical problems will be presented and discussed. 1D MT was chosen for the main tests because of its computational simplicity, which makes it feasible for more demanding inversion

  3. Cutoff Scores for MMPI-2 and MMPI-2-RF Cognitive-Somatic Validity Scales for Psychometrically Defined Malingering Groups in a Military Sample.

    PubMed

    Jones, Alvin

    2016-07-12

    This research examined cutoff scores for MMPI-2 and MMPI-2-RF validity scales specifically developed to assess non-credible reporting of cognitive and/or somatic symptoms. The validity scales examined included the Response Bias Scale (RBS), the Symptom Validity Scales (FBS, FBS-r), Infrequent Somatic Responses scale (Fs), and the Henry-Heilbronner Indexes (HHI, HHI-r). Cutoffs were developed by comparing a psychometrically defined non-malingering group with three psychometrically defined malingering groups (probable, probable to definite, and definite malingering) and a group that combined all malingering groups. The participants in this research were drawn from a military sample consisting largely of patients with traumatic brain injury (mostly mild traumatic brain injury). Specificities for cutoffs of at least 0.90 are provided. Sensitivities, predictive values, and likelihood ratios are also provided. RBS had the largest mean effect size (d) when the malingering groups were compared to the non-malingering group (d range = 1.23-1.58). Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by the terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on a neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency.
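
    The combinatorial step — choosing which candidate road-accessible sites to keep — is the part handled by simulated annealing. The sketch below minimizes a simple coverage criterion (mean distance from grid points to the nearest selected site); the candidate sites, grid, cooling schedule, and criterion are placeholders rather than the soil-landscape objective used in the study.

    ```python
    import math
    import random

    random.seed(5)

    # Hypothetical candidate sampling sites along a road network (x, y in km)
    candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(80)]
    grid = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]   # area to be represented

    def coverage_cost(sites):
        """Mean distance from each grid point to its nearest selected site (lower = better)."""
        return sum(min(math.dist(g, s) for s in sites) for g in grid) / len(grid)

    def simulated_annealing(n_sites=15, steps=2000, t0=1.0, cooling=0.999):
        current = random.sample(candidates, n_sites)
        cost = coverage_cost(current)
        best, best_cost, t = list(current), cost, t0
        for _ in range(steps):
            proposal = list(current)
            # Swap one selected site for a currently unused candidate
            proposal[random.randrange(n_sites)] = random.choice(
                [c for c in candidates if c not in proposal])
            new_cost = coverage_cost(proposal)
            if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
                current, cost = proposal, new_cost
                if cost < best_cost:
                    best, best_cost = list(current), cost
            t *= cooling
        return best, best_cost

    sites, cost = simulated_annealing()
    print(f"optimized mean nearest-site distance: {cost:.2f} km")
    ```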

  5. An optimized procedure for exosome isolation and analysis using serum samples: Application to cancer biomarker discovery.

    PubMed

    Li, Mu; Rai, Alex J; DeCastro, G Joel; Zeringer, Emily; Barta, Timothy; Magdaleno, Susan; Setterquist, Robert; Vlassov, Alexander V

    2015-10-01

    Exosomes are RNA and protein-containing nanovesicles secreted by all cell types and found in abundance in body fluids, including blood, urine and cerebrospinal fluid. These vesicles seem to be a perfect source of biomarkers, as their cargo largely reflects the content of parental cells, and exosomes originating from all organs can be obtained from circulation through minimally invasive or non-invasive means. Here we describe an optimized procedure for exosome isolation and analysis using clinical samples, starting from quick and robust extraction of exosomes with Total exosome isolation reagent, then isolation of RNA followed by qRT-PCR. Effectiveness of this workflow is exemplified by analysis of the miRNA content of exosomes derived from serum samples - obtained from the patients with metastatic prostate cancer, treated prostate cancer patients who have undergone prostatectomy, and control patients without prostate cancer. Three promising exosomal microRNA biomarkers were identified, discriminating these groups: hsa-miR375, hsa-miR21, hsa-miR574. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    NASA Astrophysics Data System (ADS)

    Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.

    2004-04-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and has been found to perform well.

  7. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives.

    PubMed

    Flegg, Jennifer A; Guérin, Philippe J; Nosten, Francois; Ashley, Elizabeth A; Phyo, Aung Pyae; Dondorp, Arjen M; Fairhurst, Rick M; Socheat, Duong; Borrmann, Steffen; Björkman, Anders; Mårtensson, Andreas; Mayxay, Mayfong; Newton, Paul N; Bethell, Delia; Se, Youry; Noedl, Harald; Diakite, Mahamadou; Djimde, Abdoulaye A; Hien, Tran T; White, Nicholas J; Stepniewska, Kasia

    2013-11-13

    The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate "reference" half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the "true" half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. The median (range) parasite half-life for all clinical studies combined was 3.1 (0.7-12.9) hours. Schedule A1
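
    The core of any such schedule comparison is the half-life estimate itself, essentially the slope of log parasite density against time over the declining phase. The sketch below simulates one noisy clearance profile and refits it under a dense and a sparse schedule; the simulated profile, noise level, and detection limit are assumptions, and the WWARN PCE tool additionally models lag and tail phases that are ignored here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    true_half_life = 3.1                                    # hours
    k = np.log(2) / true_half_life
    times_6h = np.arange(0, 73, 6.0)                        # reference 6-hourly schedule
    sparse_times = np.array([0, 12, 24, 36, 48, 60, 72.0])  # sparser schedule (0, 12, 24 h, then 12-hourly)

    def simulate_counts(times, p0=1e5, noise_sd=0.2):
        """Log-normally noisy parasite densities decaying exponentially after treatment."""
        return p0 * np.exp(-k * times) * rng.lognormal(0.0, noise_sd, len(times))

    def estimate_half_life(times, counts, detection_limit=50.0):
        """Half-life from the slope of log(density) vs time over points above the detection limit."""
        keep = counts > detection_limit
        slope, _ = np.polyfit(times[keep], np.log(counts[keep]), 1)
        return np.log(2) / -slope

    counts = simulate_counts(times_6h)
    print("6-hourly estimate:", round(estimate_half_life(times_6h, counts), 2), "h")
    sparse_counts = simulate_counts(sparse_times)
    print("sparse estimate  :", round(estimate_half_life(sparse_times, sparse_counts), 2), "h")
    ```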

  8. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    PubMed Central

    2013-01-01

    Background: The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods: A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results: The median (range) parasite half-life for all clinical studies combined was 3.1 (0

  9. Dynamic routing and spectrum assignment based on multilayer virtual topology and ant colony optimization in elastic software-defined optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Zhang, Qi; Tian, Qinghua; Tian, Feng; Rao, Lan; Xin, Xiangjun

    2017-07-01

    Elastic software-defined optical networks greatly improve the flexibility of the optical switching network while it has brought challenges to the routing and spectrum assignment (RSA). A multilayer virtual topology model is proposed to solve RSA problems. Two RSA algorithms based on the virtual topology are proposed, which are the ant colony optimization (ACO) algorithm of minimum consecutiveness loss and the ACO algorithm of maximum spectrum consecutiveness. Due to the computing power of the control layer in the software-defined network, the routing algorithm avoids the frequent link-state information between routers. Based on the effect of the spectrum consecutiveness loss on the pheromone in the ACO, the path and spectrum of the minimal impact on the network are selected for the service request. The proposed algorithms have been compared with other algorithms. The results show that the proposed algorithms can reduce the blocking rate by at least 5% and perform better in spectrum efficiency. Moreover, the proposed algorithms can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness.

  10. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    PubMed

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups.Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample.Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted in the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted to the traditionally used classification.Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11) in households with children and/or adolescents <18 y of age (score range: 0-14), and 1/2, between 4 and 5 (4/5), and between 6 and 7 (6/7) in adult-only households (range: 0-8). With minor variations, the same cutoffs were also identified in the macroregions. Although our findings confirm, in general, the classification currently used, the limit of 1/2 (compared with 0/1) for separating the milder from the baseline category emerged consistently in all analyses.Conclusions: Nationwide findings corroborate previous local

  11. A study of the thermal decomposition of adulterated cocaine samples under optimized aerobic pyrolytic conditions.

    PubMed

    Gostic, T; Klemenc, S; Stefane, B

    2009-05-30

    The pyrolysis behaviour of pure cocaine base as well as the influence of various additives was studied using conditions that are relevant to the smoking of illicit cocaine by humans. For this purpose an aerobic pyrolysis device was developed and the experimental conditions were optimized. In the first part of our study the optimization of some basic experimental parameters of the pyrolysis was performed, i.e., the furnace temperature, the sampling start time, the heating period, the sampling time, and the air-flow rate through the system. The second part of the investigation focused on the volatile products formed during the pyrolysis of a pure cocaine free base and mixtures of cocaine base and adulterants. The anaesthetics lidocaine, benzocaine, procaine, the analgesics phenacetine and paracetamol, and the stimulant caffeine were used as the adulterants. Under the applied experimental conditions complete volatilization of the samples was achieved, i.e., the residuals of the studied compounds were not detected in the pyrolysis cell. Volatilization of the pure cocaine base showed that the cocaine recovery available for inhalation (adsorbed on traps) was approximately 76%. GC-MS and NMR analyses of the smoke condensate revealed the presence of some additional cocaine pyrolytic products, such as anhydroecgonine methyl ester (AEME), benzoic acid (BA) and carbomethoxycycloheptatrienes (CMCHTs). Experiments with different cocaine-adulterant mixtures showed that the addition of the adulterants changed the thermal behaviour of the cocaine. The most significant of these was the effect of paracetamol. The total recovery of the cocaine (adsorbed on traps and in a glass tube) from the 1:1 cocaine-paracetamol mixture was found to be only 3.0+/-0.8%, versus 81.4+/-2.9% for the pure cocaine base. The other adulterants showed less-extensive effects on the recovery of cocaine, but the pyrolysis of the cocaine-procaine mixture led to the formation of some unique pyrolytic products

  12. Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors

    USGS Publications Warehouse

    Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.

    2013-01-01

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.
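
    The underlying arithmetic is compact: a flow-weighted mean water concentration and the ratio of fish to water concentrations. The numbers below are invented and serve only to show the unit handling.

    ```python
    import numpy as np

    # Hypothetical paired observations for one stream site
    fish_hg_ng_per_g = 220.0                                                 # game-fish Hg concentration
    water_fmehg_ng_per_l = np.array([0.05, 0.08, 0.22, 0.31, 0.12, 0.06])    # filtered MeHg samples
    streamflow_m3_per_s = np.array([1.2, 1.5, 4.8, 6.1, 2.0, 1.1])           # discharge at sampling times

    def flow_weighted_mean(conc, flow):
        """Flow-weighted mean concentration: each sample weighted by discharge at collection time."""
        return float(np.sum(conc * flow) / np.sum(flow))

    hg_water = flow_weighted_mean(water_fmehg_ng_per_l, streamflow_m3_per_s)

    # BAF = fish concentration / water concentration.
    # Fish ng/g x 1000 = ng/kg; dividing by water ng/L gives a BAF in L/kg.
    baf = (fish_hg_ng_per_g * 1000.0) / hg_water
    print(f"flow-weighted FMeHg: {hg_water:.3f} ng/L, log10 BAF: {np.log10(baf):.2f}")
    ```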

  13. Optimizing EUS-guided liver biopsy sampling: comprehensive assessment of needle types and tissue acquisition techniques.

    PubMed

    Schulman, Allison R; Thompson, Christopher C; Odze, Robert; Chan, Walter W; Ryou, Marvin

    2017-02-01

    EUS-guided liver biopsy sampling using FNA and, more recently, fine-needle biopsy (FNB) needles has been reported with discrepant diagnostic accuracy, in part due to differences in methodology. We aimed to compare liver histologic yields of 4 EUS-based needles and 2 percutaneous needles to identify optimal number of needle passes and suction. Six needle types were tested on human cadaveric tissue: one 19G FNA needle, one existing 19G FNB needle, one novel 19G FNB needle, one 22G FNB needle, and two 18G percutaneous needles (18G1 and 18G2). Two needle excursion patterns (1 vs 3 fanning passes) were performed on all EUS needles. Primary outcome was number of portal tracts. Secondary outcomes were degree of fragmentation and specimen adequacy. Pairwise comparisons were performed using t tests, with a 2-sided P < .05 considered to be significant. Multivariable regression analysis was performed. In total, 288 liver biopsy samplings (48 per needle type) were performed. The novel 19G FNB needle had significantly increased mean portal tracts compared with all needle types. The 22G FNB needle had significantly increased portal tracts compared with the 18G1 needle (3.8 vs 2.5, P < .001) and was not statistically different from the 18G2 needle (3.8 vs 3.5, P = .68). FNB needles (P < .001) and 3 fanning passes (P ≤ .001) were independent predictors of the number of portal tracts. A novel 19G EUS-guided liver biopsy needle provides superior histologic yield compared with 18G percutaneous needles and existing 19G FNA and core needles. Moreover, the 22G FNB needle may be adequate for liver biopsy sampling. Investigations are underway to determine whether these results can be replicated in a clinical setting. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  14. Bacterial screening of platelet concentrates on day 2 and 3 with flow cytometry: the optimal sampling time point?

    PubMed Central

    Vollmer, Tanja; Schottstedt, Volkmar; Bux, Juergen; Walther-Wenke, Gabriele; Knabbe, Cornelius; Dreier, Jens

    2014-01-01

    Background: There is growing concern about the residual risk of bacterial contamination of platelet concentrates in Germany, despite the reduction of the shelf-life of these concentrates and the introduction of bacterial screening. In this study, the applicability of the BactiFlow flow cytometric assay for bacterial screening of platelet concentrates on day 2 or 3 of their shelf-life was assessed in two German blood services. The results were used to evaluate currently implemented or newly discussed screening strategies. Materials and methods: Two thousand and ten apheresis platelet concentrates were tested on day 2 or day 3 after donation using BactiFlow flow cytometry. Reactive samples were confirmed by the BacT/Alert culture system. Results: Twenty-four of the 2,100 platelet concentrates tested were reactive in the first test by BactiFlow. Of these 24 platelet concentrates, 12 were false-positive and the other 12 were initially reactive. None of the microbiological cultures of the initially reactive samples was positive. Parallel examination of 1,026 platelet concentrates by culture revealed three positive platelet concentrates with bacteria detected only in the anaerobic culture bottle and identified as Staphylococcus species. Two platelet concentrates were confirmed positive for Staphylococcus epidermidis by culture. Retrospective analysis of the growth kinetics of the bacteria indicated that the bacterial titres were most likely below the diagnostic sensitivity of the BactiFlow assay (<300 CFU/mL) and probably had no transfusion relevance. Conclusions: The BactiFlow assay is very convenient for bacterial screening of platelet concentrates independently of the testing day and the screening strategy. Although the optimal screening strategy could not be defined, this study provides further data to help achieve this goal. PMID:24887230

  15. Bacterial screening of platelet concentrates on day 2 and 3 with flow cytometry: the optimal sampling time point?

    PubMed

    Vollmer, Tanja; Schottstedt, Volkmar; Bux, Juergen; Walther-Wenke, Gabriele; Knabbe, Cornelius; Dreier, Jens

    2014-07-01

    There is growing concern about the residual risk of bacterial contamination of platelet concentrates in Germany, despite the reduction of the shelf-life of these concentrates and the introduction of bacterial screening. In this study, the applicability of the BactiFlow flow cytometric assay for bacterial screening of platelet concentrates on day 2 or 3 of their shelf-life was assessed in two German blood services. The results were used to evaluate currently implemented or newly discussed screening strategies. Two thousand and ten apheresis platelet concentrates were tested on day 2 or day 3 after donation using BactiFlow flow cytometry. Reactive samples were confirmed by the BacT/Alert culture system. Twenty-four of the 2,100 platelet concentrates tested were reactive in the first test by BactiFlow. Of these 24 platelet concentrates, 12 were false-positive and the other 12 were initially reactive. None of the microbiological cultures of the initially reactive samples was positive. Parallel examination of 1,026 platelet concentrates by culture revealed three positive platelet concentrates with bacteria detected only in the anaerobic culture bottle and identified as Staphylococcus species. Two platelet concentrates were confirmed positive for Staphylococcus epidermidis by culture. Retrospective analysis of the growth kinetics of the bacteria indicated that the bacterial titres were most likely below the diagnostic sensitivity of the BactiFlow assay (<300 CFU/mL) and probably had no transfusion relevance. The BactiFlow assay is very convenient for bacterial screening of platelet concentrates independently of the testing day and the screening strategy. Although the optimal screening strategy could not be defined, this study provides further data to help achieve this goal.

  16. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    NASA Astrophysics Data System (ADS)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  17. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient-specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient-specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the functional form of the coefficients. Eleven head-and-neck cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during head-and-neck cancer radiotherapy can be represented using the first 3-4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
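
    The stationary version of such a generator reduces to PCA on the stack of daily organ-shape vectors plus random draws of the mode coefficients. The sketch below uses synthetic shape vectors and a Gaussian coefficient model; it omits the time-varying least-squares regression that the paper uses for nonstationary variation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic stand-in for daily organ geometries: 30 treatment days x 300 surface-point coordinates
    n_days, n_coords = 30, 300
    mean_shape = rng.normal(0, 1, n_coords)
    true_modes = rng.normal(0, 1, (3, n_coords))                 # three underlying deformation modes
    daily_shapes = (mean_shape
                    + rng.normal(0, 1, (n_days, 3)) @ true_modes
                    + rng.normal(0, 0.05, (n_days, n_coords)))   # small residual noise

    def fit_organ_sample_generator(shapes, n_modes=3):
        """PCA of daily organ variation: mean shape, dominant eigenvectors, and the
        standard deviation of the coefficients along each mode (stationary model)."""
        mean = shapes.mean(axis=0)
        _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
        return mean, vt[:n_modes], s[:n_modes] / np.sqrt(len(shapes) - 1)

    def generate_samples(mean, modes, coeff_sd, n_samples):
        """Draw random organ realizations from the estimated variation PDF."""
        coeffs = rng.normal(0.0, coeff_sd, (n_samples, len(coeff_sd)))
        return mean + coeffs @ modes

    mean, modes, coeff_sd = fit_organ_sample_generator(daily_shapes)
    samples = generate_samples(mean, modes, coeff_sd, n_samples=5)
    print(samples.shape, "random organ samples; mode coefficient SDs:", np.round(coeff_sd, 2))
    ```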

  18. Towards an optimal sampling of peculiar velocity surveys for Wiener Filter reconstructions

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan

    2017-06-01

    The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims at identifying the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test the dependence of the reconstruction quality on the distribution and nature of data points. Mock data sets, extending to 250 h-1 Mpc, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogues. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons, including residual distributions, cell-to-cell and bulk velocities, imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks that consist only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the reconstruction quality depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample with decreasing number of data points with the distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km s-1 up to the extreme edge of the catalogues or up to a distance of three times the mean distance of data points for non-uniform catalogues. The overall conclusions hold when errors are added.
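
    The reconstruction step itself is the standard Wiener Filter / constrained-mean estimator, s_WF = S_sd (S_dd + N)^-1 d, built from the assumed prior covariance and the noise covariance of the radial-velocity data. The one-dimensional toy below illustrates that linear-algebra step only; the grid, kernel, and noise level are placeholders, not the constrained-simulation setup of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Toy 1D Gaussian random field on a grid, observed at a few noisy data points
    n_grid = 200
    x = np.linspace(0.0, 1.0, n_grid)
    cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.05**2)      # assumed prior covariance S
    field = rng.multivariate_normal(np.zeros(n_grid), cov + 1e-8 * np.eye(n_grid))

    obs_idx = rng.choice(n_grid, size=25, replace=False)
    noise_sd = 0.3
    data = field[obs_idx] + rng.normal(0.0, noise_sd, len(obs_idx))

    def wiener_filter(cov, obs_idx, data, noise_sd):
        """Wiener Filter reconstruction: s_WF = S_sd (S_dd + N)^-1 d."""
        S_dd = cov[np.ix_(obs_idx, obs_idx)] + noise_sd**2 * np.eye(len(obs_idx))
        S_sd = cov[:, obs_idx]
        return S_sd @ np.linalg.solve(S_dd, data)

    recon = wiener_filter(cov, obs_idx, data, noise_sd)
    rms = float(np.sqrt(np.mean((recon - field) ** 2)))
    print("rms residual of reconstruction:", round(rms, 3))
    ```

    Comparing such residuals for mocks with different numbers, distributions, and groupings of data points is exactly the kind of test the survey-design study performs at much larger scale.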

  19. Improved peptide mass fingerprinting matches via optimized sample preparation in MALDI mass spectrometry.

    PubMed

    Padliya, Neerav D; Wood, Troy D

    2008-10-03

    Peptide mass fingerprinting (PMF) is a powerful technique in which experimentally measured m/z values of peptides resulting from a protein digest form the basis for a characteristic fingerprint of the intact protein. Due to its propensity to generate singly charged ions, along with its relative insensitivity to salts and buffers, matrix-assisted laser desorption and ionization (MALDI)-time-of-flight mass spectrometry (TOFMS) is the MS method of choice for PMF. The qualitative features of the mass spectrum can be selectively tuned by employing different methods to prepare the protein digest and matrix for MALDI-TOFMS. The selective tuning of MALDI mass spectra in order to optimize PMF is addressed here. Bovine serum albumin, carbonic anhydrase, cytochrome c, hemoglobin alpha- and beta-chain, and myoglobin were digested with trypsin and then analyzed by MALDI-TOFMS. 2,5-dihydroxybenzoic acid (DHB) and alpha-cyano-4-hydroxycinnamic acid (CHCA) were prepared using six different sample preparation methods: dried droplet, application of protein digest on MALDI plate followed by addition of matrix, dried droplet with vacuum drying, overlayer, sandwich, and dried droplet with heating. Improved results were obtained for the matrix CHCA using a modification of the dried droplet method in which the MALDI plate was heated to 80 degrees C prior to matrix application, an improvement supported by observations from scanning electron microscopy. Although each protein was found to have a different optimum sample preparation method for PMF, in general higher sequence coverage for PMF was obtained using DHB. The best PMF results were obtained when all of the mass spectral data for a particular protein digest were convolved together.
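
    Downstream of sample preparation, the fingerprint match itself amounts to comparing measured m/z values with theoretical digest masses inside a tolerance and summing the matched residues into a sequence coverage. The peptide masses, lengths, and tolerance below are invented placeholders; real workflows compute the theoretical masses from the protein sequence and score matches more carefully.

    ```python
    # Minimal peptide mass fingerprint matching: counts how many theoretical tryptic peptide
    # masses are matched by measured m/z values within a tolerance and reports an approximate
    # sequence coverage. All masses and lengths below are invented placeholders.

    theoretical_peptides = [          # (monoisotopic [M+H]+ m/z, peptide length in residues)
        (927.49, 8), (1163.62, 10), (1439.81, 13), (1567.74, 14),
        (1639.94, 15), (2045.03, 18), (2211.10, 20),
    ]
    measured_mz = [927.51, 1163.60, 1440.12, 1639.96, 2211.04, 842.51]
    protein_length = 120

    def match_fingerprint(theoretical, measured, tol_ppm=100.0):
        matched = []
        for mz_theo, length in theoretical:
            tol = mz_theo * tol_ppm * 1e-6
            if any(abs(mz_theo - mz_obs) <= tol for mz_obs in measured):
                matched.append((mz_theo, length))
        return matched

    matched = match_fingerprint(theoretical_peptides, measured_mz)
    coverage = 100.0 * sum(length for _, length in matched) / protein_length
    print(f"{len(matched)} peptides matched, ~{coverage:.0f}% sequence coverage")
    ```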

  20. Quality analysis of salmon calcitonin in a polymeric bioadhesive pharmaceutical formulation: sample preparation optimization by DOE.

    PubMed

    D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart

    2010-12-01

    A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer.

  1. Optimal satellite sampling to resolve global-scale dynamics in the I-T system

    NASA Astrophysics Data System (ADS)

    Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.

    2016-12-01

    The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low Earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm time and quiet time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and satellite schemes can quantify the plasma and neutral convection patterns and global-scale distributions of plasma density, neutral density, and composition, and their response to changes in the IMF. While the global circulation models are data-starved, and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The result will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.

  2. Optimality, sample size, and power calculations for the sequential parallel comparison design.

    PubMed

    Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A

    2011-10-15

    The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high-placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1 and 2 degree of freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
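
    The following hedged sketch conveys the flavor of a power calculation for a two-stage SPCD with binary outcomes via simulation. It combines the stage-1 and stage-2 z-statistics with a simple fixed weight; the weights, allocation ratio, and response rates are illustrative assumptions and do not reproduce the paper's score tests or asymptotic formulae.

```python
# Monte Carlo power sketch for a sequential parallel comparison design (SPCD)
# with binary response; the weighted-z combination below is a simplification,
# not the 1- or 2-df score tests developed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def two_prop_z(x1, n1, x0, n0):
    """Unpooled two-proportion z-statistic (drug minus placebo)."""
    p1, p0 = x1 / n1, x0 / n0
    se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return (p1 - p0) / se if se > 0 else 0.0

def spcd_power(n_total=300, alloc_drug1=1/3, p_placebo=0.35, p_drug=0.50,
               w=0.6, alpha=0.05, n_sim=2000):
    rejections = 0
    for _ in range(n_sim):
        # Stage 1: randomize everyone to drug or placebo.
        n_drug1 = int(n_total * alloc_drug1)
        n_plac1 = n_total - n_drug1
        x_drug1 = rng.binomial(n_drug1, p_drug)
        x_plac1 = rng.binomial(n_plac1, p_placebo)
        z1 = two_prop_z(x_drug1, n_drug1, x_plac1, n_plac1)

        # Stage 2: placebo non-responders are re-randomized 1:1.
        nonresp = n_plac1 - x_plac1
        n_drug2 = nonresp // 2
        n_plac2 = nonresp - n_drug2
        x_drug2 = rng.binomial(n_drug2, p_drug)
        x_plac2 = rng.binomial(n_plac2, p_placebo)
        z2 = two_prop_z(x_drug2, n_drug2, x_plac2, n_plac2)

        # Fixed-weight combination of the two stages.
        z = (w * z1 + (1 - w) * z2) / np.sqrt(w**2 + (1 - w)**2)
        if z > stats.norm.ppf(1 - alpha / 2):
            rejections += 1
    return rejections / n_sim

print("simulated power:", spcd_power())
```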

  3. Defining optimal cutoff scores for cognitive impairment using Movement Disorder Society Task Force criteria for mild cognitive impairment in Parkinson's disease.

    PubMed

    Goldman, Jennifer G; Holden, Samantha; Bernard, Bryan; Ouyang, Bichun; Goetz, Christopher G; Stebbins, Glenn T

    2013-12-01

    The recently proposed Movement Disorder Society (MDS) Task Force diagnostic criteria for mild cognitive impairment in Parkinson's disease (PD-MCI) represent a first step toward a uniform definition of PD-MCI across multiple clinical and research settings. However, several questions regarding specific criteria remain unanswered, including optimal cutoff scores by which to define impairment on neuropsychological tests. Seventy-six non-demented PD patients underwent comprehensive neuropsychological assessment and were classified as PD-MCI or PD with normal cognition (PD-NC). The concordance of PD-MCI diagnosis by MDS Task Force Level II criteria (comprehensive assessment), using a range of standard deviation (SD) cutoff scores, was compared with our consensus diagnosis of PD-MCI or PD-NC. Sensitivity, specificity, and positive and negative predictive values were examined for each cutoff score. PD-MCI subtype classification and distribution of cognitive domains impaired were evaluated. Concordance for PD-MCI diagnosis was greatest for defining impairment on neuropsychological tests using a 2 SD cutoff score below appropriate norms. This cutoff also provided the best discriminatory properties for separating PD-MCI from PD-NC compared with other cutoff scores. With the MDS PD-MCI criteria, multiple domain impairment was more frequent than single domain impairment, with predominant executive function, memory, and visuospatial function deficits. Application of the MDS Task Force PD-MCI Level II diagnostic criteria demonstrates good sensitivity and specificity at a 2 SD cutoff score. The predominance of multiple domain impairment in PD-MCI with the Level II criteria suggests not only influences of testing abnormality requirements, but also the widespread nature of cognitive deficits within PD-MCI. © 2013 Movement Disorder Society.
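
    To make the cutoff comparison concrete, the sketch below screens several standard-deviation cutoffs against a reference diagnosis and reports sensitivity and specificity. The synthetic z-scores, the hypothetical consensus labels, and the "at least two impaired tests" rule are assumptions for illustration, not the study's neuropsychological battery or classification procedure.

```python
# Sketch: sensitivity/specificity of MCI classification across SD cutoffs,
# using synthetic test z-scores and hypothetical consensus labels.
import numpy as np

rng = np.random.default_rng(2)

n_patients, n_tests = 76, 10
z_scores = rng.normal(-0.5, 1.2, size=(n_patients, n_tests))
consensus_mci = rng.random(n_patients) < 0.4      # hypothetical reference labels

def classify_mci(z, cutoff_sd, min_impaired=2):
    """Label a patient MCI if at least `min_impaired` tests fall below -cutoff_sd."""
    return (z < -cutoff_sd).sum(axis=1) >= min_impaired

for cutoff in (1.0, 1.5, 2.0, 2.5):
    pred = classify_mci(z_scores, cutoff)
    tp = np.sum(pred & consensus_mci)
    tn = np.sum(~pred & ~consensus_mci)
    fp = np.sum(pred & ~consensus_mci)
    fn = np.sum(~pred & consensus_mci)
    print(f"cutoff {cutoff} SD: sensitivity={tp / (tp + fn):.2f}, "
          f"specificity={tn / (tn + fp):.2f}")
```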

  4. Defining the optimal window for cranial transplantation of human induced pluripotent stem cell-derived cells to ameliorate radiation-induced cognitive impairment.

    PubMed

    Acharya, Munjal M; Martirosian, Vahan; Christie, Lori-Ann; Riparip, Lara; Strnadel, Jan; Parihar, Vipan K; Limoli, Charles L

    2015-01-01

    Past preclinical studies have demonstrated the capability of using human stem cell transplantation in the irradiated brain to ameliorate radiation-induced cognitive dysfunction. Intrahippocampal transplantation of human embryonic stem cells and human neural stem cells (hNSCs) was found to functionally restore cognition in rats 1 and 4 months after cranial irradiation. To optimize the potential therapeutic benefits of human stem cell transplantation, we have further defined optimal transplantation windows for maximizing cognitive benefits after irradiation and used induced pluripotent stem cell-derived hNSCs (iPSC-hNSCs) that may eventually help minimize graft rejection in the host brain. For these studies, animals given an acute head-only dose of 10 Gy were grafted with iPSC-hNSCs at 2 days, 2 weeks, or 4 weeks following irradiation. Animals receiving stem cell grafts showed improved hippocampal spatial memory and contextual fear-conditioning performance compared with irradiated sham-surgery controls when analyzed 1 month after transplantation surgery. Importantly, superior performance was evident when stem cell grafting was delayed by 4 weeks following irradiation compared with animals grafted at earlier times. Analysis of the 4-week cohort showed that the surviving grafted cells migrated throughout the CA1 and CA3 subfields of the host hippocampus and differentiated into neuronal (∼39%) and astroglial (∼14%) subtypes. Furthermore, radiation-induced inflammation was significantly attenuated across multiple hippocampal subfields in animals receiving iPSC-hNSCs at 4 weeks after irradiation. These studies expand our prior findings to demonstrate that protracted stem cell grafting provides improved cognitive benefits following irradiation that are associated with reduced neuroinflammation.

  5. Monte Carlo optimization of sample dimensions of a 241Am-Be source-based PGNAA setup for water rejects analysis

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of thermal neutron flux by the activation technique using indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples as chlorine and organic matter concentrations change. The desired optimal sample dimensions were finally achieved once established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  6. Comparison of Enterococcus density estimates in marine beach and bay samples by real-time polymerase chain reaction, membrane filtration and defined substrate testing.

    PubMed

    Ferretti, James A; Tran, Hiep V; Cosgrove, Elizabeth; Protonentis, John; Loftin, Virginia; Conklin, Carol S; Grant, Robert N

    2011-05-01

    Currently, densities of Enterococcus in marine bathing beach samples are determined using conventional methods, which require 24 h to obtain results. Real-time PCR methods are available which can provide results in as little as 3 h. The purpose of this study was to evaluate a more rapid test method for the determination of bacterial contamination in marine bathing beaches to better protect human health. The geometric mean of Enterococcus densities using Enterolert® defined substrate testing and membrane filtration ranged from 5.2 to 150 MPN or CFU/100 mL and corresponding qPCR results ranged from 6.6 to 1785 CCE/100 mL. The regression analysis of these results showed a positive correlation between qPCR and conventional tests with an overall correlation (r) of 0.71. qPCR was found to provide an accurate and sensitive estimate of Enterococcus densities and has the potential to be used as a rapid test method for the quantification of Enterococcus in marine waters.

  7. Defining the Optimal Selenium Dose for Prostate Cancer Risk Reduction: Insights from the U-Shaped Relationship between Selenium Status, DNA Damage, and Apoptosis.

    PubMed

    Chiang, Emily C; Shen, Shuren; Kengeri, Seema S; Xu, Huiping; Combs, Gerald F; Morris, J Steven; Bostwick, David G; Waters, David J

    2009-12-21

    Our work in dogs has revealed a U-shaped dose response between selenium status and prostatic DNA damage that remarkably parallels the relationship between dietary selenium and prostate cancer risk in men, suggesting that more selenium is not necessarily better. Herein, we extend this canine work to show that the selenium dose that minimizes prostatic DNA damage also maximizes apoptosis-a cancer-suppressing death switch used by prostatic epithelial cells. These provocative findings suggest a new line of thinking about how selenium can reduce cancer risk. Mid-range selenium status (.67-.92 ppm in toenails) favors a process we call "homeostatic housecleaning"-an upregulated apoptosis that preferentially purges damaged prostatic cells. Also, the U-shaped relationship provides valuable insight into stratifying individuals as selenium-responsive or selenium-refractory, based upon the likelihood of reducing their cancer risk by additional selenium. By studying elderly dogs, the only non-human animal model of spontaneous prostate cancer, we have established a robust experimental approach bridging the gap between laboratory and human studies that can help to define the optimal doses of cancer preventives for large-scale human trials. Moreover, our observations bring much needed clarity to the null results of the Selenium and Vitamin E Cancer Prevention Trial (SELECT) and set a new research priority: testing whether men with low, suboptimal selenium levels less than 0.8 ppm in toenails can achieve cancer risk reduction through daily supplementation.

  8. Defining the Optimal Selenium Dose for Prostate Cancer Risk Reduction: Insights from the U-Shaped Relationship between Selenium Status, DNA Damage, and Apoptosis

    PubMed Central

    Chiang, Emily C.; Shen, Shuren; Kengeri, Seema S.; Xu, Huiping; Combs, Gerald F.; Morris, J. Steven; Bostwick, David G.; Waters, David J.

    2009-01-01

    Our work in dogs has revealed a U-shaped dose response between selenium status and prostatic DNA damage that remarkably parallels the relationship between dietary selenium and prostate cancer risk in men, suggesting that more selenium is not necessarily better. Herein, we extend this canine work to show that the selenium dose that minimizes prostatic DNA damage also maximizes apoptosis—a cancer-suppressing death switch used by prostatic epithelial cells. These provocative findings suggest a new line of thinking about how selenium can reduce cancer risk. Mid-range selenium status (.67–.92 ppm in toenails) favors a process we call “homeostatic housecleaning”—an upregulated apoptosis that preferentially purges damaged prostatic cells. Also, the U-shaped relationship provides valuable insight into stratifying individuals as selenium-responsive or selenium-refractory, based upon the likelihood of reducing their cancer risk by additional selenium. By studying elderly dogs, the only non-human animal model of spontaneous prostate cancer, we have established a robust experimental approach bridging the gap between laboratory and human studies that can help to define the optimal doses of cancer preventives for large-scale human trials. Moreover, our observations bring much needed clarity to the null results of the Selenium and Vitamin E Cancer Prevention Trial (SELECT) and set a new research priority: testing whether men with low, suboptimal selenium levels less than 0.8 ppm in toenails can achieve cancer risk reduction through daily supplementation. PMID:20877485

  9. The design and evaluation of a shaped filter collection device to sample and store defined volume dried blood spots from finger pricks.

    PubMed

    Polley, Spencer D; Bell, David; Oliver, James; Tully, Frank; Perkins, Mark D; Chiodini, Peter L; González, Iveth J

    2015-02-05

    Dried blood spots are a common medium for collecting patient blood prior to testing for malaria by molecular methods. A new shaped filter device for the quick and simple collection of a designated volume of patient blood has been designed and tested against conventional blood spots for accuracy and precision. Shaped filter devices were laser cut from Whatman GB003 paper to absorb a 20 μl blood volume. These devices were used to sample Plasmodium falciparum infected blood and the volume absorbed was measured volumetrically. Conventional blood spots were made by pipetting 20 μl of the same blood onto Whatman 3MM paper. DNA was extracted from both types of dried blood spot using Qiagen DNA blood mini or Chelex extraction for real-time PCR analysis, and PURE extraction for malaria LAMP testing. The shaped filter devices collected a mean volume of 21.1 μl of blood, with a coefficient of variance of 8.1%. When used for DNA extraction by Chelex and Qiagen methodologies the mean number of international standard units of P. falciparum DNA recovered per μl of the eluate was 53.1 (95% CI: 49.4 to 56.7) and 32.7 (95% CI: 28.8 to 36.6), respectively for the shaped filter device, and 54.6 (95% CI: 52.1 to 57.1) and 12.0 (95% CI: 9.9 to 14.1), respectively for the 3MM blood spots. Qiagen extraction of 200 μl of whole infected blood yielded 853.6 international standard units of P. falciparum DNA per μl of eluate. A shaped filter device provides a simple way to quickly sample and store a defined volume of blood without the need for any additional measuring devices. Resultant dried blood spots may be employed for DNA extraction using a variety of technologies for nucleic acid amplification without the need for repeated cleaning of scissors or punches to prevent cross contamination of samples and results are comparable to traditional DBS.

  10. Stochastic sampling for deterministic structural topology optimization with many load cases: Density-based and ground structure approaches

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojia Shelly; de Sturler, Eric; Paulino, Glaucio H.

    2017-10-01

    We propose an efficient probabilistic method to solve a deterministic problem -- we present a randomized optimization approach that drastically reduces the enormous computational cost of optimizing designs under many load cases for both continuum and truss topology optimization. Practical structural designs by topology optimization typically involve many load cases, possibly hundreds or more. The optimal design minimizes a (possibly weighted) average of the compliance under each load case (or some other objective). This means that in each optimization step a large finite element problem must be solved for each load case, leading to an enormous computational effort. By contrast, the proposed randomized optimization method with stochastic sampling requires the solution of only a few (e.g., 5 or 6) finite element problems (large linear systems) per optimization step. Based on simulated annealing, we introduce a damping scheme for the randomized approach. Through numerical examples in two and three dimensions, we demonstrate that the stochastic algorithm drastically reduces computational cost to obtain similar final topologies and results (e.g., compliance) compared with the standard algorithms. The results indicate that the damping scheme is effective and leads to rapid convergence of the proposed algorithm.
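
    The core idea is that the sum of compliances over all load cases is a trace, which can be estimated from a few randomly combined load cases. The sketch below demonstrates this with Rademacher weights on a small stand-in stiffness matrix; it is a toy trace estimator, not the paper's damped topology-optimization algorithm, and the matrix K is an assumption rather than a finite element assembly.

```python
# Stochastic-sampling sketch: estimate the average compliance over many load
# cases from a handful of random load combinations (Rademacher weights).
import numpy as np

rng = np.random.default_rng(3)

n_dof, n_loads = 400, 100
A = rng.normal(size=(n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)          # assumed SPD "stiffness" matrix
F = rng.normal(size=(n_dof, n_loads))        # one column per load case

# Exact average compliance: (1/M) * sum_i f_i^T K^{-1} f_i
U = np.linalg.solve(K, F)
exact = np.einsum("ij,ij->", F, U) / n_loads

# Stochastic estimate using only a few combined load cases per "step".
n_samples = 6
est = 0.0
for _ in range(n_samples):
    xi = rng.choice([-1.0, 1.0], size=n_loads)   # Rademacher combination weights
    f_combined = F @ xi
    u = np.linalg.solve(K, f_combined)
    est += f_combined @ u
est /= n_samples * n_loads

print(f"exact average compliance  : {exact:.4f}")
print(f"stochastic estimate (n=6) : {est:.4f}")
```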

  11. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    PubMed

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-08-30

    The purpose of the study was to optimize the sample preparation and then use the improved protocol to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a buffer containing urea and sodium deoxycholate. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. This article is protected by copyright. All rights reserved.

  12. Hydrocephalus Defined

    MedlinePlus

    ... narrow pathways. CSF is in constant production and absorption; it has a defined pathway from the lateral ... there is an imbalance of production and/or absorption. With most types of hydrocephalus, the fluid gets ...

  13. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  14. Defining “Normophilic” and “Paraphilic” Sexual Fantasies in a Population‐Based Sample: On the Importance of Considering Subgroups

    PubMed Central

    2015-01-01

    criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067

  15. Persistent Organic Pollutant Determination in Killer Whale Scat Samples: Optimization of a Gas Chromatography/Mass Spectrometry Method and Application to Field Samples.

    PubMed

    Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K

    2016-01-01

    Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions.

  16. Optimization of sampling methods for within-tree populations of red oak borer, Enaphalodes rufulus (Haldeman) (Coleoptera: Cerambycidae).

    PubMed

    Crook, D J; Fierke, M K; Mauromoustakos, A; Kinney, D L; Stephen, F M

    2007-06-01

    In the Ozark Mountains of northern Arkansas and southern Missouri, an oak decline event, coupled with epidemic populations of red oak borer (Enaphalodes rufulus Haldeman), has resulted in extensive red oak (Quercus spp., section Lobatae) mortality. Twenty-four northern red oak trees, Quercus rubra L., infested with red oak borer, were felled in the Ozark National Forest between March 2002 and June 2003. Infested tree boles were cut into 0.5-m sample bolts, and the following red oak borer population variables were measured: current generation galleries, live red oak borer, emergence holes, and previous generation galleries. Population density estimates from sampling plans using varying numbers of samples taken randomly and systematically were compared with total census measurements for the entire infested tree bole. Systematic sampling consistently yielded lower percent root mean square error (%RMSE) than random sampling. Systematic sampling of one half of the tree (every other 0.5-m sample along the tree bole) yielded the lowest values. Estimates from plans systematically sampling one half the tree and systematic proportional sampling using seven or nine samples did not differ significantly from each other and were within 25% RMSE of the "true" mean. Thus, we recommend systematically removing and dissecting seven 0.5-m samples from infested trees as an optimal sampling plan for monitoring red oak borer within-tree population densities. This optimal sampling plan should allow for collection of acceptably accurate within-tree population density data for this native wood-boring insect and reducing labor and costs of dissecting whole trees.
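
    A small simulation, sketched below, shows how random and systematic selection of 0.5-m bolts can be compared on percent root mean square error. The within-tree density profile, sample sizes, and Poisson counts are assumptions for illustration and do not reproduce the red oak borer census data.

```python
# Sketch comparing simple random vs. systematic sampling of 0.5-m bolts along a
# tree bole, scored by %RMSE against each simulated tree's census mean.
import numpy as np

rng = np.random.default_rng(4)

n_bolts = 20                                   # 0.5-m sections per tree
position = np.arange(n_bolts)

def simulate_tree():
    """Counts per bolt: a smooth within-tree trend plus Poisson noise."""
    trend = 8.0 * np.exp(-((position - 6.0) / 5.0) ** 2) + 1.0
    return rng.poisson(trend)

n_trees, n_sample = 2000, 10
errors_rand, errors_sys = [], []
for _ in range(n_trees):
    counts = simulate_tree()
    true_mean = counts.mean()                  # "census" of the whole bole
    rand_est = counts[rng.choice(n_bolts, n_sample, replace=False)].mean()
    sys_est = counts[rng.integers(0, 2)::2].mean()   # every other bolt
    errors_rand.append((rand_est - true_mean) / true_mean)
    errors_sys.append((sys_est - true_mean) / true_mean)

print("random     %RMSE:", 100 * np.sqrt(np.mean(np.square(errors_rand))))
print("systematic %RMSE:", 100 * np.sqrt(np.mean(np.square(errors_sys))))
```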

  17. Assessment of nodal target definition and dosimetry using three different techniques: implications for re-defining the optimal pelvic field in endometrial cancer

    PubMed Central

    2010-01-01

    Purposes 1. To determine the optimal pelvic nodal clinical target volume for post-operative treatment of endometrial cancer. 2. To compare the DVH of different treatment planning techniques applied to this new CTV and the surrounding tissues. Methods and Materials Based on the literature, we selected a methodology to delineate nodal target volume to define a NEW-CTV and NEW-PTV. Conventional 2D fields, 3D fields based on anatomic guidelines per RTOG 0418, 3D fields based on our guidelines, and IMRT based on our guidelines were assessed for coverage of NEW-CTV, NEW-PTV, and surrounding structures. CT scans of 10 patients with gynecologic malignancies after TAH/BSO were used. DVHs were compared. Results For NEW-PTV, mean V45Gy were 50% and 69% for 2D and RTOG 0418-3DCRT vs. 98% and 97% for NEW-3DCRT and NEW-IMRT (p < 0.0009). Mean V45Gy small bowel were 24% and 20% for 2D and RTOG 0418-3DCRT, increased to 32% with NEW-3DCRT, and decreased to 14% with IMRT (p = 0.005, 0.138, 0.002). Mean V45Gy rectum were 26%, 35%, and 52% for 2D, RTOG 0418-3DCRT, and NEW-3DCRT, and decreased to 26% with NEW-IMRT (p < 0.05). Mean V45Gy bladder were 83%, 51%, and 73% for 2D, RTOG 0418-3DCRT, and NEW-3DCRT, and decreased to 30% with NEW-IMRT (p < 0.002). Conclusions Conventional 2D and RTOG 0418-based 3DCRT plans cover only a fraction of our comprehensive PTV. A 3DCRT plan covers this PTV with high doses to normal tissues, whereas IMRT covers the PTV while delivering lower normal tissue doses. Re-consideration of what specifically the pelvic target encompasses is warranted. PMID:20579393

  18. Assessment of nodal target definition and dosimetry using three different techniques: implications for re-defining the optimal pelvic field in endometrial cancer.

    PubMed

    Guo, Susan; Ennis, Ronald D; Bhatia, Stephen; Trichter, Frieda; Bashist, Benjamin; Shah, Jinesh; Chadha, Manjeet

    2010-06-27

    1. To determine the optimal pelvic nodal clinical target volume for post-operative treatment of endometrial cancer. 2. To compare the DVH of different treatment planning techniques applied to this new CTV and the surrounding tissues. Based on the literature, we selected a methodology to delineate nodal target volume to define a NEW-CTV and NEW-PTV. Conventional 2D fields, 3D fields based on anatomic guidelines per RTOG 0418, 3D fields based on our guidelines, and IMRT based on our guidelines were assessed for coverage of NEW-CTV, NEW-PTV, and surrounding structures. CT scans of 10 patients with gynecologic malignancies after TAH/BSO were used. DVHs were compared. For NEW-PTV, mean V45Gy were 50% and 69% for 2D and RTOG 0418-3DCRT vs. 98% and 97% for NEW-3DCRT and NEW-IMRT (p < 0.0009). Mean V45Gy small bowel were 24% and 20% for 2D and RTOG 0418-3DCRT, increased to 32% with NEW-3DCRT, and decreased to 14% with IMRT (p = 0.005, 0.138, 0.002). Mean V45Gy rectum were 26%, 35%, and 52% for 2D, RTOG 0418-3DCRT, and NEW-3DCRT, and decreased to 26% with NEW-IMRT (p < 0.05). Mean V45Gy bladder were 83%, 51%, and 73% for 2D, RTOG 0418-3DCRT, and NEW-3DCRT, and decreased to 30% with NEW-IMRT (p < 0.002). Conventional 2D and RTOG 0418-based 3DCRT plans cover only a fraction of our comprehensive PTV. A 3DCRT plan covers this PTV with high doses to normal tissues, whereas IMRT covers the PTV while delivering lower normal tissue doses. Re-consideration of what specifically the pelvic target encompasses is warranted.

  19. Development of an optimal sampling schedule for children receiving ketamine for short-term procedural sedation and analgesia.

    PubMed

    Sherwin, Catherine M T; Stockmann, Chris; Grimsrud, Kristin; Herd, David W; Anderson, Brian J; Spigarelli, Michael G

    2015-02-01

    Intravenous racemic ketamine is commonly administered for procedural sedation, although few pharmacokinetic studies have been conducted among children. Moreover, an optimal sampling schedule has not been derived to enable the conduct of pharmacokinetic studies that minimally inconvenience study participants. Concentration-time data were obtained from 57 children who received 1-1.5 mg·kg(-1) of racemic ketamine as an intravenous bolus. A population pharmacokinetic analysis was conducted using nonlinear mixed effects models, and the results were used as inputs to develop a D-optimal sampling schedule. The pharmacokinetics of ketamine were described using a two-compartment model. The volumes of distribution in the central and peripheral compartments were 20.5 l∙70 kg(-1) and 220 l∙70 kg(-1), respectively. The intercompartmental clearance and total body clearance were 87.3 and 87.9 l·h(-1) ∙70 kg(-1), respectively. Population parameter variability ranged from 34% to 98%. Initially, blood samples were drawn on 3-6 occasions spanning a range of 14-152 min after dosing. Using these data, we determined that four optimal sampling windows occur at 1-5, 5.5-7.5, 10-20, and 90-180 min after dosing. Monte Carlo simulations indicated that these sampling windows produced precise and unbiased ketamine pharmacokinetic parameter estimates. An optimal sampling schedule was developed that allowed assessment of the pharmacokinetic parameters of ketamine among children requiring short-term procedural sedation. © 2014 John Wiley & Sons Ltd.
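
    The reported two-compartment parameters can be used to sketch the predicted concentration-time curve inside the proposed sampling windows, as below. The dose, the 70-kg reference scaling, and the window mid-points are taken at face value from the abstract; the D-optimality computation itself is not reproduced.

```python
# Two-compartment IV-bolus sketch using the 70-kg reference parameters quoted
# in the abstract; evaluates concentrations at the optimal-window mid-points.
import numpy as np
from scipy.linalg import expm

CL, V1, Q, V2 = 87.9, 20.5, 87.3, 220.0      # L/h and L, per 70 kg
dose_mg = 1.0 * 70.0                          # 1 mg/kg IV bolus, 70-kg reference

# Linear system for compartment amounts (A1 central, A2 peripheral).
M = np.array([[-(CL + Q) / V1, Q / V2],
              [Q / V1,        -Q / V2]])

def concentration(t_hours):
    amounts = expm(M * t_hours) @ np.array([dose_mg, 0.0])
    return amounts[0] / V1                    # central concentration, mg/L

# Mid-points of the four proposed sampling windows (minutes after dosing).
for t_min in (3, 6.5, 15, 135):
    print(f"t = {t_min:5.1f} min: C = {concentration(t_min / 60.0):.3f} mg/L")
```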

  20. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    PubMed

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them.
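
    The two ingredients named in the abstract, Latin-Hypercube Sampling and PCA, are sketched below on a toy flux region. The box bounds and the fixed "objective" flux are stand-ins for a genome-scale model such as iMO1086, and the filtering step is a crude surrogate for bordering the FBA-optimal face, not the paper's modified-LHS algorithm.

```python
# Sketch: Latin-Hypercube Sampling of a bounded flux box, restriction to a
# fixed objective flux, then PCA of the retained samples.
import numpy as np
from scipy.stats import qmc

n_reactions, n_samples = 30, 2000
lower = np.full(n_reactions, -10.0)
upper = np.full(n_reactions, 10.0)

# Latin-Hypercube samples scaled to the flux bounds.
sampler = qmc.LatinHypercube(d=n_reactions, seed=5)
fluxes = qmc.scale(sampler.random(n_samples), lower, upper)

# Keep only samples consistent with a near-optimal objective flux (reaction 0).
mask = np.abs(fluxes[:, 0] - 8.0) < 0.5
optimal_fluxes = fluxes[mask]

# PCA via SVD of the centred samples: principal axes summarize the main
# sources of flux variability within the retained (near-optimal) region.
centred = optimal_fluxes - optimal_fluxes.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 5 PCs:", np.round(explained[:5], 3))
```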

  1. Optimized method for atmospheric signal reduction in irregular sampled InSAR time series assisted by external atmospheric information

    NASA Astrophysics Data System (ADS)

    Gong, W.; Meyer, F. J.

    2013-12-01

    It is well known that spatio-temporal tropospheric phase signatures complicate the detection and interpretation of smaller-magnitude deformation signals or unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data are scarce and irregularly sampled. SAR data coverage is limited for many areas affected by geophysical deformation, owing to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types will attempt to maximize the linear correlation between the a priori and the extracted atmospheric phase information. Topography-related phase components, orbit

  2. Diets and selected lifestyle practices of self-defined adult vegetarians from a population-based sample suggest they are more 'health conscious'

    PubMed

    Bedford, Jennifer L; Barr, Susan I

    2005-04-13

    BACKGROUND: Few population-based studies of vegetarians have been published. Thus we compared self-reported vegetarians to non-vegetarians in a representative sample of British Columbia (BC) adults, weighted to reflect the BC population. METHODS: Questionnaires, 24-hr recalls and anthropometric measures were completed during in-person interviews with 1817 community-dwelling residents, 19-84 years, recruited using a population-based health registry. Vegetarian status was self-defined. ANOVA with age as a covariate was used to analyze continuous variables, and chi-square was used for categorical variables. Supplement intakes were compared using the Mann-Whitney test. RESULTS: Approximately 6% (n = 106) stated that they were vegetarian, and most did not adhere rigidly to a flesh-free diet. Vegetarians were more likely female (71% vs. 49%), single, of low-income status, and tended to be younger. Female vegetarians had lower BMI than non-vegetarians (23.1 +/- 0.7 (mean +/- SE) vs. 25.7 +/- 0.2 kg/m2), and also had lower waist circumference (75.0 +/- 1.5 vs. 79.8 +/- 0.5 cm). Male vegetarians and non-vegetarians had similar BMI (25.9 +/- 0.8 vs. 26.7 +/- 0.2 kg/m2) and waist circumference (92.5 +/- 2.3 vs. 91.7 +/- 0.4 cm). Female vegetarians were more physically active (69% vs. 42% active >/=4/wk) while male vegetarians were more likely to use nutritive supplements (71% vs. 51%). Energy intakes were similar, but vegetarians reported higher % energy as carbohydrate (56% vs. 50%), and lower % protein (men only; 13% vs. 17%) or % fat (women only; 27% vs. 33%). Vegetarians had higher fiber, magnesium and potassium intakes. For several other nutrients, differences by vegetarian status differed by gender. The prevalence of inadequate magnesium intake (% below Estimated Average Requirement) was lower in vegetarians than non-vegetarians (15% vs. 34%). Female vegetarians also had a lower prevalence of inadequate thiamin, folate, vitamin B6 and C intakes. Vegetarians were more

  3. Diets and selected lifestyle practices of self-defined adult vegetarians from a population-based sample suggest they are more 'health conscious'

    PubMed Central

    Bedford, Jennifer L; Barr, Susan I

    2005-01-01

    Background Few population-based studies of vegetarians have been published. Thus we compared self-reported vegetarians to non-vegetarians in a representative sample of British Columbia (BC) adults, weighted to reflect the BC population. Methods Questionnaires, 24-hr recalls and anthropometric measures were completed during in-person interviews with 1817 community-dwelling residents, 19–84 years, recruited using a population-based health registry. Vegetarian status was self-defined. ANOVA with age as a covariate was used to analyze continuous variables, and chi-square was used for categorical variables. Supplement intakes were compared using the Mann-Whitney test. Results Approximately 6% (n = 106) stated that they were vegetarian, and most did not adhere rigidly to a flesh-free diet. Vegetarians were more likely female (71% vs. 49%), single, of low-income status, and tended to be younger. Female vegetarians had lower BMI than non-vegetarians (23.1 ± 0.7 (mean ± SE) vs. 25.7 ± 0.2 kg/m2), and also had lower waist circumference (75.0 ± 1.5 vs. 79.8 ± 0.5 cm). Male vegetarians and non-vegetarians had similar BMI (25.9 ± 0.8 vs. 26.7 ± 0.2 kg/m2) and waist circumference (92.5 ± 2.3 vs. 91.7 ± 0.4 cm). Female vegetarians were more physically active (69% vs. 42% active ≥4/wk) while male vegetarians were more likely to use nutritive supplements (71% vs. 51%). Energy intakes were similar, but vegetarians reported higher % energy as carbohydrate (56% vs. 50%), and lower % protein (men only; 13% vs. 17%) or % fat (women only; 27% vs. 33%). Vegetarians had higher fiber, magnesium and potassium intakes. For several other nutrients, differences by vegetarian status differed by gender. The prevalence of inadequate magnesium intake (% below Estimated Average Requirement) was lower in vegetarians than non-vegetarians (15% vs. 34%). Female vegetarians also had a lower prevalence of inadequate thiamin, folate, vitamin B6 and C intakes. Vegetarians were more likely than

  4. FPCA-based method to select optimal sampling schedules that capture between-subject variability in longitudinal studies.

    PubMed

    Wu, Meihua; Diez-Roux, Ana; Raghunathan, Trivellore E; Sánchez, Brisa N

    2017-05-08

    A critical component of longitudinal study design involves determining the sampling schedule. Criteria for optimal design often focus on accurate estimation of the mean profile, although capturing the between-subject variance of the longitudinal process is also important since variance patterns may be associated with covariates of interest or predict future outcomes. Existing design approaches have limited applicability when one wishes to optimize sampling schedules to capture between-individual variability. We propose an approach to derive optimal sampling schedules based on functional principal component analysis (FPCA), which separately characterizes the mean and the variability of longitudinal profiles and leads to a parsimonious representation of the temporal pattern of the variability. Simulation studies show that the new design approach performs equally well compared to an existing approach based on parametric mixed model (PMM) when a PMM is adequate for the data, and outperforms the PMM-based approach otherwise. We use the methods to design studies aiming to characterize daily salivary cortisol profiles and identify the optimal days within the menstrual cycle when urinary progesterone should be measured. © 2017, The International Biometric Society.
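
    The sketch below conveys the FPCA step: estimate eigenfunctions of between-subject variability from dense simulated profiles, then pick a few sampling times whose eigenfunction rows are jointly informative. The cortisol-like curves and the greedy D-optimal-style selection rule are illustrative assumptions, not the paper's procedure.

```python
# FPCA sketch: eigen-decompose the sample covariance of simulated longitudinal
# profiles, then greedily select sampling times for the leading components.
import numpy as np

rng = np.random.default_rng(6)

t = np.linspace(0.0, 24.0, 97)                # dense quarter-hourly grid (hours)
n_subjects = 200

# Simulated subject profiles: decaying diurnal shape with random amplitude/phase.
amps = rng.normal(1.0, 0.3, n_subjects)
phases = rng.normal(0.0, 1.0, n_subjects)
profiles = amps[:, None] * np.exp(-(t[None, :] - phases[:, None]) / 6.0)

# FPCA: eigen-decomposition of the covariance of the centred profiles.
centred = profiles - profiles.mean(axis=0)
cov = centred.T @ centred / (n_subjects - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
phi = eigvecs[:, ::-1][:, :2]                 # two leading eigenfunctions

# Greedy D-optimal-style choice of 4 sampling times for the leading FPCs.
chosen = []
for _ in range(4):
    best, best_det = None, -np.inf
    for j in range(len(t)):
        if j in chosen:
            continue
        rows = phi[chosen + [j], :]
        det = np.linalg.det(rows.T @ rows + 1e-9 * np.eye(2))  # small ridge
        if det > best_det:
            best, best_det = j, det
    chosen.append(best)

print("selected sampling times (h):", np.round(t[sorted(chosen)], 2))
```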

  5. ROLE OF LABORATORY SAMPLING DEVICES AND LABORATORY SUBSAMPLING METHODS IN OPTIMIZING REPRESENTATIVENESS STRATEGIES

    EPA Science Inventory

    Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...

  6. ROLE OF LABORATORY SAMPLING DEVICES AND LABORATORY SUBSAMPLING METHODS IN OPTIMIZING REPRESENTATIVENESS STRATEGIES

    EPA Science Inventory

    Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...

  7. Non-uniform sampling in EPR--optimizing data acquisition for HYSCORE spectroscopy.

    PubMed

    Nakka, K K; Tesiram, Y A; Brereton, I M; Mobli, M; Harmer, J R

    2014-08-21

    Non-uniform sampling combined with maximum entropy reconstruction is a powerful technique used in multi-dimensional NMR spectroscopy to reduce sample measurement time. We adapted this technique to the pulse EPR experiment hyperfine sublevel correlation (HYSCORE) and show that experimental times can be shortened by approximately an order of magnitude as compared to conventional linear sampling with negligible loss of information.

  8. Optimizing the sampling density of a wave-front sensor in adaptive optics systems: application to scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K.; Pircher, Michael

    2017-02-01

    We present the optimization of an adaptive optics loop for retinal imaging. Generally, the wave-front is overdetermined compared to the number of corrector elements. The sampling of the sensor can be reduced while maintaining an efficient correction, leading to higher sensitivity, faster correction and larger dynamic range. An analytical model was developed to characterize the link between number of actuators, number of micro-lenses and correction performance. The optimized correction loop was introduced into a scanning laser ophthalmoscope. In vivo images of foveal photoreceptors were recorded and the obtained image quality is equivalent to the state of the art in retinal AO-imaging.

  9. Overlay optimization for 1x node technology and beyond via rule based sparse sampling

    NASA Astrophysics Data System (ADS)

    Aung, Nyan L.; Chung, Woong Jae; Subramany, Lokesh; Hussain, Shehzeen; Samudrala, Pavan; Gao, Haiyong; Hao, Xueli; Chen, Yen-Jen; Gomez, Juan-Manuel

    2016-03-01

    We demonstrate a cost-effective automated rule-based sparse sampling method that can detect the spatial variation of overlay errors as well as the overlay signature of the fields. Our technique satisfies the following three rules: (i) a homogeneous distribution of ~200 samples across the wafer, (ii) an equal number of samples in scan-up and scan-down conditions, and (iii) an equal number of samples on each overlay mark per field. When rule-based sampling is implemented on the two products, the differences between the full wafer map sampling and the rule-based sampling are within the 3.5 nm overlay spec with residuals M+3σ of 2.4 nm (x) and 2.43 nm (y) for Product A and 2.98 nm (x) and 3.32 nm (y) for Product B.

  10. Orientational sampling and rigid-body minimization in molecular docking revisited: On-the-fly optimization and degeneracy removal

    NASA Astrophysics Data System (ADS)

    Gschwend, Daniel A.; Kuntz, Irwin D.

    1996-04-01

    Strategies for computational association of molecular components entail a compromise between configurational exploration and accurate evaluation. Following the work of Meng et al. [Proteins, 17 (1993) 266], we investigate issues related to sampling and optimization in molecular docking within the context of the DOCK program. An extensive analysis of diverse sampling conditions for six receptor-ligand complexes has enabled us to evaluate the tractability and utility of on-the-fly force-field score minimization, as well as the method for configurational exploration. We find that the sampling scheme in DOCK is extremely robust in its ability to produce configurations near to those experimentally observed. Furthermore, despite the heavy resource demands of refinement, the incorporation of a rigid-body, grid-based simplex minimizer directly into the docking process results in a docking strategy that is more efficient at retrieving experimentally observed configurations than docking in the absence of optimization. We investigate the capacity for further performance enhancement by implementing a degeneracy checking protocol aimed at circumventing redundant optimizations of geometrically similar orientations. Finally, we present methods that assist in the selection of sampling levels appropriate to desired result quality and available computational resources.

  11. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
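
    A Monte Carlo sketch of the core estimate is shown below: the probability that a single biopsy core intersects a spherical tumour when the targeted position is displaced by anisotropic Gaussian errors. The core length, error magnitudes, and the spherical tumour model are assumptions for illustration, not the study's patient-derived tumour surfaces or 50% core-involvement criterion.

```python
# Monte Carlo sketch of single-core sampling probability for a spherical tumour
# under anisotropic Gaussian targeting error; the core is an axial line segment.
import numpy as np

rng = np.random.default_rng(7)

def sampling_probability(tumor_vol_cm3, sigma_lat, sigma_elev, sigma_axial,
                         core_length_mm=18.0, n_sim=200_000):
    """P(needle core intersects a sphere centred on the intended target)."""
    radius = (3.0 * tumor_vol_cm3 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)  # mm
    # Displacement of the core centre from the tumour centre (mm).
    dx = rng.normal(0.0, sigma_lat, n_sim)
    dy = rng.normal(0.0, sigma_elev, n_sim)
    dz = rng.normal(0.0, sigma_axial, n_sim)
    # Distance from the sphere centre to the nearest point of the axial segment.
    dz_clamped = np.clip(dz, -core_length_mm / 2.0, core_length_mm / 2.0)
    dist = np.sqrt(dx**2 + dy**2 + (dz - dz_clamped) ** 2)
    return np.mean(dist < radius)

# Lateral/elevational error matters more than axial error, as the abstract reports.
print("isotropic errors :", sampling_probability(0.5, 2.0, 2.0, 2.0))
print("larger lateral   :", sampling_probability(0.5, 3.5, 3.5, 2.0))
print("larger axial     :", sampling_probability(0.5, 2.0, 2.0, 3.5))
```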

  12. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813

  13. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    PubMed Central

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula

    2015-01-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567

  14. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry.

    PubMed

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula

    2015-11-25

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  15. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

    We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25-m2 gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m2 sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and upstream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
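
    Morisita's Index, the clustering measure the abstract found most reliable, is simple to compute from quadrat counts; a sketch follows. The simulated mussel counts (one spatially random population, one patchy population with the same mean) are assumptions for illustration.

```python
# Morisita's Index of dispersion for quadrat counts: ~1 for complete spatial
# randomness, >1 for clustered (patchy) populations.
import numpy as np

rng = np.random.default_rng(8)

def morisita_index(counts):
    """I_d = n * sum(x_i * (x_i - 1)) / (N * (N - 1))."""
    counts = np.asarray(counts)
    n = counts.size
    N = counts.sum()
    return n * np.sum(counts * (counts - 1)) / (N * (N - 1))

n_quadrats = 160                                            # e.g. 0.25-m2 units
random_pop = rng.poisson(3.0, n_quadrats)                   # spatially random
clustered_pop = rng.negative_binomial(1, 0.25, n_quadrats)  # patchy, same mean

print("Morisita (random)   :", round(morisita_index(random_pop), 2))
print("Morisita (clustered):", round(morisita_index(clustered_pop), 2))
```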

  16. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    USDA-ARS?s Scientific Manuscript database

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
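
    For readers unfamiliar with DDS, the sketch below implements its core loop: greedily perturbing a dynamically shrinking random subset of decision variables. The toy objective stands in for a SWAT calibration run, and the simple clipping at the bounds is an assumption; only the perturbation factor r is an algorithm parameter, as the abstract notes.

```python
# Minimal Dynamically Dimensioned Search (DDS) sketch with a stand-in objective.
import numpy as np

rng = np.random.default_rng(9)

def dds(objective, lower, upper, max_evals=500, r=0.2):
    """Greedy DDS: perturb a shrinking random subset of dimensions each step."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = lower + rng.random(lower.size) * (upper - lower)
    f_best = objective(x_best)
    for i in range(1, max_evals):
        # Each dimension is perturbed with probability 1 - ln(i)/ln(max_evals).
        p = 1.0 - np.log(i) / np.log(max_evals)
        mask = rng.random(lower.size) < p
        if not mask.any():
            mask[rng.integers(lower.size)] = True
        x_new = x_best.copy()
        x_new[mask] += r * (upper[mask] - lower[mask]) * rng.normal(size=mask.sum())
        x_new = np.clip(x_new, lower, upper)        # simple bound handling
        f_new = objective(x_new)
        if f_new < f_best:                          # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Toy calibration problem: minimise a shifted sphere in 10 dimensions.
target = np.linspace(-2.0, 2.0, 10)
x, f = dds(lambda v: np.sum((v - target) ** 2), [-5.0] * 10, [5.0] * 10)
print("best objective:", round(f, 4))
```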

  17. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    PubMed

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes, and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional descriptive survey design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. The findings underscore the importance of teachers and other school staff providing health education. Results can be used as the basis for education to improve optimism, self-esteem, and social supports and thereby reduce depressive symptoms in teens.

  18. Optimal criteria and sampling interval to detect a V̇O2 plateau at V̇O2max in patients with metabolic syndrome.

    PubMed

    Thomson, Amara C; Ramos, Joyce S; Fassett, Robert G; Coombes, Jeff S; Dalleck, Lance C

    2015-01-01

    This study sought to determine the optimal criteria and sampling interval to detect a V̇O2 plateau at V̇O2max in patients with metabolic syndrome. Twenty-three participants with criteria-defined metabolic syndrome underwent a maximal graded exercise test. Four different sampling intervals and three different V̇O2 plateau criteria were analysed to determine the effect of each parameter on the incidence of V̇O2 plateau at V̇O2max. Seventeen tests were classified as maximal based on attainment of at least two out of three criteria. There was a significant (p < 0.05) effect of 15-breath (b) sampling interval on the incidence of V̇O2 plateau at V̇O2max across the ≤ 50 and ≤ 80 mL ∙ min(-1) conditions. Strength of association was established by the Cramer's V statistic (φc); (≤ 50 mL ∙ min(-1) [φc = 0.592, p < 0.05], ≤ 80 mL ∙ min(-1) [φc = 0.383, p < 0.05], ≤ 150 mL ∙ min(-1) [φc = 0.246, p > 0.05]). When conducting maximal stress tests on patients with metabolic syndrome, a 15-b sampling interval and ≤ 50 mL ∙ min(-1) criteria should be implemented to increase the likelihood of detecting V̇O2 plateau at V̇O2max.
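
    For readers unfamiliar with the Cramér's V (φc) statistic used above to gauge strength of association, the sketch below computes it from a contingency table of sampling interval by plateau detection; the counts are invented and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for an r x c contingency table:
    V = sqrt(chi2 / (n * (min(r, c) - 1)))."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k)), p

# Hypothetical counts: rows = sampling condition (e.g., 15-b vs breath-by-breath),
# columns = plateau detected (yes, no).
table = [[12, 5],
         [4, 13]]
v, p = cramers_v(table)
print(f"Cramér's V = {v:.3f}, p = {p:.3f}")
```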

  19. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    NASA Astrophysics Data System (ADS)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic sensors or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and many of the methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes, and the establishment of background concentrations of noble gases require the development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, Xe-127, and Ar-37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of the tracer gas measurements will be presented.

  20. The optimal process of self-sampling in fisheries: lessons learned in the Netherlands.

    PubMed

    Kraan, M; Uhlmann, S; Steenbergen, J; Van Helmond, A T M; Van Hoof, L

    2013-10-01

    At-sea sampling of commercial fishery catches by observers is a relatively expensive exercise. The fact that an observer has to stay on-board for the duration of the trip results in clustered samples and effectively small sample sizes, whereas the aim is to make inferences regarding several trips from an entire fleet. From this perspective, sampling by fishermen themselves (self-sampling) is an attractive alternative, because a larger number of trips can be sampled at lower cost. Self-sampling should not be used too casually, however, as there are often issues of data-acceptance related to it. This article shows that these issues are not easily dealt with in a statistical manner. Improvements might be made if self-sampling is understood as a form of cooperative research. Cooperative research has a number of dilemmas and benefits associated with it. This article suggests that if the guidelines for cooperative research are taken into account, the benefits are more likely to materialize. Secondly, acknowledging the dilemmas, and consciously dealing with them might lay the basis to trust-building, which is an essential element in the acceptance of data derived from self-sampling programmes. © 2013 The Fisheries Society of the British Isles.

  1. Optimal protein extraction methods from diverse sample types for protein profiling by using Two-Dimensional Electrophoresis (2DE).

    PubMed

    Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y

    2011-12-01

    There is great diversity in protein sample types and origins; therefore, the optimal procedure for each sample type must be determined empirically. In order to obtain reproducible and complete sample presentation that displays as many proteins as possible on the desired 2DE gel, it is critical to perform additional sample preparation steps that improve the quality of the final results without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on the phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue, representing a cell line, bacteria, a parasite cyst and pus, respectively. For each sample type, protein isolation using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer was compared methodically. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrate that the modified TRI-reagent Kit gave the highest protein yield as well as the greatest number of total protein spots for all sample types. Distinctive differences in spot patterns were also observed in the 2DE gels obtained with the different extraction methods for each sample type.

  2. Defining biobank.

    PubMed

    Hewitt, Robert; Watson, Peter

    2013-10-01

    The term "biobank" first appeared in the scientific literature in 1996 and for the next five years was used mainly to describe human population-based biobanks. In recent years, the term has been used in a more general sense and there are currently many different definitions to be found in reports, guidelines and regulatory documents. Some definitions are general, including all types of biological sample collection facilities. Others are specific and limited to collections of human samples, sometimes just to population-based collections. In order to help resolve the confusion on this matter, we conducted a survey of the opinions of people involved in managing sample collections of all types. This survey was conducted using an online questionnaire that attracted 303 responses. The results show that there is consensus that the term biobank may be applied to biological collections of human, animal, plant or microbial samples; and that the term biobank should only be applied to sample collections with associated sample data, and to collections that are managed according to professional standards. There was no consensus on whether a collection's purpose, size or level of access should determine whether it is called a biobank. Putting these findings into perspective, we argue that a general, broad definition of biobank is here to stay, and that attention should now focus on the need for a universally-accepted, systematic classification of the different biobank types.

  3. Low-thrust trajectory optimization of asteroid sample return mission with multiple revolutions and moon gravity assists

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, FanHuag; Li, JunFeng

    2015-11-01

    Near-Earth asteroids have gained considerable interest, and developments in low-thrust propulsion technology make complex deep space exploration missions possible. A mission that departs from low Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. The complex mission is divided into five segments that are solved separately, and different methods are used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel consumed in escaping from the Earth. To avoid possible numerical difficulties of indirect methods, a direct method that parameterizes the switching moment and direction of the thrust vector is proposed. To maximize the mass of the sample, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize trajectories for the different segments, and they can be easily extended to other missions and more precise dynamic models.

  4. Optimized nested Markov chain Monte Carlo sampling: application to the liquid nitrogen Hugoniot using density functional theory

    SciTech Connect

    Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D

    2009-01-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
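
    A stripped-down, constant-volume sketch of the nested acceptance step is given below to make the idea concrete; it uses toy potentials in place of the DFT and reference pair energies and omits the isobaric-isothermal details of the actual calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

def u_ref(x):                 # cheap reference potential (stand-in for the fitted pair model)
    return 0.5 * np.sum(x ** 2)

def u_full(x):                # expensive "full" potential (stand-in for the DFT energies)
    return 0.5 * np.sum(x ** 2) + 0.1 * np.sum(x ** 4)

def nested_mcmc(x, beta_ref, beta_full, n_outer=200, n_inner=50, step=0.3):
    """Nested Markov chain Monte Carlo, simplified to the canonical ensemble.

    An inner Metropolis chain explores the cheap reference system; its end
    point is offered as one trial move to the full system and accepted with
    probability min{1, exp[-beta_full*dU_full + beta_ref*dU_ref]}, which
    keeps the full-system Boltzmann distribution invariant."""
    for _ in range(n_outer):
        y = x.copy()
        for _ in range(n_inner):                   # inner chain: reference system only
            y_try = y + rng.normal(0.0, step, size=y.shape)
            if rng.random() < np.exp(-beta_ref * (u_ref(y_try) - u_ref(y))):
                y = y_try
        d_full = u_full(y) - u_full(x)
        d_ref = u_ref(y) - u_ref(x)
        if rng.random() < np.exp(-beta_full * d_full + beta_ref * d_ref):
            x = y                                  # outer (nested) acceptance
    return x

sample = nested_mcmc(np.zeros(3), beta_ref=1.0, beta_full=1.0)
```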

  5. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    PubMed

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  6. Optimization of left adrenal vein sampling in primary aldosteronism: Coping with asymmetrical cortisol secretion.

    PubMed

    Kishino, Mitsuhiro; Yoshimoto, Takanobu; Nakadate, Masashi; Katada, Yoshiaki; Kanda, Eiichiro; Nakaminato, Shuichiro; Saida, Yukihisa; Ogawa, Yoshihiro; Tateishi, Ukihide

    2017-03-31

    We evaluated the influence of catheter sampling position and size on left adrenal venous sampling (AVS) in patients with primary aldosteronism (PA) and analyzed their relationship to cortisol secretion. This retrospective study included 111 patients with a diagnosis of primary aldosteronism who underwent tetracosactide-stimulated AVS. Left AVS was obtained from two catheter positions - the central adrenal vein (CAV) and the common trunk. For common trunk sampling, 5-French catheters were used in 51 patients, and microcatheters were used in 60 patients. Autonomous cortisol secretion was evaluated with a low-dose dexamethasone suppression test in 87 patients. The adrenal/inferior vena cava cortisol concentration ratio [selectivity index (SI)] was significantly lower in samples from the left common trunk than those of the left CAV and right adrenal veins, but this difference was reduced when a microcatheter was used for common trunk sampling. Sample dilution in the common trunk of the left adrenal vein can be decreased by limiting sampling speed with the use of a microcatheter. Meanwhile, there was no significant difference in SI between the left CAV and right adrenal veins. Laterality, determined according to aldosterone/cortisol ratio (A/C ratio) based criteria, showed good reproducibility regardless of sampling position, unlike the absolute aldosterone value based criteria. However, in 11 cases with autonomous cortisol co-secretion, the cortisol hypersecreting side tended to be underestimated when using A/C ratio based criteria. Left CAV sampling enables symmetrical sampling, and may be essential when using absolute aldosterone value based criteria in cases where symmetrical cortisol secretion is uncertain.

  7. Optimization of microwave digestion for mercury determination in marine biological samples by cold vapour atomic absorption spectrometry.

    PubMed

    Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania

    2006-01-01

    Optimization of acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using a closed vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion shows there are considerable losses of mercury in the open digestion system. The microwave digestion method has been tested satisfactorily using two certified reference materials. Analytical results show a good agreement with certified values. The microwave digestion proved to be a reliable and rapid method for decomposition of biological samples in mercury determination.

  8. Diversity in Müllerian mimicry: The optimal predator sampling strategy explains both local and regional polymorphism in prey.

    PubMed

    Aubier, Thomas G; Sherratt, Thomas N

    2015-11-01

    The convergent evolution of warning signals in unpalatable species, known as Müllerian mimicry, has been observed in a wide variety of taxonomic groups. This form of mimicry is generally thought to have arisen as a consequence of local frequency-dependent selection imposed by sampling predators. However, despite clear evidence for local selection against rare warning signals, there appears an almost embarrassing amount of polymorphism in natural warning colors, both within and among populations. Because the model of predator cognition widely invoked to explain Müllerian mimicry (Müller's "fixed n(k)" model) is highly simplified and has not been empirically supported; here, we explore the dynamical consequences of the optimal strategy for sampling unfamiliar prey. This strategy, based on a classical exploration-exploitation trade-off, not only allows for a variable number of prey sampled, but also accounts for predator neophobia under some conditions. In contrast to Müller's "fixed n(k)" sampling rule, the optimal sampling strategy is capable of generating a variety of dynamical outcomes, including mimicry but also regional and local polymorphism. Moreover, the heterogeneity of predator behavior across space and time that a more nuanced foraging strategy allows, can even further facilitate the emergence of both local and regional polymorphism in prey warning color. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
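
    To make the frequency dependence concrete, the toy simulation below implements Müller's classical "fixed n_k" rule that the abstract contrasts with the optimal strategy; it does not implement the authors' exploration-exploitation model, and all numbers are hypothetical.

```python
def muller_fixed_n(n1, n2, n_k=50, generations=30, growth=1.05):
    """Toy dynamics under Mueller's 'fixed n_k' rule: naive predators kill a
    fixed number n_k of each distinct warning signal while learning, so
    per-capita mortality (roughly n_k / N_i) is higher for the rarer morph,
    producing positive frequency dependence that favours the common signal."""
    history = []
    for _ in range(generations):
        n1 = max(n1 * growth - n_k, 0.0)
        n2 = max(n2 * growth - n_k, 0.0)
        history.append((n1, n2))
    return history

# Hypothetical unpalatable prey morphs starting at unequal abundance.
traj = muller_fixed_n(n1=2000, n2=800)
print(traj[-1])   # the common morph grows while the rarer morph declines
```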

  9. Optimization of Plasma Sample Pretreatment for Quantitative Analysis Using iTRAQ Labeling and LC-MALDI-TOF/TOF

    PubMed Central

    Luczak, Magdalena; Marczak, Lukasz; Stobiecki, Maciej

    2014-01-01

    Shotgun proteomic methods involving iTRAQ (isobaric tags for relative and absolute quantitation) peptide labeling facilitate quantitative analyses of proteomes and searches for useful biomarkers. However, the plasma proteome's complexity and the highly dynamic plasma protein concentration range limit the ability of conventional approaches to analyze and identify a large number of proteins, including useful biomarkers. The goal of this paper is to elucidate the best approach for plasma sample pretreatment for MS- and iTRAQ-based analyses. Here, we systematically compared four approaches, which include centrifugal ultrafiltration, SCX chromatography with fractionation, affinity depletion, and plasma without fractionation, to reduce plasma sample complexity. We generated an optimized protocol for quantitative protein analysis using iTRAQ reagents and an UltrafleXtreme (Bruker Daltonics) MALDI TOF/TOF mass spectrometer. Moreover, we used a simple, rapid, efficient, but inexpensive sample pretreatment technique that generated an optimal opportunity for biomarker discovery. We discuss the results from the four sample pretreatment approaches and conclude that SCX chromatography without affinity depletion is the best plasma sample preparation pretreatment method for proteome analysis. Using this technique, we identified 1,780 unique proteins, including 1,427 that were quantified by iTRAQ with high reproducibility and accuracy. PMID:24988083

  10. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    PubMed

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning, but several negative effects reduce extraction accuracy, such as resolution limitations, poor correction and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical images. A better strategy for setting the SSAE network structure is given, and an approach for setting the number and proportion of training samples for better training of the SSAE is presented. The optical data and DSM were combined as input to the optimized SSAE; after training on the optimized samples, the appropriate network structure extracts buildings with high accuracy and good robustness.
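
    As a rough illustration of the building block being stacked, the sketch below evaluates the objective of a single sparse-autoencoder layer (reconstruction error plus a KL sparsity penalty and weight decay); it is not the authors' network, and the input is a random stand-in for fused DSM/optical features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0, lam=1e-4):
    """Objective of one sparse-autoencoder layer: mean squared reconstruction
    error + beta * KL(rho || rho_hat) sparsity penalty on the average hidden
    activation + L2 weight decay."""
    H = sigmoid(X @ W1 + b1)                  # hidden activations
    X_hat = sigmoid(H @ W2 + b2)              # reconstruction
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    rho_hat = H.mean(axis=0)                  # average activation of each hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat) +
                (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return recon + beta * kl + decay

# Stand-in for fused LIDAR-DSM + optical feature vectors (64 samples x 10 features in [0, 1]).
X = rng.random((64, 10))
n_in, n_hidden = X.shape[1], 6
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
print(f"loss = {sparse_ae_loss(X, W1, b1, W2, b2):.4f}")
```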

  11. The two options for sample evaporation in hot GC injectors: thermospray and band formation. optimization of conditions and injector design.

    PubMed

    Grob, Koni; Biedermann, Maurus

    2002-01-01

    Although classical split and splitless injection is more than 30 years old, we are only beginning to understand the vaporization process in the injector. Solvent evaporation determines much of the process and is the first obstacle to overcome. Videos recorded on devices imitating injectors showed that sample (solvent) evaporation is often a violent, poorly controlled process, which might well explain many of the puzzling quantitative results often obtained. We do not adequately take into account that two vaporization techniques are in use. Partial solvent evaporation inside the syringe needle (optimized as "hot needle injection") produces thermospray: the sample liquid is nebulized upon leaving the needle. The resulting fog is rapidly slowed and moves with the gas, and solute evaporation largely occurs from microparticles suspended in the gas phase; empty liners are most suitable. Fast autosamplers suppress vaporization in the needle, i.e., nebulization, and shoot a band of liquid into the chamber that must be stopped by a packing or by obstacles suitable to hold the liquid in place during the 0.2-5 s required for solvent evaporation; solute evaporation then largely occurs from the surfaces onto which the sample is deposited. Insights into these mechanisms help optimize conditions in a more rational manner. Methods should specify whether they were optimized and validated for injection with thermospray or with band formation. These insights should also enable a significant improvement of injector design, particularly for splitless injection.

  12. IMPROVEMENTS IN POLLUTANT MONITORING: OPTIMIZING SILICONE FOR CO-DEPLOYMENT WITH POLYETHYLENE PASSIVE SAMPLING DEVICES

    PubMed Central

    O’Connell, Steven G.; McCartney, Melissa A.; Paulik, L. Blair; Allan, Sarah E.; Tidwell, Lane G.; Wilson, Glenn; Anderson, Kim A.

    2014-01-01

    Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2–5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960

  13. Improvements in pollutant monitoring: optimizing silicone for co-deployment with polyethylene passive sampling devices.

    PubMed

    O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A

    2014-10-01

    Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring.

  14. Developing an optimal sampling design. A case study in a coastal marine ecosystem.

    PubMed

    Kitsiou, D; Tsirtsis, G; Karydis, M

    2001-09-01

    The purpose of the present work was to develop a sampling design that optimises sampling site locations in a coastal marine environment, applying statistical analysis and spatial autocorrelation methods. The dataset included data collected from 34 sampling sites in the Strait of Lesbos, Greece, arranged in a 1 x 1 NM grid. The coastal shallow ecosystem was subdivided into three zones: an inner zone (7 stations), a middle zone (16 stations) and an offshore zone (11 stations). The standard error of the chlorophyll-a concentrations in each zone was used as the criterion for optimising the sampling design, resulting in reallocation of the sampling sites among the three zones. The positions of the reallocated stations were assessed by estimating the spatial heterogeneity and anisotropy of chlorophyll-a concentrations using variograms. Study of the variance of the initial dataset of the inner zone, taking spatial heterogeneity into account, revealed two different sub-areas, and the number of inner stations was therefore reassessed. The proposed methodology reduces the number of sampling sites while maximising the information obtained from spatial data in marine ecosystems. It is described as a step-by-step procedure and could be widely applied to sampling designs for coastal pollution problems.
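
    One simple way to turn a per-zone variability criterion into a reallocation of stations is a Neyman-style allocation, sketched below with invented chlorophyll-a values; this illustrates the idea of the standard-error criterion rather than the paper's exact procedure.

```python
import numpy as np

def reallocate_stations(zone_samples, total_stations):
    """Allocate a fixed total number of stations among zones in proportion
    to N_h * s_h (Neyman-style allocation), where s_h is the standard
    deviation of chlorophyll-a in zone h and N_h its current station count.
    Zones with noisier concentrations receive more stations."""
    weights = np.array([len(v) * np.std(v, ddof=1) for v in zone_samples])
    raw = total_stations * weights / weights.sum()
    alloc = np.floor(raw).astype(int)
    # Hand out any remaining stations to the largest fractional parts.
    for i in np.argsort(raw - alloc)[::-1][: total_stations - alloc.sum()]:
        alloc[i] += 1
    return alloc

# Hypothetical chlorophyll-a (ug/L) from inner (7), middle (16) and offshore (11) zones.
rng = np.random.default_rng(3)
zones = [rng.normal(2.5, 0.9, 7), rng.normal(1.2, 0.4, 16), rng.normal(0.6, 0.1, 11)]
print(reallocate_stations(zones, total_stations=34))
```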

  15. Association of body mass index (BMI) and percent body fat among BMI-defined non-obese middle-aged individuals: Insights from a population-based Canadian sample.

    PubMed

    Collins, Kelsey H; Sharif, Behnam; Sanmartin, Claudia; Reimer, Raylene A; Herzog, Walter; Chin, Rick; Marshall, Deborah A

    2017-03-01

    To evaluate the association between percent body fat (%BF) and body mass index (BMI) among BMI-defined non-obese individuals between 40 and 69 years of age, using a population-based Canadian sample. Cross-sectional data from the Canadian Health Measures Survey (2007 and 2009) were used to select all middle-aged individuals with BMI < 30 kg/m2 (n = 2,656). %BF was determined from anthropometric skinfolds and categorized according to sex-specific equations. Associations of other anthropometric measures and metabolic markers were evaluated across the %BF categories. Significance of proportions was evaluated using chi-squared tests and Bonferroni-adjusted Wald tests. Diagnostic performance measures of BMI-defined overweight categories compared with those defined by %BF are reported. The majority (69%) of the sample was %BF-defined overweight/obese, while 55% were BMI-defined overweight. BMI category was not concordant with %BF classification for 30% of the population. The greatest discordance between %BF and BMI was observed among %BF-defined overweight/obese women (32%). Sensitivity and specificity of BMI-defined overweight compared with %BF-defined overweight/obese were 58% and 94% among females and 82% and 59% among males, respectively. According to the estimated negative predictive value, an individual categorized as BMI-defined non-obese has a 52% chance of being in the %BF-defined overweight/obese category. Middle-aged individuals classified as normal by BMI may be overweight/obese based on measures of %BF. These individuals may be at risk for chronic diseases but would not be identified as such based on their BMI classification. Quantifying %BF in this group could inform targeted strategies for disease prevention.
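
    The diagnostic performance measures quoted above follow from a standard 2x2 table. The sketch below computes sensitivity, specificity, and predictive values from such a table; the counts are invented for illustration and only loosely mimic the reported percentages.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values for a 2x2 table in
    which %BF-defined overweight/obese is the reference standard and
    BMI-defined overweight is the 'test'."""
    sensitivity = tp / (tp + fn)        # overweight by %BF correctly flagged by BMI
    specificity = tn / (tn + fp)        # normal by %BF correctly called normal by BMI
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)                # chance a BMI-normal person is truly %BF-normal
    return sensitivity, specificity, ppv, npv

# Hypothetical counts (reference = %BF-defined overweight/obese).
sens, spec, ppv, npv = diagnostic_measures(tp=580, fp=25, fn=420, tn=400)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} NPV={npv:.2f}")
```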

  16. Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation

    PubMed Central

    Suh, Young Soo; Ro, Young Sik; Kang, Hee Jun

    2010-01-01

    This paper is concerned with networked estimation, where sensor data are transmitted over a network of limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is proposed to find sampling periods and quantization bit lengths combination, which gives good estimation performance while satisfying the transmission rate constraint. Through the numerical example, the proposed algorithm is verified. PMID:22163557
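
    The paper's performance equation and search algorithm are not reproduced here; as a hedged illustration of the underlying trade-off, the sketch below enumerates (sampling period, bit length) pairs, discards those violating the transmission-rate constraint, and picks the pair minimizing a simple error proxy of my own choosing.

```python
import itertools

def choose_period_and_bits(periods, bit_lengths, rate_limit,
                           drift_per_second=2.0, signal_range=10.0):
    """Pick the (sampling period, quantization bits) pair with the smallest
    error proxy subject to bits/period <= rate_limit (bits per second).
    Error proxy = inter-sample drift variance + uniform quantization noise
    variance, i.e. (drift*T)^2/3 + (range/2^bits)^2/12 (a stand-in for the
    paper's estimation-performance equation)."""
    best = None
    for T, nbits in itertools.product(periods, bit_lengths):
        if nbits / T > rate_limit:
            continue                                   # violates transmission rate
        drift_var = (drift_per_second * T) ** 2 / 3.0
        quant_var = (signal_range / 2 ** nbits) ** 2 / 12.0
        err = drift_var + quant_var
        if best is None or err < best[0]:
            best = (err, T, nbits)
    return best

# Hypothetical candidate periods (s), bit lengths, and a 200 bit/s channel.
print(choose_period_and_bits(periods=[0.01, 0.02, 0.05, 0.1],
                             bit_lengths=[4, 6, 8, 10, 12],
                             rate_limit=200))
```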

  17. OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER

    EPA Science Inventory

    The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...

  18. OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER

    EPA Science Inventory

    The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...

  19. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool that does not require the same reference standard as the analyte. Measurement parameters have been discussed in detail and high-resolution balances have been used for sample preparation. However, the high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools and many analysts may find those balances difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample was weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.

  20. Defining chaos

    SciTech Connect

    Hunt, Brian R.; Ott, Edward

    2015-09-15

    In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.

  1. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  2. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    PubMed

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next-generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, providing a cost-effective solution. We implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse, and call the result the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary with MCMC-ODPR than without primer reuse, for equivalent coverage. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse, the user may increase the coverage of sequences amplified by the designed primers at significantly lower cost. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
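
    The sketch below is a generic Metropolis-Hastings search over subsets of candidate primers that trades off primer cost against target coverage; it illustrates the kind of sampling MCMC-ODPR performs but is not the published algorithm, and the incidence matrix, costs, and penalty are toy values.

```python
import numpy as np

rng = np.random.default_rng(7)

n_primers, n_targets = 40, 120
# Toy incidence matrix: covers[i, j] = True if candidate primer i amplifies target j.
covers = rng.random((n_primers, n_targets)) < 0.08
cost = rng.uniform(15, 30, n_primers)          # hypothetical nucleotides per primer

def score(selected):
    """Lower is better: primer cost plus a penalty for each uncovered target."""
    covered = covers[selected].any(axis=0).sum()
    return cost[selected].sum() + 50.0 * (n_targets - covered)

def metropolis_primer_search(n_iter=20000, temperature=5.0):
    """Metropolis-Hastings over primer subsets: propose flipping one primer
    in/out; always accept improvements, accept worse sets with probability
    exp(-delta/temperature) so the chain can escape local optima."""
    state = rng.random(n_primers) < 0.5
    current = score(state)
    best, best_score = state.copy(), current
    for _ in range(n_iter):
        proposal = state.copy()
        proposal[rng.integers(n_primers)] ^= True          # symmetric proposal
        delta = score(proposal) - current
        if delta <= 0 or rng.random() < np.exp(-delta / temperature):
            state, current = proposal, current + delta
            if current < best_score:
                best, best_score = state.copy(), current
    return best, best_score

subset, total = metropolis_primer_search()
print(f"{subset.sum()} primers selected, score {total:.1f}")
```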

  3. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    PubMed Central

    2012-01-01

    Background Next-generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that implements primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage, providing a cost-effective solution. We implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse, and call the result the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary with MCMC-ODPR than without primer reuse, for equivalent coverage. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse, the user may increase the coverage of sequences amplified by the designed primers at significantly lower cost. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base. PMID:23126469

  4. Fuel Optimal Low Thrust Trajectories for an Asteroid Sample Return Mission

    DTIC Science & Technology

    2005-03-01

    additional payload. Moreover, an ion thruster employs no moving parts and uses a chemically inert propellant. Energy is not stored in the form of... [Ref. 5]. [Figure: spacecraft configuration, showing the 1.5 m dia. fixed high-gain antenna, solar array gimbal, solar array release, sample return capsule, and thruster cluster.] ... a generalized non-conservative force vector representing spacecraft thrust. System kinetic and potential energy terms define the Lagrangian, with kinetic energy T = ½ V·V = ½(ṙ² + r²θ̇²).

  5. Optimization of polymerase chain reaction for detection of Clostridium botulinum type C and D in bovine samples.

    PubMed

    Prévot, V; Tweepenninckx, F; Van Nerom, E; Linden, A; Content, J; Kimpe, A

    2007-01-01

    Botulism is a rare but serious paralytic illness caused by a nerve toxin that is produced by the bacterium Clostridium botulinum. The economic, medical and alimentary consequences can be catastrophic in case of an epizooty. A polymerase chain reaction (PCR)-based assay was developed for the detection of C. botulinum toxigenic strains type C and D in bovine samples. This assay has proved to be less expensive, faster and simpler to use than the mouse bioassay, the current reference method for diagnosis of C. botulinum toxigenic strains. Three pairs of primers were designed, one for global detection of C. botulinum types C and D (primer pair Y), and two strain-specific pairs specifically designed for types C (primer pair VC) and D (primer pair VD). The PCR amplification conditions were optimized and evaluated on 13 bovine and two duck samples that had been previously tested by the mouse bioassay. In order to assess the impact of sample treatment, both DNA extracted from crude samples and three different enrichment broths (TYG, CMM, CMM followed by TYG) were tested. A 100% sensitivity was observed when samples were enriched for 5 days in CMM followed by 1 day in TYG broth. False-negative results were encountered when C. botulinum was screened for in crude samples. These findings indicate that the current PCR is a reliable method for the detection of C. botulinum toxigenic strains type C and D in bovine samples but only after proper enrichment in CMM and TYG broth.

  6. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    NASA Astrophysics Data System (ADS)

    He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang

    2016-08-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  7. Dynamically optimized Wang-Landau sampling with adaptive trial moves and modification factors.

    PubMed

    Koh, Yang Wei; Lee, Hwee Kuan; Okabe, Yutaka

    2013-11-01

    The density of states of continuous models is known to span many orders of magnitudes at different energies due to the small volume of phase space near the ground state. Consequently, the traditional Wang-Landau sampling which uses the same trial move for all energies faces difficulties sampling the low-entropic states. We developed an adaptive variant of the Wang-Landau algorithm that very effectively samples the density of states of continuous models across the entire energy range. By extending the acceptance ratio method of Bouzida, Kumar, and Swendsen such that the step size of the trial move and acceptance rate are adapted in an energy-dependent fashion, the random walker efficiently adapts its sampling according to the local phase space structure. The Wang-Landau modification factor is also made energy dependent in accordance with the step size, enhancing the accumulation of the density of states. Numerical simulations show that our proposed method performs much better than the traditional Wang-Landau sampling.
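
    The compact sketch below runs Wang-Landau sampling on a discretized one-dimensional double-well with a trial step size that grows with the energy bin, loosely in the spirit of the adaptive scheme described above; the binning, step-size rule, and flatness parameters are simplified choices of mine, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(11)

def energy(x):
    return (x ** 2 - 1.0) ** 2              # toy 1D double-well "continuous model"

E_MIN, E_MAX, N_BINS = 0.0, 4.0, 80

def bin_of(e):
    return min(int((e - E_MIN) / (E_MAX - E_MIN) * N_BINS), N_BINS - 1)

def wang_landau(ln_f_final=1e-3, flatness=0.8):
    """Wang-Landau estimate of ln g(E) with an energy-dependent trial step:
    walkers in low-energy (low-entropy) bins use smaller steps, a crude
    stand-in for energy-dependent adaptive trial moves."""
    ln_g = np.zeros(N_BINS)
    hist = np.zeros(N_BINS)
    ln_f = 1.0
    x = 1.0
    b = bin_of(energy(x))
    while ln_f > ln_f_final:
        for _ in range(20000):
            step = 0.05 + 0.5 * b / N_BINS          # smaller steps near the ground state
            x_new = x + rng.normal(0.0, step)
            e_new = energy(x_new)
            if e_new <= E_MAX:                      # moves outside the window are rejected
                b_new = bin_of(e_new)
                d = ln_g[b] - ln_g[b_new]
                if d >= 0 or rng.random() < np.exp(d):
                    x, b = x_new, b_new
            ln_g[b] += ln_f                         # update the bin we end up in either way
            hist[b] += 1
        visited = hist[hist > 0]
        if visited.min() > flatness * visited.mean():
            hist[:] = 0                             # histogram is "flat": tighten the factor
            ln_f *= 0.5
    return ln_g - ln_g.min()

ln_g = wang_landau()                                # coarse run, for illustration only
```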

  8. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples.

    PubMed

    Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S

    2016-03-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. © 2016 The Author(s).

  9. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples

    USGS Publications Warehouse

    Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David

    2016-01-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer–based qPCR test for P. destructans to refine quantification capabilities of this assay.

  10. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
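
    As a simple illustration of the Fisher-transformation approach that the abstract critiques, the sketch below gives the classic approximate sample size for detecting that a Pearson correlation differs from a planning value; the ICC-specific procedures in the paper replace the transformation and its variance with intraclass analogues and are not reproduced here.

```python
from math import atanh, ceil
from scipy.stats import norm

def n_from_fisher_z(rho0, rho1, alpha=0.05, power=0.80):
    """Approximate sample size for a one-sided test that a correlation equals
    rho1 rather than rho0, using Fisher's z transformation z(r) = atanh(r),
    whose sampling variance is roughly 1/(n - 3)."""
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    effect = atanh(rho1) - atanh(rho0)
    return ceil(((z_alpha + z_beta) / effect) ** 2 + 3)

# Hypothetical reliability planning values.
print(n_from_fisher_z(rho0=0.6, rho1=0.8))
```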

  11. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    PubMed

    Marques, João Tiago; Ramos Pereira, Maria J; Marques, Tiago A; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  12. Optimized Design and Analysis of Sparse-Sampling fMRI Experiments

    PubMed Central

    Perrachione, Tyler K.; Ghosh, Satrajit S.

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase

  13. Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats

    PubMed Central

    Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579

  14. Optimization of the first order gradiometer for small sample magnetization measurements using pulse integrating magnetometer

    SciTech Connect

    Trojanowski, S.; Ciszek, M.

    2009-10-15

    In the paper we present an analytical calculation method for determining the sensitivity of a pulse field magnetometer working with a first order gradiometer. Our considerations are focused especially on the case of magnetic moment measurements of very small samples. The analytical equations derived in this work allow a quick estimation of the magnetometer's sensitivity and also provide a way to calibrate it using the sample simulation coil method. On the basis of the calculations given in the paper, we designed and constructed a simple homemade magnetometer and performed its sensitivity calibration.

  15. Optimization of the first order gradiometer for small sample magnetization measurements using pulse integrating magnetometer.

    PubMed

    Trojanowski, S; Ciszek, M

    2009-10-01

    In the paper we present an analytical calculation method for determining the sensitivity of a pulse field magnetometer working with a first order gradiometer. Our considerations are focused especially on the case of magnetic moment measurements of very small samples. The analytical equations derived in this work allow a quick estimation of the magnetometer's sensitivity and also provide a way to calibrate it using the sample simulation coil method. On the basis of the calculations given in the paper, we designed and constructed a simple homemade magnetometer and performed its sensitivity calibration.

  16. Optimal sample size determinations from an industry perspective based on the expected value of information.

    PubMed

    Willan, Andrew R

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as type I and II errors. As an alternative, taking a societal perspective and using the expected value of information based on Bayesian decision theory, a number of authors have recently shown how to determine the sample size that maximizes the expected net gain, i.e., the difference between the value of the information gained from the results and the cost of the trial. Other authors have proposed Bayesian methods to determine sample sizes from an industry perspective. The purpose of this article is to propose a Bayesian approach to sample size calculations from an industry perspective that attempts to determine the sample size that maximizes expected profit. A model is proposed for expected total profit that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, discount rate, and the relationship between the results and the probability of regulatory approval. The expected value of information provided by trial data is related to the increase in expected profit from increasing the probability of regulatory approval. The methods are applied to an example, including an examination of robustness. The model is extended to consider market share as a function of observed treatment effect. The use of methods based on the expected value of information can provide, from an industry perspective, robust sample size solutions that maximize the difference between the expected value of the information gained from the results and the expected cost of the trial. The method is only as good as the model for expected total profit. Although the model probably has all the right elements, it assumes that market share, per-patient profit, and incidence are insensitive to trial results. The method relies on the central limit theorem, which assumes that the sample sizes involved ensure that the relevant test statistics

  17. Modular tube/plate-based sample management: a business model optimized for scalable storage and processing.

    PubMed

    Fillers, W Steven

    2004-12-01

    Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system.

  18. Evaluation of optimized b-value sampling schemas for diffusion kurtosis imaging with an application to stroke patient data

    PubMed Central

    Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong

    2013-01-01

    Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values distributed toward the two ends of the b-value range performed best. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. From the ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800 and around 2600 s/mm², respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm²) DKI schema in practical clinical applications. PMID:23735303
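
    The appeal of a 3b schema can be seen from the standard DKI signal representation S(b) = S0·exp(-b·D + b²·D²·K/6): with S0 pinned by the b = 0 acquisition, two non-zero b-values suffice to determine D and K per direction. The noise-free sketch below uses hypothetical tissue values (D_true, K_true) and the 0/1000/2500 s/mm² schema; it illustrates only the estimation step, not the evaluation pipeline of the record.

```python
import numpy as np

# Hypothetical tissue parameters (D in mm^2/s, K dimensionless).
D_true, K_true = 1.0e-3, 0.9
b = np.array([0.0, 1000.0, 2500.0])    # the recommended 3b schema (s/mm^2)
S0 = 1.0
S = S0 * np.exp(-b * D_true + (b**2) * (D_true**2) * K_true / 6.0)

# With S0 fixed by the b = 0 point, ln(S/S0) is linear in (D, D^2*K):
y = np.log(S[1:] / S[0])
A = np.column_stack([-b[1:], (b[1:]**2) / 6.0])
x1, x2 = np.linalg.solve(A, y)          # exactly determined with two b > 0
D_fit, K_fit = x1, x2 / x1**2
print(D_fit, K_fit)                     # recovers 1.0e-3 and 0.9 in this noise-free case
```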

  19. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    NASA Astrophysics Data System (ADS)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  20. Shotgun Proteomics of Tomato Fruits: Evaluation, Optimization and Validation of Sample Preparation Methods and Mass Spectrometric Parameters

    PubMed Central

    Kilambi, Himabindu V.; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K.; Sharma, Rameshwar; Sreelakshmi, Yellamaraju

    2016-01-01

    An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer-saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similar high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of a five-fold higher number of proteins compared to published reports in tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192

  1. Optimizing line intercept sampling and estimation for feral swine damage levels in ecologically sensitive wetland plant communities.

    PubMed

    Thomas, Jacob F; Engeman, Richard M; Tillman, Eric A; Fischer, Justin W; Orzell, Steve L; Glueck, Deborah H; Felix, Rodney K; Avery, Michael L

    2013-03-01

    Ecological sampling can be labor intensive, and logistically impractical in certain environments. We optimize line intercept sampling and compare estimation methods for assessing feral swine damage within fragile wetland ecosystems in Florida. Sensitive wetland sites, and the swine damage within them, were mapped using GPS technology. Evenly spaced parallel transect lines were simulated across a digital map of each site. The length of each transect and total swine damage under each transect were measured and percent swine damage within each site was estimated by two methods. The total length method (TLM) combined all transects as a single long transect, dividing the sum of all damage lengths across all transects by the combined length of all transect lines. The equal weight method (EWM) calculated the damage proportion for each transect line and averaged these proportions across all transects. Estimation was evaluated using transect spacings of 1, 3, 5, 10, 15, and 20 m. Based on relative root mean squared error and relative bias measures, the TLM produced higher quality estimates than EWM at all transect spacings. Estimation quality decreased as transect spacing increased, especially for TLM. Estimation quality also increased as the true proportion of swine damage increased. Diminishing improvements in estimation quality as transect spacings decreased suggested 5 m as an optimal tradeoff between estimation quality and labor. An inter-transect spacing of 5 m with TLM estimation appeared an optimal starting point when designing a plan for estimating swine damage, with practical, logistical, economic considerations determining final design details.
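
    The two estimators compared in this record reduce to a few lines of arithmetic; the sketch below computes both from hypothetical per-transect lengths and damage totals (the values in `transects` are made up for illustration).

```python
import numpy as np

# Each transect: (transect_length_m, total_damage_length_m) -- hypothetical values.
transects = np.array([
    [42.0, 3.1],
    [38.5, 0.0],
    [55.2, 7.9],
    [47.3, 2.4],
])
lengths, damage = transects[:, 0], transects[:, 1]

# Total length method (TLM): pool all transects into one long transect.
tlm = damage.sum() / lengths.sum()

# Equal weight method (EWM): average the per-transect damage proportions.
ewm = (damage / lengths).mean()

print(f"TLM estimate: {tlm:.3f}, EWM estimate: {ewm:.3f}")
```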

  2. Shotgun Proteomics of Tomato Fruits: Evaluation, Optimization and Validation of Sample Preparation Methods and Mass Spectrometric Parameters.

    PubMed

    Kilambi, Himabindu V; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K; Sharma, Rameshwar; Sreelakshmi, Yellamaraju

    2016-01-01

    An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer-saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similar high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of a five-fold higher number of proteins compared to published reports in tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues.

  3. Defining excellence.

    PubMed

    Mehl, B

    1993-05-01

    Excellence in the pharmacy profession, particularly pharmacy management, is defined. Several factors have a significant effect on the ability to reach a given level of excellence. The first is the economic and political climate in which pharmacists practice. Stricter controls, reduced resources, and the velocity of change all necessitate nurturing of values and a work ethic to maintain excellence. Excellence must be measured by the services provided with regard to the resources available; thus, the ability to achieve excellence is a true test of leadership and innovation. Excellence is also time dependent, and today's innovation becomes tomorrow's standard. Programs that raise the level of patient care, not those that aggrandize the profession, are the most important. In addition, basic services must be practiced at a level of excellence. Quality assessment is a way to improve care and bring medical treatment to a higher plane of excellence. For such assessment to be effective and not punitive, the philosophy of the program must be known, and the goal must be clear. Excellence in practice is dependent on factors such as political and social norms, standards of practice, available resources, perceptions, time, the motivation to progress to a higher level, and the continuous innovation required to reshape the profession to meet the needs of society.

  4. Smac mimetic induces cell death in a large proportion of primary acute myeloid leukemia samples, which correlates with defined molecular markers

    PubMed Central

    Lueck, Sonja C.; Russ, Annika C.; Botzenhardt, Ursula; Schlenk, Richard F.; Zobel, Kerry; Deshayes, Kurt; Vucic, Domagoj; Döhner, Hartmut; Döhner, Konstanze

    2016-01-01

    Apoptosis is deregulated in most, if not all, cancers, including hematological malignancies. Smac mimetics that antagonize Inhibitor of Apoptosis (IAP) proteins have so far largely been investigated in acute myeloid leukemia (AML) cell lines; however, little is yet known on the therapeutic potential of Smac mimetics in primary AML samples. In this study, we therefore investigated the antileukemic activity of the Smac mimetic BV6 in diagnostic samples of 67 adult AML patients and correlated the response to clinical, cytogenetic and molecular markers and gene expression profiles. Treatment with cytarabine (ara-C) was used as a standard chemotherapeutic agent. Interestingly, about half (51%) of primary AML samples are sensitive to BV6 and 21% intermediate responsive, while 28% are resistant. Notably, 69% of ara-C-resistant samples show a good to fair response to BV6. Furthermore, combination treatment with ara-C and BV6 exerts additive effects in most samples. Whole-genome gene expression profiling identifies cell death, TNFR1 and NF-κB signaling among the top pathways that are activated by BV6 in BV6-sensitive, but not in BV6-resistant cases. Furthermore, sensitivity of primary AML blasts to BV6 correlates with significantly elevated expression levels of TNF and lower levels of XIAP in diagnostic samples, as well as with NPM1 mutation. In a large set of primary AML samples, these data provide novel insights into factors regulating Smac mimetic response in AML and have important implications for the development of Smac mimetic-based therapies and related diagnostics in AML. PMID:27385100

  5. Optimizing conditions for methylmercury extraction from fish samples for GC analysis using response surface methodology.

    PubMed

    Hajeb, P; Jinap, S; Abu Bakar, F; Bakar, J

    2009-06-01

    Response surface methodology (RSM) was used to determine the optimum experimental conditions to extract methylmercury from fish samples for GC analysis. The influence of four variables - acid concentration (3-12 M), cysteine concentration (0.5-2% w/v), solvent volume (3-9 ml) and extraction time (10-30 min) - on recovery of methylmercury was evaluated. The detection limit for methylmercury analysis using a microelectron capture detector was 7 ng g⁻¹ in fish samples. The mean recovery under optimum conditions was 94%. Experimental data were adequately fitted into a second-order polynomial model with multiple regression coefficients (r²) of 0.977. The four variables had a significant effect (p < 0.05) on the recovery of methylmercury from a reference material (BCR-463). Optimum conditions for methylmercury extraction were found using an acid concentration of 12.2 M, cysteine concentration of 2.4%, solvent volume of 1.5 ml and extraction time of 35 min. The validation of the developed method to analyze methylmercury in fish samples exhibited good agreement with mercury content in the samples.

  6. An evaluation of optimal methods for avian influenza virus sample collection

    USDA-ARS?s Scientific Manuscript database

    Sample collection and transport are critical components of any diagnostic testing program and due to the amount of avian influenza virus (AIV) testing in the U.S. and worldwide, small improvements in sensitivity and specificity can translate into substantial cost savings from better test accuracy. ...

  7. Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2011-01-01

    Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…

  8. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    PubMed

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
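
    A back-of-the-envelope binomial calculation, sketched below, conveys why such a large sample is needed for rare synapses: at a true proportion around 0.2%, the relative standard error of the estimated proportion stays large unless many synapses are counted. The proportion and counts used are illustrative, and this is not the disector estimator itself.

```python
import numpy as np

# Hypothetical: proportion of labeled synapses and number of synapses counted.
p = 0.002                      # ~0.2% of synapses belong to the rare pathway
counts = np.array([500, 1000, 5000, 20000])

se = np.sqrt(p * (1 - p) / counts)   # binomial standard error of the estimated proportion
rel_se = se / p
for n, r in zip(counts, rel_se):
    print(f"n = {n:6d}: relative SE of the proportion ~ {100 * r:.0f}%")
```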

  9. Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.

    Treesearch

    Andrew B. Carey; Scott P. Horton; Janice A. Reid

    1989-01-01

    Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...

  10. Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2011-01-01

    Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…

  11. Validity of Lot Quality Assurance Sampling to optimize falciparum malaria surveys in low-transmission areas.

    PubMed

    Rabarijaona, L; Rakotomanana, F; Ranaivo, L; Raharimalala, L; Modiano, D; Boisier, P; De Giorgi, F; Raveloson, N; Jambou, R

    2001-01-01

    To control the reappearance of malaria in the Madagascan highlands, indoor house-spraying of DDT was conducted from 1993 until 1998. Before the end of the insecticide-spraying programme, a surveillance system was set up to allow rapid identification of new malaria epidemics. When the number of suspected clinical malaria cases notified to the surveillance system exceeds a predetermined threshold, a parasitological survey is carried out in the community to confirm whether or not transmission of falciparum malaria is increasing. Owing to the low specificity of the surveillance system, this confirmation stage is essential to guide the activities of the control programme. For this purpose, Lot Quality Assurance Sampling (LQAS), which usually requires smaller sample sizes, seemed to be a valuable alternative to conventional survey methods. In parallel to a conventional study of Plasmodium falciparum prevalence carried out in 1998, we investigated the ability of LQAS to rapidly classify zones according to a predetermined prevalence level. Two prevalence thresholds (5% and 15%) were tested using various sampling plans. A (36, 2) plan, in which a community is classified as exceeding the threshold when at least 2 positive individuals are found in a random sample of 36, enabled us to classify a community correctly with a sensitivity of 100% and a specificity of 94%. LQAS is an effective tool for rapid assessment of falciparum malaria prevalence when monitoring malaria transmission.
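
    The operating characteristics of an LQAS plan follow directly from the binomial distribution. The sketch below evaluates the (36, 2) plan mentioned above at a few illustrative true prevalences; the exact sensitivity and specificity reported in the record depend on the thresholds and prevalence scenarios chosen by the authors.

```python
from scipy.stats import binom

# LQAS plan (n, d) = (36, 2): classify the community as "above threshold"
# if at least d of the n sampled individuals are parasite-positive.
n, d = 36, 2

def prob_classified_high(prevalence):
    # P(X >= d) with X ~ Binomial(n, prevalence)
    return 1.0 - binom.cdf(d - 1, n, prevalence)

# Operating characteristics at a few illustrative true prevalences.
for p in (0.01, 0.05, 0.15):
    print(f"prevalence {p:4.0%}: P(classified above threshold) = {prob_classified_high(p):.2f}")
```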

  12. Pushpoint sampling for defining spatial and temporal variations in contaminant concentrations in sediment pore water near the ground-water / surface-water interface

    USGS Publications Warehouse

    Zimmerman, Marc J.; Massey, Andrew J.; Campo, Kimberly W.

    2005-01-01

    During four periods from April 2002 to June 2003, pore-water samples were taken from river sediment within a gaining reach (Mill Pond) of the Sudbury River in Ashland, Massachusetts, with a temporary pushpoint sampler to determine whether this device is an effective tool for measuring small-scale spatial variations in concentrations of volatile organic compounds and selected field parameters (specific conductance and dissolved oxygen concentration). The pore waters sampled were within a subsurface plume of volatile organic compounds extending from the nearby Nyanza Chemical Waste Dump Superfund site to the river. Samples were collected from depths of 10, 30, and 60 centimeters below the sediment surface along two 10-meter-long, parallel transects extending into the river. Twenty-five volatile organic compounds were detected at concentrations ranging from less than 1 microgram per liter to hundreds of micrograms per liter (for example, 1,2-dichlorobenzene, 490 micrograms per liter; cis-1,2-dichloroethene, 290 micrograms per liter). The most frequently detected compounds were either chlorobenzenes or chlorinated ethenes. Many of the compounds were detected only infrequently. Quality-control sampling indicated a low incidence of trace concentrations of contaminants. Additional samples collected with passive-water-diffusion-bag samplers yielded results comparable to those collected with the pushpoint sampler and to samples collected in previous studies at the site. The results demonstrate that the pushpoint sampler can yield distinct samples from sites in close proximity; in this case, sampling sites were 1 meter apart horizontally and 20 or 30 centimeters apart vertically. Moreover, the pushpoint sampler was able to draw pore water when inserted to depths as shallow as 10 centimeters below the sediment surface without entraining surface water. The simplicity of collecting numerous samples in a short time period (routinely, 20 to 30 per day) validates the use of a

  13. Determining the Optimal Spectral Sampling Frequency and Uncertainty Thresholds for Hyperspectral Remote Sensing of Ocean Color

    NASA Technical Reports Server (NTRS)

    Vandermeulen, Ryan A.; Mannino, Antonio; Neeley, Aimee; Werdell, Jeremy; Arnone, Robert

    2017-01-01

    Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse remote sensing reflectance and phytoplankton absorbance spectra to describe how data points are correlated with distance across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid river filaments, coastal waters, shelf waters, a dense Microcystis bloom, and oligotrophic waters), as well as individual and mixed phytoplankton functional types (PFTs; diatoms, chlorophytes, cyanobacteria, coccolithophores). Results show that a continuous spectrum of 5 to 7 nm spectral resolution is optimal to resolve the variability across mixed reflectance and absorbance spectra. In addition, the impact of uncertainty on subsequent derivative analysis is assessed, showing that a limit of 3 Gaussian noise (SNR 66) is tolerated without smoothing the spectrum, and 13 (SNR 15) noise is tolerated with smoothing.

  14. Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image

    PubMed Central

    Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.

    2014-01-01

    It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681

  15. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling.

    PubMed

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R; da Rocha, Andre M; Keller, Laura M; Schuster, Timothy G; Sonksen, Jens; Smith, Gary D

    2015-03-01

    To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. Prospective clinical laboratory study. University assisted reproductive technology (ART) laboratory. A total of 594 patients undergoing semen analysis and cryopreservation. Semen analysis, cryopreservation with different intermediate steps and in different volumes (50-1,000 μL), and long-term storage in LN2 or VN2. Optimal TV volume, prediction of cryosurvival (CS) in ART procedure vials (ARTVs) with pre-freeze semen parameters and TV CS, post-thaw motility after two- or three-step semen cryopreservation and cryostorage in VN2 and LN2. Test vial volume of 50 μL yielded lower CS than other volumes tested. Cryosurvival of 100 μL was similar to that of larger volumes tested. An intermediate temperature exposure (-88°C to -93°C for 20 minutes) during cryopreservation did not affect post-thaw motility. Cryosurvival of TVs and ARTVs from the same ejaculate were similar. Cryosurvival of the first TV in a series of cryopreserved ejaculates was similar to and correlated with that of TVs from different ejaculates within the same patient. Cryosurvival of the first TV was correlated with subsequent ARTVs. Long-term cryostorage in VN2 did not affect CS. This study provides experimental evidence for use of a single 100 μL TV per patient to predict CS when freezing multiple ejaculates over a short period of time (<10 days). Additionally, semen cryostorage in VN2 provides a stable and safe environment over time. Copyright © 2015. Published by Elsevier Inc.

  16. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a "lazy" dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT

  17. Optimization of proteomic sample preparation procedures for comprehensive protein characterization of pathogenic systems

    SciTech Connect

    Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.

    2008-12-19

    The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g. host-response to a pathogen or a pathogen-response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of either host and/or pathogen cellular material.

  18. Application of trajectory optimization techniques to upper atmosphere sampling flights using the F-15 Eagle aircraft

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1976-01-01

    Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.

  19. Optimization of Integrative Passive Sampling Approaches for Use in the Epibenthic Environment

    DTIC Science & Technology

    2016-12-23

    Conder et al. 2003). However, reporting limits can be a challenge as the amount of material sampled per length is minimal, and insertion of great ...insulating and water impermeable barrier which allowed the sensor to operate while submerged, and was formed thin enough to allow suitable heat exchange...Comparison of epibenthic POCIS measurements to sediment and pore water concentrations in controlled exposure scenarios will provide a great deal of

  20. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    PubMed

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K⁺), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, post-storage at six hours at 21 °C and six days at 2-8 °C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K⁺, LD and Pi above outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10⁹/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K⁺, LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  1. Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.

    PubMed

    Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam

    2009-04-28

    Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
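
    The composite-move idea described above can be sketched with a toy example: a short Metropolis sub-chain is run on a cheap reference potential, and the whole block is then accepted with a criterion that corrects for the reference/full mismatch. The 1-D potentials below are arbitrary stand-ins, and the sketch uses the canonical ensemble rather than the isothermal-isobaric ensemble of the record.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0  # 1/kT in reduced units

# Toy potentials standing in for the cheap "reference" and expensive "full" models.
def e_ref(x):  return 0.5 * x**2
def e_full(x): return 0.5 * x**2 + 0.1 * np.sin(3.0 * x)

def composite_move(x, n_sub=20, step=0.5):
    """Run a short Metropolis sub-chain on the reference potential, then accept or
    reject the whole block with the nested-chain (modified Metropolis) criterion."""
    y = x
    for _ in range(n_sub):                      # ordinary Metropolis on e_ref
        trial = y + rng.uniform(-step, step)
        if rng.random() < np.exp(-beta * (e_ref(trial) - e_ref(y))):
            y = trial
    # Composite acceptance: correct the reference-chain bias with the full potential.
    d_full = e_full(y) - e_full(x)
    d_ref = e_ref(y) - e_ref(x)
    if rng.random() < np.exp(-beta * (d_full - d_ref)):
        return y, True
    return x, False

x, accepted = 0.0, 0
for _ in range(200):
    x, ok = composite_move(x)
    accepted += ok
print(f"composite-move acceptance rate: {accepted / 200:.2f}")
```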

  2. An optimal sampling approach to modelling whole-body vibration exposure in all-terrain vehicle driving.

    PubMed

    Lü, Xiaoshu; Takala, Esa-Pekka; Toppila, Esko; Marjanen, Ykä; Kaila-Kangas, Leena; Lu, Tao

    2016-12-01

    Exposure to whole-body vibration (WBV) presents an occupational health risk, and several safety standards require WBV to be measured. The high cost of direct measurements in large epidemiological studies raises the question of optimal sampling for estimating WBV exposures, given the large variation in exposure levels at real worksites. This paper presents a new approach to addressing this problem. Daily exposure to WBV was recorded for 9-24 days among 48 all-terrain vehicle drivers. Four data-sets based on root mean squared recordings were obtained from the measurements. The data were modelled using a semi-variogram with spectrum analysis and the optimal sampling scheme was derived. The optimal sampling interval was found to be 140 min. The result was verified and validated in terms of its accuracy and statistical power. Recordings of two to three hours are probably needed to obtain a sufficiently unbiased daily WBV exposure estimate at real worksites. The developed model is general enough to be applicable to other cumulative exposures or biosignals. Practitioner Summary: Exposure to whole-body vibration (WBV) presents an occupational health risk, and safety standards require WBV to be measured. However, direct measurements can be expensive. This paper presents a new approach to addressing this problem. The developed model is general enough to be applicable to other cumulative exposures or biosignals.
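
    A bare-bones version of the semi-variogram step is sketched below: compute γ(h) = ½·E[(z(t+h) − z(t))²] for a synthetic exposure record and read off an approximate range. The synthetic signal, lag grid and 95%-of-sill rule are illustrative choices, not the spectrum-assisted modelling used in the record.

```python
import numpy as np

# Hypothetical 1 Hz daily WBV exposure record (r.m.s. acceleration), 8 h = 28800 samples.
rng = np.random.default_rng(1)
t = np.arange(28_800)
signal = 0.5 + 0.2 * np.sin(2 * np.pi * t / 7200) + 0.05 * rng.standard_normal(t.size)

def semivariogram(z, lags):
    # gamma(h) = 0.5 * mean of (z(t+h) - z(t))^2
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

lags = np.arange(60, 14_400, 60)          # 1 min to 4 h, in 1-min steps
gamma = semivariogram(signal, lags)

# Crude "range" estimate: first lag where the semivariogram reaches ~95% of its sill.
sill = gamma[-100:].mean()
range_lag = lags[np.argmax(gamma >= 0.95 * sill)]
print(f"approximate variogram range: {range_lag / 60:.0f} min")
```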

  3. Sampling the Spatial Patterns of Cancer: Optimized Biopsy Procedures for Estimating Prostate Cancer Volume and Gleason Score

    PubMed Central

    Ou, Yangming; Shen, Dinggang; Zeng, Jianchao; Sun, Leon; Moul, Judd; Davatzikos, Christos

    2009-01-01

    Prostate biopsy is the current gold-standard procedure for prostate cancer diagnosis. Existing prostate biopsy procedures have been mostly focusing on detecting cancer presence. However, they often ignore the potential use of biopsy to estimate cancer volume (CV) and Gleason Score (GS, a cancer grade descriptor), the two surrogate markers for cancer aggressiveness and the two crucial factors for treatment planning. To fill this gap, this paper assumes and demonstrates that, by optimally sampling the spatial patterns of cancer, biopsy procedures can be specifically designed for estimating CV and GS. Our approach combines image analysis and machine learning tools in an atlas-based population study that consists of three steps. First, the spatial distributions of cancer in a patient population are learned, by constructing statistical atlases from histological images of prostate specimens with known cancer ground truths. Then, the optimal biopsy locations are determined in a feature selection formulation, so that biopsy outcomes (either cancer presence or absence) at those locations could be used to differentiate, at the best rate, between the existing specimens having different (high vs. low) CV/GS values. Finally, the optimized biopsy locations are utilized to estimate whether a newly presenting prostate cancer patient has high or low CV/GS values, based on a binary classification formulation. The estimation accuracy and the generalization ability are evaluated by the classification rates and the associated receiver-operating-characteristic (ROC) curves in cross validations. The optimized biopsy procedures are also designed to be robust to the almost inevitable needle displacement errors in clinical practice, and are found to be robust to variations in the optimization parameters as well as the training populations. PMID:19524478

  4. Neuroticism moderates the effect of maximum smoking level on lifetime panic disorder: a test using an epidemiologically defined national sample of smokers.

    PubMed

    Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J

    2006-03-30

    The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems.

  5. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    PubMed

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit to risk ratio.
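
    As a rough check of the reported population values for gemcitabine (V1 = 45 L, CL1 = 4.03 L/min), the sketch below computes a concentration profile for a hypothetical 30-minute infusion using a one-compartment simplification; the peripheral compartment and the dFdU metabolite link are deliberately ignored, so this is only an order-of-magnitude illustration.

```python
import numpy as np

V1, CL = 45.0, 4.03                 # L, L/min (reported population values)
dose_mg, t_inf = 1800.0, 30.0       # hypothetical dose and 30-min infusion
rate = dose_mg / t_inf              # mg/min
k = CL / V1                         # first-order elimination rate constant, 1/min

t = np.linspace(0, 120, 241)        # min
conc = np.where(
    t <= t_inf,
    rate / CL * (1 - np.exp(-k * t)),                                  # during infusion
    rate / CL * (1 - np.exp(-k * t_inf)) * np.exp(-k * (t - t_inf)),   # after the stop
)
print(f"approximate end-of-infusion concentration: {conc[t <= t_inf][-1]:.1f} mg/L")
```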

  6. Optimization of plasma sampling depth and aerosol gas flow rates for single particle inductively coupled plasma mass spectrometry analysis.

    PubMed

    Kálomista, Ildikó; Kéri, Albert; Galbács, Gábor

    2017-09-01

    We performed experiments to assess the separate and combined effects of the sampling depth and the aerosol gas flow rates on signal formation in single particle inductively coupled plasma mass spectrometry (spICP-MS) measurements, using dispersions containing Ag and Au NPs. It was found that the NP signal can be significantly improved by optimizing the sampling depth. With respect to the "robust" setting, a signal improvement of nearly 100% could be achieved, which translates into a 25-30% improvement in size detection limits. The shape of the spICP-MS signal histograms was also found to change with the plasma sampling depth. It was demonstrated that nanoparticle peak separation can also be significantly enhanced by sampling depth optimization. The effect of the aerosol dilution gas flow, now standard in most ICP-MS instruments, on spICP-MS signal formation was also studied for the first time in the literature, as this flow was hoped to make spICP-MS measurements more practical and faster via on-line dilution of the aerosol generated from nano-dispersions. Our experimental results revealed that the dilution gas flow can only be used for a moderate aerosol dilution in spICP-MS measurements, if the gas flow going to the pneumatic nebulizer is proportionally lowered at the same time. This, however, was found to cause a significant worsening in the operation of the sample introduction system, which gives rise to a strong NP signal loss. Thus it was concluded that the use of the aerosol dilution gas flow, in its present form, cannot be recommended for spICP-MS analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Optimization of clamped beam geometry for fracture toughness testing of micron-scale samples

    NASA Astrophysics Data System (ADS)

    Nagamani Jaya, B.; Bhowmick, Sanjit; Syed Asif, S. A.; Warren, Oden L.; Jayaram, Vikram

    2015-06-01

    Fracture toughness measurements at the small scale have gained prominence over the years due to the continuing miniaturization of structural systems. Measurements carried out on bulk materials cannot be extrapolated to smaller length scales either due to the complexity of the microstructure or due to the size and geometric effect. Many new geometries have been proposed for fracture property measurements at small-length scales depending on the material behaviour and the type of device used in service. In situ testing provides the necessary environment to observe fracture at these length scales so as to determine the actual failure mechanism in these systems. In this paper, several improvements are incorporated to a previously proposed geometry of bending a doubly clamped beam for fracture toughness measurements. Both monotonic and cyclic loading conditions have been imposed on the beam to study R-curve and fatigue effects. In addition to the advantages that in situ SEM-based testing offers in such tests, FEM has been used as a simulation tool to replace cumbersome and expensive experiments to optimize the geometry. A description of all the improvements made to this specific geometry of clamped beam bending to make a variety of fracture property measurements is given in this paper.

  8. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution but also on the topography of the area, the physical properties of the soil and the vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer to be estimated better than point measurements do; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundances and distributions of measuring points at the scales of an arable field, a wetland and a commune (areas: 0.01, 1 and 140 km² respectively), under different SM conditions. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded much more strongly. Spatial analysis showed autocorrelations of SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differing anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area are reflected in the parameters characterizing the SM distribution. This suggests the need to use at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each. This follows from the standard error and the range of spatial variability, which change little as the number of samples increases above this figure. Gravimetric method

  9. Optimized Extraction Method To Remove Humic Acid Interferences from Soil Samples Prior to Microbial Proteome Measurements.

    PubMed

    Qian, Chen; Hettich, Robert L

    2017-07-07

    The microbial composition of soil environments and the activities of those microbes play a critical role in organic matter transformation and nutrient cycling. Liquid chromatography coupled to high-performance mass spectrometry provides a powerful approach to characterizing soil microbiomes; however, the limited microbial biomass and the presence of abundant interferences in soil samples present major challenges to proteome extraction and subsequent MS measurement. To this end, we have designed an experimental method that improves microbial proteome measurement by removing coextracted soil-borne humic substances. Our approach employs in situ detergent-based microbial lysis/TCA precipitation coupled to an additional cleanup step involving acidified precipitation and filtering at the peptide level to remove most of the humic acid interferences prior to proteolytic peptide measurement. The novelty of this approach lies in exploiting two different characteristics of humic acids: (1) Humic acids are insoluble in acidic solution but should not be removed at the protein level, where undesirable protein removal may also occur; it is better to leave the humic acids in the samples until the peptide level, at which point the significant differential solubility of humic acids versus peptides at low pH can be exploited very efficiently. (2) Most humic acids have larger molecular weights than the peptides. Therefore, filtering a pH 2 to 3 peptide solution through a 10 kDa filter will remove most of the humic acids. This method is easily interfaced with normal proteolytic processing approaches and provides a reliable and straightforward protein extraction method that efficiently removes soil-borne humic substances without inducing proteome sample loss or biasing protein identification in mass spectrometry. In general, this humic acid removal step is universal and can be adopted by any workflow to effectively remove humic acids to avoid them negatively competing

  10. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    PubMed

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. The optimization of incident angles of low-energy oxygen ion beams for increasing sputtering rate on silicon samples

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.

    2008-12-01

    In order to determine an appropriate incident angle of low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degradation of depth resolution in SIMS analysis, a delta-doped sample was analyzed with incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and it was confirmed that surface roughness did not occur on the sputtered surface at 100-nm depth by using AFM. By applying the optimized incident angle, sputtering rate becomes more than twice as high as that of the normal incident condition.

  12. Population pharmacokinetics of ceftazidime in cystic fibrosis patients analyzed by using a nonparametric algorithm and optimal sampling strategy.

    PubMed Central

    Vinks, A A; Mouton, J W; Touw, D J; Heijerman, H G; Danhof, M; Bakker, W

    1996-01-01

    Postinfusion data obtained from 17 patients with cystic fibrosis participating in two clinical trials were used to develop population models for ceftazidime pharmacokinetics during continuous infusion. Determinant (D)-optimal sampling strategy (OSS) was used to evaluate the benefits of merging four maximally informative sampling times with population modeling. Full and sparse D-optimal sampling data sets were analyzed with the nonparametric expectation maximization (NPEM) algorithm and compared with the model obtained by the traditional standard two-stage approach. Individual pharmacokinetic parameter estimates were calculated by weighted nonlinear least-squares regression and by maximum a posteriori probability Bayesian estimator. Individual parameter estimates obtained with four D-optimally timed serum samples (OSS4) showed excellent correlation with parameter estimates obtained by using full data sets. The parameters of interest, clearance and volume of distribution, showed excellent agreement (R² = 0.89 and R² = 0.86). The ceftazidime population models were described as two-compartment k-slope models, relating elimination constants to renal function. The NPEM-OSS4 model was described by the equations kel = 0.06516 + (0.00708 × CLCR) and V1 = 0.1773 ± 0.0406 liter/kg, where CLCR is creatinine clearance in milliliters per minute per 1.73 m², V1 is the volume of distribution of the central compartment, and kel is the elimination rate constant. Predictive performance evaluation for 31 patients with data which were not part of the model data sets showed that the NPEM-ALL model performed best, with significantly better precision than that of the standard two-stage model (P < 0.001). Predictions with the NPEM-OSS4 model were as precise as those with the NPEM-ALL model but slightly biased (-2.2 mg/liter; P < 0.01). D-optimal monitoring strategies coupled with population modeling result in useful and cost-effective population models and will be of advantage in clinical
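
    The covariate equations quoted above can be applied directly to an individual patient. The sketch below does so for a hypothetical patient (CLcr = 80 mL/min/1.73 m², 60 kg); the time units of kel are not stated in the abstract and are assumed here to be h⁻¹, so the derived clearance and half-life are indicative only.

```python
import math

# Covariate relationships reported for the NPEM-OSS4 population model:
#   kel = 0.06516 + 0.00708 * CLcr   (CLcr in mL/min/1.73 m^2)
#   V1  = 0.1773 L/kg                (mean value; the +/- 0.0406 spread is ignored here)
# Time units of kel follow the original model and are assumed to be h^-1.
def ceftazidime_individual(clcr_ml_min, weight_kg):
    kel = 0.06516 + 0.00708 * clcr_ml_min
    v1 = 0.1773 * weight_kg            # L
    cl = kel * v1                      # L per model time unit
    return kel, v1, cl

kel, v1, cl = ceftazidime_individual(clcr_ml_min=80.0, weight_kg=60.0)
print(f"kel = {kel:.3f} (assumed h^-1), V1 = {v1:.1f} L, CL = {cl:.1f} L/h (assumed units)")
print(f"elimination half-life ~ {math.log(2) / kel:.2f} h (assumed units)")
```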

  13. Optimization of source-sample-detector geometries for bulk hydrogen analysis using epithermal neutrons.

    PubMed

    Csikai, J; Dóczi, R

    2009-01-01

    The advantages and limitations of epithermal neutrons for the qualification of hydrocarbons via their H contents and C/H atomic ratios have been investigated systematically. The sensitivity of this method and the dimensions of the interrogated regions were determined for various types of hydrogenous samples. The results clearly demonstrate the advantages of direct neutron detection, e.g. by BF₃ counters, over the foil activation method, as well as of the harder spectral shape of Pu-Be neutrons compared with that from a ²⁵²Cf source.

  14. Optimal sample preparation to characterize corrosion in historical photographs with analytical TEM.

    PubMed

    Grieten, Eva; Caen, Joost; Schryvers, Dominique

    2014-10-01

    An alternative focused ion beam preparation method is used for sampling historical photographs containing metallic nanoparticles in a polymer matrix. We use the preparation steps of classical ultra-microtomy with an alternative final sectioning with a focused ion beam. Transmission electron microscopy techniques show that the lamella has a uniform thickness, which is an important factor for analytical transmission electron microscopy. Furthermore, the method maintains the spatial distribution of nanoparticles in the soft matrix. The results are compared with traditional preparation techniques such as ultra-microtomy and classical focused ion beam milling.

  15. Optimal sampling times for a drug and its metabolite using SIMCYP(®) simulations as prior information.

    PubMed

    Dumont, Cyrielle; Mentré, France; Gaynor, Clare; Brendel, Karl; Gesson, Charlotte; Chenel, Marylore

    2013-01-01

    Since 2007, pharmaceutical companies have been required to submit a Paediatric Investigation Plan to the Paediatric Committee at the European Medicines Agency for any drug in development in adults, which often leads to the need to conduct a pharmacokinetic study in children. Pharmacokinetic studies in children raise ethical and methodological issues. Because sampling times are limited, appropriate methods, such as the population approach, are necessary for the analysis of the pharmacokinetic data. The choice of the pharmacokinetic sampling design has an important impact on the precision of population parameter estimates. Approaches for design evaluation and optimization based on the evaluation of the Fisher information matrix (M(F)) have been proposed and are now implemented in several software packages, such as PFIM in R. The objectives of this work were to (1) develop a joint population pharmacokinetic model to describe the pharmacokinetic characteristics of a drug S and its active metabolite in children after intravenous drug administration from simulated plasma concentration-time data produced using physiologically based pharmacokinetic (PBPK) predictions; (2) optimize the pharmacokinetic sampling times for an upcoming clinical study using a multi-response design approach, considering clinical constraints; and (3) evaluate the resulting design taking data below the lower limit of quantification (BLQ) into account. Plasma concentration-time profiles were simulated in children using a PBPK model previously developed with the software SIMCYP(®) for the parent drug and its active metabolite. Data were analysed using non-linear mixed-effect models with the software NONMEM(®), using a joint model for the parent drug and its metabolite. The population pharmacokinetic design, for the future study in 82 children from 2 to 18 years old, each receiving a single dose of the drug, was then optimized using PFIM, assuming identical times for parent and metabolite
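
    For readers unfamiliar with FIM-based design evaluation, the following Python sketch illustrates the principle on a deliberately simple one-compartment intravenous bolus model rather than the joint parent-metabolite model of the study: candidate sets of sampling times are ranked by the determinant of the Fisher information matrix, approximated from the sensitivities of the predicted concentration with respect to the parameters. The model, parameter values, and candidate times are all hypothetical.

        # Minimal sketch of D-optimal design scoring for a one-compartment IV bolus
        # model C(t) = (dose/V) * exp(-(CL/V) * t). This illustrates the FIM-based
        # approach described above; it is not the PFIM/NONMEM workflow itself.
        import itertools
        import numpy as np

        def fim(times, dose=100.0, cl=5.0, v=20.0, sigma=0.1):
            """Fisher information matrix for (CL, V) under additive error sigma."""
            times = np.asarray(times, dtype=float)
            ke = cl / v
            decay = np.exp(-ke * times)
            # Analytical sensitivities dC/dCL and dC/dV
            dc_dcl = -(dose * times / v**2) * decay
            dc_dv = (dose / v**2) * decay * (cl * times / v - 1.0)
            J = np.column_stack([dc_dcl, dc_dv])    # n_times x 2 sensitivity matrix
            return J.T @ J / sigma**2

        # Choose the 3 sampling times (from a candidate grid) maximizing det(FIM)
        candidates = [0.25, 0.5, 1, 2, 4, 6, 8, 12]
        best = max(itertools.combinations(candidates, 3),
                   key=lambda ts: np.linalg.det(fim(ts)))
        print("D-optimal 3-point design (hours):", best)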

  16. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    PubMed

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in materials science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from materials science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Optimal media for use in air sampling to detect cultivable bacteria and fungi in the pharmacy.

    PubMed

    Weissfeld, Alice S; Joseph, Riya Augustin; Le, Theresa V; Trevino, Ernest A; Schaeffer, M Frances; Vance, Paula H

    2013-10-01

    Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl.

  18. Optimal Media for Use in Air Sampling To Detect Cultivable Bacteria and Fungi in the Pharmacy

    PubMed Central

    Joseph, Riya Augustin; Le, Theresa V.; Trevino, Ernest A.; Schaeffer, M. Frances; Vance, Paula H.

    2013-01-01

    Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl. PMID:23903551

  19. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    PubMed

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOCs) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling air through a solvent to dissolve and accumulate the gaseous VOCs. After the sampling process, the solvent can be treated in the same way as groundwater samples for routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) proved the most suitable: it has a high affinity for TCE and benzene and therefore dissolves the compounds efficiently as the air passes through the solvent. The method detection limit for TCE (5±1 μg/m³) and benzene (1.7±0.5 μg/m³) is lower with TGDE than with methanol, which was used previously (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Owing to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156±6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE concentration and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent), and the final TCE concentration in the solvent, the TCE concentration in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility of analysing the TCE concentration (or that of other targeted VOCs) in the solvent after sampling allows the field deployment of the
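
    A simplified mass-balance version of that back-calculation is sketched below in Python, assuming complete trapping of the VOC in the solvent; the published method additionally corrects for the air-solvent partitioning coefficient, and the sampling parameters used here are hypothetical.

        # Illustrative back-calculation (simplified): average gas-phase concentration
        # from the VOC mass accumulated in the solvent, assuming complete trapping.
        # The actual method also accounts for the air-solvent partitioning coefficient.
        def gas_phase_concentration(c_solvent_ug_per_ml, v_solvent_ml,
                                    flow_rate_l_per_min, duration_min):
            """Return the average gas-phase concentration in micrograms per cubic meter."""
            mass_ug = c_solvent_ug_per_ml * v_solvent_ml          # VOC mass trapped in solvent
            air_volume_m3 = flow_rate_l_per_min * duration_min / 1000.0
            return mass_ug / air_volume_m3

        # Hypothetical sampling event: 10 mL of TGDE, 0.2 L/min for 8 h,
        # final TCE concentration of 0.5 μg/mL in the solvent.
        print(gas_phase_concentration(0.5, 10.0, 0.2, 8 * 60))    # ≈ 52 μg/m³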

  20. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    PubMed Central

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes. PMID:28282878
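
    As a rough illustration of the scoring scheme described above (not the authors' own code), the Python sketch below sums 0-2 point item scores over the Checklist's 89 questions for each of 12 street segments per home and correlates the resulting neighborhood totals with Walk Score® values; all data are randomly generated placeholders.

        # Illustrative sketch of the scoring scheme (not the study's actual code):
        # 12 street segments per home, 89 Checklist items scored 0-2 points each.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_homes, n_segments, n_items = 82, 12, 89

        # Placeholder item scores; in the study these come from Street View audits.
        item_scores = rng.integers(0, 3, size=(n_homes, n_segments, n_items))

        segment_scores = item_scores.sum(axis=2)          # quality score per street segment
        neighborhood_scores = segment_scores.sum(axis=1)  # overall neighborhood quality score

        # Placeholder Walk Score® values for the correlation step (0-91 in the study).
        walk_scores = rng.integers(0, 92, size=n_homes)
        rho, p = spearmanr(neighborhood_scores, walk_scores)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")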

  1. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas.

    PubMed

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M

    2017-03-08

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0-2 points/question. A combinations algorithm was developed to assess street segments' representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score(®), a validated neighborhood walkability measure. Street segment quality scores ranged 10-47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172-475 (Mean = 352.3 ± 63.6). Walk scores(®) ranged 0-91 (Mean = 46.7 ± 26.3). Street segment combinations' correlation coefficients ranged 0.75-1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores(®) (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.

  2. Optimization of the treatment of wheat samples for the determination of phytic acid by HPLC with refractive index detection.

    PubMed

    Amaro, Rosa; Murillo, Miguel; González, Zurima; Escalona, Andrés; Hernández, Luís

    2009-01-01

    The treatment of wheat samples was optimized before the determination of phytic acid by high-performance liquid chromatography with refractive index detection. Drying by lyophilization and oven drying were compared; lyophilization gave better results, confirming that this step is critical in preventing significant loss of analyte. In the extraction step, washing the residue and collecting this wash water before retention of the phytates on the NH2 Sep-Pak cartridge were important. Retention of the phytates on the NH2 Sep-Pak cartridge and elimination of the HCl did not produce a significant loss (P = 0.05) of phytic acid from the sample. Recoveries of phytic acid averaged 91%, a substantial improvement over values reported by others using this methodology.

  3. Design Of A Sorbent/desorbent Unit For Sample Pre-treatment Optimized For QMB Gas Sensors

    SciTech Connect

    Pennazza, G.; Cristina, S.; Santonico, M.; Martinelli, E.; Di Natale, C.; D'Amico, A.; Paolesse, R.

    2009-05-23

    Sample pre-treatment is a typical procedure in analytical chemistry aimed at improving the performance of analytical systems. In the case of gas sensors, sample pre-treatment systems are devised to overcome sensor limitations in terms of selectivity and sensitivity. For this purpose, systems based on adsorption and desorption processes driven by temperature conditioning have been described. The involvement of large temperature ranges may pose problems when QMB gas sensors are used. In this work, a study of such influences on the overall sensing properties of QMB sensors is presented. The results allowed the design of a pre-treatment unit coupled with a QMB gas sensor array optimized to operate in a suitable temperature range. The performance of the system is illustrated by the partial separation of water vapor from a gas mixture and by a substantial improvement of the signal-to-noise ratio.

  4. A tale of two retinal domains: near-optimal sampling of achromatic contrasts in natural scenes through asymmetric photoreceptor distribution.

    PubMed

    Baden, Tom; Schubert, Timm; Chang, Le; Wei, Tao; Zaichuk, Mariana; Wissinger, Bernd; Euler, Thomas

    2013-12-04

    For efficient coding, sensory systems need to adapt to the distribution of signals to which they are exposed. In vision, natural scenes above and below the horizon differ in the distribution of chromatic and achromatic features. Consequently, many species differentially sample light in the sky and on the ground using an asymmetric retinal arrangement of short- (S, "blue") and medium- (M, "green") wavelength-sensitive photoreceptor types. Here, we show that in mice this photoreceptor arrangement provides for near-optimal sampling of natural achromatic contrasts. Two-photon population imaging of light-driven calcium signals in the synaptic terminals of cone-photoreceptors expressing a calcium biosensor revealed that S, b