Science.gov

Sample records for define optimal sampling

  1. Annotating user-defined abstractions for optimization

    SciTech Connect

    Quinlan, D; Schordan, M; Vuduc, R; Yi, Q

    2005-12-05

    This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.

  2. Optimal hemicube sampling

    SciTech Connect

    Max, N.

    1992-12-17

    Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are considered: changing the size of the pixels, the shape of the pixels, the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.

  3. Optimal hemicube sampling

    SciTech Connect

    Max, N. (California Univ., Davis, CA)

    1992-12-17

    Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are considered: changing the size of the pixels, the shape of the pixels, the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.

  4. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    SciTech Connect

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process in achieving glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. Many of these designs were implemented at a time when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  5. Defining a region of optimization based on engine usage data

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
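
    A minimal sketch of the idea in this abstract (not the patented implementation), assuming a two-dimensional operating space of engine speed and load: a histogram of logged operating points defines the region of optimization as the densest cells covering a chosen fraction of the data, and the optimization routine is triggered only when the current operating point falls inside that region. All function names, bin counts, and thresholds are illustrative.

```python
import numpy as np

def define_region_of_optimization(speed, load, bins=20, coverage=0.80):
    """Define the region of optimization as the smallest set of histogram
    cells covering `coverage` of the logged operating points."""
    hist, s_edges, l_edges = np.histogram2d(speed, load, bins=bins)
    order = np.argsort(hist, axis=None)[::-1]           # densest cells first
    cumulative = np.cumsum(hist.ravel()[order])
    n_cells = np.searchsorted(cumulative, coverage * hist.sum()) + 1
    mask = np.zeros(hist.size, dtype=bool)
    mask[order[:n_cells]] = True
    return s_edges, l_edges, mask.reshape(hist.shape)

def in_region(point, s_edges, l_edges, mask):
    """Check whether the current operating point lies inside the defined region."""
    i = np.clip(np.searchsorted(s_edges, point[0]) - 1, 0, mask.shape[0] - 1)
    j = np.clip(np.searchsorted(l_edges, point[1]) - 1, 0, mask.shape[1] - 1)
    return bool(mask[i, j])

# Illustrative use: a log of (speed rpm, load %) samples, then a new reading.
rng = np.random.default_rng(0)
speed = rng.normal(2200, 300, 5000)
load = rng.normal(45, 10, 5000)
s_edges, l_edges, mask = define_region_of_optimization(speed, load)
if in_region((2300, 48), s_edges, l_edges, mask):
    pass  # initiate the engine control optimization routine here
```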

  6. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  7. Defining Analytical Strategies for Mars Sample Return with Analogue Missions

    NASA Astrophysics Data System (ADS)

    Osinski, G. R.; Sapers, H. M.; Francis, R.; Pontefract, A.; Tornabene, L. L.; Haltigin, T.

    2016-05-01

    The characterization of biosignatures in MSR samples will require integrated, cross-platform laboratory analyses carefully correlated and calibrated with Rover-based technologies. Analogue missions provide context for implementation and assessment.

  8. (Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement

    NASA Astrophysics Data System (ADS)

    Lowery, C.; Fraass, A. J.

    2015-12-01

    Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed-layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
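
    The model mentioned above is written in R and is not reproduced here; the following is a hypothetical Python sketch of the same kind of calculation, showing how the per-sample isotopic error shrinks as more individuals are pooled when each test carries its own vital-effect noise and a chance of diagenetic alteration. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_d18O_error(n_individuals, true_value=-1.5, vital_sd=0.15,
                      diagenetic_fraction=0.0, diagenetic_offset=0.3,
                      machine_sd=0.08, n_trials=10_000):
    """Monte Carlo estimate of the error (SD across trials) of a bulk d18O
    measurement made on `n_individuals` pooled foraminifer tests."""
    # per-individual values: true signal plus vital-effect noise
    vals = true_value + rng.normal(0.0, vital_sd, (n_trials, n_individuals))
    # a fraction of individuals may be diagenetically altered toward heavier values
    altered = rng.random((n_trials, n_individuals)) < diagenetic_fraction
    vals = vals + altered * diagenetic_offset
    # the measurement averages the pooled individuals, plus machine error
    measured = vals.mean(axis=1) + rng.normal(0.0, machine_sd, n_trials)
    return measured.std()

for n in (1, 3, 5, 10, 20):
    print(n, "individuals ->", round(sample_d18O_error(n, diagenetic_fraction=0.1), 3), "per mil")
```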

  9. Defining the Mars Ascent Problem for Sample Return

    SciTech Connect

    Whitehead, J

    2008-07-31

    Lifting geology samples off of Mars is both a daunting technical problem for propulsion experts and a cultural challenge for the entire community that plans and implements planetary science missions. The vast majority of science spacecraft require propulsive maneuvers that are similar to what is done routinely with communication satellites, so most needs have been met by adapting hardware and methods from the satellite industry. While it is even possible to reach Earth from the surface of the moon using such traditional technology, ascending from the surface of Mars is beyond proven capability for either solid or liquid propellant rocket technology. Miniature rocket stages for a Mars ascent vehicle would need to be over 80 percent propellant by mass. It is argued that the planetary community faces a steep learning curve toward nontraditional propulsion expertise, in order to successfully accomplish a Mars sample return mission. A cultural shift may be needed to accommodate more technical risk acceptance during the technology development phase.

  10. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
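
    The paper's procedure uses universal kriging; as a simplified, hedged illustration of how the average and maximum standard error of estimation can serve as global indices of sampling efficiency, the sketch below computes ordinary kriging standard errors over a grid for one candidate sample pattern under an assumed spherical variogram. The variogram model and all parameter values are assumptions, not taken from the paper.

```python
import numpy as np

def spherical_gamma(h, sill=1.0, a=10.0, nugget=0.0):
    """Spherical semivariogram (assumed model, not from the paper)."""
    g = np.where(h < a,
                 nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def kriging_std(samples, targets):
    """Ordinary-kriging standard error at each target point for a sample pattern."""
    n = len(samples)
    d = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    out = []
    for x0 in targets:
        g0 = spherical_gamma(np.linalg.norm(samples - x0, axis=1))
        sol = np.linalg.solve(A, np.append(g0, 1.0))   # weights + Lagrange multiplier
        out.append(max(sol[:n] @ g0 + sol[n], 0.0) ** 0.5)
    return np.array(out)

# Evaluate one candidate pattern (a 5 x 5 square grid of wells) by its
# average and maximum standard error over a fine prediction grid.
grid = np.array([[x, y] for x in np.linspace(0, 20, 21) for y in np.linspace(0, 20, 21)])
wells = np.array([[x, y] for x in range(0, 21, 5) for y in range(0, 21, 5)], dtype=float)
se = kriging_std(wells, grid)
print("average SE: %.3f   maximum SE: %.3f" % (se.mean(), se.max()))
```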

  11. Ecological and sampling constraints on defining landscape fire severity

    USGS Publications Warehouse

    Key, C.H.

    2006-01-01

    Ecological definition and detection of fire severity are influenced by factors of spatial resolution and timing. Resolution determines the aggregation of effects within a sampling unit or pixel (alpha variation), hence limiting the discernible ecological responses, and controlling the spatial patchiness of responses distributed throughout a burn (beta variation). As resolution decreases, alpha variation increases, extracting beta variation and complexity from the spatial model of the whole burn. Seasonal timing impacts the quality of radiometric data in terms of transmittance, sun angle, and potential contrast between responses within burns. Detection sensitivity can degrade toward the end of many fire seasons when low sun angles, vegetation senescence, incomplete burning, hazy conditions, or snow are common. Thus, a need exists to supersede many rapid response applications when remote sensing conditions improve. Lag timing, or time since fire, notably shapes the ecological character of severity through first-order effects that only emerge with time after fire, including delayed survivorship and mortality. Survivorship diminishes the detected magnitude of severity, as burned vegetation remains viable and resprouts, though at first it may appear completely charred or consumed above ground. Conversely, delayed mortality increases the severity estimate when apparently healthy vegetation is in fact damaged by heat to the extent that it dies over time. Both responses depend on fire behavior and various species-specific adaptations to fire that are unique to the pre-fire composition of each burned area. Both responses can lead initially to either over- or underestimating severity. Based on such implications, three sampling intervals for short-term burn severity are identified: rapid, initial, and extended assessment, sampled within about two weeks, two months, and depending on the ecotype, from three months to one year after fire, respectively. Spatial and temporal

  12. Resolution optimization with irregularly sampled Fourier data

    NASA Astrophysics Data System (ADS)

    Ferrara, Matthew; Parker, Jason T.; Cheney, Margaret

    2013-05-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer-Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an 'interrupted SAR' dataset representative of in-band interference commonly encountered in very high frequency radar applications.
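
    As a rough illustration of the relationship between sample locations, weights, and the point spread function described above (not the authors' weighting algorithms), the sketch below evaluates PSF(x) = Σ_k w_k exp(i 2π f_k x) for a set of irregular 1-D Fourier sample locations and compares two weight choices through the resulting sidelobe level. Locations, weights, and the sidelobe region are invented.

```python
import numpy as np

def psf(freqs, weights, x):
    """Point spread function of irregularly sampled 1-D Fourier data,
    PSF(x) = sum_k w_k * exp(i*2*pi*f_k*x), normalized so that PSF(0) = 1."""
    return np.exp(2j * np.pi * np.outer(x, freqs)) @ weights / weights.sum()

rng = np.random.default_rng(1)
freqs = np.sort(rng.uniform(-5.0, 5.0, 64))     # irregular Fourier sample locations
x = np.linspace(-2.0, 2.0, 801)                 # image-domain evaluation grid

uniform = np.ones_like(freqs)
density = np.gradient(freqs)                    # simple density-compensation weights
for name, w in (("uniform", uniform), ("density-compensated", density)):
    p = np.abs(psf(freqs, w, x))
    sidelobe = p[np.abs(x) > 0.5].max()         # peak level outside an assumed mainlobe
    print(f"{name:20s} peak sidelobe {sidelobe:.3f}")
```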

  13. Learning approach to sampling optimization: Applications in astrodynamics

    NASA Astrophysics Data System (ADS)

    Henderson, Troy Allen

    A novel numerical optimization algorithm is developed, tested, and used to solve difficult numerical problems from the field of astrodynamics. First, a brief review of optimization theory is presented and common numerical optimization techniques are discussed. Then, the new method, called the Learning Approach to Sampling Optimization (LA), is presented. Simple, illustrative examples are given to further emphasize the simplicity and accuracy of the LA method. Benchmark functions in lower dimensions are studied and the LA is compared, in terms of performance, to widely used methods. Three classes of problems from astrodynamics are then solved. First, the N-impulse orbit transfer and rendezvous problems are solved by using the LA optimization technique along with derived bounds that make the problem computationally feasible. This marriage between analytical and numerical methods allows an answer to be found for an order of magnitude greater number of impulses than are currently published. Next, the N-impulse work is applied to design periodic close encounters (PCE) in space. The encounters are defined as an open rendezvous, meaning that two spacecraft must be at the same position at the same time, but their velocities are not necessarily equal. The PCE work is extended to include N-impulses and other constraints, and new examples are given. Finally, a trajectory optimization problem is solved using the LA algorithm and comparing performance with other methods based on two models, with varying complexity, of the Cassini-Huygens mission to Saturn. The results show that the LA consistently outperforms commonly used numerical optimization algorithms.

  14. Continuous Cultivation for Apparent Optimization of Defined Media for Cellulomonas sp. and Bacillus cereus

    PubMed Central

    Summers, R. J.; Boudreaux, D. P.; Srinivasan, V. R.

    1979-01-01

    Steady-state continuous culture was used to optimize lean chemically defined media for a Cellulomonas sp. and Bacillus cereus strain T. Both organisms were extremely sensitive to variations in trace-metal concentrations. However, medium optimization by this technique proved rapid, and multifactor screening was easily conducted by using a minimum of instrumentation. The optimized media supported critical dilution rates of 0.571 and 0.467 h−1 for Cellulomonas and Bacillus, respectively. These values approximated maximum growth rate values observed in batch culture. PMID:16345417

  15. Realization theory and quadratic optimal controllers for systems defined over Banach and Frechet algebras

    NASA Technical Reports Server (NTRS)

    Byrnes, C. I.

    1980-01-01

    It is noted that recent work by Kamen (1979) on the stability of half-plane digital filters shows that the problem of the existence of a feedback law also arises for other Banach algebras in applications. This situation calls for a realization theory and stabilizability criteria for systems defined over a Banach or Frechet algebra A. Such a theory is developed here, with special emphasis placed on the construction of finitely generated realizations, the existence of coprime factorizations for T(s) defined over A, and the solvability of the quadratic optimal control problem and the associated algebraic Riccati equation over A.

  16. A Source-to-Source Architecture for User-Defined Optimizations

    SciTech Connect

    Schordan, M; Quinlan, D

    2003-02-06

    The performance of object-oriented applications often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are user-defined and not part of the programming language itself, only limited information on their high-level semantics can be leveraged through program analysis by the compiler, and thus most often no appropriate high-level optimizations are performed. In this paper we outline an approach based on source-to-source transformation to allow users to define optimizations which are not performed by the compiler they use. These techniques are intended to be as easy and intuitive as possible for potential users, i.e., designers of object-oriented libraries, who most often have only basic compiler expertise.

  17. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385

  18. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  19. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  20. Towards optimal sampling schedules for integral pumping tests

    NASA Astrophysics Data System (ADS)

    Leschik, Sebastian; Bayer-Raich, Marti; Musolff, Andreas; Schirmer, Mario

    2011-06-01

    Conventional point sampling may miss plumes in groundwater due to an insufficient density of sampling locations. The integral pumping test (IPT) method overcomes this problem by increasing the sampled volume. One or more wells are pumped for a long duration (several days) and samples are taken during pumping. The obtained concentration-time series are used for the estimation of average aquifer concentrations Cav and mass flow rates MCP. Although the IPT method is a well accepted approach for the characterization of contaminated sites, no substantiated guideline for the design of IPT sampling schedules (optimal number of samples and optimal sampling times) is available. This study provides a first step towards optimal IPT sampling schedules by a detailed investigation of 30 high-frequency concentration-time series. Different sampling schedules were tested by modifying the original concentration-time series. The results reveal that the relative error in the Cav estimation increases with a reduced number of samples and higher variability of the investigated concentration-time series. Maximum errors of up to 22% were observed for sampling schedules with the lowest number of samples of three. The sampling scheme that relies on constant time intervals ∆t between different samples yielded the lowest errors.
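
    A hedged sketch of the kind of resampling experiment described above: a densely measured concentration-time series is thinned to a candidate schedule with a fixed number of equally spaced samples, a time-weighted average concentration (used here as a simplified proxy for the IPT Cav estimate) is recomputed, and the relative error against the full series is reported. The series and schedules are synthetic.

```python
import numpy as np

def c_av(times, conc):
    """Time-weighted average concentration over the pumping period
    (a simplified proxy for the IPT Cav estimate)."""
    return np.trapz(conc, times) / (times[-1] - times[0])

def schedule_error(times, conc, n_samples):
    """Relative error of Cav when only `n_samples` equally spaced samples
    (a constant-dt schedule) are used instead of the full series."""
    idx = np.linspace(0, len(times) - 1, n_samples).round().astype(int)
    return abs(c_av(times[idx], conc[idx]) - c_av(times, conc)) / c_av(times, conc)

# Synthetic high-frequency concentration-time series for a 5-day pumping test.
t = np.linspace(0, 120, 1441)                      # hours, 5-minute resolution
c = 50 + 20 * np.exp(-t / 40) + 5 * np.sin(t / 3)  # arbitrary declining signal
for n in (3, 5, 10, 20):
    print(n, "samples -> relative error %.1f%%" % (100 * schedule_error(t, c, n)))
```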

  1. In-depth analysis of sampling optimization methods

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Han, Sangjun; Kim, Myoungsoo; Habets, Boris; Buhl, Stefan; Guhlemann, Steffen; Rößiger, Martin; Bellmann, Enrico; Kim, Seop

    2016-03-01

    High order overlay and alignment models require good coverage of overlay or alignment marks on the wafer. But dense sampling plans are not possible for throughput reasons. Therefore, sampling plan optimization has become a key issue. We analyze the different methods for sampling optimization and discuss the different knobs to fine-tune the methods to constraints of high volume manufacturing. We propose a method to judge sampling plan quality with respect to overlay performance, run-to-run stability and dispositioning criteria using a number of use cases from the most advanced lithography processes.

  2. Optimal sampling schedule for chemical exchange saturation transfer.

    PubMed

    Tee, Y K; Khrapitchev, A A; Sibson, N R; Payne, S J; Chappell, M A

    2013-11-01

    The sampling schedule for chemical exchange saturation transfer imaging is normally uniformly distributed across the saturation frequency offsets. When this kind of evenly distributed sampling schedule is used to quantify the chemical exchange saturation transfer effect using model-based analysis, some of the collected data are minimally informative to the parameters of interest. For example, changes in labile proton exchange rate and concentration mainly affect the magnetization near the resonance frequency of the labile pool. In this study, an optimal sampling schedule was designed for a more accurate quantification of amine proton exchange rate and concentration, and water center frequency shift based on an algorithm previously applied to magnetization transfer and arterial spin labeling. The resulting optimal sampling schedule samples repeatedly around the resonance frequency of the amine pool and also near to the water resonance to maximize the information present within the data for quantitative model-based analysis. Simulation and experimental results on tissue-like phantoms showed that greater accuracy and precision (>30% and >46%, respectively, for some cases) were achieved in the parameters of interest when using optimal sampling schedule compared with evenly distributed sampling schedule. Hence, the proposed optimal sampling schedule could replace evenly distributed sampling schedule in chemical exchange saturation transfer imaging to improve the quantification of the chemical exchange saturation transfer effect and parameter estimation. PMID:23315799

  3. Protocol optimization for long-term liquid storage of goat semen in a chemically defined extender.

    PubMed

    Zhao, B-T; Han, D; Xu, C-L; Luo, M-J; Chang, Z-L; Tan, J-H

    2009-12-01

    A specific problem in the preservation of goat semen has been the detrimental effect of seminal plasma on the viability of spermatozoa in extenders containing egg yolk or milk. The use of chemically defined extenders will have obvious advantages in liquid storage of buck semen. Our previous study showed that the self-made mZAP extender performed better than commercial extenders, and maintained a sperm motility of 34% for 9 days and a fertilizing potential for successful pregnancies for 7 days. The aim of this study was to extend the viability and fertilizing potential of liquid-stored goat spermatozoa by optimizing procedures for semen processing and storage in the mZAP extender. Semen samples collected from five goat bucks of the Lubei White and Boer breeds were diluted with the extender, cooled and stored at 5 degrees C. Stored semen was evaluated for sperm viability parameters, every 48 h of storage. Data from three ejaculates of different bucks were analysed for each treatment. The percentage data were arcsine-transformed before being analysed with anova and Duncan's multiple comparison test. While cooling at the rate of 0.1-0.25 degrees C/min did not affect sperm viability parameters, doing so at the rate of 0.6 degrees C/min from 30 to 15 degrees C reduced goat sperm motility and membrane integrity. Sperm motility and membrane integrity were significantly higher in semen coated with the extender containing 20% egg yolk than in non-coated semen. Sperm motility, membrane integrity and acrosomal intactness were significantly higher when coated semen was 21-fold diluted than when it was 11- or 51-fold diluted and when extender was renewed at 48-h intervals than when it was not renewed during storage. When goat semen coated with the egg yolk-containing extender was 21-fold diluted, cooled at the rate of 0.07-0.25 degrees C/min, stored at 5 degrees C and the extender renewed every 48 h, a sperm motility of 48% was maintained for 13 days, and an in vitro

  4. Optimal sampling and quantization of synthetic aperture radar signals

    NASA Technical Reports Server (NTRS)

    Wu, C.

    1978-01-01

    Some theoretical and experimental results on optimal sampling and quantization of synthetic aperture radar (SAR) signals are presented. It includes a description of a derived theoretical relationship between the pixel signal to noise ratio of processed SAR images and the number of quantization bits per sampled signal, assuming homogeneous extended targets. With this relationship known, a solution may be realized for the problem of optimal allocation of a fixed data bit-volume (for specified surface area and resolution criterion) between the number of samples and the number of bits per sample. The results indicate that to achieve the best possible image quality for a fixed bit rate and a given resolution criterion, one should quantize individual samples coarsely and thereby maximize the number of multiple looks. The theoretical results are then compared with simulation results obtained by processing aircraft SAR data.

  5. Optimization of chemically defined feed media for monoclonal antibody production in Chinese hamster ovary cells.

    PubMed

    Kishishita, Shohei; Katayama, Satoshi; Kodaira, Kunihiko; Takagi, Yoshinori; Matsuda, Hiroki; Okamoto, Hiroshi; Takuma, Shinya; Hirashima, Chikashi; Aoyagi, Hideki

    2015-07-01

    Chinese hamster ovary (CHO) cells are the most commonly used mammalian host for large-scale commercial production of therapeutic monoclonal antibodies (mAbs). Chemically defined media are currently used for CHO cell-based mAb production. An adequate supply of nutrients, especially specific amino acids, is required for cell growth and mAb production, and developing chemically defined fed-batch processes that support rapid cell growth, high cell density, and high levels of mAb production remains challenging. Many studies have highlighted the benefits of various media designs, supplements, and feed addition strategies in cell cultures. In the present study, we used a strategy involving optimization of a chemically defined feed medium to improve mAb production. Amino acids that were consumed in substantial amounts during a control culture were added to the feed medium as supplements. Supplementation was controlled to minimize accumulation of waste products such as lactate and ammonia. In addition, we evaluated supplementation with tyrosine, which has poor solubility, in the form of a dipeptide or tripeptide to improve its solubility. Supplementation with serine, cysteine, and tyrosine enhanced mAb production, cell viability, and metabolic profiles. A cysteine-tyrosine-serine tripeptide showed high solubility and produced beneficial effects similar to those observed with the free amino acids and with a dipeptide in improving mAb titers and metabolic profiles. PMID:25678240

  6. Systematic development and optimization of chemically defined medium supporting high cell density growth of Bacillus coagulans.

    PubMed

    Chen, Yu; Dong, Fengqing; Wang, Yonghong

    2016-09-01

    With determined components and experimental reducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop the chemically defined medium supporting high cell density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining the experimental technique with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single omission technique and single addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by the statistical method. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed to obtain considerable biomass production of at least five B. coagulans strains, in which two model strains B. coagulans 36D1 and ATCC 7050 were involved. PMID:27262567

  7. Site-Wide Integrated Water Monitoring - Defining and Implementing Sampling Objectives to Support Site Closure - 13060

    SciTech Connect

    Wilborn, Bill; Knapp, Kathryn; Farnham, Irene; Marutzky, Sam

    2013-07-01

    The Underground Test Area (UGTA) activity is responsible for assessing and evaluating the effects of the underground nuclear weapons tests on groundwater at the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), and implementing a corrective action closure strategy. The UGTA strategy is based on a combination of characterization, modeling studies, monitoring, and institutional controls (i.e., monitored natural attenuation). The closure strategy verifies through appropriate monitoring activities that contaminants of concern do not exceed the SDWA at the regulatory boundary and that adequate institutional controls are established and administered to ensure protection of the public. Other programs conducted at the NNSS supporting the environmental mission include the Routine Radiological Environmental Monitoring Program (RREMP), Waste Management, and the Infrastructure Program. Given the current programmatic and operational demands for various water-monitoring activities at the same locations, and the ever-increasing resource challenges, cooperative and collaborative approaches to conducting the work are necessary. For this reason, an integrated sampling plan is being developed by the UGTA activity to define sampling and analysis objectives, reduce duplication, eliminate unnecessary activities, and minimize costs. The sampling plan will ensure the right data sets are developed to support closure and efficient transition to long-term monitoring. The plan will include an integrated reporting mechanism for communicating results and integrating process improvements within the UGTA activity as well as between other U.S. Department of Energy (DOE) Programs. (authors)

  8. Site-Wide Integrated Water Monitoring -- Defining and Implementing Sampling Objectives to Support Site Closure

    SciTech Connect

    Wilborn, Bill; Marutzky, Sam; Knapp, Kathryn

    2013-02-24

    The Underground Test Area (UGTA) activity is responsible for assessing and evaluating the effects of the underground nuclear weapons tests on groundwater at the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), and implementing a corrective action closure strategy. The UGTA strategy is based on a combination of characterization, modeling studies, monitoring, and institutional controls (i.e., monitored natural attenuation). The closure strategy verifies through appropriate monitoring activities that contaminants of concern do not exceed the SDWA at the regulatory boundary and that adequate institutional controls are established and administered to ensure protection of the public. Other programs conducted at the NNSS supporting the environmental mission include the Routine Radiological Environmental Monitoring Program (RREMP), Waste Management, and the Infrastructure Program. Given the current programmatic and operational demands for various water-monitoring activities at the same locations, and the ever-increasing resource challenges, cooperative and collaborative approaches to conducting the work are necessary. For this reason, an integrated sampling plan is being developed by the UGTA activity to define sampling and analysis objectives, reduce duplication, eliminate unnecessary activities, and minimize costs. The sampling plan will ensure the right data sets are developed to support closure and efficient transition to long-term monitoring. The plan will include an integrated reporting mechanism for communicating results and integrating process improvements within the UGTA activity as well as between other U.S. Department of Energy (DOE) Programs.

  9. spsann - optimization of sample patterns using spatial simulated annealing

    NASA Astrophysics Data System (ADS)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
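
    spsann itself is an R package; the following is a hypothetical Python sketch of spatial simulated annealing with the mean squared shortest distance (MSSD) criterion mentioned above, showing the perturb-evaluate-accept loop with a shrinking perturbation distance and exponential cooling. The cooling schedule and all parameters are illustrative, not those of spsann.

```python
import numpy as np

rng = np.random.default_rng(7)

def mssd(points, grid):
    """Mean squared shortest distance from each grid node to its nearest sample point."""
    d = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=-1)
    return (d.min(axis=1) ** 2).mean()

def anneal(n_points, grid, n_iter=5000, t0=1.0, cooling=0.999, max_shift=20.0):
    """Spatial simulated annealing: perturb one point at a time and accept
    worse configurations with a probability that decays as the system cools."""
    lo, hi = grid.min(axis=0), grid.max(axis=0)
    pts = rng.uniform(lo, hi, (n_points, 2))
    energy, temp = mssd(pts, grid), t0
    for i in range(n_iter):
        cand = pts.copy()
        k = rng.integers(n_points)
        shift = max_shift * (1 - i / n_iter)        # perturbation distance shrinks linearly
        cand[k] = np.clip(cand[k] + rng.uniform(-shift, shift, 2), lo, hi)
        e_new = mssd(cand, grid)
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            pts, energy = cand, e_new
        temp *= cooling                             # exponential cooling
    return pts, energy

# Optimize 15 sample points over a 100 x 100 area discretized to a prediction grid.
grid = np.array([[x, y] for x in range(0, 101, 4) for y in range(0, 101, 4)], dtype=float)
pattern, final_energy = anneal(15, grid)
print("final MSSD:", round(final_energy, 2))
```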

  10. Optimization of protein samples for NMR using thermal shift assays.

    PubMed

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-04-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor(®) provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies. PMID:26984476

  11. Estimation of the Optimal Statistical Quality Control Sampling Time Intervals Using a Residual Risk Measure

    PubMed Central

    Hatjimihail, Aristides T.

    2009-01-01

    Background An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error. Methodology/Principal Findings Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. Optimal sampling time intervals can then be defined as those minimizing a QC-related cost measure while the rr remains acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. Conclusions/Significance It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC related cost. For the optimization the reliability analysis of the analytical system and the risk analysis of the measurement error are needed. PMID:19513124
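
    The paper's algorithm works with state probability vectors and transition matrices; as a much simpler, hedged illustration, the Monte Carlo toy model below estimates a residual-risk-like quantity (the expected number of results reported while the system is in an undetected failure state) for several candidate QC intervals and flags the intervals whose risk stays below an acceptable limit. All rates and limits are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def residual_risk(qc_interval_h, failure_rate_per_h=0.002, detection_prob=0.9,
                  results_per_h=10, horizon_h=24 * 30, n_runs=2000):
    """Expected number of results reported while the analytical system is in an
    undetected failure state, per month of operation (toy model)."""
    exposed = np.zeros(n_runs)
    for r in range(n_runs):
        t_fail = rng.exponential(1.0 / failure_rate_per_h)
        if t_fail >= horizon_h:
            continue                                  # no failure this month
        # QC events after the failure; each detects it with probability detection_prob
        t = (np.floor(t_fail / qc_interval_h) + 1) * qc_interval_h
        while t < horizon_h and rng.random() > detection_prob:
            t += qc_interval_h
        exposed[r] = (min(t, horizon_h) - t_fail) * results_per_h
    return exposed.mean()

acceptable = 20.0   # acceptable residual risk (exposed results per month), assumed
for interval in (2, 4, 8, 12, 24):
    rr = residual_risk(interval)
    print(f"QC every {interval:2d} h -> residual risk {rr:6.1f}",
          "OK" if rr <= acceptable else "too high")
```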

  12. Investigation of Archean microfossil preservation for defining science objectives for Mars sample return missions

    NASA Astrophysics Data System (ADS)

    Lorber, K.; Czaja, A. D.

    2014-12-01

    Recent studies suggest that Mars contains more potentially life-supporting habitats (either in the present or past), than once thought. The key to finding life on Mars, whether extinct or extant, is to first understand which biomarkers and biosignatures are strictly biogenic in origin. Studying ancient habitats and fossil organisms of the early Earth can help to characterize potential Martian habitats and preserved life. This study, which focuses on the preservation of fossil microorganisms from the Archean Eon, aims to help define in part the science methods needed for a Mars sample return mission, of which, the Mars 2020 rover mission is the first step.Here is reported variations in the geochemical and morphological preservation of filamentous fossil microorganisms (microfossils) collected from the 2.5-billion-year-old Gamohaan Formation of the Kaapvaal Craton of South Africa. Samples of carbonaceous chert were collected from outcrop and drill core within ~1 km of each other. Specimens from each location were located within thin sections and their biologic morphologies were confirmed using confocal laser scanning microscopy. Raman spectroscopic analyses documented the carbonaceous nature of the specimens and also revealed variations in the level of geochemical preservation of the kerogen that comprises the fossils. The geochemical preservation of kerogen is principally thought to be a function of thermal alteration, but the regional geology indicates all of the specimens experienced the same thermal history. It is hypothesized that the fossils contained within the outcrop samples were altered by surface weathering, whereas the drill core samples, buried to a depth of ~250 m, were not. This differential weathering is unusual for cherts that have extremely low porosities. Through morphological and geochemical characterization of the earliest known forms of fossilized life on the earth, a greater understanding of the origin of evolution of life on Earth is gained

  13. 'Optimal thermal range' in ectotherms: Defining criteria for tests of the temperature-size-rule.

    PubMed

    Walczyńska, Aleksandra; Kiełbasa, Anna; Sobczyk, Mateusz

    2016-08-01

    Thermal performance curves for population growth rate r (a measure of fitness) were estimated over a wide range of temperature for three species: Coleps hirtus (Protista), Lecane inermis (Rotifera) and Aeolosoma hemprichi (Oligochaeta). We measured individual body size and examined if predictions for the temperature-size rule (TSR) were valid for different temperatures. All three organisms investigated follow the TSR, but only over a specific range between minimal and optimal temperatures, while maintenance at temperatures beyond this range showed the opposite pattern in these taxa. We consider minimal and optimal temperatures to be species-specific, and moreover delineate a physiological range outside of which an ectotherm is constrained against displaying size plasticity in response to temperature. This thermal range concept has important implications for general size-temperature studies. Furthermore, the concept of 'operating thermal conditions' may provide a new approach to (i) defining criteria required for investigating and interpreting temperature effects, and (ii) providing a novel interpretation for many cases in which species do not conform to the TSR. PMID:27503715

  14. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  15. Defining an optimal surface chemistry for pluripotent stem cell culture in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Zonca, Michael R., Jr.

    Surface chemistry is critical for growing pluripotent stem cells in an undifferentiated state. There is great potential to engineer the surface chemistry at the nanoscale level to regulate stem cell adhesion. However, the challenge is to identify the optimal surface chemistry of the substrata for ES cell attachment and maintenance. Using a high-throughput polymerization and screening platform, a chemically defined, synthetic polymer grafted coating that supports strong attachment and high expansion capacity of pluripotent stem cells has been discovered using mouse embryonic stem (ES) cells as a model system. This optimal substrate, N-[3-(Dimethylamino)propyl] methacrylamide (DMAPMA) that is grafted on 2D synthetic poly(ether sulfone) (PES) membrane, sustains the self-renewal of ES cells (up to 7 passages). DMAPMA supports cell attachment of ES cells through integrin beta1 in a RGD-independent manner and is similar to another recently reported polymer surface. Next, DMAPMA has been able to be transferred to 3D by grafting to synthetic, polymeric, PES fibrous matrices through both photo-induced and plasma-induced polymerization. These 3D modified fibers exhibited higher cell proliferation and greater expression of pluripotency markers of mouse ES cells than 2D PES membranes. Our results indicated that desirable surfaces in 2D can be scaled to 3D and that both surface chemistry and structural dimension strongly influence the growth and differentiation of pluripotent stem cells. Lastly, the feasibility of incorporating DMAPMA into a widely used natural polymer, alginate, has been tested. Novel adhesive alginate hydrogels have been successfully synthesized by either direct polymerization of DMAPMA and methacrylic acid blended with alginate, or photo-induced DMAPMA polymerization on alginate nanofibrous hydrogels. In particular, DMAPMA-coated alginate hydrogels support strong ES cell attachment, exhibiting a concentration dependency of DMAPMA. This research provides a

  16. Optimized Sample Handling Strategy for Metabolic Profiling of Human Feces.

    PubMed

    Gratton, Jasmine; Phetcharaburanin, Jutarop; Mullish, Benjamin H; Williams, Horace R T; Thursz, Mark; Nicholson, Jeremy K; Holmes, Elaine; Marchesi, Julian R; Li, Jia V

    2016-05-01

    Fecal metabolites are being increasingly studied to unravel the host-gut microbial metabolic interactions. However, there are currently no guidelines for fecal sample collection and storage based on a systematic evaluation of the effect of time, storage temperature, storage duration, and sampling strategy. Here we derive an optimized protocol for fecal sample handling with the aim of maximizing metabolic stability and minimizing sample degradation. Samples obtained from five healthy individuals were analyzed to assess topographical homogeneity of feces and to evaluate storage duration-, temperature-, and freeze-thaw cycle-induced metabolic changes in crude stool and fecal water using a (1)H NMR spectroscopy-based metabolic profiling approach. Interindividual variation was much greater than that attributable to storage conditions. Individual stool samples were found to be heterogeneous and spot sampling resulted in a high degree of metabolic variation. Crude fecal samples were remarkably unstable over time and exhibited distinct metabolic profiles at different storage temperatures. Microbial fermentation was the dominant driver in time-related changes observed in fecal samples stored at room temperature and this fermentative process was reduced when stored at 4 °C. Crude fecal samples frozen at -20 °C manifested elevated amino acids and nicotinate and depleted short chain fatty acids compared to crude fecal control samples. The relative concentrations of branched-chain and aromatic amino acids significantly increased in the freeze-thawed crude fecal samples, suggesting a release of microbial intracellular contents. The metabolic profiles of fecal water samples were more stable compared to crude samples. Our recommendation is that intact fecal samples should be collected, kept at 4 °C or on ice during transportation, and extracted ideally within 1 h of collection, or a maximum of 24 h. Fecal water samples should be extracted from a representative amount (∼15 g

  17. SamACO: variable sampling ant colony optimization algorithm for continuous optimization.

    PubMed

    Hu, Xiao-Min; Zhang, Jun; Chung, Henry Shu-Hung; Li, Yun; Liu, Ou

    2010-12-01

    An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution constructions and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to solving continuous optimization problems by focusing on continuous variable sampling as a key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps, i.e., the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising. PMID:20371409
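
    A hedged, simplified sketch of the general idea behind continuous ACO-style sampling (not the published SamACO algorithm): each dimension keeps a small archive of candidate values, ants build solutions by picking candidates with pheromone-weighted probability and perturbing them, and pheromone is evaporated and then reinforced around the best solution found so far. Names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def continuous_aco(f, bounds, n_candidates=10, n_ants=20, n_iter=200,
                   evaporation=0.9, sigma=0.1):
    """Generic continuous ACO-style optimizer (illustrative, not SamACO):
    per-dimension candidate values selected by pheromone-weighted probability."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    cand = rng.uniform(lo, hi, (n_candidates, dim))   # candidate values per dimension
    tau = np.ones((n_candidates, dim))                # pheromone on the candidates
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            # pick one candidate value per dimension, proportional to pheromone
            idx = [rng.choice(n_candidates, p=tau[:, d] / tau[:, d].sum())
                   for d in range(dim)]
            x = np.clip(cand[idx, range(dim)] + rng.normal(0, sigma * (hi - lo)), lo, hi)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        tau *= evaporation                            # pheromone evaporation
        # refresh the weakest candidate row around the best solution and reinforce it
        worst = tau.sum(axis=1).argmin()
        cand[worst] = best_x
        tau[worst] += 1.0
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = continuous_aco(sphere, bounds=[(-5, 5)] * 5)
print("best value %.4f" % f_best)
```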

  18. A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics

    SciTech Connect

    Pikulski, M.; Shiroka, T.; Ott, H.-R.; Mesot, J.

    2014-09-15

    We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs.

  19. A firmware-defined digital direct-sampling NMR spectrometer for condensed matter physics.

    PubMed

    Pikulski, M; Shiroka, T; Ott, H-R; Mesot, J

    2014-09-01

    We report on the design and implementation of a new digital, broad-band nuclear magnetic resonance (NMR) spectrometer suitable for probing condensed matter. The spectrometer uses direct sampling in both transmission and reception. It relies on a single, commercially-available signal processing device with a user-accessible field-programmable gate array (FPGA). Its functions are defined exclusively by the FPGA firmware and the application software. Besides allowing for fast replication, flexibility, and extensibility, our software-based solution preserves the option to reuse the components for other projects. The device operates up to 400 MHz without, and up to 800 MHz with undersampling, respectively. Digital down-conversion with ±10 MHz passband is provided on the receiver side. The system supports high repetition rates and has virtually no intrinsic dead time. We describe briefly how the spectrometer integrates into the experimental setup and present test data which demonstrates that its performance is competitive with that of conventional designs. PMID:25273738
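
    The receiver described above performs digital down-conversion after direct sampling. As a point of reference only, the following is a minimal numpy/scipy sketch of a generic DDC chain (complex mixing, low-pass filtering, decimation); it is not the authors' FPGA firmware, and the sampling rate, carrier frequency, filter length, and decimation factor are illustrative assumptions.

        import numpy as np
        from scipy.signal import firwin, lfilter

        def digital_down_convert(x, fs, f_lo, passband_hz, decim):
            """Generic digital down-conversion: mix, low-pass, decimate.

            x           : real-valued directly sampled signal
            fs          : sampling rate of x (Hz)
            f_lo        : local-oscillator frequency shifted to baseband (Hz)
            passband_hz : one-sided passband of the baseband filter (Hz)
            decim       : integer decimation factor
            """
            n = np.arange(len(x))
            # Complex mixing shifts the band of interest to 0 Hz.
            baseband = x * np.exp(-2j * np.pi * f_lo * n / fs)
            # FIR low-pass keeps only the +/- passband_hz region.
            taps = firwin(numtaps=129, cutoff=passband_hz, fs=fs)
            filtered = lfilter(taps, 1.0, baseband)
            # Decimation reduces the output data rate.
            return filtered[::decim]

        if __name__ == "__main__":
            fs = 400e6                          # illustrative direct-sampling rate
            t = np.arange(100_000) / fs
            x = np.cos(2 * np.pi * 75e6 * t)    # toy carrier at 75 MHz
            iq = digital_down_convert(x, fs, f_lo=75e6, passband_hz=10e6, decim=10)
            print(iq.shape, np.abs(iq[2000]))   # complex baseband samples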

  20. Automatic optimization of metrology sampling scheme for advanced process control

    NASA Astrophysics Data System (ADS)

    Chue, Chuei-Fu; Huang, Chun-Yen; Shih, Chiang-Lin

    2011-03-01

    In order to ensure long-term profitability, driving the operational costs down and improving the yield of a DRAM manufacturing process are continuous efforts. This includes optimal utilization of the capital equipment. The costs of metrology needed to ensure yield contribute to the overall costs. As the shrinking of device dimensions continues, the costs of metrology are increasing because of the associated tightening of the on-product specifications requiring more metrology effort. The cost-of-ownership reduction is tackled by increasing the throughput and availability of metrology systems. However, this is not the only way to reduce metrology effort. In this paper, we discuss how the costs of metrology can be improved by optimizing the recipes in terms of the sampling layout, thereby eliminating metrology that does not contribute to yield. We discuss results of sampling scheme optimization for on-product overlay control of two DRAM manufacturing processes at Nanya Technology Corporation. For a 6x DRAM production process, we show that the reduction of metrology waste can be as high as 27% and overlay can be improved by 36%, compared with a baseline sampling scheme. For a 4x DRAM process, having tighter overlay specs, a gain of ca. 0.5 nm on-product overlay could be achieved, without increasing the metrology effort relative to the original sampling plan.

  1. Defining optimal cutoff scores for cognitive impairment using MDS Task Force PD-MCI criteria

    PubMed Central

    Goldman, Jennifer G.; Holden, Samantha; Bernard, Bryan; Ouyang, Bichun; Goetz, Christopher G.; Stebbins, Glenn T.

    2014-01-01

    Background The recently proposed Movement Disorder Society (MDS) Task Force diagnostic criteria for mild cognitive impairment in Parkinson’s disease (PD-MCI) represent a first step towards a uniform definition of PD-MCI across multiple clinical and research settings. Several questions regarding specific criteria, however, remain unanswered including optimal cutoff scores by which to define impairment on neuropsychological tests. Methods Seventy-six non-demented PD patients underwent comprehensive neuropsychological assessment and were classified as PD-MCI or PD with normal cognition (PD-NC). Concordance of PD-MCI diagnosis by MDS Task Force Level II criteria (comprehensive assessment), using a range of standard deviation (SD) cutoff scores, was compared to our consensus diagnosis of PD-MCI or PD-NC. Sensitivity, specificity, positive and negative predictive values were examined for each cutoff score. PD-MCI subtype classification and distribution of cognitive domains impaired were evaluated. Results Concordance for PD-MCI diagnosis was greatest for defining impairment on neuropsychological tests using a 2 SD cutoff score below appropriate norms. This cutoff also provided the best discriminatory properties for separating PD-MCI from PD-NC, compared to other cutoff scores. With the MDS PD-MCI criteria, multiple domain impairment was more frequent than single domain impairment, with predominant executive function, memory, and visuospatial function deficits. Conclusions Application of the MDS Task Force PD-MCI Level II diagnostic criteria demonstrates good sensitivity and specificity at a 2 SD cutoff score. The predominance of multiple domain impairment in PD-MCI with the Level II criteria suggests not only influences of testing abnormality requirements, but also the widespread nature of cognitive deficits within PD-MCI. PMID:24123267

  2. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  3. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-01

    We propose an approach of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, of the number of sampling points therein, and their spatial distribution, in the case of single frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite dimensional algebraic linear inverse problem. The dimension of the scanning region, the number and the positions of the sampling points are optimally determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via a numerical and experimental analysis. PMID:24921717

  4. Classifier-Guided Sampling for Complex Energy System Optimization

    SciTech Connect

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of the CGS algorithm are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
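
    As a rough illustration of the classifier-guided idea described above (scoring candidate designs with a classifier before spending expensive evaluations), here is a hypothetical sketch that uses scikit-learn's GaussianNB as a stand-in for the report's Bayesian network classifier; the OneMax objective, population sizes, mutation rate, and thresholds are invented and not taken from the LDRD work.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)

        def expensive_objective(design):
            # Stand-in for a costly simulation; here simply the number of ones (OneMax).
            return int(design.sum())

        n_vars = 30
        # Initial random designs, all evaluated exactly.
        X = rng.integers(0, 2, size=(40, n_vars))
        y = np.array([expensive_objective(d) for d in X])

        for generation in range(10):
            # Label past designs as "promising" if they beat the median objective value.
            labels = (y >= np.median(y)).astype(int)

            # Propose many cheap candidates by mutating the current best designs.
            parents = X[np.argsort(y)[-10:]]
            candidates = parents[rng.integers(0, len(parents), size=200)].copy()
            candidates[rng.random(candidates.shape) < 0.05] ^= 1

            if labels.min() == labels.max():          # degenerate split: no filtering
                p_promising = rng.random(len(candidates))
            else:
                # Classifier (naive Bayes here) scores candidates before evaluation.
                clf = GaussianNB().fit(X, labels)
                p_promising = clf.predict_proba(candidates)[:, 1]

            # Evaluate only the candidates the classifier rates most promising.
            chosen = candidates[np.argsort(p_promising)[-10:]]
            new_y = np.array([expensive_objective(d) for d in chosen])
            X, y = np.vstack([X, chosen]), np.concatenate([y, new_y])

        print("best objective:", y.max(), "with", len(y), "evaluations")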

  5. Efficient infill sampling for unconstrained robust optimization problems

    NASA Astrophysics Data System (ADS)

    Rehman, Samee Ur; Langelaar, Matthijs

    2016-08-01

    A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of expensive computer simulation based problems. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
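
    The infill criterion above adapts expected improvement (EI) on a Kriging surrogate for robust optimization. The sketch below shows only the standard, non-robust EI building block, with scikit-learn's GaussianProcessRegressor standing in for a Kriging model; the toy objective, kernel, and candidate grid are assumptions, and the robust adaptation from the paper is not reproduced.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expected_improvement(gp, X_cand, y_best):
            """Standard EI for minimization: E[max(y_best - Y(x), 0)]."""
            mu, sigma = gp.predict(X_cand, return_std=True)
            sigma = np.maximum(sigma, 1e-12)
            z = (y_best - mu) / sigma
            return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        # Toy expensive function and a few initial samples.
        f = lambda x: np.sin(3 * x) + 0.5 * x**2
        X = np.array([[-2.0], [0.0], [1.5]])
        y = f(X).ravel()

        for _ in range(10):
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                          normalize_y=True).fit(X, y)
            X_cand = np.linspace(-3, 3, 400).reshape(-1, 1)
            ei = expected_improvement(gp, X_cand, y.min())
            x_new = X_cand[np.argmax(ei)]        # infill point with largest EI
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new)[0])

        print("approximate minimizer:", X[np.argmin(y)].item(), "value:", y.min())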

  6. Optimizing passive acoustic sampling of bats in forests.

    PubMed

    Froidevaux, Jérémy S P; Zellweger, Florian; Bollmann, Kurt; Obrist, Martin K

    2014-12-01

    Passive acoustic methods are increasingly used in biodiversity research and monitoring programs because they are cost-effective and permit the collection of large datasets. However, the accuracy of the results depends on the bioacoustic characteristics of the focal taxa and their habitat use. In particular, this applies to bats, which exhibit distinct activity patterns in three-dimensionally structured habitats such as forests. We assessed the performance of 21 acoustic sampling schemes with three temporal sampling patterns and seven sampling designs. Acoustic sampling was performed in 32 forest plots, each containing three microhabitats: forest ground, canopy, and forest gap. We compared bat activity, species richness, and sampling effort using species accumulation curves fitted with the Clench equation. In addition, we estimated the sampling costs to undertake the best sampling schemes. We recorded a total of 145,433 echolocation call sequences of 16 bat species. Our results indicated that to generate the best outcome, it was necessary to sample all three microhabitats of a given forest location simultaneously throughout the entire night. Sampling only the forest gaps and the forest ground simultaneously was the second best choice and proved to be a viable alternative when the number of available detectors is limited. When assessing bat species richness at the 1-km² scale, the implementation of these sampling schemes at three to four forest locations yielded the highest labor cost-benefit ratios but increased equipment costs. Our study illustrates that multiple passive acoustic sampling schemes require testing based on the target taxa and habitat complexity and should be performed with reference to cost-benefit ratios. Choosing a standardized and replicated sampling scheme is particularly important to optimize the level of precision in inventories, especially when rare or elusive species are expected. PMID:25558363
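
    The species accumulation analysis above relies on the Clench equation S(x) = a·x/(1 + b·x). A brief sketch of such a fit, on made-up effort and richness data rather than the study's recordings, is given below; the asymptote a/b estimates total richness and x = q/(b·(1 - q)) gives the effort needed to reach a fraction q of it.

        import numpy as np
        from scipy.optimize import curve_fit

        def clench(x, a, b):
            """Clench accumulation model: S(x) = a*x / (1 + b*x)."""
            return a * x / (1.0 + b * x)

        # Hypothetical sampling effort (nights) and cumulative species counts.
        effort = np.array([1, 2, 4, 6, 8, 10, 14, 18, 24, 32], dtype=float)
        species = np.array([4, 6, 9, 10, 11, 12, 13, 14, 15, 15], dtype=float)

        (a, b), _ = curve_fit(clench, effort, species, p0=[5.0, 0.2])
        asymptote = a / b                      # predicted total richness
        effort_90 = 0.9 / (b * (1 - 0.9))      # effort needed for 90% completeness

        print(f"asymptotic richness ~ {asymptote:.1f} species")
        print(f"effort for 90% completeness ~ {effort_90:.1f} sampling nights")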

  7. Defining Adult Experiences: Perspectives of a Diverse Sample of Young Adults

    PubMed Central

    Lowe, Sarah R.; Dillon, Colleen O.; Rhodes, Jean E.; Zwiebach, Liza

    2013-01-01

    This study explored the roles and psychological experiences identified as defining adult moments using mixed methods with a racially, ethnically, and socioeconomically diverse sample of young adults both enrolled and not enrolled in college (N = 726; ages 18-35). First, we evaluated results from a single survey item that asked participants to rate how adult they feel. Consistent with previous research, the majority of participants (56.9%) reported feeling “somewhat like an adult,” and older participants had significantly higher subjective adulthood, controlling for other demographic variables. Next, we analyzed responses from an open-ended question asking participants to describe instances in which they felt like an adult. Responses covered both traditional roles (e.g., marriage, childbearing; 36.1%) and nontraditional social roles and experiences (e.g., moving out of parent’s home, cohabitation; 55.6%). Although we found no differences by age and college status in the likelihood of citing a traditional or nontraditional role, participants who had achieved more traditional roles were more likely to cite them in their responses. In addition, responses were coded for psychological experiences, including responsibility for self (19.0%), responsibility for others (15.3%), self-regulation (31.1%), and reflected appraisals (5.1%). Older participants were significantly more likely to include self-regulation and reflected appraisals, whereas younger participants were more likely to include responsibility for self. College students were more likely than noncollege students to include self-regulation and reflected appraisals. Implications for research and practice are discussed. PMID:23554545

  8. Simultaneous beam sampling and aperture shape optimization for SPORT

    SciTech Connect

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  9. Defining the National Interest: A Sample Lesson on Current International Affairs from Choices Education Project.

    ERIC Educational Resources Information Center

    Brown Univ., Providence, RI. Thomas J. Watson, Jr. Inst. for International Studies.

    Clearly, the United States cannot respond to every crisis, but what is meant precisely by the phrase "American interests"? How is the U.S. national interest defined and by whom? Does its definition affect the decision of how to respond to a crisis? This lesson deals with these complex and intertwined questions. By defining the national interest…

  10. Optimized robust plasma sampling for glomerular filtration rate studies.

    PubMed

    Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L

    2012-09-01

    In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement. PMID:22825040
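
    The clearance curves audited above were well represented as biexponentials. A minimal sketch of that calculation on synthetic data is shown below: fit C(t) = A·exp(-αt) + B·exp(-βt) by nonlinear regression, take the area under the curve as A/α + B/β, and obtain clearance as dose/AUC; the sampling times, noise level, and dose are illustrative, not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, A, alpha, B, beta):
            """Biexponential plasma clearance model."""
            return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

        # Illustrative sampling times (min) and tracer concentrations (arbitrary units).
        t = np.array([5, 10, 15, 60, 90, 180, 240], dtype=float)
        true = biexp(t, A=80.0, alpha=0.15, B=20.0, beta=0.01)
        rng = np.random.default_rng(1)
        conc = true * (1 + 0.03 * rng.standard_normal(t.size))

        p0 = [conc[0], 0.1, conc[-1], 0.005]
        (A, alpha, B, beta), _ = curve_fit(biexp, t, conc, p0=p0, maxfev=10_000)

        auc = A / alpha + B / beta        # integral of the fitted curve from 0 to infinity
        dose = 5000.0                     # injected activity, arbitrary units
        gfr = dose / auc                  # clearance = dose / AUC
        print(f"AUC = {auc:.1f}, clearance ~ {gfr:.2f} (volume per minute, arbitrary units)")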

  11. Test samples for optimizing STORM super-resolution microscopy.

    PubMed

    Metcalf, Daniel J; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E

    2013-01-01

    STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon. PMID:24056752

  12. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    SciTech Connect

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.

  13. Rate-distortion optimization for compressive video sampling

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
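
    The block-level step above models PSNR as a quadratic function of quantization bit-depth and locates the optimum where the derivative vanishes. A toy sketch of that step is given below; the training pairs, bit budget, and rounding policy are invented, and the frame-level sparsity-based allocation is not shown.

        import numpy as np

        # Training pairs of quantization bit-depth vs. measured block PSNR (dB).
        # Values are invented; in practice they come from a small training set.
        bit_depth = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
        psnr = np.array([24.0, 29.5, 33.0, 35.2, 36.1, 36.0, 35.1])

        # Model PSNR(d) = a*d^2 + b*d + c and locate the stationary point d* = -b / (2a).
        a, b, c = np.polyfit(bit_depth, psnr, deg=2)
        d_star = -b / (2 * a)

        bit_budget = 4096                       # bits available for this block (assumed)
        d_opt = int(round(np.clip(d_star, bit_depth.min(), bit_depth.max())))
        n_samples = bit_budget // d_opt         # number of CS samples the budget allows

        print(f"fitted optimum bit-depth ~ {d_star:.2f} -> use {d_opt} bits/sample, "
              f"{n_samples} CS samples for this block")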

  14. Test Samples for Optimizing STORM Super-Resolution Microscopy

    PubMed Central

    Metcalf, Daniel J.; Edwards, Rebecca; Kumarswami, Neelam; Knight, Alex E.

    2013-01-01

    STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon. PMID:24056752

  15. A General Investigation of Optimized Atmospheric Sample Duration

    SciTech Connect

    Eslinger, Paul W.; Miley, Harry S.

    2012-11-28

    The International Monitoring System (IMS) consists of up to 80 aerosol and xenon monitoring systems spaced around the world that have collection systems sensitive enough to detect nuclear releases from underground nuclear tests at great distances (CTBT 1996; CTBTO 2011). Although a few of the IMS radionuclide stations are closer together than 1,000 km (such as the stations in Kuwait and Iran), many of them are 2,000 km or more apart. In the absence of a scientific basis for optimizing the duration of atmospheric sampling, scientists have historically used integration times from 24 hours to 14 days for radionuclides (Thomas et al. 1977). This was entirely adequate in the past because the sources of signals were far away and large, meaning that they were smeared over many days by the time they had travelled 10,000 km. The Fukushima event pointed out the unacceptable delay time (72 hours) between the start of sample acquisition and final data being shipped. A scientific basis for selecting a sample duration time is needed. This report considers plume migration of a nondecaying tracer using archived atmospheric data for 2011 in the HYSPLIT (Draxler and Hess 1998; HYSPLIT 2011) transport model. We present two related results: the temporal duration of the majority of the plume as a function of distance and the behavior of the maximum plume concentration as a function of sample collection duration and distance. The modeled plume behavior can then be combined with external information about sampler design to optimize sample durations in a sampling network.

  16. Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Thompson, David R.; Hsiang, Kian

    2013-01-01

    This work was designed to find a way to optimally (or near optimally) sample spatiotemporal phenomena based on limited sensing capability, and to create a model that can be run to estimate uncertainties, as well as to estimate covariances. The goal was to maximize (or minimize) some function of the overall uncertainty. The uncertainties and covariances were modeled presuming a parametric distribution, and then the model was used to approximate the overall information gain, and consequently, the objective function from each potential sense. These candidate sensings were then crosschecked against operation costs and feasibility. Consequently, an operations plan was derived that combined both operational constraints/costs and sensing gain. Probabilistic modeling was used to perform an approximate inversion of the model, which enabled calculation of sensing gains, and subsequent combination with operational costs. This incorporation of operations models to assess cost and feasibility for specific classes of vehicles is unique.

  17. Fixed-sample optimization using a probability density function

    SciTech Connect

    Barnett, R.N.; Sun, Zhiwei; Lester, W.A. Jr.

    1997-12-31

    We consider the problem of optimizing parameters in a trial function that is to be used in fixed-node diffusion Monte Carlo calculations. We employ a trial function with a Boys-Handy correlation function and a one-particle basis set of high quality. By employing sample points picked from a positive definite distribution, parameters that determine the nodes of the trial function can be varied without introducing singularities into the optimization. For CH as a test system, we find that a trial function of high quality is obtained and that this trial function yields an improved fixed-node energy. This result sheds light on the important question of how to improve the nodal structure and, thereby, the accuracy of diffusion Monte Carlo.

  18. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    PubMed

    Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  19. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality

    PubMed Central

    Gossner, Martin M.; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W.; Zytynska, Sharon E.

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  20. Modified Direct Insertion/Cancellation Method Based Sample Rate Conversion for Software Defined Radio

    NASA Astrophysics Data System (ADS)

    Bostamam, Anas Muhamad; Sanada, Yukitoshi; Minami, Hideki

    In this paper, a new fractional sample rate conversion (SRC) scheme based on a direct insertion/cancellation scheme is proposed. This scheme is suitable for signals that are sampled at a high sample rate and converted to a lower sample rate. The direct insertion/cancellation scheme may achieve low complexity and lower power consumption as compared to other SRC techniques. However, the direct insertion/cancellation technique suffers from large aliasing and distortion. The aliasing from an adjacent channel interferes with the desired signal and degrades the performance. Therefore, a modified direct insertion/cancellation scheme is proposed in order to realize high-performance resampling.
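
    For orientation, the sketch below implements the plain direct insertion/cancellation idea that the paper modifies: samples are periodically deleted (cancelled) to reach a lower output rate, or repeated (inserted) to reach a higher one, with no interpolation filter. The rates are illustrative, and this simple version exhibits exactly the aliasing and distortion the proposed modified scheme is designed to suppress.

        import numpy as np

        def direct_rate_convert(x, fs_in, fs_out):
            """Direct insertion/cancellation resampling (no interpolation filter).

            Advances the output index grid at fs_out while reading the input at
            fs_in: down-conversion cancels (skips) samples, up-conversion
            inserts (repeats) them.
            """
            n_out = int(np.floor(len(x) * fs_out / fs_in))
            # Nearest preceding input sample for each output instant.
            idx = np.floor(np.arange(n_out) * fs_in / fs_out).astype(int)
            return x[idx]

        if __name__ == "__main__":
            fs_in, fs_out = 48_000.0, 44_100.0        # illustrative rates
            t = np.arange(48_000) / fs_in
            x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone
            y = direct_rate_convert(x, fs_in, fs_out)
            print(len(x), "->", len(y), "samples per second of signal")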

  1. Decision Models for Determining the Optimal Life Test Sampling Plans

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Strelchonok, Vladimir F.

    2010-11-01

    Life test sampling plan is a technique, which consists of sampling, inspection, and decision making in determining the acceptance or rejection of a batch of products by experiments for examining the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution, or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential, and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time, and reduce testing cost; but it also can positively affect the image of the product, and thus attract more consumers to buy it. This paper develops the frequentist (non-Bayesian) decision models for determining the optimal life test sampling plans with an aim of cost minimization by identifying the appropriate number of product failures in a sample that should be used as a threshold in judging the rejection of a batch. The two-parameter exponential and Weibull distributions with two unknown parameters are assumed to be appropriate for modelling the lifetime of a product. A practical numerical application is employed to demonstrate the proposed approach.

  2. Optimization of Evans blue quantitation in limited rat tissue samples

    NASA Astrophysics Data System (ADS)

    Wang, Hwai-Lee; Lai, Ted Weita

    2014-10-01

    Evans blue dye (EBD) is an inert tracer that measures plasma volume in human subjects and vascular permeability in animal models. Quantitation of EBD can be difficult when dye concentration in the sample is limited, such as when extravasated dye is measured in the blood-brain barrier (BBB) intact brain. The procedure described here used a very small volume (30 µl) per sample replicate, which enabled high-throughput measurements of the EBD concentration based on a standard 96-well plate reader. First, ethanol ensured a consistent optic path length in each well and substantially enhanced the sensitivity of EBD fluorescence spectroscopy. Second, trichloroacetic acid (TCA) removed false-positive EBD measurements as a result of biological solutes and partially extracted EBD into the supernatant. Moreover, a 1:2 volume ratio of 50% TCA ([TCA final] = 33.3%) optimally extracted EBD from the rat plasma protein-EBD complex in vitro and in vivo, and 1:2 and 1:3 weight-volume ratios of 50% TCA optimally extracted extravasated EBD from the rat brain and liver, respectively, in vivo. This procedure is particularly useful in the detection of EBD extravasation into the BBB-intact brain, but it can also be applied to detect dye extravasation into tissues where vascular permeability is less limiting.

  3. Optimal CCD readout by digital correlated double sampling

    NASA Astrophysics Data System (ADS)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
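
    As a concrete reference point, the sketch below implements the simplest digital CDS filter, a boxcar average of the oversampled reset and signal levels followed by their difference, and shows the expected reduction of white read noise; the waveform model, sample counts, and noise figures are invented, and the paper's analysis covers more general filter shapes and noise spectra.

        import numpy as np

        rng = np.random.default_rng(0)

        def dcds_pixel(reset_samples, signal_samples):
            """Digital CDS with a boxcar filter: difference of the two window means."""
            return signal_samples.mean() - reset_samples.mean()

        n_pixels, n_samp = 5000, 64          # samples per level per pixel (assumed)
        true_level = 100.0                   # true pixel signal (ADU)
        read_noise = 5.0                     # white read noise per raw sample (ADU rms)

        single_diff, dcds = np.empty(n_pixels), np.empty(n_pixels)
        for i in range(n_pixels):
            reset = read_noise * rng.standard_normal(n_samp)
            signal = true_level + read_noise * rng.standard_normal(n_samp)
            single_diff[i] = signal[0] - reset[0]          # one sample per level
            dcds[i] = dcds_pixel(reset, signal)            # averaged levels

        print(f"noise, single sample pair : {single_diff.std():.2f} ADU")
        print(f"noise, DCDS with N={n_samp}: {dcds.std():.2f} ADU "
              f"(expected ~ {read_noise*np.sqrt(2/n_samp):.2f} for white noise)")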

  4. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, and possibly even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
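
    A compact sketch of spatial simulated annealing under the MMSD criterion described above (minimizing the mean distance from arbitrary field points to their nearest sampling location) is given below; the field geometry, number of samples, perturbation size, and cooling law are invented, and the weighted (MWMSD) and kriging-variance (MAOKV) criteria are not implemented.

        import numpy as np

        rng = np.random.default_rng(42)

        # Dense grid of "arbitrary" field points and an initial random sampling scheme.
        gx, gy = np.meshgrid(np.linspace(0, 1, 60), np.linspace(0, 1, 60))
        field = np.column_stack([gx.ravel(), gy.ravel()])
        samples = rng.random((20, 2))

        def mmsd(samples):
            """Mean of the shortest distances from each field point to the scheme."""
            d = np.linalg.norm(field[:, None, :] - samples[None, :, :], axis=2)
            return d.min(axis=1).mean()

        current, cost = samples.copy(), mmsd(samples)
        temperature = 0.1
        for it in range(3000):
            candidate = current.copy()
            j = rng.integers(len(candidate))
            candidate[j] = np.clip(candidate[j] + 0.05 * rng.standard_normal(2), 0, 1)
            new_cost = mmsd(candidate)
            # Accept improvements always, worse schemes with Boltzmann probability.
            if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temperature):
                current, cost = candidate, new_cost
            temperature *= 0.999             # simple geometric cooling law

        print(f"optimized MMSD = {cost:.4f}")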

  5. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. PMID:26775132

  6. Clever particle filters, sequential importance sampling and the optimal proposal

    NASA Astrophysics Data System (ADS)

    Snyder, Chris

    2014-05-01

    Particle filters rely on sequential importance sampling and it is well known that their performance can depend strongly on the choice of proposal distribution from which new ensemble members (particles) are drawn. The use of clever proposals has seen substantial recent interest in the geophysical literature, with schemes such as the implicit particle filter and the equivalent-weights particle filter. Both these schemes employ proposal distributions at time tk+1 that depend on the state at tk and the observations at time tk+1. I show that, beginning with particles drawn randomly from the conditional distribution of the state at tk given observations through tk, the optimal proposal (the distribution of the state at tk+1 given the state at tk and the observations at tk+1) minimizes the variance of the importance weights for particles at tk over all possible proposal distributions. This means that bounds on the performance of the optimal proposal, such as those given by Snyder (2011), also bound the performance of the implicit and equivalent-weights particle filters. In particular, in spite of the fact that they may be dramatically more effective than other particle filters in specific instances, those schemes will suffer degeneracy (maximum importance weight approaching unity) unless the ensemble size is exponentially large in a quantity that, in the simplest case that all degrees of freedom in the system are i.i.d., is proportional to the system dimension. I will also discuss the behavior to be expected in more general cases, such as global numerical weather prediction, and how that behavior depends qualitatively on the observing network. Snyder, C., 2012: Particle filters, the "optimal" proposal and high-dimensional systems. Proceedings, ECMWF Seminar on Data Assimilation for Atmosphere and Ocean, 6-9 September 2011.
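
    For reference, the standard expressions behind this argument, written in the abstract's notation, are sketched below: the optimal proposal conditions the new state on the previous particle and the new observation, and the resulting weight update depends only on the previous particle.

        \[
          q^{\mathrm{opt}}\!\left(x_{k+1}\mid x_k^{(i)},\,y_{k+1}\right)
            = p\!\left(x_{k+1}\mid x_k^{(i)},\,y_{k+1}\right)
            = \frac{p\!\left(y_{k+1}\mid x_{k+1}\right)\,
                    p\!\left(x_{k+1}\mid x_k^{(i)}\right)}
                   {p\!\left(y_{k+1}\mid x_k^{(i)}\right)},
        \]
        \[
          w_{k+1}^{(i)} \;\propto\; w_k^{(i)}\,
            p\!\left(y_{k+1}\mid x_k^{(i)}\right)
            = w_k^{(i)} \int p\!\left(y_{k+1}\mid x_{k+1}\right)\,
              p\!\left(x_{k+1}\mid x_k^{(i)}\right)\,\mathrm{d}x_{k+1}.
        \]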

  7. Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties

    NASA Astrophysics Data System (ADS)

    Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens

    2013-04-01

    The representation of local soil variability with acceptable accuracy and precision is dependent on the spatial sampling strategy and can vary with a soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the optimal way to predict field scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHS) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried, and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40) of samples were selected for calibration. The performance of field calibrations using spectra of soils at the three states as well as using different numbers of calibration samples was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
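
    The calibration subset above is chosen with the Kennard-Stone algorithm. A generic implementation of that maximin selection is sketched below on random dummy "spectra"; the real workflow applies it to measured vis-NIR spectra and then builds the calibration models, which are not shown here.

        import numpy as np

        def kennard_stone(X, n_select):
            """Kennard-Stone selection: iteratively add the point farthest from
            the already-selected set (maximin criterion)."""
            dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            # Start with the two most distant points.
            i, j = np.unravel_index(np.argmax(dist), dist.shape)
            selected = [i, j]
            remaining = [k for k in range(len(X)) if k not in selected]
            while len(selected) < n_select:
                # For each remaining point, distance to its closest selected point.
                d_to_sel = dist[np.ix_(remaining, selected)].min(axis=1)
                next_idx = remaining[int(np.argmax(d_to_sel))]
                selected.append(next_idx)
                remaining.remove(next_idx)
            return selected

        rng = np.random.default_rng(0)
        spectra = rng.random((125, 200))     # 125 dummy "spectra" with 200 bands
        calib_idx = kennard_stone(spectra, n_select=50)
        print("first ten calibration samples:", calib_idx[:10])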

  8. Problems associated with using filtration to define dissolved trace element concentrations in natural water samples

    USGS Publications Warehouse

    Horowitz, A.J.; Lum, K.R.; Garbarino, J.R.; Hall, G.E.M.; Lemieux, C.; Demas, C.R.

    1996-01-01

    Field and laboratory experiments indicate that a number of factors associated with filtration other than just pore size (e.g., diameter, manufacturer, volume of sample processed, amount of suspended sediment in the sample) can produce significant variations in the 'dissolved' concentrations of such elements as Fe, Al, Cu, Zn, Pb, Co, and Ni. The bulk of these variations result from the inclusion/exclusion of colloidally associated trace elements in the filtrate, although dilution and sorption/desorption from filters also may be factors. Thus, dissolved trace element concentrations quantitated by analyzing filtrates generated by processing whole water through similar pore-sized filters may not be equal or comparable. As such, simple filtration of unspecified volumes of natural water through unspecified 0.45-µm membrane filters may no longer represent an acceptable operational definition for a number of dissolved chemical constituents.

  9. Determination of optimal sampling times for a two blood sample clearance method using (51)Cr-EDTA in cats.

    PubMed

    Vandermeulen, Eva; De Sadeleer, Carlos; Piepsz, Amy; Ham, Hamphrey R; Dobbeleir, André A; Vermeire, Simon T; Van Hoek, Ingrid M; Daminet, Sylvie; Slegers, Guido; Peremans, Kathelijne Y

    2010-08-01

    Estimation of the glomerular filtration rate (GFR) is a useful tool in the evaluation of kidney function in feline medicine. GFR can be determined by measuring the rate of tracer disappearance from the blood, and although these measurements are generally performed by multi-sampling techniques, simplified methods are more convenient in clinical practice. The optimal times for a simplified sampling strategy with two blood samples (2BS) for GFR measurement in cats using plasma (51)chromium ethylene diamine tetra-acetic acid ((51)Cr-EDTA) clearance were investigated. After intravenous administration of (51)Cr-EDTA, seven blood samples were obtained in 46 cats (19 euthyroid and 27 hyperthyroid cats, none with previously diagnosed chronic kidney disease (CKD)). The plasma clearance was then calculated from the seven point blood kinetics (7BS) and used for comparison to define the optimal sampling strategy by correlating different pairs of time points to the reference method. Mean GFR estimation for the reference method was 3.7 ± 2.5 ml/min/kg (mean ± standard deviation (SD)). Several pairs of sampling times were highly correlated with this reference method (r² ≥ 0.980), with the best results when the first sample was taken 30 min after tracer injection and the second sample between 198 and 222 min after injection; or with the first sample at 36 min and the second at 234 or 240 min (r² for both combinations = 0.984). Because of the similarity of GFR values obtained with the 2BS method in comparison to the values obtained with the 7BS reference method, the simplified method may offer an alternative for GFR estimation. Although a wide range of GFR values was found in the included group of cats, the applicability should be confirmed in cats suspected of renal disease and with confirmed CKD. Furthermore, although no indications of age-related effect were found in this study, a possible influence of age should be included in future studies. PMID:20452793

  10. Optimization of the development process for air sampling filter standards

    NASA Astrophysics Data System (ADS)

    Mena, RaJah Marie

    Air monitoring is an important analysis technique in health physics. However, creating standards which can be used to calibrate detectors used in the analysis of the filters deployed for air monitoring can be challenging. The activity of a standard should be well understood; this includes understanding how the location within the filter affects the final surface emission rate. The purpose of this research is to determine the parameters which most affect uncertainty in an air filter standard and optimize these parameters such that calibrations made with them most accurately reflect the true activity contained inside. A deposition pattern was chosen from the literature to provide the best approximation of uniform deposition of material across the filter. Sample sets were created varying the type of radionuclide, the amount of activity (high activity at 6.4-306 Bq/filter and one low-activity set at 0.05-6.2 Bq/filter), and the filter type. For samples analyzed for gamma or beta contaminants, the standards created with this procedure were deemed sufficient. Additional work is needed to reduce errors to ensure this is a viable procedure, especially for alpha contaminants.

  11. Homosexual, gay, and lesbian: defining the words and sampling the populations.

    PubMed

    Donovan, J M

    1992-01-01

    The lack of both specificity and consensus about definitions for homosexual, homosexuality, gay, and lesbian are first shown to confound comparative research and cumulative understanding because criteria for inclusion within the subject populations are often not consistent. The Description section examines sociolinguistic variables which determine patterns of preferred choice of terminology, and considers how these might impact gay and lesbian studies. Attitudes and style are found to influence word choice. These results are used in the second section to devise recommended definitional limits which would satisfy both communication needs and methodological purposes, especially those of sampling. PMID:1299702

  12. Optimization for Peptide Sample Preparation for Urine Peptidomics

    SciTech Connect

    Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.

    2014-02-25

    when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.

  13. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.

  14. Defining the Enterovirus Diversity Landscape of a Fecal Sample: A Methodological Challenge?

    PubMed

    Faleye, Temitope Oluwasegun Cephas; Adewumi, Moses Olubusuyi; Adeniji, Johnson Adekunle

    2016-01-01

    Enteroviruses are a group of over 250 naked icosahedral virus serotypes that have been associated with clinical conditions that range from intrauterine enterovirus transmission with fatal outcome through encephalitis and meningitis, to paralysis. Classically, enterovirus detection was done by assaying for the development of the classic enterovirus-specific cytopathic effect in cell culture. Subsequently, the isolates were historically identified by a neutralization assay. More recently, identification has been done by reverse transcriptase-polymerase chain reaction (RT-PCR). However, in recent times, there is a move towards direct detection and identification of enteroviruses from clinical samples using the cell culture-independent RT semi-nested PCR (RT-snPCR) assay. This RT-snPCR procedure amplifies the VP1 gene, which is then sequenced and used for identification. However, while cell culture-based strategies tend to show a preponderance of certain enterovirus species depending on the cell lines included in the isolation protocol, the RT-snPCR strategies tilt in a different direction. Consequently, it is becoming apparent that the diversity observed in certain enterovirus species, e.g., enterovirus species B (EV-B), might not be because they are the most evolutionarily successful. Rather, it might stem from cell line-specific bias accumulated over several years of use of the cell culture-dependent isolation protocols. Furthermore, it might also be a reflection of the impact of the relative genome concentration on the result of pan-enterovirus VP1 RT-snPCR screens used during the identification of cell culture isolates. This review highlights the impact of these two processes on the current diversity landscape of enteroviruses and the need to re-assess enterovirus detection and identification algorithms in a bid to better balance our understanding of the enterovirus diversity landscape. PMID:26771630

  15. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  16. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    NASA Astrophysics Data System (ADS)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in the case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
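
    A minimal sketch of the Latin hypercube step used to generate virtual patient scenarios; the parameter names and ranges for the HCV model are hypothetical stand-ins, not the authors' values. Each parameter axis is stratified into equal-probability bins with one draw per bin, and the columns are shuffled to decouple the parameters.

      import numpy as np

      def latin_hypercube(n_samples, bounds, seed=None):
          # one stratified draw per equal-probability bin along each parameter axis
          rng = np.random.default_rng(seed)
          d = len(bounds)
          u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(d):                      # decouple the columns
              rng.shuffle(u[:, j])
          lo = np.array([b[0] for b in bounds])
          hi = np.array([b[1] for b in bounds])
          return lo + u * (hi - lo)

      # hypothetical HCV-model parameter ranges: infected-hepatocyte death rate,
      # virion clearance rate and infection rate (names and bounds are illustrative)
      bounds = [(0.01, 0.5), (1.0, 10.0), (1e-8, 1e-6)]
      patients = latin_hypercube(1000, bounds, seed=1)
      print(patients.shape)                       # (1000, 3): one virtual patient per row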

  17. Defining the optimal animal model for translational research using gene set enrichment analysis.

    PubMed

    Weidner, Christopher; Steinfath, Matthias; Opitz, Elisa; Oelgeschläger, Michael; Schönfelder, Gilbert

    2016-01-01

    The mouse is the main model organism used to study the functions of human genes because most biological processes in the mouse are highly conserved in humans. Recent reports that compared identical transcriptomic datasets of human inflammatory diseases with datasets from mouse models using traditional gene-to-gene comparison techniques resulted in contradictory conclusions regarding the relevance of animal models for translational research. To reduce susceptibility to biased interpretation, all genes of interest for the biological question under investigation should be considered. Thus, standardized approaches for systematic data analysis are needed. We analyzed the same datasets using gene set enrichment analysis focusing on pathways assigned to inflammatory processes in either humans or mice. The analyses revealed a moderate overlap between all human and mouse datasets, with average positive and negative predictive values of 48% and 57%, respectively, for significant correlations. Subgroups of the septic mouse models (i.e., Staphylococcus aureus injection) correlated very well with most human studies. These findings support the applicability of targeted strategies to identify the optimal animal model and protocol to improve the success of translational research. PMID:27311961

  18. Defining the face processing network: optimization of the functional localizer in fMRI.

    PubMed

    Fox, Christopher J; Iaria, Giuseppe; Barton, Jason J S

    2009-05-01

    Functional localizers that contrast brain signal when viewing faces versus objects are commonly used in functional magnetic resonance imaging studies of face processing. However, current protocols do not reliably show all regions of the core system for face processing in all subjects when conservative statistical thresholds are used, which is problematic in the study of single subjects. Furthermore, arbitrary variations in the applied thresholds are associated with inconsistent estimates of the size of face-selective regions-of-interest (ROIs). We hypothesized that the use of more natural dynamic facial images in localizers might increase the likelihood of identifying face-selective ROIs in individual subjects, and we also investigated the use of a method to derive the statistically optimal ROI cluster size independent of thresholds. We found that dynamic facial stimuli were more effective than static stimuli, identifying 98% (versus 72% for static) of ROIs in the core face processing system and 69% (versus 39% for static) of ROIs in the extended face processing system. We then determined for each core face processing ROI, the cluster size associated with maximum statistical face-selectivity, which on average was approximately 50 mm³ for the fusiform face area, the occipital face area, and the posterior superior temporal sulcus. We suggest that the combination of (a) more robust face-related activity induced by a dynamic face localizer and (b) a cluster-size determination based on maximum face-selectivity increases both the sensitivity and the specificity of the characterization of face-related ROIs in individual subjects. PMID:18661501

  19. Intentional sampling by goal optimization with decoupling by stochastic perturbation

    NASA Astrophysics Data System (ADS)

    Lauretto, Marcelo De Souza; Nakano, Fábio; Pereira, Carlos Alberto de Bragança; Stern, Julio Michael

    2012-10-01

    Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing arguments and debates. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.

  20. Defining a sample preparation workflow for advanced virus detection and understanding sensitivity by next-generation sequencing.

    PubMed

    Wang, Christopher J; Feng, Szi Fei; Duncan, Paul

    2014-01-01

    The application of next-generation sequencing (also known as deep sequencing or massively parallel sequencing) for adventitious agent detection is an evolving field that is steadily gaining acceptance in the biopharmaceutical industry. In order for this technology to be successfully applied, a robust method that can isolate viral nucleic acids from a variety of biological samples (such as host cell substrates, cell-free culture fluids, viral vaccine harvests, and animal-derived raw materials) must be established by demonstrating recovery of model virus spikes. In this report, we implement the sample preparation workflow developed by Feng et al. and assess the sensitivity of virus detection in a next-generation sequencing readout using the Illumina MiSeq platform. We describe a theoretical model to estimate the detection of a target virus in a cell lysate or viral vaccine harvest sample. We show that nuclease treatment can be used for samples that contain a high background of non-relevant nucleic acids (e.g., host cell DNA) in order to effectively increase the sensitivity of sequencing target viruses and reduce the complexity of data analysis. Finally, we demonstrate that at defined spike levels, nucleic acids from a panel of model viruses spiked into representative cell lysate and viral vaccine harvest samples can be confidently recovered by next-generation sequencing. PMID:25475632

  1. Investigation of optimized wafer sampling with multiple integrated metrology modules within photolithography equipment

    NASA Astrophysics Data System (ADS)

    Taylor, Ted L.; Makimura, Eri

    2007-03-01

    Micron Technology, Inc., explores the challenges of defining specific wafer sampling scenarios for users of multiple integrated metrology modules within a Tokyo Electron Limited (TEL) CLEAN TRACK TM LITHIUS TM. With the introduction of integrated metrology (IM) into the photolithography coater/developer, users are faced with the challenge of determining what type of data must be collected to adequately monitor the photolithography tools and the manufacturing process. Photolithography coaters/developers have a metrology block that is capable of integrating three metrology modules into the standard wafer flow. Taking into account the complexity of multiple metrology modules and varying across-wafer sampling plans per metrology module, users must optimize the module wafer sampling to achieve their desired goals. Users must also understand the complexity of the coater/developer handling systems to deliver wafers to each module. Coater/developer systems typically process wafers sequentially through each module to ensure consistent processing. In these systems, the first wafer must process through a module before the next wafer can process through a module, and the first wafer must return to the cassette before the second wafer can return to the cassette. IM modules within this type of system can reduce throughput and limit flexible wafer selections. Finally, users must have the ability to select specific wafer samplings for each IM module. This case study explores how to optimize wafer sampling plans and how to identify limitations with the complexity of multiple integrated modules to ensure maximum metrology throughput without impact to the productivity of processing wafers through the photolithography cell (litho cell).

  2. Estimating optimal sampling unit sizes for satellite surveys

    NASA Technical Reports Server (NTRS)

    Hallum, C. R.; Perry, C. R., Jr.

    1984-01-01

    This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.

  3. Analysis and optimization of bulk DNA sampling with binary scoring for germplasm characterization.

    PubMed

    Reyes-Valdés, M Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso

    2013-01-01

    The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
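
    A minimal sketch of the diversity-minus-noise decomposition described above for a single binarily scored allele, assuming equally likely source populations, diploid plants and independent sampling within a bulk; the presence probability 1 - (1 - p)^(2b) and all names are illustrative assumptions rather than the authors' exact formulas.

      import numpy as np

      def allele_info(freqs, bulk_size, ploidy=2):
          # mutual information (bits) between allele presence in a bulk and the
          # source population, assuming equally likely populations
          freqs = np.asarray(freqs, dtype=float)
          # probability the allele shows up at least once in a bulk of `bulk_size` plants
          p_present = 1.0 - (1.0 - freqs) ** (ploidy * bulk_size)

          def h(p):                                 # binary entropy, safe near 0 and 1
              p = np.clip(p, 1e-12, 1 - 1e-12)
              return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

          diversity = h(p_present.mean())           # entropy of the marginal bulk genotype
          noise = h(p_present).mean()               # conditional entropy given the source
          return diversity - noise

      # toy example: one allele with different frequencies in 4 populations,
      # scored in bulks of 10 plants each
      print(allele_info([0.05, 0.60, 0.90, 0.20], bulk_size=10))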

  4. Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization

    PubMed Central

    Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso

    2013-01-01

    The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321

  5. Defining Optimal Head-Tilt Position of Resuscitation in Neonates and Young Infants Using Magnetic Resonance Imaging Data.

    PubMed

    Bhalala, Utpal S; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A G M; Allen, Robert H; Acharya, Soumyadipta

    2016-01-01

    Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0-28 days) and young infants (age: 29 days-4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined head-tilt angle a priori as the angle between occipito-ophisthion line and ophisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with patent airway (125.3° ± 11.9°) was significantly different from that of blocked airway (108.2° ± 17.1°) (Mann Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with > 95% probability of a patent airway at head-tilt angle 144-150°. PMID:27003759
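
    The logistic-regression step can be sketched as below with simulated (not the study's) angles and patency outcomes; the maximum-likelihood fit and the back-calculated angle for 95% predicted patency only illustrate how a threshold range like the reported 144-150° could be derived.

      import numpy as np
      from scipy.optimize import minimize

      def fit_logistic(x, y):
          # maximum-likelihood fit of P(y = 1) = sigmoid(b0 + b1 * x)
          X = np.column_stack([np.ones_like(x), x])
          nll = lambda b: np.sum(np.logaddexp(0.0, X @ b) - y * (X @ b))
          return minimize(nll, x0=np.zeros(2), method="BFGS").x

      # hypothetical data in the spirit of the study: 63 infants, patency vs head-tilt angle
      rng = np.random.default_rng(2)
      angle = rng.normal(120.0, 15.0, 63)
      p_true = 1.0 / (1.0 + np.exp(-(angle - 115.0) / 8.0))
      patent = (rng.random(63) < p_true).astype(float)

      b0, b1 = fit_logistic(angle - 120.0, patent)          # centre the angle for stability
      angle_95 = 120.0 + (np.log(0.95 / 0.05) - b0) / b1    # angle giving ~95% predicted patency
      print(round(angle_95, 1))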

  6. Defining Optimal Head-Tilt Position of Resuscitation in Neonates and Young Infants Using Magnetic Resonance Imaging Data

    PubMed Central

    Bhalala, Utpal S.; Hemani, Malvi; Shah, Meehir; Kim, Barbara; Gu, Brian; Cruz, Angelo; Arunachalam, Priya; Tian, Elli; Yu, Christine; Punnoose, Joshua; Chen, Steven; Petrillo, Christopher; Brown, Alisa; Munoz, Karina; Kitchen, Grant; Lam, Taylor; Bosemani, Thangamadhan; Huisman, Thierry A. G. M.; Allen, Robert H.; Acharya, Soumyadipta

    2016-01-01

    Head-tilt maneuver assists with achieving airway patency during resuscitation. However, the relationship between angle of head-tilt and airway patency has not been defined. Our objective was to define an optimal head-tilt position for airway patency in neonates (age: 0–28 days) and young infants (age: 29 days–4 months). We performed a retrospective study of head and neck magnetic resonance imaging (MRI) of neonates and infants to define the angle of head-tilt for airway patency. We excluded those with an artificial airway or an airway malformation. We defined head-tilt angle a priori as the angle between occipito-ophisthion line and ophisthion-C7 spinous process line on the sagittal MR images. We evaluated medical records for Hypoxic Ischemic Encephalopathy (HIE) and exposure to sedation during MRI. We analyzed MRI of head and neck regions of 63 children (53 neonates and 10 young infants). Of these 63 children, 17 had evidence of airway obstruction and 46 had a patent airway on MRI. Also, 16/63 had underlying HIE and 47/63 newborn infants had exposure to sedative medications during MRI. In spontaneously breathing and neurologically depressed newborn infants, the head-tilt angle (median ± SD) associated with patent airway (125.3° ± 11.9°) was significantly different from that of blocked airway (108.2° ± 17.1°) (Mann Whitney U-test, p = 0.0045). The logistic regression analysis showed that the proportion of patent airways progressively increased with an increasing head-tilt angle, with > 95% probability of a patent airway at head-tilt angle 144–150°. PMID:27003759

  7. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate

    PubMed Central

    Brunelli, Davide; Caione, Carlo

    2015-01-01

    Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of CS when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring. PMID:26184203

  8. Optimized design and analysis of sparse-sampling FMRI experiments.

    PubMed

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
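
    As a rough sketch of the first recommendation above (a physiologically informed model incorporating hemodynamic response convolution), the code below convolves a stimulus train with a canonical double-gamma hemodynamic response and reads the regressor out only at the sparse acquisition times; the HRF parameters, timing values and function names are assumptions for illustration, not the authors' protocol.

      import numpy as np
      from scipy.stats import gamma

      def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6):
          # canonical double-gamma haemodynamic response (arbitrary scaling)
          h = gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)
          return h / h.max()

      def sparse_design_column(onsets, acq_times, dt=0.1):
          # convolve the stimulus train with the HRF, then sample it only at the
          # sparse (silent-gap) acquisition times
          t = np.arange(0.0, acq_times.max() + 32.0, dt)
          stim = np.zeros_like(t)
          stim[np.searchsorted(t, onsets)] = 1.0
          bold = np.convolve(stim, double_gamma_hrf(np.arange(0.0, 32.0, dt)))[: t.size]
          return np.interp(acq_times, t, bold)

      # hypothetical sparse protocol: one volume every 10 s, stimuli every 2.5 s
      acq = np.arange(0.0, 300.0, 10.0)
      onsets = np.arange(2.0, 300.0, 2.5)
      print(sparse_design_column(onsets, acq)[:5])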

  9. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP.

    PubMed

    Khanlou, Khosro Mehdi; Vandepitte, Katrien; Asl, Leila Kheibarshekan; Van Bockstaele, Erik

    2011-04-01

    Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation. PMID:21734826

  10. Ant colony optimization as a method for strategic genotype sampling.

    PubMed

    Spangler, M L; Robbins, K R; Bertrand, J K; Macneil, M; Rekaya, R

    2009-06-01

    A simulation study was carried out to develop an alternative method of selecting animals to be genotyped. Simulated pedigrees included 5000 animals, each assigned genotypes for a bi-allelic single nucleotide polymorphism (SNP) based on assumed allelic frequencies of 0.7/0.3 and 0.5/0.5. In addition to simulated pedigrees, two beef cattle pedigrees, one from field data and the other from a research population, were used to test selected methods using simulated genotypes. The proposed method of ant colony optimization (ACO) was evaluated based on the number of alleles correctly assigned to ungenotyped animals (AK(P)), the probability of assigning true alleles (AK(G)) and the probability of correctly assigning genotypes (APTG). The proposed animal selection method of ant colony optimization was compared to selection using the diagonal elements of the inverse of the relationship matrix (A(-1)). Comparisons of these two methods showed that ACO yielded an increase in AK(P) ranging from 4.98% to 5.16% and an increase in APTG from 1.6% to 1.8% using simulated pedigrees. Gains in field data and research pedigrees were slightly lower. These results suggest that ACO can provide a better genotyping strategy, when compared to A(-1), with different pedigree sizes and structures. PMID:19220227

  11. A Sample Time Optimization Problem in a Digital Control System

    NASA Astrophysics Data System (ADS)

    Mitkowski, Wojciech; Oprzędkiewicz, Krzysztof

    In the paper, the phenomenon of a sample time that minimizes the settling time in a digital control system is described. As the control plant, an experimental heat object was used. The control system was built with the use of a soft PLC system (SIEMENS SIMATIC). As the control algorithm, a finite-dimensional dynamic compensator was applied. During tests of the control system, it was observed that there exists a value of the sample time which minimizes the settling time in the system, and an attempt is made to explain this phenomenon.

  12. Optimization of strawberry volatile sampling by odor representativeness

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this work was to choose a suitable sampling headspace technique to study 'Festival' aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...

  13. Optimization of strawberry volatile sampling by direct gas chromatography olfactometry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this work was to choose a suitable sampling headspace technique to study ‘Festival’ aroma, the main strawberry cultivar grown in Florida. For that, the aromatic quality of extracts from different headspace techniques was evaluated using direct gas chromatography-olfactometry (D-GC-O), a s...

  14. Sample of CFD optimization of a centrifugal compressor stage

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.

    2015-08-01

    An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed results of CFD performance modeling (NUMECA FINE/Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum efficiency prediction, by contrast, was quite satisfactory. Flow structure in the stator elements was in good agreement with known data. The intermediate-type stage “3D impeller + vaneless diffuser + return channel” was designed with principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a VLD of constant width. The candidate with a symmetrically tapered inlet part, b3/b2 = 0.73, appeared to be the best. Flow separation takes place in the crossover with the standard configuration; an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by the maximum efficiency of the stage. The variant following standard gas-dynamic principles of blade cascade design appeared to be the best. Quasi-3D inviscid calculations were applied to optimize blade velocity diagrams: non-incidence inlet, control of the diffusion factor and of the average blade load. The “geometric” principle of blade formation, with a linear change of blade angles along the blade length, appeared to be less effective. Candidates with different geometry parameters were designed with the 6th version of the math model and compared. The candidate with optimal parameters (number of blades, inlet diameter and leading-edge meridian position) is 1% more efficient than the stage of the initial design.

  15. Determination and optimization of spatial samples for distributed measurements.

    SciTech Connect

    Huo, Xiaoming; Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  16. Optimizing analog-to-digital converters for sampling extracellular potentials.

    PubMed

    Artan, N Sertac; Xu, Xiaoxiang; Shi, Wei; Chao, H Jonathan

    2012-01-01

    In neural implants, an analog-to-digital converter (ADC) provides the delicate interface between the analog signals generated by neurological processes and the digital signal processor that is tasked to interpret these signals for instance for epileptic seizure detection or limb control. In this paper, we propose a low-power ADC architecture for neural implants that process extracellular potentials. The proposed architecture uses the spike detector that is readily available on most of these implants in a closed-loop with an ADC. The spike detector determines whether the current input signal is part of a spike or it is part of noise to adaptively determine the instantaneous sampling rate of the ADC. The proposed architecture can reduce the power consumption of a traditional ADC by 62% when sampling extracellular potentials without any significant impact on spike detection accuracy. PMID:23366227

  17. Statistically optimal analysis of samples from multiple equilibrium states

    PubMed Central

    Shirts, Michael R.; Chodera, John D.

    2008-01-01

    We present a new estimator for computing free energy differences and thermodynamic expectations as well as their uncertainties from samples obtained from multiple equilibrium states via either simulation or experiment. The estimator, which we call the multistate Bennett acceptance ratio estimator (MBAR) because it reduces to the Bennett acceptance ratio estimator (BAR) when only two states are considered, has significant advantages over multiple histogram reweighting methods for combining data from multiple states. It does not require the sampled energy range to be discretized to produce histograms, eliminating bias due to energy binning and significantly reducing the time complexity of computing a solution to the estimating equations in many cases. Additionally, an estimate of the statistical uncertainty is provided for all estimated quantities. In the large sample limit, MBAR is unbiased and has the lowest variance of any known estimator for making use of equilibrium data collected from multiple states. We illustrate this method by producing a highly precise estimate of the potential of mean force for a DNA hairpin system, combining data from multiple optical tweezer measurements under constant force bias. PMID:19045004
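
    A minimal numerical sketch of the MBAR self-consistent equations for the dimensionless free energies, assuming the reduced potential of every sample has been evaluated in every state; this bare-bones illustration omits the uncertainty estimates, which a full implementation (such as the authors' published pymbar package) provides.

      import numpy as np

      def mbar_free_energies(u_kn, N_k, tol=1e-8, max_iter=10_000):
          # Solve the MBAR self-consistent equations for dimensionless free energies.
          # u_kn[k, n]: reduced potential of sample n evaluated in state k
          # N_k[k]:    number of samples drawn from state k
          f = np.zeros(u_kn.shape[0])
          for _ in range(max_iter):
              # log denominator per sample: log sum_k N_k exp(f_k - u_kn[k, n])
              log_denom = np.logaddexp.reduce(np.log(N_k)[:, None] + f[:, None] - u_kn, axis=0)
              f_new = -np.logaddexp.reduce(-u_kn - log_denom, axis=1)
              f_new -= f_new[0]                     # fix the arbitrary additive offset
              if np.max(np.abs(f_new - f)) < tol:
                  return f_new
              f = f_new
          return f

      # toy check: two states whose reduced potentials differ by a constant 1 kT,
      # so the exact free energy difference is 1
      rng = np.random.default_rng(3)
      x = rng.normal(0.0, 1.0, 1000)
      u = np.vstack([0.5 * x**2, 0.5 * x**2 + 1.0])
      print(mbar_free_energies(u, np.array([500, 500])))   # ~ [0., 1.]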

  18. [Study on the optimization methods of common-batch identification of amphetamine samples].

    PubMed

    Zhang, Jianxin; Zhang, Daming

    2008-07-01

    The essay introduced the technology of amphetamine identification and its optimization method. Impurity profiling of amphetamine was analyzed by GC-MS. Identification of common-batch amphetamine samples could be successfully finished by the data transition and pre-treating of the peak areas. The analytical method was improved by optimizing the techniques of sample extraction, gas chromatograph, sample separation and detection. PMID:18839544

  19. Optimized Sampling Strategies For Non-Proliferation Monitoring: Report

    SciTech Connect

    Kurzeja, R.; Buckley, R.; Werth, D.; Chiswell, S.

    2015-10-20

    Concentration data collected from the 2013 H-Canyon effluent reprocessing experiment were reanalyzed to improve the source term estimate. When errors in the model-predicted wind speed and direction were removed, the source term uncertainty was reduced to 30% of the mean. This explained the factor of 30 difference between the source term size derived from data at 5 km and 10 km downwind in terms of the time history of dissolution. The results show a path forward to develop a sampling strategy for quantitative source term calculation.

  20. Optimal sampling of visual information for lightness judgments

    PubMed Central

    Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.

    2013-01-01

    The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object’s luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251

  1. Optimizing fish sampling for fish - mercury bioaccumulation factors

    USGS Publications Warehouse

    Scudder Eikenberry, Barbara C.; Riva-Murray, Karen; Knightes, Christopher D.; Journey, Celeste A.; Chasar, Lia C.; Brigham, Mark E.; Bradley, Paul M.

    2015-01-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements.
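
    A minimal sketch of the workflow implied above, with hypothetical fish data: length-normalize Hg by regression, standardize to a common length, form a BAF against an assumed water concentration, and use a resampling (permutation-style) loop to see how the coefficient of variation of the mean Hgfish estimate depends on the number of fish sampled; every number and name here is illustrative, not from the study.

      import numpy as np

      def length_standardized_hg(lengths_mm, hg_fish, standard_length_mm):
          # regress fish Hg on length and predict Hg at a standard length
          slope, intercept = np.polyfit(lengths_mm, hg_fish, 1)
          return intercept + slope * standard_length_mm

      def mean_cv_by_resampling(hg_fish, n, n_perm=5000, seed=None):
          # CV of the mean-Hg estimate when only n fish are (re)sampled per survey
          rng = np.random.default_rng(seed)
          means = [rng.choice(hg_fish, size=n, replace=False).mean() for _ in range(n_perm)]
          return np.std(means) / np.mean(means)

      # hypothetical site: 25 trout, Hg (ug/g) loosely increasing with length
      rng = np.random.default_rng(4)
      length = rng.uniform(200.0, 450.0, 25)
      hg = 0.002 * length + rng.normal(0.0, 0.08, 25)
      hg_std = length_standardized_hg(length, hg, standard_length_mm=300.0)
      baf = hg_std / 2e-6                    # hypothetical water Hg concentration
      print(int(baf), round(mean_cv_by_resampling(hg, n=8, seed=5), 3))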

  2. Optimal sampling of visual information for lightness judgments.

    PubMed

    Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R

    2013-07-01

    The variable resolution and limited processing capacity of the human visual system requires us to sample the world with eye movements and attentive processes. Here we show that where observers look can strongly modulate their reports of simple surface attributes, such as lightness. When observers matched the color of natural objects they based their judgments on the brightest parts of the objects; at the same time, they tended to fixate points with above-average luminance. When we forced participants to fixate a specific point on the object using a gaze-contingent display setup, the matched lightness was higher when observers fixated bright regions. This finding indicates a causal link between the luminance of the fixated region and the lightness match for the whole object. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. This sampling strategy is an efficient and simple heuristic for the visual system to achieve accurate and invariant judgments of lightness. PMID:23776251

  3. Optimizing fish sampling for fish-mercury bioaccumulation factors.

    PubMed

    Scudder Eikenberry, Barbara C; Riva-Murray, Karen; Knightes, Christopher D; Journey, Celeste A; Chasar, Lia C; Brigham, Mark E; Bradley, Paul M

    2015-09-01

    Fish Bioaccumulation Factors (BAFs; ratios of mercury (Hg) in fish (Hgfish) and water (Hgwater)) are used to develop total maximum daily load and water quality criteria for Hg-impaired waters. Both applications require representative Hgfish estimates and, thus, are sensitive to sampling and data-treatment methods. Data collected by fixed protocol from 11 streams in 5 states distributed across the US were used to assess the effects of Hgfish normalization/standardization methods and fish-sample numbers on BAF estimates. Fish length, followed by weight, was most correlated to adult top-predator Hgfish. Site-specific BAFs based on length-normalized and standardized Hgfish estimates demonstrated up to 50% less variability than those based on non-normalized Hgfish. Permutation analysis indicated that length-normalized and standardized Hgfish estimates based on at least 8 trout or 5 bass resulted in mean Hgfish coefficients of variation less than 20%. These results are intended to support regulatory mercury monitoring and load-reduction program improvements. PMID:25592462

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extrema points of metamodels and minimum points of density function. Afterwards, the more accurate metamodels would be constructed by the procedure above. The validity and effectiveness of proposed sampling method are examined by studying typical numerical examples. PMID:25133206
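
    One way to sketch the idea, assuming a cheap 1-D test function in place of an expensive simulation: after each iteration the RBF metamodel is refit with two added points, (i) the current metamodel minimizer (an extremum of the metamodel) and (ii) the point lying in the sparsest region of the existing samples (a minimum of the sample density). The scipy interpolator and the gap-based density proxy are my substitutions, not the paper's exact formulation.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def true_fn(x):                              # stand-in for an expensive simulation
          return np.sin(3.0 * x) + 0.5 * x**2

      def sequential_rbf(n_init=5, n_add=10, lo=-2.0, hi=2.0, seed=None):
          rng = np.random.default_rng(seed)
          x = np.sort(rng.uniform(lo, hi, n_init))
          y = true_fn(x)
          grid = np.linspace(lo, hi, 401)
          for _ in range(n_add):
              model = RBFInterpolator(x[:, None], y)
              pred = model(grid[:, None])
              x_min = grid[np.argmin(pred)]                          # metamodel extremum (exploit)
              gaps = np.min(np.abs(grid[:, None] - x[None, :]), axis=1)
              x_sparse = grid[np.argmax(gaps)]                       # lowest sample density (explore)
              for x_new in (x_min, x_sparse):
                  if np.min(np.abs(x - x_new)) > 1e-6:               # skip near-duplicate points
                      x = np.append(x, x_new)
                      y = np.append(y, true_fn(x_new))
          return x, y

      x, y = sequential_rbf(seed=8)
      print(len(x), round(x[np.argmin(y)], 3))     # best sampled point found so far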

  5. Fast and Statistically Optimal Period Search in Uneven Sampled Observations

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    1996-04-01

    The classical methods for searching for a periodicity in uneven sampled observations suffer from a poor match of the model and true signals and/or use of a statistic with poor properties. We present a new method employing periodic orthogonal polynomials to fit the observations and the analysis of variance (ANOVA) statistic to evaluate the quality of the fit. The orthogonal polynomials constitute a flexible and numerically efficient model of the observations. Among all popular statistics, ANOVA has optimum detection properties as the uniformly most powerful test. Our recurrence algorithm for expansion of the observations into the orthogonal polynomials is fast and numerically stable. The expansion is equivalent to an expansion into Fourier series. Aside from its use of an inefficient statistic, the Lomb-Scargle power spectrum can be considered a special case of our method. Tests of our new method on simulated and real light curves of nonsinusoidal pulsators demonstrate its excellent performance. In particular, dramatic improvements are gained in detection sensitivity and in the damping of alias periods.

  6. Optimal Sampling of a Reaction Coordinate in Molecular Dynamics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2005-01-01

    Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what their probabilities are and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters. Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates, it is hardly surprising that the efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with the shape of the free energy profile along the reaction coordinate. If the free energy changes markedly along this coordinate, Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have a considerable, detrimental effect on the performance of many methods for calculating free energies.

  7. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    PubMed

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than that of simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method. PMID:26756282
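
    A minimal sketch of the kernel-density route to the Youden-optimal cut-off under simple random sampling (the ranked set sampling variant and the asymptotic confidence intervals discussed above are beyond this illustration); the Gaussian KDE, the grid search and the toy normal biomarker are assumptions.

      import numpy as np
      from scipy.stats import gaussian_kde

      def youden_optimal_cutoff(healthy, diseased, grid_size=512):
          # cut-off maximizing J = sensitivity + specificity - 1, with both terms
          # taken from Gaussian kernel density estimates of the two groups
          kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)
          grid = np.linspace(min(healthy.min(), diseased.min()),
                             max(healthy.max(), diseased.max()), grid_size)
          spec = np.array([kde_h.integrate_box_1d(-np.inf, c) for c in grid])
          sens = np.array([1.0 - kde_d.integrate_box_1d(-np.inf, c) for c in grid])
          j = sens + spec - 1.0
          return grid[np.argmax(j)], j.max()

      # toy biomarker: healthy ~ N(0, 1), diseased ~ N(1.5, 1); the true optimum is near 0.75
      rng = np.random.default_rng(6)
      cutoff, j = youden_optimal_cutoff(rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200))
      print(round(cutoff, 2), round(j, 2))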

  8. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136
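
    For intuition only, the sketch below covers the simpler effects-only case of a budget-constrained cluster randomized trial (the paper's cost-effectiveness setting additionally involves cost outcomes, their correlation with effects and the variance ratio); the square-root rule for the cluster size and all cost numbers are standard textbook assumptions, not the paper's formulas.

      import math

      def optimal_cluster_design(budget, cluster_cost, person_cost, icc):
          # effects-only case: the variance of the treatment effect is proportional to
          # (1 + (n - 1) * icc) / (k * n) under the budget k * (cluster_cost + n * person_cost),
          # which gives the classic square-root rule for the optimal cluster size n
          n_opt = max(1, round(math.sqrt((cluster_cost / person_cost) * (1 - icc) / icc)))
          k_opt = int(budget // (cluster_cost + n_opt * person_cost))
          return k_opt, n_opt

      # hypothetical numbers: 100,000 budget per arm, 1,000 per cluster, 50 per person, ICC 0.05
      print(optimal_cluster_design(100_000, 1_000, 50, 0.05))   # (clusters, persons per cluster)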

  9. Optimal designs of the median run length based double sampling X chart for minimizing the average sample size.

    PubMed

    Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin

    2013-01-01

    Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed. PMID:23935873

  10. Optimal Designs of the Median Run Length Based Double Sampling X̄ Chart for Minimizing the Average Sample Size

    PubMed Central

    Teoh, Wei Lin; Khoo, Michael B. C.; Teh, Sin Yin

    2013-01-01

    Designs of the double sampling (DS) chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA and Shewhart charts demonstrate the superiority of the proposed optimal MRL-based DS chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS chart in reducing the sample size needed. PMID:23935873

  11. Balancing sample accumulation and DNA degradation rates to optimize noninvasive genetic sampling of sympatric carnivores.

    PubMed

    Lonsinger, Robert C; Gese, Eric M; Dempsey, Steven J; Kluever, Bryan M; Johnson, Timothy R; Waits, Lisette P

    2015-07-01

    Noninvasive genetic sampling, or noninvasive DNA sampling (NDS), can be an effective monitoring approach for elusive, wide-ranging species at low densities. However, few studies have attempted to maximize sampling efficiency. We present a model for combining sample accumulation and DNA degradation to identify the most efficient (i.e. minimal cost per successful sample) NDS temporal design for capture-recapture analyses. We use scat accumulation and faecal DNA degradation rates for two sympatric carnivores, kit fox (Vulpes macrotis) and coyote (Canis latrans) across two seasons (summer and winter) in Utah, USA, to demonstrate implementation of this approach. We estimated scat accumulation rates by clearing and surveying transects for scats. We evaluated mitochondrial (mtDNA) and nuclear (nDNA) DNA amplification success for faecal DNA samples under natural field conditions for 20 fresh scats/species/season from <1-112 days. Mean accumulation rates were nearly three times greater for coyotes (0.076 scats/km/day) than foxes (0.029 scats/km/day) across seasons. Across species and seasons, mtDNA amplification success was ≥95% through day 21. Fox nDNA amplification success was ≥70% through day 21 across seasons. Coyote nDNA success was ≥70% through day 21 in winter, but declined to <50% by day 7 in summer. We identified a common temporal sampling frame of approximately 14 days that allowed species to be monitored simultaneously, further reducing time, survey effort and costs. Our results suggest that when conducting repeated surveys for capture-recapture analyses, overall cost-efficiency for NDS may be improved with a temporal design that balances field and laboratory costs along with deposition and degradation rates. PMID:25454561
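
    A minimal sketch of the cost-per-successful-sample idea, combining a scat accumulation rate with an assumed amplification-success curve and hypothetical survey and laboratory costs; only the summer coyote accumulation rate (0.076 scats/km/day) is taken from the abstract, everything else is illustrative.

      import numpy as np

      def cost_per_success(interval_days, accum_rate, km, success_by_age,
                           survey_cost=500.0, lab_cost=25.0):
          # expected cost per successfully genotyped scat for one clear-and-resample interval;
          # scat ages are assumed uniform on [0, interval], and success_by_age(age) gives the
          # nDNA amplification success probability at that age
          n_scats = accum_rate * km * interval_days
          ages = np.linspace(0.0, interval_days, 200)
          mean_success = np.mean(success_by_age(ages))
          total_cost = survey_cost + lab_cost * n_scats
          return total_cost / (n_scats * mean_success)

      # hypothetical summer coyote scenario: success drops steeply after about a week
      success = lambda age: 1.0 / (1.0 + np.exp((age - 9.0) / 2.5))
      intervals = np.arange(3, 29)
      costs = [cost_per_success(t, accum_rate=0.076, km=300.0, success_by_age=success)
               for t in intervals]
      print("best interval (days):", intervals[int(np.argmin(costs))])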

  12. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs. PMID:26316105

  13. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    SciTech Connect

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  14. Optimal sampling efficiency in Monte Carlo sampling with an approximate potential

    SciTech Connect

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Building on the work of Iftimie et al., Boltzmann sampling of an approximate potential (the 'reference' system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is evaluated at a higher level of approximation (the 'full' system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory (DFT) potentials are discussed.
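
    The composite-move idea above can be illustrated with a short sketch: run an ordinary Metropolis walk on a cheap reference potential, then accept or reject the whole chain using only the endpoint energies of the full potential. This is a minimal, hypothetical toy (1-D configurations, made-up harmonic/anharmonic potentials), not the authors' isothermal-isobaric implementation.

```python
import math
import random

def nested_mc_step(x, e_ref, e_full, beta, n_ref_steps, step_size):
    """One composite move: a short reference-potential Markov chain whose
    endpoint is accepted with a modified Metropolis criterion that corrects
    for the difference between the reference and full potentials."""
    x_start = x
    # Ordinary Metropolis walk on the cheap reference potential.
    for _ in range(n_ref_steps):
        x_new = x + random.uniform(-step_size, step_size)
        if random.random() < math.exp(-beta * (e_ref(x_new) - e_ref(x))):
            x = x_new
    # Composite acceptance: full energies are needed only at the two endpoints.
    d_full = e_full(x) - e_full(x_start)
    d_ref = e_ref(x) - e_ref(x_start)
    if random.random() < min(1.0, math.exp(-beta * (d_full - d_ref))):
        return x          # composite move accepted
    return x_start        # rejected: keep the starting configuration

# Toy usage: harmonic reference, slightly anharmonic "full" potential.
e_ref = lambda x: 0.5 * x * x
e_full = lambda x: 0.5 * x * x + 0.1 * x ** 4
x, samples = 0.0, []
for _ in range(2000):
    x = nested_mc_step(x, e_ref, e_full, beta=1.0, n_ref_steps=20, step_size=0.5)
    samples.append(x)
```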

  15. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only a single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
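
    The two recommendations above can be sketched in a few lines: a simple-random-sampling size from an observed SD and a target precision, and a Neyman (optimal) allocation of a fixed total across depth strata. The numbers below are placeholders, not values from the paper.

```python
import math

def srs_sample_size(sd, margin, z=1.96):
    """Simple random sampling: n such that the approximate confidence-interval
    half-width on the mean is <= margin, given the observed SD."""
    return math.ceil((z * sd / margin) ** 2)

def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Stratified sampling with optimal (Neyman) allocation:
    n_h proportional to N_h * S_h for each depth stratum h."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [max(1, round(n_total * w / total)) for w in weights]

# Hypothetical values: SD of soil moisture (vol. %) in one field, and
# per-depth-layer SDs for a fixed budget of 30 samples.
print(srs_sample_size(sd=4.0, margin=1.5))                      # -> 28 samples
print(neyman_allocation(30, [100, 100, 100], [5.0, 3.0, 1.5]))  # more samples where variance is high
```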

  16. A normative inference approach for optimal sample sizes in decisions from experience.

    PubMed

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  17. Defining the Optimal Planning Target Volume in Image-Guided Stereotactic Radiosurgery of Brain Metastases: Results of a Randomized Trial

    SciTech Connect

    Kirkpatrick, John P.; Wang, Zhiheng; Sampson, John H.; McSherry, Frances; Herndon, James E.; Allen, Karen J.; Duffy, Eileen; Hoang, Jenny K.; Chang, Zheng; Yoo, David S.; Kelsey, Chris R.; Yin, Fang-Fang

    2015-01-01

    Purpose: To identify an optimal margin about the gross target volume (GTV) for stereotactic radiosurgery (SRS) of brain metastases, minimizing toxicity and local recurrence. Methods and Materials: Adult patients with 1 to 3 brain metastases less than 4 cm in greatest dimension, no previous brain radiation therapy, and Karnofsky performance status (KPS) above 70 were eligible for this institutional review board–approved trial. Individual lesions were randomized to 1- or 3-mm uniform expansion of the GTV defined on contrast-enhanced magnetic resonance imaging (MRI). The resulting planning target volume (PTV) was treated to 24, 18, or 15 Gy marginal dose for maximum PTV diameters less than 2, 2 to 2.9, and 3 to 3.9 cm, respectively, using a linear accelerator–based image-guided system. The primary endpoint was local recurrence (LR). Secondary endpoints included neurocognition (Mini-Mental State Examination, Trail Making Test Parts A and B), quality of life (Functional Assessment of Cancer Therapy-Brain), radionecrosis (RN), need for salvage radiation therapy, distant failure (DF) in the brain, and overall survival (OS). Results: Between February 2010 and November 2012, 49 patients with 80 brain metastases were treated. The median age was 61 years, the median KPS was 90, and the predominant histologies were non–small cell lung cancer (25 patients) and melanoma (8 patients). Fifty-five, 19, and 6 lesions were treated to 24, 18, and 15 Gy, respectively. The PTV/GTV ratio, volume receiving 12 Gy or more, and minimum dose to PTV were significantly higher in the 3-mm group (all P<.01), and GTV was similar (P=.76). At a median follow-up time of 32.2 months, 11 patients were alive, with median OS 10.6 months. LR was observed in only 3 lesions (2 in the 1-mm group, P=.51), with 6.7% LR 12 months after SRS. Biopsy-proven RN alone was observed in 6 lesions (5 in the 3-mm group, P=.10). The 12-month DF rate was 45.7%. Three months after SRS, no significant change in

  18. Protocol for optimal quality and quantity pollen DNA isolation from honey samples.

    PubMed

    Lalhmangaihi, Ralte; Ghatak, Souvik; Laha, Ramachandra; Gurusubramanian, Guruswami; Kumar, Nachimuthu Senthil

    2014-12-01

    The present study illustrates an optimized sample preparation method for efficient DNA isolation from small quantities of honey samples. A conventional PCR-based method was validated, which potentially enables characterization of plant species from as little as 3 ml of bee honey. In the present study, an anionic detergent was used to lyse the hard outer pollen shell, and DTT was used for isolation of thiolated DNA, as it may facilitate protein digestion, assist in releasing the DNA into solution, and reduce cross-links between DNA and other biomolecules. Both the quantity of honey sample and the duration of DNA isolation were optimized during development of this method. With the use of this method, chloroplast DNA was successfully PCR amplified and sequenced from honey DNA samples. PMID:25365793

  19. Optimal number of samples to test for institutional respiratory infection outbreaks in Ontario.

    PubMed

    Peci, A; Marchand-Austin, A; Winter, A-L; Winter, A-J; Gubbay, J B

    2013-08-01

    The objective of this study was to determine the optimal number of respiratory samples per outbreak to be tested for institutional respiratory outbreaks in Ontario. We reviewed respiratory samples tested for respiratory viruses by multiplex PCR as part of outbreak investigations. We documented outbreaks that were positive for any respiratory viruses and for influenza alone. At least one virus was detected in 1454 (85.2%) outbreaks. The ability to detect influenza or any respiratory virus increased as the number of samples tested increased. When analysed by chronological order of when samples were received at the laboratory, percent positivity of outbreaks testing positive for any respiratory virus including influenza increased with the number of samples tested up to the ninth sample, with minimal benefit beyond the fourth sample tested. Testing up to four respiratory samples per outbreak was sufficient to detect viral organisms and resulted in significant savings for outbreak investigations. PMID:23146341
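
    The "diminishing returns after about four samples" pattern can be reproduced with a simple cumulative-detection calculation, assuming each sample from a truly viral outbreak is independently positive with some fixed probability. The per-sample positivity used below is a hypothetical value, not a figure from the study.

```python
def min_samples(per_sample_positivity, target=0.95, n_max=20):
    """Smallest number of samples giving at least `target` probability of
    >= 1 positive result, assuming independent samples with a fixed
    per-sample positivity (an illustrative simplification)."""
    for n in range(1, n_max + 1):
        p_detect = 1 - (1 - per_sample_positivity) ** n
        if p_detect >= target:
            return n, p_detect
    return n_max, 1 - (1 - per_sample_positivity) ** n_max

print(min_samples(0.55))   # -> (4, 0.959...): little is gained beyond the 4th sample
```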

  20. XAFSmass: a program for calculating the optimal mass of XAFS samples

    NASA Astrophysics Data System (ADS)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  1. Optimization of dielectrophoretic separation and concentration of pathogens in complex biological samples

    NASA Astrophysics Data System (ADS)

    Bisceglia, E.; Cubizolles, M.; Mallard, F.; Pineda, F.; Francais, O.; Le Pioufle, B.

    2013-05-01

    Sample preparation is a key issue in modern analytical methods for in vitro diagnostics of diseases with microbiological origins: methods to separate bacteria from other elements of the complex biological samples are of great importance. In the present study, we investigated the DEP force as a way to perform such a de-complexification of the sample by extracting micro-organisms from a complex biological sample under a highly non-uniform electric field in a micro-system based on an interdigitated electrode array. Different parameters were investigated to optimize the capture efficiency, such as the size of the gap between the electrodes and the height of the capture channel. These parameters are decisive for the distribution of the electric field inside the separation chamber. To optimize these relevant parameters, we performed numerical simulations using COMSOL Multiphysics and correlated them with experimental results. The capture efficiency of the device was first optimized on a micro-organism suspension and then investigated on human blood samples spiked with micro-organisms, thereby mimicking real biological samples.

  2. Optimization of Sampling Positions for Measuring Ventilation Rates in Naturally Ventilated Buildings Using Tracer Gas

    PubMed Central

    Shen, Xiong; Zong, Chao; Zhang, Guoqiang

    2012-01-01

    Finding the optimal sampling positions for measurement of ventilation rates in a naturally ventilated building using tracer gas is a challenge. Affected by the wind and the opening status, the representative positions inside the building may change dynamically at any time. An optimization procedure using the Response Surface Methodology (RSM) was conducted. In this method, the concentration field inside the building was estimated by a third-order RSM polynomial model. The experimental sampling positions to develop the model were chosen from the cross-section area of a pitched-roof building. The Optimal Design method, which can decrease the bias of the model, was adopted to select these sampling positions. Experiments with a scale model building were conducted in a wind tunnel to obtain observed values at those positions. Finally, models for different opening states and wind conditions were established, and the optimum sampling position was obtained with a desirability level of up to 92% inside the model building. The optimization was further confirmed by another round of experiments.
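
    The core step, fitting a third-order polynomial response surface to concentration data over a cross-section, amounts to a least-squares fit on a cubic design matrix. The sketch below uses synthetic, hypothetical positions and concentrations purely to illustrate the fitting and the search for a representative position; it is not the paper's experimental setup.

```python
import numpy as np

def cubic_design_matrix(y, z):
    """Full third-order polynomial terms in two cross-section coordinates:
    1, y, z, y^2, yz, z^2, y^3, y^2 z, y z^2, z^3."""
    return np.column_stack([
        np.ones_like(y), y, z, y**2, y*z, z**2, y**3, y**2*z, y*z**2, z**3
    ])

# Hypothetical sampled positions and tracer-gas concentrations (ppm).
rng = np.random.default_rng(0)
y = rng.uniform(0, 10, 25)          # horizontal position, m
z = rng.uniform(0, 4, 25)           # height, m
c = 400 + 30*y - 5*z**2 + rng.normal(0, 2, 25)

coef, *_ = np.linalg.lstsq(cubic_design_matrix(y, z), c, rcond=None)

# Predict the field on a grid and pick the position whose prediction is
# closest to the cross-sectional mean (a candidate "representative" point).
gy, gz = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 4, 20))
pred = cubic_design_matrix(gy.ravel(), gz.ravel()) @ coef
idx = np.argmin(np.abs(pred - pred.mean()))
print(gy.ravel()[idx], gz.ravel()[idx])
```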

  3. Optimal sample preparation conditions for the determination of uranium in biological samples by kinetic phosphorescence analysis (KPA).

    PubMed

    Ejnik, J W; Hamilton, M M; Adams, P R; Carmichael, A J

    2000-12-15

    Kinetic phosphorescence analysis (KPA) is a proven technique for rapid, precise, and accurate determination of uranium in aqueous solutions. Uranium analysis of biological samples requires dry-ashing in a muffle furnace between 400 and 600 degrees C followed by wet-ashing with concentrated nitric acid and hydrogen peroxide to digest the organic component in the sample that interferes with uranium determination by KPA. The optimal dry-ashing temperature was determined to be 450 degrees C. At dry-ashing temperatures greater than 450 degrees C, uranium loss was attributed to vaporization. High temperatures also caused increased background values that were attributed to uranium leaching from the glass vials. Dry-ashing temperatures less than 450 degrees C resulted in samples requiring additional wet-ashing steps. The recovery of uranium in urine samples was 99.2+/-4.02% between spiked concentrations of 1.98-1980 ng (0.198-198 microg l(-1)) uranium, whereas the recovery in whole blood was 89.9+/-7.33% between the same spiked concentrations. The limit of quantification at which uranium in urine and blood could be accurately measured above the background was determined to be 0.05 and 0.6 microg l(-1), respectively. PMID:11130202

  4. Optimization of low-background alpha spectrometers for analysis of thick samples.

    PubMed

    Misiaszek, M; Pelczar, K; Wójcik, M; Zuzel, G; Laubenstein, M

    2013-11-01

    Results of alpha spectrometric measurements performed deep underground and above ground, with and without an active veto, show that the underground measurement of thick samples is the most sensitive method, owing to the significant reduction of the muon-induced background. In addition, polonium diffusion requires, for some samples, an appropriate selection of the energy region in the registered spectrum. On the basis of computer simulations, the best counting conditions are selected for a thick lead sample in order to optimize the detection limit. PMID:23628514

  5. Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case

    PubMed Central

    Schmerling, Edward; Janson, Lucas; Pavone, Marco

    2015-01-01

    Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041

  6. Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples

    NASA Astrophysics Data System (ADS)

    Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav

    2014-05-01

    Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in water with the ultra-low background liquid scintillation spectrometer Quantulus 1220, we optimized the sample/scintillant ratio, chose an appropriate scintillation cocktail by comparing candidates in terms of efficiency, background and minimal detectable activity (MDA), and examined the effects of chemi- and photoluminescence and of the scintillant/vial combination. The ASTM D4107-08 (2006) method had been applied successfully in our laboratory for two years. During our last sample preparation, we noticed a serious quench effect in the sample count rates that could be a consequence of possible contamination by DMSO. The goal of this paper is to present the development in our laboratory of the direct method proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than the ASTM method, while we deal with the problem of neutralizing DMSO in the apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. To test the optimization of the system for this method, the tritium level was determined in Danube river samples and in several samples within an intercomparison with the Ruđer Bošković Institute (IRB).

  7. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands fine control from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those results relative to parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
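
    The energy-window optimization can be illustrated by scanning window bounds over binned spectra and keeping the window that maximizes a common Figure of Merit, FOM = E^2/B (E = counting efficiency in the window, B = background in the window). The spectra below are synthetic placeholders, not Tri-Carb data, and the FOM definition is a standard assumption rather than a detail given in the abstract.

```python
import numpy as np

def best_window(eff_spectrum, bkg_spectrum):
    """Scan all [lo, hi) channel windows and return the one maximizing
    FOM = E^2 / B, where E is the summed efficiency in the window and
    B the summed background count rate in the window."""
    n = len(eff_spectrum)
    best = (0.0, 0, n)
    for lo in range(n):
        for hi in range(lo + 1, n + 1):
            e = eff_spectrum[lo:hi].sum()
            b = bkg_spectrum[lo:hi].sum()
            if b > 0 and e * e / b > best[0]:
                best = (e * e / b, lo, hi)
    return best

# Hypothetical binned data: per-channel detection efficiency for the beta
# emitter of interest and per-channel background rate (counts per minute).
rng = np.random.default_rng(1)
channels = np.arange(100)
eff = np.exp(-0.5 * ((channels - 30) / 12.0) ** 2) * 0.01
bkg = 0.02 + 0.001 * rng.random(100)
fom, lo, hi = best_window(eff, bkg)
print(f"optimal window: channels {lo}-{hi}, FOM = {fom:.3f}")
```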

  8. Sampling design optimization for multivariate soil mapping, case study from Hungary

    NASA Astrophysics Data System (ADS)

    Szatmári, Gábor; Pásztor, László; Barta, Károly

    2014-05-01

    Direct observations of the soil are important for two main reasons in Digital Soil Mapping (DSM). First, they are used to characterize the relationship between the soil property of interest and the auxiliary information. Second, they are used to improve the predictions based on the auxiliary information. Hence there is a strong need to elaborate a well-established soil sampling strategy, based on geostatistical tools, prior knowledge and available resources, before the samples are actually collected from the area of interest. Fieldwork and laboratory analyses are the most expensive and labor-intensive part of DSM, while the collected samples and the measured data have a remarkable influence on the spatial predictions and their uncertainty. Numerous sampling strategy optimization techniques have been developed in the past decades. One of these optimization techniques is Spatial Simulated Annealing (SSA), which has frequently been used in soil surveys to minimize the average universal kriging variance. The benefit of the technique is that the surveyor can optimize the sampling design for a fixed number of observations, taking auxiliary information, previously collected samples and inaccessible areas into account. The requirements are a known form of the regression model and the spatial structure of the residuals of the model. Another restriction is that the technique can optimize the sampling design for just one target soil variable. However, in practice a soil survey usually aims to describe the spatial distribution of not just one but several pedological variables. In the present paper we present a procedure, developed in R code, to simultaneously optimize the sampling design by SSA for two soil variables, using the spatially averaged universal kriging variance as the optimization criterion. Soil Organic Matter (SOM) content and rooting depth were chosen for this purpose. The methodology is illustrated with a legacy data set from a study area in Central Hungary. Legacy soil
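
    A minimal sketch of the SSA idea follows: perturb one sample location at a time and accept the candidate design with a temperature-controlled Metropolis rule when it lowers the mean kriging variance over a prediction grid. For brevity it uses a simple-kriging variance under an assumed exponential covariance as a stand-in for the (two-variable) universal kriging variance criterion of the paper; all parameter values are placeholders.

```python
import numpy as np

def cov(h, sill=1.0, rng_=30.0):
    """Exponential covariance model (assumed for illustration)."""
    return sill * np.exp(-h / rng_)

def mean_kriging_variance(design, grid, sill=1.0, rng_=30.0):
    """Average simple-kriging variance over the prediction grid for a design."""
    d_ss = np.linalg.norm(design[:, None, :] - design[None, :, :], axis=-1)
    C = cov(d_ss, sill, rng_) + 1e-9 * np.eye(len(design))
    d_sg = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=-1)
    c0 = cov(d_sg, sill, rng_)                     # (n_grid, n_design)
    w = np.linalg.solve(C, c0.T)                   # kriging weights
    return (sill - np.einsum('ij,ji->i', c0, w)).mean()

def ssa(grid, n_samples=15, n_iter=3000, temp0=0.05, jitter=10.0, seed=0):
    """Spatial Simulated Annealing: move one sample at a time, accept worse
    designs with a probability that shrinks as the temperature cools."""
    rng = np.random.default_rng(seed)
    lo, hi = grid.min(0), grid.max(0)
    design = rng.uniform(lo, hi, size=(n_samples, 2))
    crit = mean_kriging_variance(design, grid)
    for it in range(n_iter):
        temp = max(temp0 * (1 - it / n_iter), 1e-12)
        cand = design.copy()
        k = rng.integers(n_samples)
        cand[k] = np.clip(cand[k] + rng.normal(0, jitter, 2), lo, hi)
        c_crit = mean_kriging_variance(cand, grid)
        if c_crit < crit or rng.random() < np.exp(-(c_crit - crit) / temp):
            design, crit = cand, c_crit
    return design, crit

# 100 m x 100 m study area discretized to a 20 x 20 prediction grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
design, crit = ssa(grid)
print(f"optimized mean kriging variance: {crit:.4f}")
```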

  9. An Asymptotically-Optimal Sampling-Based Algorithm for Bi-directional Motion Planning

    PubMed Central

    Starek, Joseph A.; Gomez, Javier V.; Schmerling, Edward; Janson, Lucas; Moreno, Luis; Pavone, Marco

    2015-01-01

    Bi-directional search is a widely used strategy to increase the success and convergence rates of sampling-based motion planning algorithms. Yet, few results are available that merge both bi-directional search and asymptotic optimality into existing optimal planners, such as PRM*, RRT*, and FMT*. The objective of this paper is to fill this gap. Specifically, this paper presents a bi-directional, sampling-based, asymptotically-optimal algorithm named Bi-directional FMT* (BFMT*) that extends the Fast Marching Tree (FMT*) algorithm to bidirectional search while preserving its key properties, chiefly lazy search and asymptotic optimality through convergence in probability. BFMT* performs a two-source, lazy dynamic programming recursion over a set of randomly-drawn samples, correspondingly generating two search trees: one in cost-to-come space from the initial configuration and another in cost-to-go space from the goal configuration. Numerical experiments illustrate the advantages of BFMT* over its unidirectional counterpart, as well as a number of other state-of-the-art planners. PMID:27004130

  10. A method to optimize sampling locations for measuring indoor air distributions

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Shen, Xiong; Li, Jianmin; Li, Bingye; Duan, Ran; Lin, Chao-Hsin; Liu, Junjie; Chen, Qingyan

    2015-02-01

    Indoor air distributions, such as the distributions of air temperature, air velocity, and contaminant concentrations, are very important to occupants' health and comfort in enclosed spaces. When point data are collected for interpolation to form field distributions, the sampling locations (the locations of the point sensors) have a significant effect on the time invested, labor costs and accuracy of the field interpolation. This investigation compared two different sampling methods for determining sampling locations: the grid method and the gradient-based method. The two methods were applied to obtain point air parameter data in an office room and in a section of an economy-class aircraft cabin. The point data obtained were then interpolated to form field distributions by the ordinary Kriging method. Our error analysis shows that the gradient-based sampling method yields a 32.6% smaller interpolation error than the grid sampling method. We derived the relationship between the interpolation error and the sampling size (the number of sampling points). According to this relationship, the sampling size has an optimal value, and the maximum useful sampling size is determined by the sensor and system errors. This study recommends the gradient-based sampling method for measuring indoor air distributions.

  11. Optimizing Spatio-Temporal Sampling Designs of Synchronous, Static, or Clustered Measurements

    NASA Astrophysics Data System (ADS)

    Helle, Kristina; Pebesma, Edzer

    2010-05-01

    When sampling spatio-temporal random variables, the cost of a measurement may differ according to the setup of the whole sampling design: static measurements, i.e. repeated measurements at the same location, synchronous measurements or clustered measurements may be cheaper per measurement than completely individual sampling. Such "grouped" measurements may however not be as good as individually chosen ones because of redundancy. Often, the overall cost rather than the total number of measurements is fixed. A sampling design with grouped measurements may allow for a larger number of measurements thus outweighing the drawback of redundancy. The focus of this paper is to include the tradeoff between the number of measurements and the freedom of their location in sampling design optimisation. For simple cases, optimal sampling designs may be fully determined. To predict e.g. the mean over a spatio-temporal field having known covariance, the optimal sampling design often is a grid with density determined by the sampling costs [1, Ch. 15]. For arbitrary objective functions sampling designs can be optimised relocating single measurements, e.g. by Spatial Simulated Annealing [2]. However, this does not allow to take advantage of lower costs when using grouped measurements. We introduce a heuristic that optimises an arbitrary objective function of sampling designs, including static, synchronous, or clustered measurements, to obtain better results at a given sampling budget. Given the cost for a measurement, either within a group or individually, the algorithm first computes affordable sampling design configurations. The number of individual measurements as well as kind and number of grouped measurements are determined. Random locations and dates are assigned to the measurements. Spatial Simulated Annealing is used on each of these initial sampling designs (in parallel) to improve them. In grouped measurements either the whole group is moved or single measurements within the

  12. Sampling of Ostreopsis cf. ovata using artificial substrates: Optimization of methods for the monitoring of benthic harmful algal blooms.

    PubMed

    Jauzein, Cécile; Fricke, Anna; Mangialajo, Luisa; Lemée, Rodolphe

    2016-06-15

    In the framework of monitoring of benthic harmful algal blooms (BHABs), the most commonly reported sampling strategy is based on the collection of macrophytes. However, this methodology has some inherent problems. A potential alternative method uses artificial substrates that collect resuspended benthic cells. The current study defines the main improvements in this technique, through the use of fiberglass screens during a bloom of Ostreopsis cf. ovata. A novel set-up for the deployment of artificial substrates in the field was tested, using an easy clip-in system that helped hold substrates perpendicular to the water flow. An experiment was run to compare the cell collection efficiency of different mesh sizes of fiberglass screens, and the results suggested an optimal porosity of 1-3 mm. The present study further establishes artificial substrates, such as fiberglass screens, as efficient tools for the monitoring and mitigation of BHABs. PMID:27048690

  13. Optimizing Diagnostic Yield for EUS-Guided Sampling of Solid Pancreatic Lesions: A Technical Review

    PubMed Central

    Weston, Brian R.

    2013-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has a higher diagnostic accuracy for pancreatic cancer than other techniques. This article will review the current advances and considerations for optimizing diagnostic yield for EUS-guided sampling of solid pancreatic lesions. Preprocedural considerations include patient history, confirmation of appropriate indication, review of imaging, method of sedation, experience required by the endoscopist, and access to rapid on-site cytologic evaluation. New EUS imaging techniques that may assist with differential diagnoses include contrast-enhanced harmonic EUS, EUS elastography, and EUS spectrum analysis. FNA techniques vary, and multiple FNA needles are now commercially available; however, neither techniques nor available FNA needles have been definitively compared. The need for suction depends on the lesion, and the need for a stylet is equivocal. No definitive endosonographic finding can predict the optimal number of passes for diagnostic yield. Preparation of good smears and communication with the cytopathologist are essential to optimize yield. PMID:23935542

  14. Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

    SciTech Connect

    BILISOLY, ROGER L.; MCKENNA, SEAN A.

    2003-01-01

    Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared through using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a ''stopping rule'' to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site, can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no
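
    The design-comparison idea above can be sketched directly: under an assumed Gaussian-process model, compute the prediction-error covariance on a reference grid for a candidate transect design (a transect being a sequential set of point samples), then summarize it by the sum or the product of its eigenvalues so that designs can be ranked before any samples are taken. Covariance model and spacings below are hypothetical.

```python
import numpy as np

def cov(h, sill=1.0, rng_=25.0):
    return sill * np.exp(-h / rng_)   # assumed exponential covariance

def prediction_error_cov(design, grid, sill=1.0, rng_=25.0):
    """Posterior (prediction-error) covariance of the field on the reference
    grid, conditional on observations at the design points."""
    C_dd = cov(np.linalg.norm(design[:, None] - design[None, :], axis=-1), sill, rng_)
    C_gd = cov(np.linalg.norm(grid[:, None] - design[None, :], axis=-1), sill, rng_)
    C_gg = cov(np.linalg.norm(grid[:, None] - grid[None, :], axis=-1), sill, rng_)
    K = np.linalg.solve(C_dd + 1e-9 * np.eye(len(design)), C_gd.T)
    return C_gg - C_gd @ K

def design_scores(design, grid):
    """The two criteria named in the abstract: sum and (log-)product of the
    eigenvalues of the prediction-error covariance (smaller is better)."""
    eig = np.clip(np.linalg.eigvalsh(prediction_error_cov(design, grid)), 1e-12, None)
    return eig.sum(), np.log(eig).sum()

def transect(start, end, spacing):
    """A straight transect treated as a sequential set of point samples."""
    n = int(np.linalg.norm(np.subtract(end, start)) // spacing) + 1
    return np.linspace(start, end, n)

grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(0, 100, 15),
                                                       np.linspace(0, 100, 15))])
one_transect = transect((0, 50), (100, 50), 10)
two_transects = np.vstack([one_transect, transect((50, 0), (50, 100), 10)])
print(design_scores(one_transect, grid))
print(design_scores(two_transects, grid))   # lower scores -> better coverage
```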

  15. Time-Dependent Selection of an Optimal Set of Sources to Define a Stable Celestial Reference Frame

    NASA Technical Reports Server (NTRS)

    Le Bail, Karine; Gordon, David

    2010-01-01

    Temporal statistical position stability is required for VLBI sources to define a stable Celestial Reference Frame (CRF) and has been studied in many recent papers. This study analyzes the sources from the latest realization of the International Celestial Reference Frame (ICRF2) with the Allan variance, in addition to taking into account the apparent linear motions of the sources. Focusing on the 295 defining sources shows how they represent a good compromise between different criteria, such as statistical stability and sky distribution, while providing a sufficient number of sources, despite the fact that the most stable sources of the entire ICRF2 lie mostly in the Northern Hemisphere. Nevertheless, the selection of a stable set is not unique: studying different solutions (GSF005a and AUG24 from GSFC and OPA from the Paris Observatory) over different time periods (1989.5 to 2009.5 and 1999.5 to 2009.5) leads to selections that can differ in up to 20% of the sources. Improvements in observing, recording, and the network are some of the causes, with the CRF showing better stability over the last decade than over the last twenty years. But this may also be explained by the assumption of stationarity, which is not necessarily valid for some sources.
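
    For readers unfamiliar with the stability metric, a minimal sketch of a (non-overlapping) Allan variance of a source-position time series follows, with the apparent linear motion removed first. The time series is synthetic and hypothetical; it only illustrates how the statistic behaves at different averaging times.

```python
import numpy as np

def allan_variance(series, m):
    """Non-overlapping Allan variance for cluster size m (in samples):
    0.5 * <(ybar_{k+1} - ybar_k)^2> over consecutive block averages of length m."""
    n_clusters = len(series) // m
    if n_clusters < 2:
        return np.nan
    means = series[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Hypothetical source coordinate offsets (mas), one value per session,
# with a small linear drift plus white noise.
rng = np.random.default_rng(2)
t = np.arange(500)
pos = 0.002 * t + rng.normal(0, 0.3, t.size)

# Remove the fitted apparent linear motion before assessing stability.
residual = pos - np.polyval(np.polyfit(t, pos, 1), t)
for m in (1, 2, 4, 8, 16, 32):
    print(m, allan_variance(residual, m))
```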

  16. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    PubMed

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10^6 to 10^0 leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084

  17. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples

    PubMed Central

    Riediger, Irina N.; Hoffmaster, Alex R.; Biondo, Alexander W.; Ko, Albert I.; Stoddard, Robyn A.

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed a strong linearity in a range of concentrations from 10^6 to 10^0 leptospires/mL and lower limits of detection ranging from <1 cell/mL for river water to 36 cells/mL for ultrapure water with E. coli as a carrier. In conclusion, we optimized a method to quantify pathogenic Leptospira in environmental waters (river, pond and sewage) which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden. PMID:27487084

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512

  19. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
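
    A toy Monte Carlo estimate of the single-core sampling probability P is sketched below, modeling the needle-delivery error as an isotropic 3-D Gaussian with the stated 3.5 mm RMS magnitude and the tumor as a sphere aimed at its center. The study used registered 3-D tumor surfaces and finite-length cores, so this simplification is illustrative only.

```python
import numpy as np

def single_core_probability(tumor_radius_mm, rms_error_mm=3.5, n=200_000, seed=3):
    """Monte Carlo probability that one core (treated as a point) lands inside
    a spherical tumor when aiming at its center, with isotropic Gaussian
    needle-delivery error of the given 3-D RMS magnitude."""
    rng = np.random.default_rng(seed)
    sigma = rms_error_mm / np.sqrt(3.0)        # per-axis SD from the 3-D RMS
    err = rng.normal(0.0, sigma, size=(n, 3))
    hits = np.linalg.norm(err, axis=1) <= tumor_radius_mm
    return hits.mean()

for r in (2.0, 4.0, 6.0, 8.0):
    print(f"radius {r} mm: P = {single_core_probability(r):.2f}")
```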

  20. Sampling optimization, at site scale, in contamination monitoring with moss, pine and oak.

    PubMed

    Aboal, J R; Fernández, J A; Carballeira, A

    2001-01-01

    With the aim of optimizing protocols for sampling moss, pine and oak for biomonitoring of atmospheric contamination and also for inclusion in an Environmental Specimen Bank, 50 sampling units of each species were collected from the study area for individual analysis. Levels of Ca, Cu, Fe, Hg, Ni, and Zn in the plants were determined and the distributions of the concentrations studied. In moss samples, the concentrations of Cu, Ni and Zn, considered to be trace pollutants in this species, showed highly variable log-normal distributions; in pine and oak samples only Ni concentrations were log-normally distributed. In addition to analytical error, the two main sources of error found to be associated with making a collective sample were: (1) not carrying out measurements on individual sampling units; and (2) the number of sampling units collected and the corresponding sources of variation (microspatial, age and interindividual). We recommend that a minimum of 30 sampling units be collected when contamination is suspected. PMID:11706804

  1. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    SciTech Connect

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-06-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of the interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality.
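
    A minimal sketch of the voxel-selection step follows: keep every boundary voxel, cluster interior voxels by their influence-matrix rows (their "signatures") with k-means, and draw a fixed fraction from each cluster. The influence matrix, boundary mask, cluster count and sampling rate below are all hypothetical; the paper does not specify which clustering algorithm was used, so k-means is an assumed stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_voxels(influence, boundary_mask, sampling_rate=0.10, n_clusters=50, seed=4):
    """Keep all boundary voxels; cluster interior voxels by their influence-
    matrix rows and draw `sampling_rate` of each cluster's members."""
    rng = np.random.default_rng(seed)
    keep = list(np.flatnonzero(boundary_mask))
    interior = np.flatnonzero(~boundary_mask)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(
        influence[interior])
    for c in range(n_clusters):
        members = interior[labels == c]
        if len(members) == 0:
            continue
        n_pick = min(len(members), max(1, int(round(sampling_rate * len(members)))))
        keep.extend(rng.choice(members, size=n_pick, replace=False))
    return np.array(sorted(keep))

# Hypothetical toy problem: 2000 voxels, 60 beamlets, ~15% boundary voxels.
rng = np.random.default_rng(4)
influence = rng.random((2000, 60))
boundary = rng.random(2000) < 0.15
subset = sample_voxels(influence, boundary)
print(len(subset), "of", influence.shape[0], "voxels retained")
```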

  2. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    SciTech Connect

    JR Bontha; GR Golcar; N Hannigan

    2000-08-29

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-high dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  3. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external or internal coefficient has a negative influence on the sampling level, that the changing rate of the potential market has no significant influence, and that repeat purchase has a positive one. Using logistic and regression analysis, a global sensitivity analysis examines the interaction of all parameters, providing a two-stage method to estimate the impact of the relevant parameters when their values are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  4. Defining Optimal Aerobic Exercise Parameters to Affect Complex Motor and Cognitive Outcomes after Stroke: A Systematic Review and Synthesis

    PubMed Central

    Hasan, S. M. Mahmudul; Rancourt, Samantha N.; Austin, Mark W.; Ploughman, Michelle

    2016-01-01

    Although poststroke aerobic exercise (AE) increases markers of neuroplasticity and protects perilesional tissue, the degree to which it enhances complex motor or cognitive outcomes is unknown. Previous research suggests that timing and dosage of exercise may be important. We synthesized data from clinical and animal studies in order to determine optimal AE training parameters and recovery outcomes for future research. Using predefined criteria, we included clinical trials of stroke of any type or duration and animal studies employing any established models of stroke. Of the 5,259 titles returned, 52 articles met our criteria, measuring the effects of AE on balance, lower extremity coordination, upper limb motor skills, learning, processing speed, memory, and executive function. We found that early-initiated low-to-moderate intensity AE improved locomotor coordination in rodents. In clinical trials, AE improved balance and lower limb coordination irrespective of intervention modality or parameter. In contrast, fine upper limb recovery was relatively resistant to AE. In terms of cognitive outcomes, poststroke AE in animals improved memory and learning, except when training was too intense. However, in clinical trials, combined training protocols more consistently improved cognition. We noted a paucity of studies examining the benefits of AE on recovery beyond cessation of the intervention. PMID:26881101

  5. Defining Optimal Aerobic Exercise Parameters to Affect Complex Motor and Cognitive Outcomes after Stroke: A Systematic Review and Synthesis.

    PubMed

    Hasan, S M Mahmudul; Rancourt, Samantha N; Austin, Mark W; Ploughman, Michelle

    2016-01-01

    Although poststroke aerobic exercise (AE) increases markers of neuroplasticity and protects perilesional tissue, the degree to which it enhances complex motor or cognitive outcomes is unknown. Previous research suggests that timing and dosage of exercise may be important. We synthesized data from clinical and animal studies in order to determine optimal AE training parameters and recovery outcomes for future research. Using predefined criteria, we included clinical trials of stroke of any type or duration and animal studies employing any established models of stroke. Of the 5,259 titles returned, 52 articles met our criteria, measuring the effects of AE on balance, lower extremity coordination, upper limb motor skills, learning, processing speed, memory, and executive function. We found that early-initiated low-to-moderate intensity AE improved locomotor coordination in rodents. In clinical trials, AE improved balance and lower limb coordination irrespective of intervention modality or parameter. In contrast, fine upper limb recovery was relatively resistant to AE. In terms of cognitive outcomes, poststroke AE in animals improved memory and learning, except when training was too intense. However, in clinical trials, combined training protocols more consistently improved cognition. We noted a paucity of studies examining the benefits of AE on recovery beyond cessation of the intervention. PMID:26881101

  6. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty with a case study of flood forecasting uncertainty evaluation based on Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is about 9 times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.

  7. Quality Control Methods for Optimal BCR-ABL1 Clinical Testing in Human Whole Blood Samples

    PubMed Central

    Stanoszek, Lauren M.; Crawford, Erin L.; Blomquist, Thomas M.; Warns, Jessica A.; Willey, Paige F.S.; Willey, James C.

    2014-01-01

    Reliable breakpoint cluster region (BCR)–Abelson (ABL) 1 measurement is essential for optimal management of chronic myelogenous leukemia. There is a need to optimize quality control, sensitivity, and reliability of methods used to measure a major molecular response and/or treatment failure. The effects of room temperature storage time, different primers, and RNA input in the reverse transcription (RT) reaction on BCR-ABL1 and β-glucuronidase (GUSB) cDNA yield were assessed in whole blood samples mixed with K562 cells. BCR-ABL1 was measured relative to GUSB to control for sample loading, and each gene was measured relative to known numbers of respective internal standard molecules to control for variation in quality and quantity of reagents, thermal cycler conditions, and presence of PCR inhibitors. Clinical sample and reference material measurements with this test were concordant with results reported by other laboratories. BCR-ABL1 per 10^3 GUSB values were significantly reduced (P = 0.004) after 48-hour storage. Gene-specific primers yielded more BCR-ABL1 cDNA than random hexamers at each RNA input. In addition, increasing RNA inhibited the RT reaction with random hexamers but not with gene-specific primers. Consequently, the yield of BCR-ABL1 was higher with gene-specific RT primers at all RNA inputs tested, increasing to as much as 158-fold. We conclude that optimal measurement of BCR-ABL1 per 10^3 GUSB in whole blood is obtained when gene-specific primers are used in RT and samples are analyzed within 24 hours after blood collection. PMID:23541592
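
    The normalization described above reduces to simple ratio arithmetic, sketched below under the assumption that each transcript's copy number is estimated from its signal relative to a spiked-in internal standard of known copy number, and that the result is then expressed per 10^3 GUSB copies. Function names and all numerical values are hypothetical.

```python
def molecules(native_signal, internal_std_signal, internal_std_molecules):
    """Copies of the native transcript, estimated from its signal relative to a
    spiked-in internal standard of known copy number (controls for reagent
    quality, thermal cycling conditions and PCR inhibitors)."""
    return native_signal / internal_std_signal * internal_std_molecules

def bcr_abl1_per_1000_gusb(bcr_abl1, gusb):
    """Normalize to GUSB (sample-loading control), reported per 10^3 GUSB."""
    return bcr_abl1 / gusb * 1_000

# Hypothetical measurements from one whole-blood sample.
bcr = molecules(native_signal=1.2e4, internal_std_signal=3.0e4, internal_std_molecules=6e5)
gus = molecules(native_signal=2.5e5, internal_std_signal=2.0e5, internal_std_molecules=6e5)
print(round(bcr_abl1_per_1000_gusb(bcr, gus), 1))   # -> 320.0
```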

  8. Dynamic reconstruction of sub-sampled data using Optimal Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Krol, Jakub; Wynn, Andrew

    2015-11-01

    The Nyquist-Shannon criterion indicates the sample rate necessary to identify information with particular frequency content from a dynamical system. However, in experimental applications such as the interrogation of a flow field using Particle Image Velocimetry (PIV), it may be expensive to obtain data at the desired temporal resolution. To address this problem, we propose a new approach to identify temporal information from undersampled data, using ideas from modal decomposition algorithms such as Dynamic Mode Decomposition (DMD) and Optimal Mode Decomposition (OMD). The novel method takes a vector-valued signal sampled at random time instances (but at a sub-Nyquist rate) and projects it onto a low-order subspace. Subsequently, dynamical characteristics are recovered by iteratively fitting a low-order model to the flow evolution and solving a convex optimization problem. Furthermore, it is shown that constraints may be added to the optimization problem to improve the spatial resolution of missing data points. The methodology is demonstrated on two dynamical systems, a cylinder flow at Re = 60 and the Kuramoto-Sivashinsky equation. In both cases the algorithm correctly identifies the characteristic frequencies and oscillatory structures present in the flow.

  9. Optimization methods for multi-scale sampling of soil moisture and snow in the Southern Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Oroza, C.; Zheng, Z.; Zhang, Z.; Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2015-12-01

    Recent advancements in wireless sensing technologies are enabling real-time application of spatially representative point-scale measurements to model hydrologic processes at the basin scale. A major impediment to the large-scale deployment of these networks is the difficulty of finding representative sensor locations and resilient wireless network topologies in complex terrain. Currently, observatories are structured manually in the field, which provides no metric for the number of sensors required for extrapolation, does not guarantee that point measurements are representative of the basin as a whole, and often produces unreliable wireless networks. We present a methodology that combines LiDAR data, pattern recognition, and stochastic optimization to simultaneously identify representative sampling locations, optimal sensor number, and resilient network topologies prior to field deployment. We compare the results of the algorithm to an existing 55-node wireless snow and soil network at the Southern Sierra Critical Zone Observatory. Existing data show that the algorithm is able to capture a broader range of key attributes affecting snow and soil moisture, defined by a combination of terrain, vegetation and soil attributes, and thus is better suited to basin-wide monitoring. We believe that adopting this structured, analytical approach could improve data quality, increase reliability, and decrease the cost of deployment for future networks.

  10. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
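
    As a simplified stand-in for the iterative, dissimilarity-driven site selection described above (the study itself uses maximum entropy modeling), the sketch below greedily picks the candidate site that is most environmentally dissimilar to those already chosen, in standardized environmental-variable space. The candidate grid and the four environmental variables are invented.

        import numpy as np

        def select_dissimilar_sites(env, n_select):
            """Greedy maximin selection of environmentally dissimilar sites."""
            z = (env - env.mean(axis=0)) / env.std(axis=0)          # standardize each variable
            chosen = [int(np.argmin(np.linalg.norm(z, axis=1)))]    # start near the centroid
            for _ in range(n_select - 1):
                d = np.min(np.linalg.norm(z[:, None, :] - z[chosen][None, :, :], axis=2), axis=1)
                chosen.append(int(np.argmax(d)))                    # farthest from all chosen sites
            return chosen

        # hypothetical candidate 20 km x 20 km sites described by 4 environmental variables
        rng = np.random.default_rng(0)
        env = rng.normal(size=(500, 4))   # temperature, precipitation, elevation, vegetation score
        print(select_dissimilar_sites(env, n_select=8))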

  11. Sampling plan optimization for detection of lithography and etch CD process excursions

    NASA Astrophysics Data System (ADS)

    Elliott, Richard C.; Nurani, Raman K.; Lee, Sung Jin; Ortiz, Luis G.; Preil, Moshe E.; Shanthikumar, J. G.; Riley, Trina; Goodwin, Greg A.

    2000-06-01

    Effective sample planning requires a careful combination of statistical analysis and lithography engineering. In this paper, we present a complete sample planning methodology including baseline process characterization, determination of the dominant excursion mechanisms, and selection of sampling plans and control procedures to effectively detect the yield-limiting excursions with a minimum of added cost. We discuss the results of our novel method in identifying critical dimension (CD) process excursions and present several examples of poly gate Photo and Etch CD excursion signatures. Using these results in a Sample Planning model, we determine the optimal sample plan and statistical process control (SPC) chart metrics and limits for detecting these excursions. The key observations are that there are many different yield-limiting excursion signatures in photo and etch, and that a given photo excursion signature turns into a different excursion signature at etch with different yield and performance impact. In particular, field-to-field variance excursions are shown to have a significant impact on yield. We show how current sampling plan and monitoring schemes miss these excursions and suggest an improved procedure for effective detection of CD process excursions.

  12. Model reduction algorithms for optimal control and importance sampling of diffusions

    NASA Astrophysics Data System (ADS)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.

  13. An S/H circuit with parasitics optimized for IF-sampling

    NASA Astrophysics Data System (ADS)

    Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue

    2016-06-01

    An IF-sampling S/H is presented, which adopts a flip-around structure, bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, which remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).

  14. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    SciTech Connect

    Stemkens, Bjorn; Tijssen, Rob H.N.; Senneville, Baudouin D. de

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling was the least susceptible to undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  15. Optimization of Sample Site Selection Imaging for OSIRIS-REx Using Asteroid Surface Analog Images

    NASA Astrophysics Data System (ADS)

    Tanquary, Hannah E.; Sahr, Eric; Habib, Namrah; Hawley, Christopher; Weber, Nathan; Boynton, William V.; Kinney-Spano, Ellyne; Lauretta, Dante

    2014-11-01

    OSIRIS-REx will return a sample of regolith from the surface of asteroid 101955 Bennu. The mission will obtain high resolution images of the asteroid in order to create detailed maps which will satisfy multiple mission objectives. To select a site, we must (i) identify hazards to the spacecraft and (ii) characterize a number of candidate sites to determine the optimal location for sampling. To further characterize the site, a long-term science campaign will be undertaken to constrain the geologic properties. To satisfy these objectives, the distribution and size of blocks at the sample site and backup sample site must be determined. This will be accomplished through the creation of rock size frequency distribution maps. The primary goal of this study is to optimize the creation of these map products by assessing techniques for counting blocks on small bodies, and assessing the methods of analysis of the resulting data. We have produced a series of simulated surfaces of Bennu which have been imaged, and the images processed to simulate Polycam images during the Reconnaissance phase. These surface analog images allow us to explore a wide range of imaging conditions, both ideal and non-ideal. The images have been “degraded”, and are displayed as thumbnails representing the limits of Polycam resolution from an altitude of 225 meters. Specifically, this study addresses the mission requirement that the rock size frequency distribution of regolith grains < 2 cm in longest dimension must be determined for the sample sites during Reconnaissance. To address this requirement, we focus on the range of available lighting angles. Varying illumination and phase angles in the simulated images, we can compare the size-frequency distributions calculated from the degraded images with the known size frequency distributions of the Bennu simulant material, and thus determine the optimum lighting conditions for satisfying the 2 cm requirement.

  16. Quality assessment and optimization of purified protein samples: why and how?

    PubMed

    Raynal, Bertrand; Lenormand, Pascal; Baron, Bruno; Hoos, Sylviane; England, Patrick

    2014-01-01

    Purified protein quality control is the final and critical check-point of any protein production process. Unfortunately, it is too often overlooked and performed hastily, resulting in irreproducible and misleading observations in downstream applications. In this review, we aim at proposing a simple-to-follow workflow based on an ensemble of widely available physico-chemical technologies, to assess sequentially the essential properties of any protein sample: purity and integrity, homogeneity and activity. Approaches are then suggested to optimize the homogeneity, time-stability and storage conditions of purified protein preparations, as well as methods to rapidly evaluate their reproducibility and lot-to-lot consistency. PMID:25547134

  17. Optimizing the Operating Temperature for an array of MOX Sensors on an Open Sampling System

    NASA Astrophysics Data System (ADS)

    Trincavelli, M.; Vergara, A.; Rulkov, N.; Murguia, J. S.; Lilienthal, A.; Huerta, R.

    2011-09-01

    Chemo-resistive transduction is essential for capturing the spatio-temporal structure of chemical compounds dispersed in different environments. Due to gas dispersion mechanisms, namely diffusion, turbulence and advection, the sensors in an open sampling system, i.e. directly exposed to the environment to be monitored, are exposed to low concentrations of gases with many fluctuations, which makes the identification and monitoring of the gases even more complicated and challenging than in a controlled laboratory setting. Therefore, tuning the value of the operating temperature becomes crucial for successfully identifying and monitoring the pollutant gases, particularly in applications such as exploration of hazardous areas, air pollution monitoring, and search and rescue. In this study we demonstrate the benefit of optimizing the sensor's operating temperature when the sensors are deployed in an open sampling system, i.e. directly exposed to the environment to be monitored.

  18. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm, helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
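
    As a toy illustration of the bias-variance trade-off discussed in this entry, the Python sketch below fits polynomial models of increasing order to a one-dimensional synthetic overlay signature measured with noise on repeated "wafers", and reports squared bias versus wafer-to-wafer model variance. The signature, noise level, and 1-D geometry are assumptions; the paper's field-by-field wafer models are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(-1, 1, 25)                   # measurement site positions (1-D toy wafer)
        true_ovl = 3*x - 2*x**3 + 0.5*np.sin(4*x)    # hypothetical systematic overlay signature

        def fit_predict(order, y):
            return np.polyval(np.polyfit(x, y, order), x)

        n_wafers = 200
        for order in range(1, 10):
            preds = np.array([fit_predict(order, true_ovl + rng.normal(0, 0.3, x.size))
                              for _ in range(n_wafers)])          # refit on each noisy wafer
            bias2 = np.mean((preds.mean(axis=0) - true_ovl)**2)   # signature missed by the model
            var = np.mean(preds.var(axis=0))                      # wafer-to-wafer instability
            print(f"order {order}: bias^2 = {bias2:.4f}, variance = {var:.4f}")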

  19. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
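
    The core rejection-sampling step described in this entry can be sketched schematically as follows (Python). The cross-section function, temperature, and units are illustrative assumptions only; this is not the Serpent 2 implementation, and the majorant here is simply a hand-picked upper bound on the toy cross section.

        import numpy as np

        rng = np.random.default_rng(2)

        def sigma_0K(v_rel):
            """Hypothetical 0 K total cross section as a function of relative speed (barns)."""
            return 5.0 + 20.0 / (1.0 + (v_rel / 2200.0)**2)

        def sample_collision(v_neutron, vel_sd_target, sigma_majorant, max_tries=10000):
            """Rejection-sample a target velocity: accept when the target-at-rest cross
            section, evaluated at the neutron-target relative speed, passes the majorant test."""
            for _ in range(max_tries):
                v_target = rng.normal(0.0, vel_sd_target, size=3)   # Maxwellian target velocity (toy units)
                v_rel = np.linalg.norm(v_neutron - v_target)
                if rng.uniform() < sigma_0K(v_rel) / sigma_majorant:  # accept, otherwise resample
                    return v_target, sigma_0K(v_rel)
            raise RuntimeError("majorant too tight or too many rejections")

        v_t, s = sample_collision(v_neutron=np.array([2200.0, 0.0, 0.0]),
                                  vel_sd_target=300.0, sigma_majorant=30.0)
        print("accepted target velocity:", v_t, " cross section:", s)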

  20. Optimization of arsenic extraction in rice samples by Plackett-Burman design and response surface methodology.

    PubMed

    Ma, Li; Wang, Lin; Tang, Jie; Yang, Zhaoguang

    2016-08-01

    Statistical experimental designs were employed to optimize the extraction conditions for arsenic species (As(III), As(V), monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA)) in paddy rice by a simple solvent extraction using water as an extraction reagent. The effects of the variables were estimated by a two-level Plackett-Burman factorial design. A five-level central composite design was subsequently employed to optimize the significant factors. Based on the desirability function, the optimal settings of the significant factors were confirmed as 60 min of shaking time and 85°C of extraction temperature, as a compromise between experimental time and extraction efficiency. The analytical performances, such as linearity, method detection limits, relative standard deviation and recovery, were examined, and these data exhibited a broad linear range, high sensitivity and good precision. The proposed method was applied to real rice samples. The species As(III), As(V) and DMA were detected in all the rice samples, mostly in the order As(III)>As(V)>DMA. PMID:26988503

  1. Optimization of multi-channel neutron focusing guides for extreme sample environments

    NASA Astrophysics Data System (ADS)

    Di Julio, D. D.; Lelièvre-Berna, E.; Courtois, P.; Andersen, K. H.; Bentley, P. M.

    2014-07-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
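
    Particle-swarm optimization, one of the population-based algorithms named in this entry, can be illustrated with the minimal loop below (Python). The objective function and parameter bounds are placeholders; in the guide-design problem the figure of merit would come from the Monte Carlo ray-tracing simulation rather than this analytic stand-in.

        import numpy as np

        rng = np.random.default_rng(3)

        def objective(params):
            """Stand-in figure of merit to minimize (e.g., negative neutron gain)."""
            return np.sum((params - np.array([1.5, -0.7, 0.3]))**2)

        def particle_swarm(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # positions
            v = np.zeros_like(x)                                   # velocities
            pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
                v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)    # inertia + cognitive + social pull
                x = np.clip(x + v, lo, hi)
                vals = np.array([obj(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()

        bounds = np.array([[-5, 5], [-5, 5], [-5, 5]], dtype=float)
        print(particle_swarm(objective, bounds))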

  2. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
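
    The preposterior idea sketched in this entry, marginalizing a filter-based utility over data values that have not been collected yet, can be illustrated with the toy Python code below. The forward model, prediction quantity, ensemble size, and noise level are all invented; this is only the general pattern (ensemble weighting over synthetic data sets), not the PreDIA method itself.

        import numpy as np

        rng = np.random.default_rng(4)

        n_ens = 2000
        theta = rng.normal(0.0, 1.0, n_ens)                 # prior ensemble of the uncertain parameter
        locations = np.linspace(0.1, 1.0, 10)               # candidate measurement locations
        prediction = theta**3                                # quantity whose uncertainty we want to reduce
        noise_sd = 0.1

        def forward(theta, x):
            """Toy forward model mapping parameters to observables at locations x."""
            return np.outer(theta, x) + 0.2*np.outer(theta**2, x**2)

        def expected_posterior_var(design, n_synth=200):
            """Average, over data sets drawn from the prior predictive, of the
            importance-weighted (bootstrap-filter-like) posterior prediction variance."""
            H = forward(theta, locations[design])
            total = 0.0
            for j in rng.choice(n_ens, n_synth, replace=False):
                y = H[j] + rng.normal(0.0, noise_sd, H.shape[1])       # one plausible data set
                logw = -0.5*np.sum((H - y)**2, axis=1)/noise_sd**2     # Gaussian likelihood weights
                w = np.exp(logw - logw.max()); w /= w.sum()
                mean = np.sum(w*prediction)
                total += np.sum(w*(prediction - mean)**2)
            return total / n_synth

        for d in [np.array([0]), np.array([4]), np.array([9]), np.array([0, 9])]:
            print("measure at", locations[d], "->", expected_posterior_var(d))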

  3. Optimization of a pre-MEKC separation SPE procedure for steroid molecules in human urine samples.

    PubMed

    Olędzka, Ilona; Kowalski, Piotr; Dziomba, Szymon; Szmudanowski, Piotr; Bączek, Tomasz

    2013-01-01

    Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop a new, relatively low-cost and rapid implementation methodology for their determination in biological samples. Due to the fact that there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes a running buffer (pH 9.2), composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers, both men and women (students and amateur bodybuilders, with and without steroid doping). The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples. PMID:24232737

  4. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.

    PubMed

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène

    2016-06-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is rather 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. PMID:26998570

  5. Motivational predictors of psychometrically-defined schizotypy in a non-clinical sample: goal process representation, approach-avoid temperament, and aberrant salience.

    PubMed

    Karoly, Paul; Jung Mun, Chung; Okun, Morris

    2015-03-30

    Patterns of problematic volitional control in schizotypal personality disorder pertaining to goal process representation (GPR), approach and avoidance temperament, and aberrant salience have not been widely investigated in emerging adults. The present study aimed to provide preliminary evidence for the utility of examining these three motivational constructs as predictors of high versus low levels of psychometrically-defined schizotypy in a non-clinical sample. When college students with high levels of self-reported schizotypy (n = 88) were compared to those with low levels (n = 87) by means of logistic regression, aberrant salience, avoidant temperament, and the self-criticism component of GPR together accounted for 51% of the variance in schizotypy group assignment. Higher scores on these three motivational dimensions reflected a proclivity toward higher levels of schizotypy. The current findings justify the continued exploration of goal-related constructs as useful motivational elements in psychopathology research. PMID:25638536

  6. [Optimization of processing and storage of clinical samples to be used for the molecular diagnosis of pertussis].

    PubMed

    Pianciola, L; Mazzeo, M; Flores, D; Hozbor, D

    2010-01-01

    Pertussis or whooping cough is an acute, highly contagious respiratory infection, which is particularly severe in infants under one year old. In classic disease, clinical diagnosis may present no difficulties. In other cases, it requires laboratory confirmation. Generally used methods are: culture, serology and PCR. For the latter, the sample of choice is a nasopharyngeal aspirate, and the simplest method for processing these samples uses proteinase K. Although results are generally satisfactory, difficulties often arise regarding the mucosal nature of the specimens. Moreover, uncertainties exist regarding the optimal conditions for sample storage. This study evaluated various technologies for processing and storing samples. Results enabled us to select a method for optimizing sample processing, with performance comparable to commercial methods and far lower costs. The experiments designed to assess the conservation of samples enabled us to obtain valuable information to guide the referral of samples from patient care centres to laboratories where such samples are processed by molecular methods. PMID:20589331

  7. Spatially-Optimized Sequential Sampling Plan for Cabbage Aphids Brevicoryne brassicae L. (Hemiptera: Aphididae) in Canola Fields.

    PubMed

    Severtson, Dustin; Flower, Ken; Nansen, Christian

    2016-08-01

    The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact. PMID:27371709
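
    A generic Wald-type binomial sequential sampling plan, which is the usual statistical basis for plans like the one described in this entry, can be sketched as follows (Python). The hypothesis proportions, error rates, and the simulated infestation levels are assumptions; the spatial stratification and edge-effect handling of the study are not reproduced.

        import numpy as np

        def sprt_decision_lines(p0, p1, alpha=0.1, beta=0.1):
            """Decision lines for a binomial sequential probability ratio test:
            after n plants, treat if the cumulative infested count >= slope*n + upper,
            stop (no treatment) if <= slope*n + lower, otherwise keep sampling."""
            log_ratio = np.log(p1/p0) - np.log((1-p1)/(1-p0))
            slope = -np.log((1-p1)/(1-p0)) / log_ratio
            upper = np.log((1-beta)/alpha) / log_ratio
            lower = np.log(beta/(1-alpha)) / log_ratio
            return slope, lower, upper

        def classify(counts, p0, p1, alpha=0.1, beta=0.1):
            slope, lower, upper = sprt_decision_lines(p0, p1, alpha, beta)
            cum = 0
            for n, infested in enumerate(counts, start=1):
                cum += infested
                if cum >= slope*n + upper:
                    return "treat", n
                if cum <= slope*n + lower:
                    return "no treatment", n
            return "keep sampling", len(counts)

        # hypothetical sequences of plant inspections (1 = infested) around a 0.2 threshold
        rng = np.random.default_rng(5)
        print(classify(rng.binomial(1, 0.05, 60), p0=0.15, p1=0.25))
        print(classify(rng.binomial(1, 0.40, 60), p0=0.15, p1=0.25))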

  8. Optimization and image quality assessment of the alpha-image reconstruction algorithm: iterative reconstruction with well-defined image quality metrics

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergej; Sawall, Stefan; Kuchenbecker, Stefan; Faby, Sebastian; Knaup, Michael; Kachelrieß, Marc

    2015-03-01

    The reconstruction of CT images with low noise and highest spatial resolution is a challenging task. Usually, a trade-off between at least these two demands has to be found or several reconstructions with mutually exclusive properties, i.e. either low noise or high spatial resolution, have to be performed. Iterative reconstruction methods might be suitable tools to overcome these limitations and provide images of highest diagnostic quality with formerly mutually exclusive image properties. While image quality metrics like the modulation transfer function (MTF) or the point spread function (PSF) are well-defined in case of standard reconstructions, e.g. filtered backprojection, the iterative algorithms lack these metrics. To overcome this issue alternate methodologies like the model observers have been proposed recently to allow a quantification of a usually task-dependent image quality metric [1]. As an alternative we recently proposed an iterative reconstruction method, the alpha-image reconstruction (AIR), providing well-defined image quality metrics on a per-voxel basis [2]. In particular, the AIR algorithm seeks to find weighting images, the alpha-images, that are used to blend between basis images with mutually exclusive image properties. The result is an image with highest diagnostic quality that provides a high spatial resolution and a low noise level. As the estimation of the alpha-images is computationally demanding we herein aim at optimizing this process and highlight the favorable properties of AIR using patient measurements.
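
    The blending mechanics that the alpha-images implement can be illustrated with the Python sketch below, which combines a sharp/noisy and a smooth/low-noise reconstruction through a per-voxel weighting image. The gradient-based weight used here is a simple heuristic assumption, not the optimized alpha-image of the AIR method, and the phantom data are invented.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def alpha_blend(sharp, smooth, edge_sigma=2.0):
            """Per-voxel blend: weight toward the sharp image where local structure
            (gradient magnitude) is strong, toward the low-noise image in flat regions."""
            gy, gx = np.gradient(gaussian_filter(sharp, edge_sigma))
            edge = np.hypot(gx, gy)
            alpha = edge / (edge.max() + 1e-12)          # weighting image in [0, 1]
            return alpha * sharp + (1.0 - alpha) * smooth, alpha

        # toy phantom: disc plus noise; "sharp" and "smooth" stand in for two basis reconstructions
        rng = np.random.default_rng(6)
        yy, xx = np.mgrid[:128, :128]
        phantom = ((xx - 64)**2 + (yy - 64)**2 < 30**2).astype(float)
        sharp = phantom + rng.normal(0, 0.3, phantom.shape)   # high resolution, high noise
        smooth = gaussian_filter(sharp, 3.0)                  # low noise, low resolution
        blended, alpha = alpha_blend(sharp, smooth)
        print("noise in flat region:", blended[:20, :20].std(), "vs sharp:", sharp[:20, :20].std())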

  9. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.
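
    The filtering loop that this entry builds on can be shown with a generic bootstrap particle filter on a purely numeric toy state-space model (Python). The model, noise levels, and particle count are assumptions; the paper's probabilistic-logic representation and the BDD-compiled proposal distribution are not reproduced, and the proposal below is simply the transition prior.

        import numpy as np

        rng = np.random.default_rng(7)

        # toy model: latent activity level follows a random walk, observed with noise
        T, n_particles = 50, 500
        true_x = np.cumsum(rng.normal(0, 0.3, T))
        obs = true_x + rng.normal(0, 0.5, T)

        particles = rng.normal(0, 1, n_particles)
        estimates = []
        for y in obs:
            particles = particles + rng.normal(0, 0.3, n_particles)   # propose from the transition prior
            weights = np.exp(-0.5*((y - particles)/0.5)**2)            # likelihood weighting
            weights /= weights.sum()
            idx = rng.choice(n_particles, n_particles, p=weights)      # resampling step
            particles = particles[idx]
            estimates.append(particles.mean())

        print("mean filtering error:", np.mean(np.abs(np.array(estimates) - true_x)))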

  10. Dynamic simulation tools for the analysis and optimization of novel collection, filtration and sample preparation systems

    SciTech Connect

    Clague, D; Weisgraber, T; Rockway, J; McBride, K

    2006-02-12

    The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phases) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which among other needs, have a need for high efficiency filtration, collection and sample preparation systems. Hence, the said goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly using Digital Particle Image Velocimetry or published results.

  11. A sampling optimization analysis of soil-bugs diversity (Crustacea, Isopoda, Oniscidea).

    PubMed

    Messina, Giuseppina; Cazzolla Gatti, Roberto; Droutsa, Angeliki; Barchitta, Martina; Pezzino, Elisa; Agodi, Antonella; Lombardo, Bianca Maria

    2016-01-01

    Biological diversity analysis is among the most informative approaches to describe communities and regional species compositions. Soil ecosystems include large numbers of invertebrates, among which soil bugs (Crustacea, Isopoda, Oniscidea) play significant ecological roles. The aim of this study was to provide advice on optimizing the sampling effort, to efficiently monitor the diversity of this taxon, to analyze its seasonal patterns of species composition, and ultimately to understand better the coexistence of so many species over a relatively small area. Terrestrial isopods were collected at the Natural Reserve "Saline di Trapani e Paceco" (Italy), using pitfall traps monitored monthly over 2 years. We analyzed parameters of α- and β-diversity and calculated a number of indexes and measures to disentangle diversity patterns. We also used various approaches to analyze changes in biodiversity over time, such as distributions of species abundances and accumulation and rarefaction curves. In terms of species richness and total abundance of individuals, spring proved to be the best season to monitor Isopoda, to reduce sampling effort, and to save resources without losing information, while in both years abundances were maximum between summer and autumn. This suggests that evaluations of β-diversity are maximized if samples are first collected during the spring and then between summer and autumn. Sampling during these coupled seasons allows collection of a number of species close to the γ-diversity (24 species) of the area. Finally, our results show that seasonal shifts in community composition (i.e., dynamic fluctuations in species abundances during the four seasons) may minimize competitive interactions, contribute to stabilize total abundances, and allow the coexistence of phylogenetically close species within the ecosystem. PMID:26811784
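
    One of the tools mentioned in this entry, the rarefaction curve, can be computed with the short Python sketch below; the species abundance vector is invented for illustration and the subsampling-based estimator shown here is one common way to build such curves.

        import numpy as np

        rng = np.random.default_rng(8)

        def rarefaction_curve(abundances, n_draws=200):
            """Expected species richness versus number of individuals sampled,
            estimated by repeatedly subsampling the pooled abundance vector."""
            pool = np.repeat(np.arange(len(abundances)), abundances)   # one entry per individual
            sizes = np.unique(np.linspace(1, len(pool), 30, dtype=int))
            curve = []
            for m in sizes:
                richness = [len(np.unique(rng.choice(pool, m, replace=False)))
                            for _ in range(n_draws)]
                curve.append(np.mean(richness))
            return sizes, np.array(curve)

        # hypothetical seasonal isopod counts (individuals per species) from pitfall traps
        spring = np.array([120, 60, 33, 20, 12, 8, 5, 3, 2, 2, 1, 1])
        sizes, curve = rarefaction_curve(spring)
        for m, s in zip(sizes[::6], curve[::6]):
            print(f"{m:4d} individuals -> {s:.1f} species expected")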

  12. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
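
    The simulated annealing step used in this entry can be sketched as follows (Python). The spatial-coverage objective, the synthetic candidate and field coordinates, and the cooling schedule are assumptions standing in for the study's road-constrained, terrain-informed criterion.

        import numpy as np

        rng = np.random.default_rng(9)

        candidates = rng.uniform(0, 10, size=(300, 2))   # candidate sampling points (e.g., near roads)
        field = rng.uniform(0, 10, size=(1000, 2))       # field cells to be represented

        def coverage_cost(idx):
            """Mean distance from every field cell to its nearest selected sample point."""
            d = np.linalg.norm(field[:, None, :] - candidates[idx][None, :, :], axis=2)
            return d.min(axis=1).mean()

        def simulated_annealing(k=15, n_iter=2000, t0=1.0, cooling=0.998):
            current = rng.choice(len(candidates), k, replace=False)
            cost, temp = coverage_cost(current), t0
            best, best_cost = current.copy(), cost
            for _ in range(n_iter):
                trial = current.copy()
                trial[rng.integers(k)] = rng.integers(len(candidates))  # move one sample location
                trial_cost = coverage_cost(trial)                       # (duplicates tolerated in this toy)
                if trial_cost < cost or rng.uniform() < np.exp((cost - trial_cost)/temp):
                    current, cost = trial, trial_cost
                    if cost < best_cost:
                        best, best_cost = current.copy(), cost
                temp *= cooling
            return best, best_cost

        best, best_cost = simulated_annealing()
        print("optimized mean nearest-sample distance:", round(best_cost, 3))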

  13. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples.

    PubMed

    Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S

    2016-03-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. PMID:26965231

  14. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    NASA Astrophysics Data System (ADS)

    Saavedra, Y.; González, A.; Fernández, P.; Blanco, J.

    2004-04-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and has been found to perform well.

  15. Small sample training and test selection method for optimized anomaly detection algorithms in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Mindrup, Frank M.; Friend, Mark A.; Bauer, Kenneth W.

    2012-01-01

    There are numerous anomaly detection algorithms proposed for hyperspectral imagery. Robust parameter design (RPD) techniques provide an avenue to select robust settings capable of operating consistently across a large variety of image scenes. Many researchers in this area are faced with a paucity of data. Unfortunately, there are no data splitting methods for model validation of datasets with small sample sizes. Typically, training and test sets of hyperspectral images are chosen randomly. Previous research has developed a framework for optimizing anomaly detection in hyperspectral imagery (HSI) by considering specific image characteristics as noise variables within the context of RPD; these characteristics include the Fisher's score, ratio of target pixels and number of clusters. We have developed a method for selecting hyperspectral image training and test subsets that yields consistent RPD results based on these noise features. These subsets are not necessarily orthogonal, but still provide improvements over random training and test subset assignments by maximizing the volume and average distance between image noise characteristics. The small sample training and test selection method is contrasted with randomly selected training sets as well as training sets chosen from the CADEX and DUPLEX algorithms for the well known Reed-Xiaoli anomaly detector.

  16. Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors.

    PubMed

    Riva-Murray, Karen; Bradley, Paul M; Scudder Eikenberry, Barbara C; Knightes, Christopher D; Journey, Celeste A; Brigham, Mark E; Button, Daniel T

    2013-06-01

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hg(fish)) divided by the water Hg concentration (Hg(water)) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hg(water) sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hg(water) estimates. Models were evaluated for parsimony, using Akaike's Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg-UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics. PMID:23668662

  17. Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors

    USGS Publications Warehouse

    Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.

    2013-01-01

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.
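
    The two quantities at the center of these two entries, the flow-weighted mean water concentration and the bioaccumulation factor, reduce to very simple arithmetic, sketched below in Python. All numbers and units here are invented for illustration.

        import numpy as np

        def flow_weighted_mean(conc, flow):
            """Flow-weighted mean water concentration from discrete samples."""
            return np.sum(conc * flow) / np.sum(flow)

        def bioaccumulation_factor(hg_fish, hg_water):
            """BAF = fish Hg concentration divided by water Hg concentration."""
            return hg_fish / hg_water

        # hypothetical filtered methylmercury samples (ng/L) and streamflow at sampling (m3/s)
        fmehg = np.array([0.05, 0.08, 0.21, 0.17, 0.06])
        flow  = np.array([1.2, 0.9, 3.5, 2.8, 1.0])
        hg_water = flow_weighted_mean(fmehg, flow)            # ng/L
        hg_fish = 250.0                                       # ng/g wet weight in a game fish (assumed)
        print("flow-weighted FMeHg:", round(hg_water, 3), "ng/L")
        print("BAF (fish ng/g over water ng/L):", round(bioaccumulation_factor(hg_fish, hg_water), 1))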

  18. Optimization of the polar organic chemical integrative sampler for the sampling of acidic and polar herbicides.

    PubMed

    Fauvelle, Vincent; Mazzella, Nicolas; Belles, Angel; Moreira, Aurélie; Allan, Ian J; Budzinski, Hélène

    2014-05-01

    This paper presents an optimization of the pharmaceutical Polar Organic Chemical Integrative Sampler (POCIS-200) under controlled laboratory conditions for the sampling of acidic (2,4-dichlorophenoxyacetic acid (2,4-D), acetochlor ethanesulfonic acid (ESA), acetochlor oxanilic acid, bentazon, dicamba, mesotrione, and metsulfuron) and polar (atrazine, diuron, and desisopropylatrazine) herbicides in water. Indeed, the conventional configuration of the POCIS-200 (46 cm(2) exposure window, 200 mg of Oasis® hydrophilic lipophilic balance (HLB) receiving phase) is not appropriate for the sampling of very polar and acidic compounds because they rapidly reach a thermodynamic equilibrium with the Oasis HLB receiving phase. Thus, we investigated several ways to extend the initial linear accumulation. On the one hand, increasing the mass of sorbent to 600 mg resulted in sampling rates (Rs) twice as high as those observed with 200 mg (e.g., 287 vs. 157 mL day(-1) for acetochlor ESA). Although detection limits could thereby be reduced, most acidic analytes followed a biphasic uptake, proscribing the use of the conventional first-order model and preventing us from estimating time-weighted average concentrations. On the other hand, reducing the exposure window (3.1 vs. 46 cm(2)) allowed linear accumulation of all analytes over 35 days, but Rs values were dramatically reduced (e.g., 157 vs. 11 mL day(-1) for acetochlor ESA). Moreover, the observation of biphasic releases of performance reference compounds (PRC), though mirroring the biphasic uptake of the acidic herbicides, might complicate the implementation of the PRC approach to correct for environmental exposure conditions. PMID:24691721
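
    The conventional linear-uptake calculation that the biphasic behavior described above invalidates is worth making explicit; the Python sketch below computes a time-weighted average concentration from the sampling-rate model C_w = N_s / (R_s * t), which holds only in the linear regime. The sorbed mass and deployment length are invented; the sampling rate value is taken loosely from the figures quoted in the abstract.

        def twa_concentration(n_sorbed_ng, rs_ml_per_day, days):
            """Time-weighted average water concentration from a POCIS-type sampler
            in the linear-uptake regime: C_w = N_s / (R_s * t), returned in ng/L."""
            litres_sampled = rs_ml_per_day * days / 1000.0
            return n_sorbed_ng / litres_sampled

        # assumed 35-day deployment with an acetochlor ESA-like sampling rate
        print(twa_concentration(n_sorbed_ng=450.0, rs_ml_per_day=157.0, days=35), "ng/L")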

  19. Defining the Optimal Window for Cranial Transplantation of Human Induced Pluripotent Stem Cell-Derived Cells to Ameliorate Radiation-Induced Cognitive Impairment

    PubMed Central

    Acharya, Munjal M.; Martirosian, Vahan; Christie, Lori-Ann; Riparip, Lara; Strnadel, Jan; Parihar, Vipan K.

    2015-01-01

    Past preclinical studies have demonstrated the capability of using human stem cell transplantation in the irradiated brain to ameliorate radiation-induced cognitive dysfunction. Intrahippocampal transplantation of human embryonic stem cells and human neural stem cells (hNSCs) was found to functionally restore cognition in rats 1 and 4 months after cranial irradiation. To optimize the potential therapeutic benefits of human stem cell transplantation, we have further defined optimal transplantation windows for maximizing cognitive benefits after irradiation and used induced pluripotent stem cell-derived hNSCs (iPSC-hNSCs) that may eventually help minimize graft rejection in the host brain. For these studies, animals given an acute head-only dose of 10 Gy were grafted with iPSC-hNSCs at 2 days, 2 weeks, or 4 weeks following irradiation. Animals receiving stem cell grafts showed improved hippocampal spatial memory and contextual fear-conditioning performance compared with irradiated sham-surgery controls when analyzed 1 month after transplantation surgery. Importantly, superior performance was evident when stem cell grafting was delayed by 4 weeks following irradiation compared with animals grafted at earlier times. Analysis of the 4-week cohort showed that the surviving grafted cells migrated throughout the CA1 and CA3 subfields of the host hippocampus and differentiated into neuronal (∼39%) and astroglial (∼14%) subtypes. Furthermore, radiation-induced inflammation was significantly attenuated across multiple hippocampal subfields in animals receiving iPSC-hNSCs at 4 weeks after irradiation. These studies expand our prior findings to demonstrate that protracted stem cell grafting provides improved cognitive benefits following irradiation that are associated with reduced neuroinflammation. PMID:25391646

  20. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    SciTech Connect

    Nie Xiaobo; Liang Jian; Yan Di

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven head and neck cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during head and neck cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for h
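
    The PCA-plus-random-coefficient pattern described in this entry can be sketched in a few lines of Python. The synthetic "daily organ surfaces" below are invented, and the coefficients are resampled from a simple Gaussian fit rather than the time-varying least-squares regression used in the study.

        import numpy as np

        rng = np.random.default_rng(10)

        # toy daily organ surfaces: each row is a flattened set of surface-point coordinates
        n_days, n_coords = 30, 3 * 200
        base = rng.normal(0, 1, n_coords)
        modes = rng.normal(0, 1, (3, n_coords))
        days = np.arange(n_days)
        coeff_true = np.column_stack([2.0*np.sin(days/5.0), 1.0 + 0.03*days,
                                      rng.normal(0, 0.5, n_days)])
        observed = base + coeff_true @ modes + rng.normal(0, 0.05, (n_days, n_coords))

        # PCA of the observed geometric variation
        mean_shape = observed.mean(axis=0)
        U, s, Vt = np.linalg.svd(observed - mean_shape, full_matrices=False)
        eigvecs = Vt[:3]                                    # dominant variation modes
        coeffs = (observed - mean_shape) @ eigvecs.T

        # organ sample generator: draw random coefficients from the fitted distribution
        mu, cov = coeffs.mean(axis=0), np.cov(coeffs.T)
        samples = rng.multivariate_normal(mu, cov, size=5) @ eigvecs + mean_shape
        print("generated organ samples:", samples.shape)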

  1. Quality analysis of salmon calcitonin in a polymeric bioadhesive pharmaceutical formulation: sample preparation optimization by DOE.

    PubMed

    D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart

    2010-12-01

    A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer. PMID:20655159

  2. Population pharmacokinetics of mycophenolic acid and dose optimization with limited sampling strategy in liver transplant children

    PubMed Central

    Barau, Caroline; Furlan, Valérie; Debray, Dominique; Taburet, Anne-Marie; Barrail-Tran, Aurélie

    2012-01-01

    AIMS The aims were to estimate the mycophenolic acid (MPA) population pharmacokinetic parameters in paediatric liver transplant recipients, to identify the factors affecting MPA pharmacokinetics and to develop a limited sampling strategy to estimate individual MPA AUC(0,12 h). METHODS Twenty-eight children, 1.1 to 18.0 years old, received oral mycophenolate mofetil (MMF) therapy combined with either tacrolimus (n= 23) or ciclosporin (n= 5). The population parameters were estimated from a model-building set of 16 intensive pharmacokinetic datasets obtained from 16 children. The data were analyzed by nonlinear mixed effect modelling, using a one compartment model with first order absorption and first order elimination and random effects on the absorption rate (ka), the apparent volume of distribution (V/F) and apparent clearance (CL/F). RESULTS Two covariates, time since transplantation (≤ and >6 months) and age affected MPA pharmacokinetics. ka, estimated at 1.7 h−1 at age 8.7 years, exhibited large interindividual variability (308%). V/F, estimated at 64.7 l, increased about 2.3 times in children during the immediate post transplantation period. This increase was due to the increase in the unbound MPA fraction caused by the low albumin concentration. CL/F was estimated at 12.7 l h−1. To estimate individual AUC(0,12 h), the pharmacokinetic parameters obtained with the final model, including covariates, were coded in Adapt II® software, using the Bayesian approach. The AUC(0,12 h) estimated from concentrations measured 0, 1 and 4 h after administration of MMF did not differ from reference values. CONCLUSIONS This study allowed the estimation of the population pharmacokinetic MPA parameters. A simple sampling procedure is suggested to help to optimize pediatric patient care. PMID:22329639
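
    A rough sketch of the Bayesian limited-sampling idea underlying this entry is given below in Python. The population values ka = 1.7 h-1, V/F = 64.7 L and CL/F = 12.7 L/h are taken from the abstract, but the prior variances, residual error model, dose, observed concentrations, and the AUC(0,inf) = dose/CL surrogate (assuming complete availability) are all assumptions; the study itself used Adapt II with a covariate model, which is not reproduced here.

        import numpy as np
        from scipy.optimize import minimize

        def conc(t, dose, ka, V, CL):
            """One-compartment model, first-order absorption and elimination (oral dose)."""
            ke = CL / V
            return dose*ka/(V*(ka - ke)) * (np.exp(-ke*t) - np.exp(-ka*t))

        pop = {"ka": 1.7, "V": 64.7, "CL": 12.7}       # population values from the abstract
        omega = {"ka": 1.0, "V": 0.4, "CL": 0.3}       # log-scale prior SDs (assumed)
        sigma = 0.5                                     # residual error SD, mg/L (assumed)

        def map_estimate(times, observations, dose):
            """Maximum a posteriori individual parameters from a limited sampling scheme."""
            def neg_log_post(logp):
                ka, V, CL = np.exp(logp)
                pred = conc(times, dose, ka, V, CL)
                loglik = -0.5*np.sum(((observations - pred)/sigma)**2)
                logprior = -0.5*sum(((logp[i] - np.log(pop[k]))/omega[k])**2
                                    for i, k in enumerate(("ka", "V", "CL")))
                return -(loglik + logprior)
            start = np.log([pop["ka"], pop["V"], pop["CL"]])
            res = minimize(neg_log_post, start, method="Nelder-Mead")
            ka, V, CL = np.exp(res.x)
            return {"ka": ka, "V": V, "CL": CL, "AUC": dose / CL}

        # limited sampling: concentrations (mg/L) at 0, 1 and 4 h after an assumed 600 mg dose
        print(map_estimate(np.array([0.0, 1.0, 4.0]), np.array([0.1, 6.0, 3.2]), dose=600.0))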

  3. Defining the Optimal Selenium Dose for Prostate Cancer Risk Reduction: Insights from the U-Shaped Relationship between Selenium Status, DNA Damage, and Apoptosis.

    PubMed

    Chiang, Emily C; Shen, Shuren; Kengeri, Seema S; Xu, Huiping; Combs, Gerald F; Morris, J Steven; Bostwick, David G; Waters, David J

    2009-01-01

    Our work in dogs has revealed a U-shaped dose response between selenium status and prostatic DNA damage that remarkably parallels the relationship between dietary selenium and prostate cancer risk in men, suggesting that more selenium is not necessarily better. Herein, we extend this canine work to show that the selenium dose that minimizes prostatic DNA damage also maximizes apoptosis, a cancer-suppressing death switch used by prostatic epithelial cells. These provocative findings suggest a new line of thinking about how selenium can reduce cancer risk. Mid-range selenium status (0.67-0.92 ppm in toenails) favors a process we call "homeostatic housecleaning", an upregulated apoptosis that preferentially purges damaged prostatic cells. Also, the U-shaped relationship provides valuable insight into stratifying individuals as selenium-responsive or selenium-refractory, based upon the likelihood of reducing their cancer risk by additional selenium. By studying elderly dogs, the only non-human animal model of spontaneous prostate cancer, we have established a robust experimental approach bridging the gap between laboratory and human studies that can help to define the optimal doses of cancer preventives for large-scale human trials. Moreover, our observations bring much needed clarity to the null results of the Selenium and Vitamin E Cancer Prevention Trial (SELECT) and set a new research priority: testing whether men with low, suboptimal selenium levels less than 0.8 ppm in toenails can achieve cancer risk reduction through daily supplementation. PMID:20877485

  4. Acoustic investigations of lakes as justification for the optimal location of core sampling

    NASA Astrophysics Data System (ADS)

    Krylov, P.; Nourgaliev, D.; Yasonov, P.; Kuzin, D.

    2014-12-01

    Lacustrine sediments contain a long, high-resolution record of sedimentation processes associated with changes in the environment. Paleomagnetic study of the properties of these sediments provides a detailed trace of changes in the paleoenvironment. However, factors such as landslides, earthquakes and the presence of gas in the sediments can disturb sediment stratification. Seismic profiling makes it possible to investigate the bottom relief in detail and to obtain information about the thickness and structure of the deposits, which makes this method ideally suited for determining the configuration of the lake basin and the stratigraphy of the overlying lake sediments. Most seismic studies have concentrated on large and deep lakes containing a thick sedimentary sequence, but small and shallow lakes containing a thinner sedimentary column, located in key geographic locations and geological settings, can also provide a valuable record of Holocene history. Seismic data are crucial when choosing the optimal location for core sampling. Thus, continuous seismic profiling should be carried out routinely before coring lake sediments for paleoclimate reconstruction. We have carried out seismic profiling on lakes Balkhash (Kazakhstan), Yarovoye, Beloe, Aslykul and Chebarkul (Russia). The results of the field work will be presented in the report. The work is performed according to the Russian Government Program of Competitive Growth of Kazan Federal University and is also supported by RFBR research projects No. 14-05-31376-a and 14-05-00785-a.

  5. Monte Carlo optimization of sample dimensions of an 241Am-Be source-based PGNAA setup for water rejects analysis

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup dedicated to in situ analysis of environmental water rejects. The optimal dimensions were obtained from extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. The proposed preliminary setup was validated against measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real in situ analysis conditions by determining the thermal neutron flux perturbations in samples as the chlorine and organic matter concentrations change. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  6. Optimization of the in-needle extraction device for the direct flow of the liquid sample through the sorbent layer.

    PubMed

    Pietrzyńska, Monika; Voelkel, Adam

    2014-11-01

    In-needle extraction was applied to the preparation of aqueous samples. The technique isolates analytes directly from liquid samples by forcing the sample to flow through a sorbent layer of silica or polymer (styrene/divinylbenzene). A specially designed needle was packed with three different sorbents on which the analytes (phenol, p-benzoquinone, 4-chlorophenol, thymol and caffeine) were retained. Acceptable sampling conditions for direct analysis of liquid samples were selected. Experimental data collected from a series of liquid-sample analyses performed with the in-needle device showed that the effectiveness of the system depends on several parameters: the breakthrough volume and sorption capacity, the sampling flow rate, the solvent used in the elution step, and the volume of solvent required for elution. The optimal sampling flow rate was in the range of 0.5-2 mL/min, and the minimum solvent volume was about 400 µL. PMID:25127610

  7. Defining “Normophilic” and “Paraphilic” Sexual Fantasies in a Population‐Based Sample: On the Importance of Considering Subgroups

    PubMed Central

    2015-01-01

    criteria for paraphilia are too inclusive. Suggestions are given to improve the definition of pathological sexual interests, and the crucial difference between SF and sexual interest is underlined. Joyal CC. Defining “normophilic” and “paraphilic” sexual fantasies in a population‐based sample: On the importance of considering subgroups. Sex Med 2015;3:321–330. PMID:26797067

  8. Defining "Development".

    PubMed

    Pradeu, Thomas; Laplane, Lucie; Prévot, Karine; Hoquet, Thierry; Reynaud, Valentine; Fusco, Giuseppe; Minelli, Alessandro; Orgogozo, Virginie; Vervoort, Michel

    2016-01-01

    Is it possible, and in the first place is it even desirable, to define what "development" means and to determine the scope of the field called "developmental biology"? Though these questions appeared crucial for the founders of "developmental biology" in the 1950s, there seems to be no consensus today about the need to address them. Here, in a combined biological, philosophical, and historical approach, we ask whether it is possible and useful to define biological development, and, if such a definition is indeed possible and useful, which definition(s) can be considered as the most satisfactory. PMID:26969977

  9. Sampling Optimization in Pharmacokinetic Bridging Studies: Example of the Use of Deferiprone in Children With β-Thalassemia.

    PubMed

    Bellanti, Francesco; Di Iorio, Vincenzo Luca; Danhof, Meindert; Della Pasqua, Oscar

    2016-09-01

    Despite wide clinical experience with deferiprone, the optimum dosage in children younger than 6 years remains to be established. This analysis aimed to optimize the design of a prospective clinical study for the evaluation of deferiprone pharmacokinetics in children. A 1-compartment model with first-order oral absorption was used for the purposes of the analysis. Different sampling schemes were evaluated under the assumption of a constrained population size. A sampling scheme with 5 samples per subject was found to be sufficient to ensure accurate characterization of the pharmacokinetics of deferiprone. Whereas the accuracy of parameter estimates was high, precision was slightly reduced because of the small sample size (CV% >30% for Vd/F and KA). Mean AUC ± SD was found to be 33.4 ± 19.2 and 35.6 ± 20.2 mg · h/mL, and mean Cmax ± SD was found to be 10.2 ± 6.1 and 10.9 ± 6.7 mg/L based on sparse and frequent sampling, respectively. The results showed that typical frequent sampling schemes and sample sizes do not guarantee accurate model and parameter identifiability. Expectation of the determinant (ED) optimality and simulation-based optimization concepts can be used to support pharmacokinetic bridging studies. Of importance is the accurate estimation of the magnitude of the covariate effects, as they partly determine the dose recommendation for the population of interest. PMID:26785826

  10. Defining Infertility

    MedlinePlus

    ... of the American Society for Reproductive Medicine Defining infertility What is infertility? Infertility is “the inability to conceive after 12 months ... to conceive after 6 months is generally considered infertility. How common is it? Infertility affects 10%-15% ...

  11. Defining Risk.

    ERIC Educational Resources Information Center

    Tholkes, Ben F.

    1998-01-01

    Defines camping risks and lists types and examples: (1) objective risk beyond control; (2) calculated risk based on personal choice; (3) perceived risk; and (4) reckless risk. Describes campers to watch ("immortals" and abdicators), and several "treatments" of risk: avoidance, safety procedures and well-trained staff, adequate insurance, and a…

  12. Persistent Organic Pollutant Determination in Killer Whale Scat Samples: Optimization of a Gas Chromatography/Mass Spectrometry Method and Application to Field Samples.

    PubMed

    Lundin, Jessica I; Dills, Russell L; Ylitalo, Gina M; Hanson, M Bradley; Emmons, Candice K; Schorr, Gregory S; Ahmad, Jacqui; Hempelmann, Jennifer A; Parsons, Kim M; Wasser, Samuel K

    2016-01-01

    Biologic sample collection in wild cetacean populations is challenging. Most information on toxicant levels is obtained from blubber biopsy samples; however, sample collection is invasive and strictly regulated under permit, thus limiting sample numbers. Methods are needed to monitor toxicant levels that increase temporal and repeat sampling of individuals for population health and recovery models. The objective of this study was to optimize measuring trace levels (parts per billion) of persistent organic pollutants (POPs), namely polychlorinated-biphenyls (PCBs), polybrominated-diphenyl-ethers (PBDEs), dichlorodiphenyltrichloroethanes (DDTs), and hexachlorocyclobenzene, in killer whale scat (fecal) samples. Archival scat samples, initially collected, lyophilized, and extracted with 70 % ethanol for hormone analyses, were used to analyze POP concentrations. The residual pellet was extracted and analyzed using gas chromatography coupled with mass spectrometry. Method detection limits ranged from 11 to 125 ng/g dry weight. The described method is suitable for p,p'-DDE, PCBs-138, 153, 180, and 187, and PBDEs-47 and 100; other POPs were below the limit of detection. We applied this method to 126 scat samples collected from Southern Resident killer whales. Scat samples from 22 adult whales also had known POP concentrations in blubber and demonstrated significant correlations (p < 0.01) between matrices across target analytes. Overall, the scat toxicant measures matched previously reported patterns from blubber samples of decreased levels in reproductive-age females and a decreased p,p'-DDE/∑PCB ratio in J-pod. Measuring toxicants in scat samples provides an unprecedented opportunity to noninvasively evaluate contaminant levels in wild cetacean populations; these data have the prospect to provide meaningful information for vital management decisions. PMID:26298464

  13. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  14. Defining cure.

    PubMed

    Hilton, Paul; Robinson, Dudley

    2011-06-01

    This paper is a summary of the presentations made as Proposal 2-"Defining cure" to the 2nd Annual meeting of the ICI-Research Society, in Bristol, 16th June 2010. It reviews definitions of 'cure' and 'outcome', and considers the impact that varying definition may have on prevalence studies and cure rates. The difference between subjective and objective outcomes is considered, and the significance that these different outcomes may have for different stakeholders (e.g. clinicians, patients, carers, industry etc.) is discussed. The development of patient reported outcome measures and patient defined goals is reviewed, and consideration given to the use of composite end-points. A series of proposals are made by authors and discussants as to how currently validated outcomes should be applied, and where our future research activity in this area might be directed. PMID:21661023

  15. Defining biobank.

    PubMed

    Hewitt, Robert; Watson, Peter

    2013-10-01

    The term "biobank" first appeared in the scientific literature in 1996 and for the next five years was used mainly to describe human population-based biobanks. In recent years, the term has been used in a more general sense and there are currently many different definitions to be found in reports, guidelines and regulatory documents. Some definitions are general, including all types of biological sample collection facilities. Others are specific and limited to collections of human samples, sometimes just to population-based collections. In order to help resolve the confusion on this matter, we conducted a survey of the opinions of people involved in managing sample collections of all types. This survey was conducted using an online questionnaire that attracted 303 responses. The results show that there is consensus that the term biobank may be applied to biological collections of human, animal, plant or microbial samples; and that the term biobank should only be applied to sample collections with associated sample data, and to collections that are managed according to professional standards. There was no consensus on whether a collection's purpose, size or level of access should determine whether it is called a biobank. Putting these findings into perspective, we argue that a general, broad definition of biobank is here to stay, and that attention should now focus on the need for a universally-accepted, systematic classification of the different biobank types. PMID:24835262

  16. The Optimal Anatomic Sites for Sampling Heterosexual Men for Human Papillomavirus (HPV) Detection: The HPV Detection in Men Study

    PubMed Central

    Giuliano, Anna R.; Nielson, Carrie M.; Flores, Roberto; Dunne, Eileen F.; Abrahamsen, Martha; Papenfuss, Mary R.; Markowitz, Lauri E.; Smith, Danelle; Harris, Robin B.

    2014-01-01

    Background Human papillomavirus (HPV) infection in men contributes to infection and cervical disease in women as well as to disease in men. This study aimed to determine the optimal anatomic site(s) for HPV detection in heterosexual men. Methods A cross-sectional study of HPV infection was conducted in 463 men from 2003 to 2006. Urethral, glans penis/coronal sulcus, penile shaft/prepuce, scrotal, perianal, anal canal, semen, and urine samples were obtained. Samples were analyzed for sample adequacy and HPV DNA by polymerase chain reaction and genotyping. To determine the optimal sites for estimating HPV prevalence, site-specific prevalences were calculated and compared with the overall prevalence. Sites and combinations of sites were excluded until a recalculated prevalence was reduced by <5% from the overall prevalence. Results The overall prevalence of HPV was 65.4%. HPV detection was highest at the penile shaft (49.9% for the full cohort and 47.9% for the subcohort of men with complete sampling), followed by the glans penis/coronal sulcus (35.8% and 32.8%) and scrotum (34.2% and 32.8%). Detection was lowest in urethra (10.1% and 10.2%) and semen (5.3% and 4.8%) samples. Exclusion of urethra, semen, and either perianal, scrotal, or anal samples resulted in a <5% reduction in prevalence. Conclusions At a minimum, the penile shaft and the glans penis/coronal sulcus should be sampled in heterosexual men. A scrotal, perianal, or anal sample should also be included for optimal HPV detection. PMID:17955432
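    The site-exclusion logic described above (dropping sampling sites as long as the recalculated prevalence stays within 5% of the all-sites prevalence) can be sketched as a simple greedy loop. The detection matrix, per-site positivity rates and function names below are synthetic assumptions used only to illustrate the procedure, not the study data.

```python
# Minimal sketch of the site-exclusion logic on synthetic data: repeatedly drop
# the least informative anatomic site while the recalculated HPV prevalence
# remains within 5% of the all-sites prevalence.
import numpy as np

rng = np.random.default_rng(0)
sites = ["urethra", "glans", "shaft", "scrotum", "perianal", "anal", "semen"]
# rows = men, columns = sites; True = HPV detected at that site (synthetic rates)
detected = rng.random((463, len(sites))) < [0.10, 0.36, 0.50, 0.34, 0.20, 0.18, 0.05]

def prevalence(cols):
    """Fraction of men positive at one or more of the retained sites."""
    return detected[:, cols].any(axis=1).mean()

overall = prevalence(list(range(len(sites))))
kept = list(range(len(sites)))
while len(kept) > 1:
    # candidate removal whose loss of prevalence is smallest
    best = min(kept, key=lambda c: overall - prevalence([k for k in kept if k != c]))
    new_prev = prevalence([k for k in kept if k != best])
    if (overall - new_prev) / overall >= 0.05:   # removal would lose >= 5% of prevalence
        break
    kept.remove(best)

print("minimal site set:", [sites[k] for k in kept])
```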

  17. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    PubMed

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) by providing more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities for rerouting large portions of flux may be limited within the metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device for validating the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus for improving them. PMID
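    A minimal sketch of the two building blocks named above, Latin-hypercube sampling of a bounded flux space followed by PCA, is given below. The flux bounds are invented and the code is not the authors' modified-LHS algorithm; it only shows how LHS strata and an SVD-based PCA fit together.

```python
# Generic sketch: Latin-hypercube sampling of a box-bounded (toy) flux space,
# followed by PCA to expose the dominant patterns of variability. A real
# application would use the flux ranges allowed at the FBA optimum.
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, lo, hi):
    """One LHS point per stratum in every dimension, with independent permutations."""
    d = len(lo)
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    return lo + u * (hi - lo)

lo = np.zeros(10)                 # hypothetical lower flux bounds
hi = rng.uniform(1.0, 5.0, 10)    # hypothetical upper flux bounds
fluxes = latin_hypercube(500, lo, hi)

# PCA via SVD of the centred sample matrix
centred = fluxes - fluxes.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 components:", explained[:3].round(3))
```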

  18. ROLE OF LABORATORY SAMPLING DEVICES AND LABORATORY SUBSAMPLING METHODS IN OPTIMIZING REPRESENTATIVENESS STRATEGIES

    EPA Science Inventory

    Sampling is the act of selecting items from a specified population in order to estimate the parameters of that population (e.g., selecting soil samples to characterize the properties at an environmental site). Sampling occurs at various levels and times throughout an environmenta...

  19. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
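    The single-core sampling probability P discussed above can be approximated with a toy Monte Carlo experiment: a spherical tumor, a needle core along one axis, and anisotropic Gaussian targeting error. The geometry, error magnitudes and core length below are illustrative assumptions rather than the study's registration pipeline, but they reproduce the qualitative point that lateral and elevational errors matter more than axial error.

```python
# Toy Monte Carlo estimate of the single-core sampling probability P: a
# spherical tumor, a needle core modelled as a segment along the (axial)
# z-axis, and anisotropic Gaussian targeting error. All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

def sampling_probability(r_tumor_mm, sigma_xyz_mm, core_len_mm=18.0, n=100_000):
    # Targeting error of the needle axis relative to the tumor centre
    err = rng.normal(0.0, sigma_xyz_mm, size=(n, 3))
    # The core spans core_len_mm along z, centred at the axial error; it hits
    # the tumor if the lateral miss distance is below the radius and the
    # segment overlaps the sphere's chord at that lateral offset.
    lateral = np.hypot(err[:, 0], err[:, 1])
    half_chord = np.sqrt(np.clip(r_tumor_mm**2 - lateral**2, 0.0, None))
    hit = (lateral < r_tumor_mm) & (np.abs(err[:, 2]) < half_chord + core_len_mm / 2)
    return hit.mean()

r = 7.6  # mm, roughly a 1.9 cm^3 sphere
print("P (3.5 mm isotropic error):", sampling_probability(r, [3.5, 3.5, 3.5]))
print("P (error mostly axial)    :", sampling_probability(r, [1.0, 1.0, 5.8]))
```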

  20. Defining chaos

    SciTech Connect

    Hunt, Brian R.; Ott, Edward

    2015-09-15

    In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call “expansion entropy,” and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.

  1. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    PubMed Central

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula

    2015-01-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567

  2. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry.

    PubMed

    Harju, Kirsi; Rapinoja, Marja-Leena; Avondet, Marc-André; Arnold, Werner; Schär, Martin; Burrell, Stephen; Luginbühl, Werner; Vanninen, Paula

    2015-12-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD). PMID:26610567

  3. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy.

    PubMed

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E; Kaminski, Clemens F; Szabó, Gábor; Erdélyi, Miklós

    2014-03-01

    Image quality in localization-based super-resolution microscopy depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and frame number, and the image-processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, as well as dye and acquisition parameters. Example results are shown, and the results for the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation with localization algorithms, offering a tool for further software development. PMID:24688813

  4. Optimal protein extraction methods from diverse sample types for protein profiling by using Two-Dimensional Electrophoresis (2DE).

    PubMed

    Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y

    2011-12-01

    Protein samples vary greatly in type and origin, so the optimal procedure for each sample type must be determined empirically. In order to obtain a reproducible and complete sample presentation that displays as many proteins as possible on the 2DE gel, it is critical to perform additional sample preparation steps to improve the quality of the final results, yet without selectively losing proteins. To address this, we developed a general method suitable for diverse sample types based on a phenol-chloroform extraction method (represented by TRI reagent). This method yielded good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue, representing a cell line, bacteria, parasite cysts and pus, respectively. For each sample type, protein isolation methods using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer were compared methodically. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit gave the highest protein yield as well as the greatest total protein spot count for all sample types. Distinctive differences in spot patterns were also observed on the 2DE gels obtained with the different extraction methods for each sample type. PMID:22433892

  5. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  6. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    NASA Astrophysics Data System (ADS)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic stations or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and the establishment of background concentrations of noble gases require the development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, 127Xe and 37Ar was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of the tracer gas measurements will be presented.

  7. The optimal process of self-sampling in fisheries: lessons learned in the Netherlands.

    PubMed

    Kraan, M; Uhlmann, S; Steenbergen, J; Van Helmond, A T M; Van Hoof, L

    2013-10-01

    At-sea sampling of commercial fishery catches by observers is a relatively expensive exercise. The fact that an observer has to stay on-board for the duration of the trip results in clustered samples and effectively small sample sizes, whereas the aim is to make inferences regarding several trips from an entire fleet. From this perspective, sampling by fishermen themselves (self-sampling) is an attractive alternative, because a larger number of trips can be sampled at lower cost. Self-sampling should not be used too casually, however, as there are often issues of data-acceptance related to it. This article shows that these issues are not easily dealt with in a statistical manner. Improvements might be made if self-sampling is understood as a form of cooperative research. Cooperative research has a number of dilemmas and benefits associated with it. This article suggests that if the guidelines for cooperative research are taken into account, the benefits are more likely to materialize. Secondly, acknowledging the dilemmas, and consciously dealing with them might lay the basis to trust-building, which is an essential element in the acceptance of data derived from self-sampling programmes. PMID:24090557

  8. Optimizing Sampling Strategies for Riverine Nitrate Using High-Frequency Data in Agricultural Watersheds.

    PubMed

    Reynolds, Kaycee N; Loecke, Terrance D; Burgin, Amy J; Davis, Caroline A; Riveros-Iregui, Diego; Thomas, Steven A; St Clair, Martin A; Ward, Adam S

    2016-06-21

    Understanding linked hydrologic and biogeochemical processes such as nitrate loading to agricultural streams requires that the sampling bias and precision of monitoring strategies be known. An existing spatially distributed, high-frequency nitrate monitoring network covering ∼40% of Iowa provided direct observations of in situ nitrate concentrations at a temporal resolution of 15 min. Systematic subsampling of nitrate records allowed for quantification of uncertainties (bias and precision) associated with estimates of various nitrate parameters, including: mean nitrate concentration, proportion of samples exceeding the nitrate drinking water standard (DWS), peak (>90th quantile) nitrate concentration, and nitrate flux. We subsampled continuous records for 47 site-year combinations mimicking common, but labor-intensive, water-sampling regimes (e.g., time-interval, stage-triggered, and dynamic-discharge storm sampling). Our results suggest that time-interval sampling most efficiently characterized all nitrate parameters, except at coarse frequencies for nitrate flux. Stage-triggered storm sampling most precisely captured nitrate flux when less than 0.19% of possible 15 min observations for a site-year were used. The time-interval strategy had the greatest return on sampling investment by most precisely and accurately quantifying nitrate parameters per sampling effort. These uncertainty estimates can aid in designing sampling strategies focused on nitrate monitoring in the tile-drained Midwest or similar agricultural regions. PMID:27192208
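    The subsampling idea described above can be sketched on a synthetic 15-minute record: treat the full series as truth, subsample at fixed time intervals, and report the bias and spread of the estimated mean concentration across starting offsets. The synthetic series below is an assumption standing in for the Iowa network data.

```python
# Sketch of record subsampling on a synthetic 15-minute nitrate series:
# estimate the mean concentration from coarser time-interval sampling and
# report bias and precision across all possible starting offsets.
import numpy as np

rng = np.random.default_rng(3)
n = 365 * 96                                   # one year of 15-min observations
t = np.arange(n)
# toy record: seasonal cycle + storm-like pulses + noise, in mg N / L
nitrate = (8 + 4 * np.sin(2 * np.pi * t / (96 * 365))
           + 5 * (rng.random(n) < 0.002) * rng.exponential(2.0, n)
           + rng.normal(0, 0.5, n)).clip(min=0)

true_mean = nitrate.mean()
for interval_h in (1, 24, 24 * 7, 24 * 30):    # hourly, daily, weekly, monthly
    step = interval_h * 4                      # 4 observations per hour
    means = [nitrate[start::step].mean() for start in range(step)]
    bias = np.mean(means) - true_mean
    precision = np.std(means)
    print(f"every {interval_h:>4} h: bias={bias:+.3f}, sd across offsets={precision:.3f}")
```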

  9. Optimized Nested Markov Chain Monte Carlo Sampling: Application to the Liquid Nitrogen Hugoniot Using Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.

    2009-06-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The "full" system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The "reference" system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
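    A toy one-dimensional sketch of the nested move described above is shown below: many cheap Metropolis steps in a reference potential generate a composite trial move, which is then accepted into the full ensemble using only the full/reference energy differences at the endpoints. Both potentials here are inexpensive stand-ins (assumptions); in the paper the full energies come from DFT and the reference from a fitted pair potential.

```python
# Toy 1D nested Markov chain Monte Carlo: an inner Metropolis chain in the
# cheap "reference" potential proposes a composite move, accepted into the
# "full" ensemble with a probability built from endpoint energy differences.
import numpy as np

rng = np.random.default_rng(4)
beta = 1.0
E_full = lambda x: 0.5 * x**2 + 0.3 * np.sin(3 * x)   # stand-in for the expensive model
E_ref  = lambda x: 0.5 * x**2                          # stand-in for the cheap model

def reference_sweep(x, n_steps=50, step=0.5):
    """Ordinary Metropolis chain in the reference potential."""
    for _ in range(n_steps):
        y = x + rng.normal(0, step)
        if rng.random() < np.exp(-beta * (E_ref(y) - E_ref(x))):
            x = y
    return x

x, samples, accepted = 0.0, [], 0
for _ in range(2000):
    y = reference_sweep(x)                 # composite trial move (cheap)
    # expensive energy is needed only at the endpoints of the composite move
    dE = (E_full(y) - E_ref(y)) - (E_full(x) - E_ref(x))
    if rng.random() < np.exp(-beta * dE):
        x, accepted = y, accepted + 1
    samples.append(x)

print(f"acceptance of nested moves: {accepted / 2000:.2f}, <x^2> = {np.mean(np.square(samples)):.3f}")
```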

  10. Optimized Nested Markov Chain Monte Carlo Sampling: Application to the Liquid Nitrogen Hugoniot Using Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Shaw, M. Sam; Coe, Joshua D.; Sewell, Thomas D.

    2009-12-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The "full" system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The "reference" system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.

  11. Optimized nested Markov chain Monte Carlo sampling: application to the liquid nitrogen Hugoniot using density functional theory

    SciTech Connect

    Shaw, Milton Sam; Coe, Joshua D; Sewell, Thomas D

    2009-01-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.

  12. Low-thrust trajectory optimization of asteroid sample return mission with multiple revolutions and moon gravity assists

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, FanHuag; Li, JunFeng

    2015-11-01

    Near-Earth asteroids have attracted considerable interest, and developments in low-thrust propulsion technology make complex deep-space exploration missions possible. A mission that departs from low Earth orbit, uses a low-thrust electric propulsion system to rendezvous with a near-Earth asteroid, and brings a sample back is investigated. The complex mission is divided into five segments that are solved separately, and different methods are used to find optimal trajectories for each segment. Multiple revolutions around the Earth and multiple Moon gravity assists are used to decrease the fuel needed to escape from the Earth. To avoid the possible numerical difficulties of indirect methods, a direct method that parameterizes the switching times and the direction of the thrust vector is proposed. To maximize the sample mass, optimal control theory and a homotopic approach are applied to find the optimal trajectory. Direct methods for finding the proper time to brake the spacecraft using a Moon gravity assist are also proposed. Practical techniques including both direct and indirect methods are investigated to optimize the trajectories of the different segments, and they can easily be extended to other missions and more precise dynamic models.

  13. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    PubMed

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. In contrast to the acid-only extraction used in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious for polar compounds than for non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils. PMID:26449847

  14. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    PubMed Central

    Ramyachitra, D.; Sofia, M.; Manikandan, P.

    2015-01-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of samples, so the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), together with the proposed Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222
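    A plain particle swarm optimizer for gene selection, in the spirit of (but not identical to) the IVPSO variant named above, can be sketched as follows. The simulated expression matrix, the separation-based fitness and all parameter settings are assumptions for illustration only.

```python
# Toy PSO for gene selection: particles encode gene-inclusion scores, and the
# fitness rewards class separation on the selected genes while penalising
# large gene sets. Data are simulated, not microarray measurements.
import numpy as np

rng = np.random.default_rng(8)
n_samples, n_genes, n_informative = 60, 200, 5
labels = np.repeat([0, 1], n_samples // 2)
expr = rng.normal(0, 1, (n_samples, n_genes))
expr[labels == 1, :n_informative] += 2.0          # informative genes shift in class 1

def fitness(position):
    mask = position > 0.5                         # genes selected by this particle
    if not mask.any():
        return -np.inf
    a, b = expr[labels == 0][:, mask], expr[labels == 1][:, mask]
    separation = np.abs(a.mean(0) - b.mean(0)) / (a.std(0) + b.std(0) + 1e-9)
    return separation.mean() - 0.01 * mask.sum()  # parsimony penalty

n_particles, n_iter, w, c1, c2 = 30, 80, 0.7, 1.5, 1.5
pos = rng.random((n_particles, n_genes))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected genes:", np.flatnonzero(gbest > 0.5))
```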

  15. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    PubMed

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of samples, so the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), together with the proposed Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions. PMID:26484222

  16. Nonlinear microscopy pulse optimization at the sample plane using second-harmonic generation from starch

    NASA Astrophysics Data System (ADS)

    Amat-Roldan, Ivan; Cormack, Iain G.; Artigas, David; Loza-Alvarez, Pablo

    2004-09-01

    In this paper we report the use of starch as a non-linear medium for characterising ultrashort pulses. A starch suspension in water is sandwiched between a slide holder and a cover-slip and placed within the sample plane of the nonlinear microscope. This simple arrangement enables direct measurement of the pulses where they interact with the sample.

  17. Optimization of field-amplified sample injection for analysis of peptides by capillary electrophoresis-mass spectrometry.

    PubMed

    Yang, Yuanzhong; Boysen, Reinhard I; Hearn, Milton T W

    2006-07-15

    A versatile experimental approach is described to achieve very high sensitivity analysis of peptides by capillary electrophoresis-mass spectrometry with a sheath flow configuration, based on optimization of field-amplified sample injection. Compared to traditional hydrodynamic injection methods, a signal enhancement of more than 3000-fold in the detection sensitivity of the bioanalytes can be achieved. The effects of injection conditions, composition of the acid and organic solvent in the sample solution, length of the water plug, sample injection time, and voltage on the efficiency of the sample stacking have been systematically investigated, with peptides in the low-nanomolar (10−9 M) range readily detected under the optimized conditions. Linearity of the established stacking method was found to be excellent over 2 orders of magnitude of concentration. The method was further evaluated for the analysis of low-concentration bioactive peptide mixtures and tryptic digests of proteins. A distinguishing feature of the described approach is that it can be employed directly for the analysis of low-abundance protein fragments generated by enzymatic digestion and a reversed-phase-based sample-desalting procedure. Thus, rapid identification of protein fragments as low-abundance analytes can be achieved with this new approach by comparison of the actual tandem mass spectra of selected peptides with the predicted fragmentation patterns using online database searching algorithms. PMID:16841892

  18. Optimization of Plasma Sample Pretreatment for Quantitative Analysis Using iTRAQ Labeling and LC-MALDI-TOF/TOF

    PubMed Central

    Luczak, Magdalena; Marczak, Lukasz; Stobiecki, Maciej

    2014-01-01

    Shotgun proteomic methods involving iTRAQ (isobaric tags for relative and absolute quantitation) peptide labeling facilitate quantitative analyses of proteomes and searches for useful biomarkers. However, the plasma proteome's complexity and the highly dynamic plasma protein concentration range limit the ability of conventional approaches to analyze and identify a large number of proteins, including useful biomarkers. The goal of this paper is to elucidate the best approach for plasma sample pretreatment for MS- and iTRAQ-based analyses. Here, we systematically compared four approaches, which include centrifugal ultrafiltration, SCX chromatography with fractionation, affinity depletion, and plasma without fractionation, to reduce plasma sample complexity. We generated an optimized protocol for quantitative protein analysis using iTRAQ reagents and an UltrafleXtreme (Bruker Daltonics) MALDI TOF/TOF mass spectrometer. Moreover, we used a simple, rapid, efficient, but inexpensive sample pretreatment technique that generated an optimal opportunity for biomarker discovery. We discuss the results from the four sample pretreatment approaches and conclude that SCX chromatography without affinity depletion is the best plasma sample preparation pretreatment method for proteome analysis. Using this technique, we identified 1,780 unique proteins, including 1,427 that were quantified by iTRAQ with high reproducibility and accuracy. PMID:24988083

  19. Alleviating Linear Ecological Bias and Optimal Design with Sub-sample Data

    PubMed Central

    Glynn, Adam; Wakefield, Jon; Handcock, Mark S.; Richardson, Thomas S.

    2009-01-01

    Summary In this paper, we illustrate that combining ecological data with subsample data in situations in which a linear model is appropriate provides three main benefits. First, by including the individual level subsample data, the biases associated with linear ecological inference can be eliminated. Second, by supplementing the subsample data with ecological data, the information about parameters will be increased. Third, we can use readily available ecological data to design optimal subsampling schemes, so as to further increase the information about parameters. We present an application of this methodology to the classic problem of estimating the effect of a college degree on wages. We show that combining ecological data with subsample data provides precise estimates of this value, and that optimal subsampling schemes (conditional on the ecological data) can provide good precision with only a fraction of the observations. PMID:20052294

  20. Improved estimates of forest vegetation structure and biomass with a LiDAR-optimized sampling design

    NASA Astrophysics Data System (ADS)

    Hawbaker, Todd J.; Keuler, Nicholas S.; Lesak, Adrian A.; Gobakken, Terje; Contrucci, Kirk; Radeloff, Volker C.

    2009-06-01

    LiDAR data are increasingly available from both airborne and spaceborne missions to map elevation and vegetation structure. Additionally, global coverage may soon become available with NASA's planned DESDynI sensor. However, substantial challenges remain in using the growing body of LiDAR data. First, the large volumes of data generated by LiDAR sensors require efficient processing methods. Second, efficient sampling methods are needed to collect the field data used to relate LiDAR data with vegetation structure. In this paper, we used low-density LiDAR data, summarized within pixels of a regular grid, to estimate forest structure and biomass across a 53,600 ha study area in northeastern Wisconsin. Additionally, we compared the predictive ability of models constructed from a random sample to a sample stratified using the mean and standard deviation of LiDAR heights. Our models explained between 65 and 88% of the variability in DBH, basal area, tree height, and biomass. Prediction errors from models constructed using a random sample were up to 68% larger than those from the models built with a stratified sample. The stratified sample included a greater range of variability than the random sample. Thus, applying the random sample model to the entire population violated a tenet of regression analysis; namely, that models should not be used to extrapolate beyond the range of data from which they were constructed. Our results highlight that LiDAR data integrated with field data sampling designs can provide broad-scale assessments of vegetation structure and biomass, i.e., information crucial for carbon and biodiversity science.
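    The random-versus-stratified comparison described above can be sketched with a simulated pixel population: biomass is tied to a LiDAR height metric, field plots are drawn either at random or stratified on that metric, and the fitted linear model is scored on the whole population. All values below are simulated assumptions, not the Wisconsin data.

```python
# Sketch of random versus LiDAR-stratified plot selection on synthetic data:
# fit a linear model to each sample and evaluate it on the full population.
import numpy as np

rng = np.random.default_rng(5)
n_pixels, n_plots = 50_000, 40
lidar_height = rng.gamma(4.0, 3.0, n_pixels)                 # LiDAR mean height (m)
biomass = 12.0 * lidar_height + rng.normal(0, 25, n_pixels)  # "true" biomass (Mg/ha)

def rmse_of_fit(idx):
    slope, intercept = np.polyfit(lidar_height[idx], biomass[idx], 1)
    pred = slope * lidar_height + intercept
    return np.sqrt(np.mean((pred - biomass) ** 2))

# Simple random sample of field plots
random_idx = rng.choice(n_pixels, n_plots, replace=False)

# Stratified sample: equal numbers of plots from quantile bins of the LiDAR metric
n_strata = n_plots // 4
bins = np.quantile(lidar_height, np.linspace(0, 1, n_strata + 1))
strata = np.clip(np.digitize(lidar_height, bins) - 1, 0, n_strata - 1)
strat_idx = np.concatenate(
    [rng.choice(np.flatnonzero(strata == s), 4, replace=False) for s in range(n_strata)])

print("RMSE, random sample    :", round(rmse_of_fit(random_idx), 1))
print("RMSE, stratified sample:", round(rmse_of_fit(strat_idx), 1))
```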

  1. Improvements in pollutant monitoring: optimizing silicone for co-deployment with polyethylene passive sampling devices.

    PubMed

    O'Connell, Steven G; McCartney, Melissa A; Paulik, L Blair; Allan, Sarah E; Tidwell, Lane G; Wilson, Glenn; Anderson, Kim A

    2014-10-01

    Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2-5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960

  2. IMPROVEMENTS IN POLLUTANT MONITORING: OPTIMIZING SILICONE FOR CO-DEPLOYMENT WITH POLYETHYLENE PASSIVE SAMPLING DEVICES

    PubMed Central

    O’Connell, Steven G.; McCartney, Melissa A.; Paulik, L. Blair; Allan, Sarah E.; Tidwell, Lane G.; Wilson, Glenn; Anderson, Kim A.

    2014-01-01

    Sequestering semi-polar compounds can be difficult with low-density polyethylene (LDPE), but those pollutants may be more efficiently absorbed using silicone. In this work, optimized methods for cleaning, infusing reference standards, and polymer extraction are reported along with field comparisons of several silicone materials for polycyclic aromatic hydrocarbons (PAHs) and pesticides. In a final field demonstration, the most optimal silicone material is coupled with LDPE in a large-scale study to examine PAHs in addition to oxygenated-PAHs (OPAHs) at a Superfund site. OPAHs exemplify a sensitive range of chemical properties to compare polymers (log Kow 0.2–5.3), and transformation products of commonly studied parent PAHs. On average, while polymer concentrations differed nearly 7-fold, water-calculated values were more similar (about 3.5-fold or less) for both PAHs (17) and OPAHs (7). Individual water concentrations of OPAHs differed dramatically between silicone and LDPE, highlighting the advantages of choosing appropriate polymers and optimized methods for pollutant monitoring. PMID:25009960

  3. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
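    The two initial designs compared above (random-point LHS versus midpoint LHS) can be generated and scored with the common maximin space-filling criterion, as sketched below. This covers only the starting designs, not the OLHS optimization schemes or the NIPCE meta-modelling; the sample sizes and dimensions are arbitrary assumptions.

```python
# Sketch: random-point LHS versus midpoint LHS initial designs, scored with a
# maximin criterion (minimum pairwise distance, larger is better).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)

def lhs(n, d, midpoint=False):
    """Latin hypercube in [0,1]^d: one point per interval in every dimension."""
    offsets = 0.5 if midpoint else rng.random((n, d))
    strata = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (strata + offsets) / n

def maximin(design):
    return pdist(design).min()

n, d = 20, 5
print("maximin, random LHS  :", np.mean([maximin(lhs(n, d)) for _ in range(200)]).round(4))
print("maximin, midpoint LHS:", np.mean([maximin(lhs(n, d, midpoint=True)) for _ in range(200)]).round(4))
```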

  4. OPTIMIZING MINIRHIZOTRON SAMPLE FREQUENCY FOR ESTIMATING FINE ROOT PRODUCTION AND TURNOVER

    EPA Science Inventory

    The most frequent reason for using minirhizotrons in natural ecosystems is the determination of fine root production and turnover. Our objective is to determine the optimum sampling frequency for estimating fine root production and turnover using data from evergreen (Pseudotsuga ...

  5. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples

    USGS Publications Warehouse

    Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David

    2016-01-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid fromP. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer–based qPCR test for P. destructans to refine quantification capabilities of this assay.

  6. An accurate metalloprotein-specific scoring function and molecular docking program devised by a dynamic sampling and iteration optimization strategy.

    PubMed

    Bai, Fang; Liao, Sha; Gu, Junfeng; Jiang, Hualiang; Wang, Xicheng; Li, Honglin

    2015-04-27

    Metalloproteins, particularly zinc metalloproteins, are promising therapeutic targets, and recent efforts have focused on the identification of potent and selective inhibitors of these proteins. However, the ability of current drug discovery and design technologies, such as molecular docking and molecular dynamics simulations, to probe metal-ligand interactions remains limited because of their complicated coordination geometries and rough treatment in current force fields. Herein we introduce a robust, multiobjective optimization algorithm-driven metalloprotein-specific docking program named MpSDock, which runs on a scheme similar to consensus scoring consisting of a force-field-based scoring function and a knowledge-based scoring function. For this purpose, in this study, an effective knowledge-based zinc metalloprotein-specific scoring function based on the inverse Boltzmann law was designed and optimized using a dynamic sampling and iteration optimization strategy. This optimization strategy can dynamically sample and regenerate decoy poses used in each iteration step of refining the scoring function, thus dramatically improving both the effectiveness of the exploration of the binding conformational space and the sensitivity of the ranking of the native binding poses. To validate the zinc metalloprotein-specific scoring function and its special built-in docking program, denoted MpSDockZn, an extensive comparison was performed against six universal, popular docking programs: Glide XP mode, Glide SP mode, Gold, AutoDock, AutoDock4Zn, and EADock DSS. The zinc metalloprotein-specific knowledge-based scoring function exhibited prominent performance in accurately describing the geometries and interactions of the coordination bonds between the zinc ions and chelating agents of the ligands. In addition, MpSDockZn had a competitive ability to sample and identify native binding poses with a higher success rate than the other six docking programs. PMID:25746437
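    The inverse Boltzmann idea underlying the knowledge-based term described above can be illustrated with a minimal sketch: an observed distribution of zinc-to-ligand-atom distances is turned into a distance-dependent potential by comparison with a reference distribution. The synthetic distances, bin choices and constants below are assumptions; MpSDock's actual scoring function is certainly more elaborate.

```python
# Minimal inverse-Boltzmann sketch: convert an observed distance distribution
# into a distance-dependent potential relative to a reference distribution.
import numpy as np

rng = np.random.default_rng(7)
kT = 0.593  # kcal/mol at ~298 K

# Synthetic stand-ins: coordination-like distances clustered near 2.1 A,
# versus a reference that grows with the spherical shell volume (~ r^2).
observed = rng.normal(2.1, 0.15, 20_000)
bins = np.linspace(1.5, 4.0, 26)
r = 0.5 * (bins[:-1] + bins[1:])

p_obs = np.histogram(observed, bins)[0].astype(float)
p_obs /= p_obs.sum()
p_ref = r**2 / np.sum(r**2)

eps = 1e-9
potential = -kT * np.log((p_obs + eps) / (p_ref + eps))   # inverse Boltzmann law
best = r[np.argmin(potential)]
print(f"most favourable Zn-atom distance in this toy potential: {best:.2f} A")
```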

  7. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    NASA Astrophysics Data System (ADS)

    He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang

    2015-10-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  8. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    NASA Astrophysics Data System (ADS)

    He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang

    2016-08-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  9. Optimization design for selective extraction of size-fractioned DNA sample in microfabricated electrophoresis devices

    NASA Astrophysics Data System (ADS)

    Lin, Rongsheng; Burke, David T.; Burns, Mark A.

    2004-03-01

    In recent years, there has been tremendous interest in developing highly integrated DNA analysis systems using microfabrication techniques. With the success of incorporating sample injection, reaction, separation, and detection onto a monolithic silicon device, the addition of otherwise time-consuming macro-scale steps such as sample preparation is gaining more and more attention. In this paper, we designed and fabricated a miniaturized device capable of separating a size-fractioned DNA sample and extracting the band of interest. In order to obtain a pure target band, a novel technique utilizing a shaped electric field is demonstrated. Theoretical analysis and experimental data show good agreement, guiding the design of appropriate electrode structures to achieve the desired electric field distribution. This technique has a very simple fabrication procedure and can be readily combined with other existing components to realize a highly integrated "lab-on-a-chip" system for DNA analysis.

  10. Neuroticism moderates the effect of maximum smoking level on lifetime panic disorder: a test using an epidemiologically defined national sample of smokers.

    PubMed

    Zvolensky, Michael J; Sachs-Ericsson, Natalie; Feldner, Matthew T; Schmidt, Norman B; Bowman, Carrie J

    2006-03-30

    The present study evaluated a moderational model of neuroticism on the relation between smoking level and panic disorder using data from the National Comorbidity Survey. Participants (n=924) included current regular smokers, as defined by a report of smoking regularly during the past month. Findings indicated that a generalized tendency to experience negative affect (neuroticism) moderated the effects of maximum smoking frequency (i.e., number of cigarettes smoked per day during the period when smoking the most) on lifetime history of panic disorder even after controlling for drug dependence, alcohol dependence, major depression, dysthymia, and gender. These effects were specific to panic disorder, as no such moderational effects were apparent for other anxiety disorders. Results are discussed in relation to refining recent panic-smoking conceptual models and elucidating different pathways to panic-related problems. PMID:16499972

  11. Optimal sampling strategy for estimation of spatial genetic structure in tree populations.

    PubMed

    Cavers, S; Degen, B; Caron, H; Lemes, M R; Margis, R; Salgueiro, F; Lowe, A J

    2005-10-01

    Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms. PMID:16030529

  12. Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats

    PubMed Central

    Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579

  13. Modular tube/plate-based sample management: a business model optimized for scalable storage and processing.

    PubMed

    Fillers, W Steven

    2004-12-01

    Modular approaches to sample management allow staged implementation and progressive expansion of libraries within existing laboratory space. A completely integrated, inert-atmosphere system for the storage and processing of a variety of microplate and microtube formats is currently available as an integrated series of individual modules. Liquid handling for reformatting and replication into microplates, plus high-capacity cherry picking, can be performed within the inert environmental envelope to maximize compound integrity. Complete process automation provides on-demand access to samples and improved process control. Expansion of such a system provides a low-risk tactic for implementing a large-scale storage and processing system. PMID:15674027

  14. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    NASA Astrophysics Data System (ADS)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated-canister methods require an overhead in space, money, and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of an ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an undergraduate with minimal training. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature, and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  15. Shotgun Proteomics of Tomato Fruits: Evaluation, Optimization and Validation of Sample Preparation Methods and Mass Spectrometric Parameters

    PubMed Central

    Kilambi, Himabindu V.; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K.; Sharma, Rameshwar; Sreelakshmi, Yellamaraju

    2016-01-01

    An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similar high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of five-fold higher number of proteins compared to published reports in tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192

  16. Shotgun Proteomics of Tomato Fruits: Evaluation, Optimization and Validation of Sample Preparation Methods and Mass Spectrometric Parameters.

    PubMed

    Kilambi, Himabindu V; Manda, Kalyani; Sanivarapu, Hemalatha; Maurya, Vineet K; Sharma, Rameshwar; Sreelakshmi, Yellamaraju

    2016-01-01

    An optimized protocol was developed for shotgun proteomics of tomato fruit, which is a recalcitrant tissue due to a high percentage of sugars and secondary metabolites. A number of protein extraction and fractionation techniques were examined for optimal protein extraction from tomato fruits followed by peptide separation on nanoLCMS. Of all evaluated extraction agents, buffer saturated phenol was the most efficient. In-gel digestion [SDS-PAGE followed by separation on LCMS (GeLCMS)] of phenol-extracted sample yielded a maximal number of proteins. For in-solution digested samples, fractionation by strong anion exchange chromatography (SAX) also gave similar high proteome coverage. For shotgun proteomic profiling, optimization of mass spectrometry parameters such as automatic gain control targets (5E+05 for MS, 1E+04 for MS/MS); ion injection times (500 ms for MS, 100 ms for MS/MS); resolution of 30,000; signal threshold of 500; top N-value of 20 and fragmentation by collision-induced dissociation yielded the highest number of proteins. Validation of the above protocol in two tomato cultivars demonstrated its reproducibility, consistency, and robustness with a CV of < 10%. The protocol facilitated the detection of five-fold higher number of proteins compared to published reports in tomato fruits. The protocol outlined would be useful for high-throughput proteome analysis from tomato fruits and can be applied to other recalcitrant tissues. PMID:27446192

  17. An evaluation of optimal methods for avian influenza virus sample collection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Sample collection and transport are critical components of any diagnostic testing program and due to the amount of avian influenza virus (AIV) testing in the U.S. and worldwide, small improvements in sensitivity and specificity can translate into substantial cost savings from better test accuracy. ...

  18. Optimizing the soil sample collection strategy to identify maximum volatile organic compound concentrations in soil borings

    SciTech Connect

    Siebenmann, K.

    1993-10-01

    The primary focus of the initial stages of a remedial investigation is to collect useful data for source identification and determination of the extent of soil contamination. To achieve this goal, soil samples should be collected at locations where the maximum concentration of contaminants exist. This study was conducted to determine the optimum strategy for selecting soil sample locations within a boring. Analytical results from soil samples collected during the remedial investigation of a Department of Defense Superfund site were used for the analysis. Trichloroethene (TCE) and tetrachloroethene (PCE) results were compared with organic vapor monitor (OVM) readings, lithologies, and organic carbon content to determine if these parameters can be used to choose soil sample locations in the field that contain the maximum concentration of these analytes within a soil boring or interval. The OVM was a handheld photoionization detector (PID) for screening the soil core to indicate areas of VOC contamination. The TCE and PCE concentrations were compared across lithologic contacts and within each lithologic interval. The organic content used for this analysis was visually estimated by the geologist during soil logging.

  19. Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2011-01-01

    Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…

  20. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*.
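
    A deliberately simplified, obstacle-free sketch of the lazy dynamic-programming recursion that FMT*-style planners perform over a fixed set of drawn samples is given below; real FMT* adds lazy collision checking and a connection radius that shrinks with n, so this toy version is not the authors' implementation.

        # Toy, obstacle-free illustration of an FMT*-style expansion: a fixed set
        # of random samples is grown into a tree outward in cost-to-arrive order.
        # Real FMT* adds lazy collision checks and a radius shrinking with n; the
        # constant radius below is a hand-picked simplification.
        import math
        import random

        def fmt_like(n=300, radius=0.15, start=(0.05, 0.05), goal=(0.95, 0.95)):
            pts = [start] + [(random.random(), random.random()) for _ in range(n)] + [goal]
            dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            cost, parent = {0: 0.0}, {0: None}                # tree bookkeeping
            open_set, unvisited = {0}, set(range(1, len(pts)))
            while open_set:
                z = min(open_set, key=lambda i: cost[i])      # lowest-cost open node
                for x in [i for i in unvisited if dist(pts[i], pts[z]) <= radius]:
                    # locally optimal one-step connection from the open set
                    nbrs = [j for j in open_set if dist(pts[j], pts[x]) <= radius]
                    y = min(nbrs, key=lambda j: cost[j] + dist(pts[j], pts[x]))
                    cost[x] = cost[y] + dist(pts[y], pts[x])
                    parent[x] = y
                    open_set.add(x)
                    unvisited.remove(x)
                open_set.remove(z)
            return cost.get(len(pts) - 1), parent             # cost-to-arrive at goal

        random.seed(1)
        print(fmt_like()[0])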

  1. Correlated Spatio-Temporal Data Collection in Wireless Sensor Networks Based on Low Rank Matrix Approximation and Optimized Node Sampling

    PubMed Central

    Piao, Xinglin; Hu, Yongli; Sun, Yanfeng; Yin, Baocai; Gao, Junbin

    2014-01-01

    The emerging low rank matrix approximation (LRMA) method provides an energy efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, the existing LRMA based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistence and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, then formulate and resolve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability. PMID:25490583
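
    A small sketch of how a Gini index over residual node energies can steer which nodes are sampled next is given below; the energy model and greedy selection rule are illustrative assumptions, not the paper's exact optimization problem.

        # Sketch: use the Gini index of residual node energies to decide which
        # sensor nodes to sample, evening out consumption over time. The energy
        # model and greedy rule are illustrative assumptions only.
        import numpy as np

        def gini(x):
            """Gini index of a non-negative vector (0 = perfectly even)."""
            x = np.sort(np.asarray(x, dtype=float))
            n = x.size
            cum = np.cumsum(x)
            return (n + 1 - 2 * cum.sum() / cum[-1]) / n

        def pick_nodes(energy, k):
            """Greedily pick k nodes whose sampling keeps energies most even."""
            chosen, e = [], energy.astype(float).copy()
            for _ in range(k):
                # sampling a node costs one energy unit; pick the node that
                # minimizes the resulting Gini index of residual energies
                scores = []
                for i in range(e.size):
                    trial = e.copy()
                    trial[i] -= 1.0
                    scores.append(gini(trial))
                best = int(np.argmin(scores))
                chosen.append(best)
                e[best] -= 1.0
            return chosen, gini(e)

        energy = np.array([10.0, 9.0, 4.0, 8.0, 2.0, 7.0])
        print(pick_nodes(energy, 3))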

  2. Optimized methods for extracting circulating small RNAs from long-term stored equine samples.

    PubMed

    Unger, Lucia; Fouché, Nathalie; Leeb, Tosso; Gerber, Vincent; Pacholewska, Alicja

    2016-01-01

    Circulating miRNAs in body fluids, particularly serum, are promising candidates for future routine biomarker profiling in various pathologic conditions in human and veterinary medicine. However, reliable standardized methods for miRNA extraction from equine serum and fresh or archived whole blood are sorely lacking. We systematically compared various miRNA extraction methods from serum and whole blood after short and long-term storage without addition of RNA stabilizing additives prior to freezing. Time of storage at room temperature prior to freezing did not affect miRNA quality in serum. Furthermore, we showed that miRNA of NGS-sufficient quality can be recovered from blood samples after >10 years of storage at -80 °C. This allows retrospective analyses of miRNAs from archived samples. PMID:27356979

  3. Optimization of proteomic sample preparation procedures for comprehensive protein characterization of pathogenic systems

    SciTech Connect

    Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.

    2008-12-19

    The elucidation of critical functional pathways employed by pathogens and hosts during an infectious cycle is both challenging and central to our understanding of infectious diseases. In recent years, mass spectrometry-based proteomics has been used as a powerful tool to identify key pathogenesis-related proteins and pathways. Despite the analytical power of mass spectrometry-based technologies, samples must be appropriately prepared to characterize the functions of interest (e.g. host-response to a pathogen or a pathogen-response to a host). The preparation of these protein samples requires multiple decisions about what aspect of infection is being studied, and it may require the isolation of either host and/or pathogen cellular material.

  4. Application of trajectory optimization techniques to upper atmosphere sampling flights using the F-15 Eagle aircraft

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1976-01-01

    Atmospheric sampling has been carried out by flights using an available high-performance supersonic aircraft. Altitude potential of an off-the-shelf F-15 aircraft is examined. It is shown that the standard F-15 has a maximum altitude capability in excess of 100,000 feet for routine flight operation by NASA personnel. This altitude is well in excess of the minimum altitudes which must be achieved for monitoring the possible growth of suspected aerosol contaminants.

  5. Size exclusion chromatography for analyses of fibroin in silk: optimization of sampling and separation conditions

    NASA Astrophysics Data System (ADS)

    Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna

    2014-02-01

    A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool which would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which otherwise damages chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) water solutions of fibroin improved the quality of the chromatograms.

  6. Program Design Analysis using BEopt Building Energy Optimization Software: Defining a Technology Pathway Leading to New Homes with Zero Peak Cooling Demand; Preprint

    SciTech Connect

    Anderson, R.; Christensen, C.; Horowitz, S.

    2006-08-01

    An optimization method based on the evaluation of a broad range of different combinations of specific energy efficiency and renewable-energy options is used to determine the least-cost pathway to the development of new homes with zero peak cooling demand. The optimization approach conducts a sequential search of a large number of possible option combinations and uses the most cost-effective alternatives to generate a least-cost curve to achieve home-performance levels ranging from a Title 24-compliant home to a home that uses zero net source energy on an annual basis. By evaluating peak cooling load reductions on the least-cost curve, it is then possible to determine the most cost-effective combination of energy efficiency and renewable-energy options that both maximize annual energy savings and minimize peak-cooling demand.
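
    The sequential-search idea can be illustrated with a toy greedy pass that repeatedly adds the option with the best marginal saving per unit cost and records the resulting least-cost curve; the option names and numbers are purely illustrative, and this is not the BEopt engine itself.

        # Toy sequential-search sketch: repeatedly add the option with the best
        # marginal energy saving per unit cost, tracing out a least-cost curve.
        # Option names and numbers are purely illustrative.
        options = {
            # name: (annual source-energy saving, installed cost in dollars)
            "attic_insulation_R60": (4.0, 1500.0),
            "low_e_windows":        (3.0, 4000.0),
            "high_SEER_AC":         (5.0, 3500.0),
            "pv_2kW":               (9.0, 8000.0),
            "duct_sealing":         (2.0,  600.0),
        }

        def least_cost_curve(options):
            remaining = dict(options)
            curve, cost_so_far, saving_so_far = [(0.0, 0.0)], 0.0, 0.0
            while remaining:
                # most cost-effective remaining option: saving per dollar
                name = max(remaining, key=lambda k: remaining[k][0] / remaining[k][1])
                saving, cost = remaining.pop(name)
                cost_so_far += cost
                saving_so_far += saving
                curve.append((cost_so_far, saving_so_far))
                print(f"add {name:21s} -> cost {cost_so_far:7.0f}, saving {saving_so_far:4.1f}")
            return curve

        least_cost_curve(options)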

  7. CHIP: Defining a dimension of the vulnerability to attention deficit hyperactivity disorder (ADHD) using sibling and individual data of children in a community-based sample.

    PubMed

    Curran, Sarah; Rijsdijk, Fruhling; Martin, Neilson; Marusic, Katja; Asherson, Philip; Taylor, Eric; Sham, Pak

    2003-05-15

    We are taking a quantitative trait approach to the molecular genetic study of attention deficit hyperactivity disorder (ADHD) using a truncated case-control association design. An epidemiological sample of children aged 5 to 15 years was evaluated for symptoms of ADHD using a parent rating scale. Individuals scoring high or low on this scale were selected for further investigation with additional questionnaires and DNA analysis. Data in studies like this are typically complicated. In the study reported on here, individuals have from 1 to 4 questionnaires completed on them and the sample is composed of a mixture of singletons and siblings. In this paper, we describe how we used a genetic hierarchical model to fit our data, together with a twin dataset, in order to estimate genetic factor loadings. Correlation matrices were estimated for our data using a maximum likelihood approach to account for missing data. We describe how we used these results to create a composite score, the heritability of which was estimated to be acceptably high using the twin dataset. This score measures a quantitative dimension onto which molecular genetic data will be mapped. PMID:12707944

  8. Defining Effective Teaching

    ERIC Educational Resources Information Center

    Layne, L.

    2012-01-01

    The author looks at the meaning of specific terminology commonly used in student surveys: "effective teaching." The research seeks to determine if there is a difference in how "effective teaching" is defined by those taking student surveys and those interpreting the results. To investigate this difference, a sample group of professors and students…

  9. The optimization of incident angles of low-energy oxygen ion beams for increasing sputtering rate on silicon samples

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Yoshida, N.; Takahashi, M.; Tomita, M.

    2008-12-01

    In order to determine an appropriate incident angle of low-energy (350-eV) oxygen ion beam for achieving the highest sputtering rate without degradation of depth resolution in SIMS analysis, a delta-doped sample was analyzed with incident angles from 0° to 60° without oxygen bleeding. As a result, 45° incidence was found to be the best analytical condition, and it was confirmed that surface roughness did not occur on the sputtered surface at 100-nm depth by using AFM. By applying the optimized incident angle, sputtering rate becomes more than twice as high as that of the normal incident condition.

  10. FM Reconstruction of Non-Uniformly Sampled Protein NMR Data at Higher Dimensions and Optimization by Distillation

    PubMed Central

    Hyberts, Sven G.; Frueh, Dominique P.; Arthanari, Haribabu; Wagner, Gerhard

    2010-01-01

    Non-uniform sampling (NUS) enables recording of multidimensional NMR data at resolutions matching the resolving power of modern instruments without using excessive measuring time. However, in order to obtain satisfying results, efficient reconstruction methods are needed. Here we describe an optimized version of the Forward Maximum entropy (FM) reconstruction method, which can reconstruct up to three indirect dimensions. For complex datasets, such as NOESY spectra, the performance of the procedure is enhanced by a distillation procedure that reduces artifacts stemming from intense peaks. PMID:19705283
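
    As a generic illustration of why sparse spectra can be recovered from a fraction of the indirect-dimension points, the sketch below reconstructs a non-uniformly sampled 1-D signal by iterative hard thresholding; this is a different, generic technique, not the Forward Maximum entropy (FM) method described above.

        # Generic reconstruction of a non-uniformly sampled 1-D signal by
        # iterative hard thresholding. This is NOT the FM method; it only
        # illustrates recovery of a sparse spectrum from ~25% of the points.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 256
        t = np.arange(n)
        fid = (np.exp(2j * np.pi * 0.11 * t)
               + 0.6 * np.exp(2j * np.pi * 0.31 * t)) * np.exp(-t / 120)

        mask = np.zeros(n, dtype=bool)
        mask[rng.choice(n, size=n // 4, replace=False)] = True   # keep 25% of points

        def reconstruct(measured, mask, iters=200, thresh=0.02):
            x = np.zeros_like(measured)
            for _ in range(iters):
                x[mask] = measured[mask]                 # enforce data consistency
                spec = np.fft.fft(x)
                spec = np.where(np.abs(spec) > thresh * np.abs(spec).max(), spec, 0)
                x = np.fft.ifft(spec)
            return np.fft.fft(x)

        recon = reconstruct(fid * mask, mask)
        full = np.fft.fft(fid)
        corr = np.corrcoef(np.abs(full), np.abs(recon))[0, 1]
        print(f"correlation between full and NUS-reconstructed spectra: {corr:.3f}")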

  11. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil, and vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer to be estimated better than point measurements; however, they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundances, and distributions of measuring points at the scales of an arable field, a wetland, and a commune (areas of 0.01, 1, and 140 km2, respectively), taking into account different SM states. Mean values of SM were only weakly sensitive to changes in the number and arrangement of sampling points, whereas parameters describing the dispersion responded more strongly. Spatial analysis showed autocorrelations of SM whose lengths depended on the number and distribution of points within the adopted grids. Directional analysis revealed differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area are reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each; the standard error and range of spatial variability show little change as the number of samples increases above this figure. Gravimetric method
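
    The spatial-autocorrelation analysis described can be illustrated with a generic empirical semivariogram computed from point soil-moisture measurements, from which a correlation (range) length can be read off; the coordinates and moisture values below are synthetic placeholders, not the authors' data.

        # Generic empirical semivariogram for point soil-moisture data:
        # gamma(h) = 0.5 * mean[(z_i - z_j)^2] over point pairs separated by ~h.
        # Locations and moisture values are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 1000, size=(60, 2))                 # locations in metres
        sm = 0.20 + 0.05 * np.sin(xy[:, 0] / 200.0) + rng.normal(0, 0.01, 60)

        def semivariogram(xy, z, lag_edges):
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            sq = 0.5 * (z[:, None] - z[None, :]) ** 2
            iu = np.triu_indices(z.size, k=1)                   # each pair counted once
            d, sq = d[iu], sq[iu]
            centers, gamma = [], []
            for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
                in_bin = (d >= lo) & (d < hi)
                if in_bin.any():
                    centers.append(0.5 * (lo + hi))
                    gamma.append(sq[in_bin].mean())
            return np.array(centers), np.array(gamma)

        lags, gamma = semivariogram(xy, sm, np.arange(0, 600, 50))
        for h, g in zip(lags, gamma):
            print(f"lag {h:5.0f} m   gamma {g:.5f}")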

  12. Optimal sample preparation to characterize corrosion in historical photographs with analytical TEM.

    PubMed

    Grieten, Eva; Caen, Joost; Schryvers, Dominique

    2014-10-01

    An alternative focused ion beam preparation method is used for sampling historical photographs containing metallic nanoparticles in a polymer matrix. We use the preparation steps of classical ultra-microtomy with an alternative final sectioning with a focused ion beam. Transmission electron microscopy techniques show that the lamella has a uniform thickness, which is an important factor for analytical transmission electron microscopy. Furthermore, the method maintains the spatial distribution of nanoparticles in the soft matrix. The results are compared with traditional preparation techniques such as ultra-microtomy and classical focused ion beam milling. PMID:25256650

  13. Optimal Media for Use in Air Sampling To Detect Cultivable Bacteria and Fungi in the Pharmacy

    PubMed Central

    Joseph, Riya Augustin; Le, Theresa V.; Trevino, Ernest A.; Schaeffer, M. Frances; Vance, Paula H.

    2013-01-01

    Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl. PMID:23903551

  14. Optimal media for use in air sampling to detect cultivable bacteria and fungi in the pharmacy.

    PubMed

    Weissfeld, Alice S; Joseph, Riya Augustin; Le, Theresa V; Trevino, Ernest A; Schaeffer, M Frances; Vance, Paula H

    2013-10-01

    Current guidelines for air sampling for bacteria and fungi in compounding pharmacies require the use of a medium for each type of organism. U.S. Pharmacopeia (USP) chapter <797> (http://www.pbm.va.gov/linksotherresources/docs/USP797PharmaceuticalCompoundingSterileCompounding.pdf) calls for tryptic soy agar with polysorbate and lecithin (TSApl) for bacteria and malt extract agar (MEA) for fungi. In contrast, the Controlled Environment Testing Association (CETA), the professional organization for individuals who certify hoods and clean rooms, states in its 2012 certification application guide (http://www.cetainternational.org/reference/CAG-009v3.pdf?sid=1267) that a single-plate method is acceptable, implying that it is not always necessary to use an additional medium specifically for fungi. In this study, we reviewed 5.5 years of data from our laboratory to determine the utility of TSApl versus yeast malt extract agar (YMEA) for the isolation of fungi. Our findings, from 2,073 air samples obtained from compounding pharmacies, demonstrated that the YMEA yielded >2.5 times more fungal isolates than TSApl. PMID:23903551

  15. Optimal bandpass sampling strategies for enhancing the performance of a phase noise meter

    NASA Astrophysics Data System (ADS)

    Angrisani, Leopoldo; Schiano Lo Moriello, Rosario; D'Arco, Mauro; Greenhall, Charles

    2008-10-01

    Measurement of phase noise affecting oscillators or clocks is a fundamental practice whenever the need of a reliable time base is of primary concern. In spite of the number of methods or techniques either available in the literature or implemented as personalities in general-purpose equipment, very accurate measurement results can be gained only through expensive, dedicated instruments. To offer a cost-effective alternative, the authors have already realized a DSP-based phase noise meter, capable of assuring good performance and real-time operation. The meter, however, suffers from a reduced frequency range (about 250 kHz), and needs an external time base for input signal digitization. To overcome these drawbacks, the authors propose the use of bandpass sampling strategies to enlarge the frequency range, and of an internal time base to make standalone operation much more feasible. After some remarks on the previous version of the meter, key features of the adopted time base and proposed sampling strategies are described in detail. Results of experimental tests, carried out on sinusoidal signals provided both by function and arbitrary waveform generators, are presented and discussed; evidence of the meter's reliability and efficacy is finally given.
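
    The bandpass (undersampling) constraint that such sampling strategies rely on can be sketched as follows; the band edges used in the example are illustrative and unrelated to the meter described above.

        # Classic bandpass-sampling constraint: for a signal confined to
        # [f_lo, f_hi], alias-free undersampling rates fs satisfy
        #   2*f_hi / m  <=  fs  <=  2*f_lo / (m - 1),   m = 1 .. floor(f_hi / B).
        # The band edges below are illustrative, not the meter's actual band.
        def valid_bandpass_rates(f_lo, f_hi):
            bw = f_hi - f_lo
            ranges = []
            for m in range(1, int(f_hi // bw) + 1):
                lo = 2.0 * f_hi / m
                hi = 2.0 * f_lo / (m - 1) if m > 1 else float("inf")
                if lo <= hi:
                    ranges.append((m, lo, hi))
            return ranges

        for m, lo, hi in valid_bandpass_rates(f_lo=9.5e6, f_hi=10.5e6):
            hi_txt = "inf" if hi == float("inf") else f"{hi / 1e6:.3f}"
            print(f"zone m={m}: fs in [{lo / 1e6:.3f}, {hi_txt}] MHz")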

  16. A tale of two retinal domains: near-optimal sampling of achromatic contrasts in natural scenes through asymmetric photoreceptor distribution.

    PubMed

    Baden, Tom; Schubert, Timm; Chang, Le; Wei, Tao; Zaichuk, Mariana; Wissinger, Bernd; Euler, Thomas

    2013-12-01

    For efficient coding, sensory systems need to adapt to the distribution of signals to which they are exposed. In vision, natural scenes above and below the horizon differ in the distribution of chromatic and achromatic features. Consequently, many species differentially sample light in the sky and on the ground using an asymmetric retinal arrangement of short- (S, "blue") and medium- (M, "green") wavelength-sensitive photoreceptor types. Here, we show that in mice this photoreceptor arrangement provides for near-optimal sampling of natural achromatic contrasts. Two-photon population imaging of light-driven calcium signals in the synaptic terminals of cone-photoreceptors expressing a calcium biosensor revealed that S, but not M cones, preferred dark over bright stimuli, in agreement with the predominance of dark contrasts in the sky but not on the ground. Therefore, the different cone types do not only form the basis of "color vision," but in addition represent distinct (achromatic) contrast-selective channels. PMID:24314730

  17. Design Of A Sorbent/desorbent Unit For Sample Pre-treatment Optimized For QMB Gas Sensors

    SciTech Connect

    Pennazza, G.; Cristina, S.; Santonico, M.; Martinelli, E.; Di Natale, C.; D'Amico, A.; Paolesse, R.

    2009-05-23

    Sample pre-treatment is a typical procedure in analytical chemistry aimed at improving the performance of analytical systems. In the case of gas sensors, sample pre-treatment systems are devised to overcome sensor limitations in terms of selectivity and sensitivity. For this purpose, systems based on adsorption and desorption processes driven by temperature conditioning have been illustrated. The involvement of large temperature ranges may pose problems when QMB gas sensors are used. In this work, a study of such influences on the overall sensing properties of QMB sensors is presented. The results allowed the design of a pre-treatment unit coupled with a QMB gas sensor array optimized to operate in a suitable temperature range. The performance of the system is illustrated by the partial separation of water vapor from a gas mixture and by a substantial improvement of the signal-to-noise ratio.

  18. Optimization of a low-cost defined medium for alcoholic fermentation--a case study for potential application in bioethanol production from industrial wastewaters.

    PubMed

    Comelli, Raúl N; Seluy, Lisandro G; Isla, Miguel A

    2016-01-25

    In bioethanol production processes, the media composition has an impact on product concentration, yields and the overall process economics. The main purpose of this research was to develop a low-cost mineral-based supplement for successful alcoholic fermentation in an attempt to provide an economically feasible alternative to produce bioethanol from novel sources, for example, sugary industrial wastewaters. Statistical experimental designs were used to select essential nutrients for yeast fermentation, and its optimal concentrations were estimated by Response Surface Methodology. Fermentations were performed on synthetic media inoculated with 2.0 g L(-1) of yeast, and the evolution of biomass, sugar, ethanol, CO2 and glycerol were monitored over time. A mix of salts [10.6 g L(-1) (NH4)2HPO4; 6.4 g L(-1) MgSO4·7H2O and 7.5 mg L(-1) ZnSO4·7H2O] was found to be optimal. It led to the complete fermentation of the sugars in less than 12h with an average ethanol yield of 0.42 g ethanol/g sugar. A general C-balance indicated that no carbonaceous compounds different from biomass, ethanol, CO2 or glycerol were produced in significant amounts in the fermentation process. Similar results were obtained when soft drink wastewaters were tested to evaluate the potential industrial application of this supplement. The ethanol yields were very close to those obtained when yeast extract was used as the supplement, but the optimized mineral-based medium is six times cheaper, which favorably impacts the process economics and makes this supplement more attractive from an industrial viewpoint. PMID:26391675
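
    The Response Surface Methodology step can be sketched generically: fit a second-order model to a small central-composite-style data set and solve for its stationary point; the coded factor levels and responses below are synthetic placeholders, not the published medium data.

        # Generic RSM sketch: fit the two-factor quadratic model
        #   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
        # by least squares and locate its stationary point. Coded factor levels
        # and responses are synthetic placeholders, not the published data.
        import numpy as np

        # x1, x2 could stand for (NH4)2HPO4 and MgSO4.7H2O in coded units
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
                      [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41]], dtype=float)
        y = np.array([0.30, 0.36, 0.33, 0.40, 0.42, 0.41, 0.31, 0.38, 0.34, 0.37])

        def design_matrix(X):
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

        b, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
        # stationary point: set the gradient of the fitted surface to zero
        H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
        x_star = np.linalg.solve(H, -b[1:3])
        print("coefficients:", np.round(b, 3))
        print("stationary point (coded units):", np.round(x_star, 2))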

  19. Fractional Factorial Design of MALDI-TOF-MS Sample Preparations for the Optimized Detection of Phospholipids and Acylglycerols.

    PubMed

    AlMasoud, Najla; Correa, Elon; Trivedi, Drupad K; Goodacre, Royston

    2016-06-21

    Matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF-MS) has successfully been used for the analysis of high molecular weight compounds, such as proteins and nucleic acids. By contrast, analysis of low molecular weight compounds with this technique has been less successful due to interference from matrix peaks which have a similar mass to the target analyte(s). Recently, a variety of modified matrices and matrix additives have been used to overcome these limitations. An increased interest in lipid analysis arose from the feasibility of correlating these components with many diseases, e.g. atherosclerosis and metabolic dysfunctions. Lipids have a wide range of chemical properties making their analysis difficult with traditional methods. MALDI-TOF-MS shows excellent potential for sensitive and rapid analysis of lipids, and therefore this study focuses on computational-analytical optimization of the analysis of five lipids (4 phospholipids and 1 acylglycerol) in complex mixtures using MALDI-TOF-MS with fractional factorial design (FFD) and Pareto optimality. Five different experimental factors were investigated using FFD which reduced the number of experiments performed by identifying 720 key experiments from a total of 8064 possible analyses. Factors investigated included the following: matrices, matrix preparations, matrix additives, additive concentrations, and deposition methods. This led to a significant reduction in time and cost of sample analysis with near optimal conditions. We discovered that the key factors used to produce high quality spectra were the matrix and use of appropriate matrix additives. PMID:27228355
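
    The Pareto-optimality step used to shortlist conditions can be sketched as a simple non-dominance filter; the objectives and condition names below are illustrative placeholders, not the study's factors or results.

        # Pareto-optimality filter: keep only sample-preparation conditions that
        # are not dominated in every objective. Conditions and objective values
        # are illustrative placeholders (peaks detected, mean signal-to-noise).
        def pareto_front(conditions):
            """conditions: {name: (obj1, obj2, ...)}; all objectives maximized."""
            front = []
            for a, fa in conditions.items():
                dominated = any(
                    all(fb[i] >= fa[i] for i in range(len(fa))) and fb != fa
                    for b, fb in conditions.items() if b != a
                )
                if not dominated:
                    front.append(a)
            return front

        conditions = {
            "DHB + TFA, dried droplet":   (5, 120.0),
            "9-AA, sandwich":             (4, 150.0),
            "CHCA + NaCl, dried droplet": (5,  90.0),
            "DHB, thin layer":            (3,  60.0),
        }
        print(pareto_front(conditions))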

  20. Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates

    PubMed Central

    Smiley Evans, Tierra; Barry, Peter A.; Gilardi, Kirsten V.; Goldstein, Tracey; Deere, Jesse D.; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R.; Wolking, David; Smith, Brett; Mazet, Jonna A. K.; Johnson, Christine K.

    2015-01-01

    Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity and in many tropical regions, close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which through bite wounds and discarded food, could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible. PMID:26046911

  1. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    SciTech Connect

    Chen, DI-WEN

    2001-11-21

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often come days post-incident and rely on hazardous manned air-vehicle sampling. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after a plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives of an emergency situation within restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program can proceed is the collection of statistically significant data in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images based on existing "hard" experimental data and "soft" preliminary transport modeling results for the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  2. Defining Tiger Parenting in Chinese Americans

    PubMed Central

    Kim, Su Yeong

    2016-01-01

    “Tiger” parenting, as described by Amy Chua [2011], has instigated scholarly discourse on this phenomenon and its possible effects on families. Our eight-year longitudinal study, published in the Asian American Journal of Psychology [Kim, Wang, Orozco-Lapray, Shen, & Murtuza, 2013b], demonstrates that tiger parenting is not a common parenting profile in a sample of 444 Chinese American families. Tiger parenting also does not relate to superior academic performance in children. In fact, the best developmental outcomes were found among children of supportive parents. We examine the complexities around defining tiger parenting by reviewing classical literature on parenting styles and scholarship on Asian American parenting, along with Amy Chua’s own description of her parenting method, to develop, define, and categorize variability in parenting in a sample of Chinese American families. We also provide evidence that supportive parenting is important for the optimal development of Chinese American adolescents. PMID:27182075

  3. Maternal death inquiry and response in India - the impact of contextual factors on defining an optimal model to help meet critical maternal health policy objectives

    PubMed Central

    2011-01-01

    Background Maternal death reviews have been utilized in several countries as a means of identifying social and health care quality issues affecting maternal survival. From 2005 to 2009, a standardized community-based maternal death inquiry and response initiative was implemented in eight Indian states with the aim of addressing critical maternal health policy objectives. However, state-specific contextual factors strongly influenced the effort's success. This paper examines the impact and implications of the contextual factors. Methods We identified community, public health systems and governance related contextual factors thought to affect the implementation, utilization and up-scaling of the death inquiry process. Then, according to selected indicators, we documented the contextual factors' presence and their impact on the process' success in helping meet critical maternal health policy objectives in four districts of Rajasthan, Madhya Pradesh and West Bengal. Based on this assessment, we propose an optimal model for conducting community-based maternal death inquiries in India and similar settings. Results The death inquiry process led to increases in maternal death notification and investigation whether civil society or government took charge of these tasks, stimulated sharing of the findings in multiple settings and contributed to the development of numerous evidence-based local, district and statewide maternal health interventions. NGO inputs were essential where communities, public health systems and governance were weak and boosted effectiveness in stronger settings. Public health systems participation was enabled by responsive and accountable governance. Communities participated most successfully through India's established local governance Panchayat Raj Institutions. In one instance this led to the development of a multi-faceted intervention well-integrated at multiple levels. Conclusions The impact of several contextual factors on the death inquiry

  4. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    NASA Astrophysics Data System (ADS)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-01-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e. equivalent to ~ 8 ng of amino sugar carbon. Our results obtained from δ13C analysis of amino sugars in selected marine sediment samples showed that muramic acid had isotopic imprints from indigenous bacterial activities, whereas glucosamine and galactosamine were mainly derived from organic detritus. The analysis of stable carbon isotopic compositions of amino sugars opens a promising window for the investigation of microbial metabolisms in marine sediments and the deep marine biosphere.

  5. Optimization of separation and online sample concentration of N,N-dimethyltryptamine and related compounds using MEKC.

    PubMed

    Wang, Man-Juing; Tsai, Chih-Hsin; Hsu, Wei-Ya; Liu, Ju-Tsung; Lin, Cheng-Huang

    2009-02-01

    The optimal separation conditions and online sample concentration for N,N-dimethyltryptamine (DMT) and related compounds, including alpha-methyltryptamine (AMT), 5-methoxy-AMT (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DiPT), 5-methoxy-DMT (5-MeO-DMT), and 5-methoxy-N,N-DiPT (5-MeO-DiPT), using micellar EKC (MEKC) with UV-absorbance detection are described. The LODs (S/N = 3) for MEKC ranged from 1.0 to 1.8 microg/mL. Use of online sample concentration methods, including sweeping-MEKC and cation-selective exhaustive injection-sweep-MEKC (CSEI-sweep-MEKC) improved the LODs to 2.2-8.0 ng/mL and 1.3-2.7 ng/mL, respectively. In addition, the order of migration of the nine tryptamines was investigated. A urine sample, obtained by spiking urine collected from a human volunteer with DMT, was also successfully examined. PMID:19137528

  6. Optimization of a derivatization-solid-phase microextraction method for the analysis of thirty phenolic pollutants in water samples.

    PubMed

    Llompart, Maria; Lourido, Mercedes; Landin, Pedro; García-Jares, Carmen; Cela, Rafael

    2002-07-19

    Solid-phase microextraction (SPME) coupled to gas chromatography-mass spectrometry has been applied to the extraction of 30 phenol derivatives from water samples. Analytes were acetylated in situ and headspace solid-phase microextraction was performed. Different parameters affecting extraction efficiency were studied. Optimization of temperature, type of microextraction fiber, and volume of sample was carried out by means of a mixed-level categorical experimental design, which allows the study of main effects and second-order interactions. Five different fiber coatings were employed in this study; extraction temperature was studied at three levels. Both factors, fiber coating and extraction temperature, were important for achieving high sensitivity. Moreover, these parameters showed a significant interaction, which indicates the different kinetic behavior of the SPME process when different coatings are used. It was found that 75 microm carboxen-polydimethylsiloxane and 100 microm polydimethylsiloxane fibers yielded the highest responses. The former is especially appropriate for phenol, methylphenols, and lightly chlorinated chlorophenols, and the latter for highly chlorinated phenols. The two methods proposed in this study showed good linearity and precision. Practical applicability was demonstrated through the analysis of a real sewage water sample contaminated with phenols. PMID:12187964
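
    A minimal sketch of a mixed-level categorical design of the kind described (five fiber coatings crossed with three temperatures), with main effects and the fiber-temperature interaction estimated from cell means, is given below; the responses are synthetic placeholders, not the published peak areas.

        # Mixed-level categorical design sketch: five fiber coatings crossed with
        # three extraction temperatures, with main effects and the fiber-by-
        # temperature interaction estimated from cell means. The responses are
        # synthetic placeholders, not the published peak areas.
        import itertools
        import random

        fibers = ["CAR-PDMS_75um", "PDMS_100um", "PDMS-DVB", "PA", "CW-DVB"]
        temps = [30, 60, 90]                       # extraction temperature, degC
        random.seed(1)

        runs = list(itertools.product(fibers, temps))                 # full 5 x 3 design
        response = {run: random.uniform(0.5, 2.0) for run in runs}    # placeholder areas

        grand = sum(response.values()) / len(response)
        fiber_eff = {f: sum(response[(f, t)] for t in temps) / len(temps) - grand
                     for f in fibers}
        temp_eff = {t: sum(response[(f, t)] for f in fibers) / len(fibers) - grand
                    for t in temps}
        interaction = {run: response[run] - (grand + fiber_eff[run[0]] + temp_eff[run[1]])
                       for run in runs}

        print("fiber main effects:", {f: round(e, 2) for f, e in fiber_eff.items()})
        print("temperature main effects:", {t: round(e, 2) for t, e in temp_eff.items()})
        print("largest interaction cell:", max(interaction, key=lambda r: abs(interaction[r])))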

  7. Tuberculosis and mass gatherings-opportunities for defining burden, transmission risk, and the optimal surveillance, prevention, and control measures at the annual Hajj pilgrimage.

    PubMed

    Zumla, Alimuddin; Saeed, Abdulaziz Bin; Alotaibi, Badriah; Yezli, Saber; Dar, Osman; Bieh, Kingsley; Bates, Matthew; Tayeb, Tamara; Mwaba, Peter; Shafi, Shuja; McCloskey, Brian; Petersen, Eskild; Azhar, Esam I

    2016-06-01

    Tuberculosis (TB) is now the most common infectious cause of death worldwide. In 2014, an estimated 9.6 million people developed active TB. There were an estimated three million people with active TB including 360000 with multidrug-resistant TB (MDR-TB) who were not diagnosed, and such people continue to fuel TB transmission in the community. Accurate data on the actual burden of TB and the transmission risk associated with mass gatherings are scarce and unreliable due to the small numbers studied and methodological issues. Every year, an estimated 10 million pilgrims from 184 countries travel to the Kingdom of Saudi Arabia (KSA) to perform the Hajj and Umrah pilgrimages. A large majority of pilgrims come from high TB burden and MDR-TB endemic areas and thus many may have undiagnosed active TB, sub-clinical TB, and latent TB infection. The Hajj pilgrimage provides unique opportunities for the KSA and the 184 countries from which pilgrims originate, to conduct high quality priority research studies on TB under the remit of the Global Centre for Mass Gatherings Medicine. Research opportunities are discussed, including those related to the definition of the TB burden, transmission risk, and the optimal surveillance, prevention, and control measures at the annual Hajj pilgrimage. The associated data are required to develop international recommendations and guidelines for TB management and control at mass gathering events. PMID:26873277

  8. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design

    PubMed Central

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges. PMID:27458364
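
    The adaptive design loop sketched above (online Bayesian model comparison driving the choice of the next stimulus) can be illustrated with a toy example. The sketch below is not the authors' implementation: the two Bernoulli response models, the stimulus grid, and the use of expected information gain as the design criterion are all assumptions introduced purely for illustration.

```python
# Toy sketch of ASAP-style adaptive design: two hypothetical models of a binary
# response, a stimulus grid, and expected information gain as design criterion.
import numpy as np

rng = np.random.default_rng(0)
stimuli = np.linspace(-2.0, 2.0, 21)          # candidate design parameters (assumed)

def p_resp(model, s):
    """Hypothetical response probability of each candidate model (not from the paper)."""
    return 1.0 / (1.0 + np.exp(-(s if model == 0 else 2.0 * s - 0.5)))

true_model = 0
log_post = np.log(np.array([0.5, 0.5]))       # prior over the two candidate models

for trial in range(30):
    post = np.exp(log_post - log_post.max()); post /= post.sum()
    gains = []
    for s in stimuli:
        p = np.array([p_resp(m, s) for m in (0, 1)])
        p_y1 = float(post @ p)                # marginal probability of response = 1
        mi = 0.0
        for lik, p_y in ((p, p_y1), (1.0 - p, 1.0 - p_y1)):
            post_y = post * lik / max(p_y, 1e-12)
            mi += p_y * np.sum(post_y * (np.log(post_y + 1e-12) - np.log(post + 1e-12)))
        gains.append(mi)                      # expected KL gain for stimulus s
    s_star = stimuli[int(np.argmax(gains))]   # optimal next stimulus
    y = rng.random() < p_resp(true_model, s_star)          # simulated observation
    lik = np.array([p_resp(m, s_star) if y else 1.0 - p_resp(m, s_star) for m in (0, 1)])
    log_post += np.log(lik)

post = np.exp(log_post - log_post.max()); post /= post.sum()
print("posterior probability of each model after 30 adaptive trials:", np.round(post, 3))
```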

  9. Selection of Specific Protein Binders for Pre-Defined Targets from an Optimized Library of Artificial Helicoidal Repeat Proteins (alphaRep)

    PubMed Central

    Chevrel, Anne; Graille, Marc; Fourati-Kammoun, Zaineb; Desmadril, Michel; van Tilbeurgh, Herman; Minard, Philippe

    2013-01-01

    We previously designed a new family of artificial proteins named αRep based on a subgroup of thermostable helicoidal HEAT-like repeats. We have now assembled a large optimized αRep library. In this library, the side chains at each variable position are not fully randomized but instead encoded by a distribution of codons based on the natural frequency of side chains of the natural repeats family. The library construction is based on a polymerization of micro-genes and therefore results in a distribution of proteins with a variable number of repeats. We improved the library construction process using a “filtration” procedure to retain only fully coding modules that were recombined to recreate sequence diversity. The final library, named Lib2.1, contains 1.7×10⁹ independent clones. Here, we used phage display to select, from the previously described library or from the new library, new specific αRep proteins binding to four different non-related predefined protein targets. Specific binders were selected in each case. The results show that binders of various sizes are selected, including relatively long sequences with up to 7 repeats. ITC-measured affinities vary, with Kd values ranging from the micromolar to the nanomolar range. The formation of complexes is associated with a significant thermal stabilization of the bound target protein. The crystal structures of two complexes between αRep proteins and their cognate targets were solved and show that the new interfaces are established by the variable surfaces of the repeated modules, as well as by the variable N-cap residues. These results suggest that the αRep library is a new and versatile source of tight and specific binding proteins with favorable biophysical properties. PMID:24014183

  10. Randomized Trial of Postoperative Adjuvant Therapy in Stage II and III Rectal Cancer to Define the Optimal Sequence of Chemotherapy and Radiotherapy: 10-Year Follow-Up

    SciTech Connect

    Kim, Tae-Won; Lee, Je-Hwan; Lee, Jung-Hee; Ahn, Jin-Hee; Kang, Yoon-Koo; Lee, Kyoo-Hyung; Yu, Chang-Sik; Kim, Jong-Hoon; Ahn, Seung-Do; Kim, Woo-Kun; Kim, Jin-Cheon; Lee, Jung-Shin

    2011-11-15

    Purpose: To determine the optimal sequence of postoperative adjuvant chemotherapy and radiotherapy in patients with Stage II or III rectal cancer. Methods and Materials: A total of 308 patients were randomized to early (n = 155) or late (n = 153) radiotherapy (RT). Treatment included eight cycles of chemotherapy, consisting of fluorouracil 375 mg/m²/day and leucovorin 20 mg/m²/day, at 4-week intervals, and pelvic radiotherapy of 45 Gy in 25 fractions. Radiotherapy started on Day 1 of the first chemotherapy cycle in the early RT arm and on Day 1 of the third chemotherapy cycle in the late RT arm. Results: At a median follow-up of 121 months for surviving patients, disease-free survival (DFS) at 10 years was not statistically significantly different between the early and late RT arms (71% vs. 63%; p = 0.162). A total of 36 patients (26.7%) in the early RT arm and 49 (35.3%) in the late RT arm experienced recurrence (p = 0.151). Overall survival did not differ significantly between the two treatment groups. However, in patients who underwent abdominoperineal resection, the DFS rate at 10 years was significantly greater in the early RT arm than in the late RT arm (63% vs. 40%; p = 0.043). Conclusions: After the long-term follow-up duration, this study failed to show a statistically significant DFS advantage for early radiotherapy with concurrent chemotherapy after resection of Stage II and III rectal cancer. Our results, however, suggest that if neoadjuvant chemoradiation is not given before surgery, then early postoperative chemoradiation should be considered for patients requiring an abdominoperineal resection.

  11. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    SciTech Connect

    Yu, Yuqi; Wang, Jinan; Shao, Qiang; Zhu, Weiliang; Shi, Jiye (E-mail: Jiye.Shi@ucb.com)

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge computational resource requirements, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but nevertheless gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD greatly increases computational efficiency and thus expands the application of REMD simulation to larger protein systems.

  12. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    NASA Astrophysics Data System (ADS)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang; Shi, Jiye; Zhu, Weiliang

    2015-03-01

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge computational resource requirements, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but nevertheless gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD greatly increases computational efficiency and thus expands the application of REMD simulation to larger protein systems.

  13. An Optimized Adsorbent Sampling Combined to Thermal Desorption GC-MS Method for Trimethylsilanol in Industrial Environments

    PubMed Central

    Lee, Jae Hwan; Jia, Chunrong; Kim, Yong Doo; Kim, Hong Hyun; Pham, Tien Thang; Choi, Young Seok; Seo, Young Un; Lee, Ike Woo

    2012-01-01

    Trimethylsilanol (TMSOH) can cause damage to surfaces of scanner lenses in the semiconductor industry, and there is a critical need to measure and control airborne TMSOH concentrations. This study develops a thermal desorption (TD)-gas chromatography (GC)-mass spectrometry (MS) method for measuring trace-level TMSOH in occupational indoor air. Laboratory method optimization obtained the best performance when using a dual-bed tube configuration (100 mg of Tenax TA followed by 100 mg of Carboxen 569), n-decane as a solvent, and a TD temperature of 300°C. The optimized method demonstrated high recovery (87%), satisfactory precision (<15% for spiked amounts exceeding 1 ng), good linearity (R2 = 0.9999), a wide dynamic mass range (up to 500 ng), a low method detection limit (2.8 ng m−3 for a 20-L sample), and negligible losses for 3-4-day storage. The field study showed performance comparable to that in the laboratory and yielded the first measurements of TMSOH, ranging from 1.02 to 27.30 μg/m3, in the semiconductor industry. We suggest future development of real-time monitoring techniques for TMSOH and other siloxanes for better maintenance and control of scanner lenses in semiconductor wafer manufacturing. PMID:22966229

  14. Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry

    PubMed Central

    Die, Jose V.; Roman, Belen; Flores, Fernando; Rowland, Lisa J.

    2016-01-01

    The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data. PMID:27014296
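
    The idea of allocating replicates across workflow stages under a budget can be illustrated with a brute-force search. This is only a minimal sketch, not the script developed by the authors; the variance components, unit costs, budget, and the nested-design variance formula below are illustrative assumptions.

```python
# Sketch: choose replicate numbers (sampling, RT, qPCR) that minimize the variance
# of the experiment mean under a budget. Variance components, unit costs and the
# budget are invented for illustration.
from itertools import product

var = {"sampling": 0.20, "rt": 0.06, "qpcr": 0.01}    # assumed variance components
cost = {"sampling": 10.0, "rt": 2.0, "qpcr": 0.5}     # assumed cost per replicate
budget = 120.0

def plan_variance(n_s, n_rt, n_q):
    # Variance of the overall mean for a fully nested design
    return (var["sampling"] / n_s
            + var["rt"] / (n_s * n_rt)
            + var["qpcr"] / (n_s * n_rt * n_q))

def plan_cost(n_s, n_rt, n_q):
    return n_s * (cost["sampling"] + n_rt * (cost["rt"] + n_q * cost["qpcr"]))

feasible = (p for p in product(range(1, 11), repeat=3) if plan_cost(*p) <= budget)
best = min(feasible, key=lambda p: plan_variance(*p))
print("replicates (sampling, RT, qPCR):", best)
print("variance of mean: %.4f   cost: %.1f" % (plan_variance(*best), plan_cost(*best)))
```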

  15. Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry.

    PubMed

    Die, Jose V; Roman, Belen; Flores, Fernando; Rowland, Lisa J

    2016-01-01

    The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data. PMID:27014296

  16. College Chemistry and Piaget: Defining the Sample.

    ERIC Educational Resources Information Center

    Milakofsky, Louis; Bender, David S.

    Cognitive performance on "An Inventory of Piaget's Developmental Tasks" (IPDT) was related to the Scholastic Aptitude Tests and performance in both college chemistry lecture and laboratory classes. The IPDT is a valid and reliable 72-item, untimed, multiple-choice paper and pencil inventory with 19 subscales representing different Piagetian tasks.…

  17. Optimization and comparison of bottom-up proteomic sample preparation for early-stage Xenopus laevis embryos.

    PubMed

    Peuchen, Elizabeth H; Sun, Liangliang; Dovichi, Norman J

    2016-07-01

    Xenopus laevis is an important model organism in developmental biology. While there is a large literature on changes in the organism's transcriptome during development, the study of its proteome is at an embryonic state. Several papers have been published recently that characterize the proteome of X. laevis eggs and early-stage embryos; however, proteomic sample preparation optimizations have not been reported. Sample preparation is challenging because a large fraction (~90% by weight) of the egg or early-stage embryo is yolk. We compared three common protein extraction buffer systems, mammalian Cell-PE LB(TM) lysing buffer (NP40), sodium dodecyl sulfate (SDS), and 8 M urea, in terms of protein extraction efficiency and protein identifications. SDS extracts contained the highest concentration of proteins, but this extract was dominated by a high concentration of yolk proteins. In contrast, NP40 extracts contained ~30% of the protein concentration of the SDS extracts, but excelled in discriminating against yolk proteins, which resulted in more protein and peptide identifications. We then compared digestion methods using both SDS and NP40 extraction methods with one-dimensional reverse-phase liquid chromatography-tandem mass spectrometry (RPLC-MS/MS). NP40 coupled to a filter-aided sample preparation (FASP) procedure produced nearly twice the number of protein and peptide identifications compared to alternatives. When NP40-FASP samples were subjected to two-dimensional RPLC-ESI-MS/MS, a total of 5171 proteins and 38,885 peptides were identified from a single stage of embryos (stage 2), increasing the number of protein identifications by 23% in comparison to other traditional protein extraction methods. PMID:27137514

  18. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    NASA Astrophysics Data System (ADS)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-09-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.

  19. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    PubMed

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania; the disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of these, the primers were designed based on shared regions of the ITS1 gene among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. PMID:27288706

  20. Detection of the Inflammation Biomarker C-Reactive Protein in Serum Samples: Towards an Optimal Biosensor Formula

    PubMed Central

    Fakanya, Wellington M.; Tothill, Ibtisam E.

    2014-01-01

    The development of an electrochemical immunosensor for the biomarker, C-reactive protein (CRP), is reported in this work. CRP has been used to assess inflammation and is also used in a multi-biomarker system as a predictive biomarker for cardiovascular disease risk. A gold-based working electrode sensor was developed, and the types of electrode printing inks and ink curing techniques were then optimized. The electrodes with the best performance parameters were then employed for the construction of an immunosensor for CRP by immobilizing anti-human CRP antibody on the working electrode surface. A sandwich enzyme-linked immunosorbent assay (ELISA) was then constructed after sample addition by using anti-human CRP antibody labelled with horseradish peroxidase (HRP). The signal was generated by the addition of a mediator/substrate system composed of 3,3',5,5'-tetramethylbenzidine dihydrochloride (TMB) and hydrogen peroxide (H2O2). Measurements were conducted using chronoamperometry at −200 mV against an integrated Ag/AgCl reference electrode. A CRP limit of detection (LOD) of 2.2 ng·mL−1 was achieved in spiked serum samples, and performance agreement was obtained with reference to a commercial ELISA kit. The developed CRP immunosensor was able to detect a diagnostically relevant range of the biomarker in serum without the need for signal amplification using nanoparticles, paving the way for future development of a cardiac panel electrochemical point-of-care diagnostic device. PMID:25587427

  1. Optimization of the RNA extraction method for transcriptome studies of Salmonella inoculated on commercial raw chicken breast samples

    PubMed Central

    2011-01-01

    Background: There has been increased interest in the study of molecular survival mechanisms expressed by foodborne pathogens present on food surfaces. Determining genomic responses of these pathogens to antimicrobials is of particular interest since this helps to understand antimicrobial effects at the molecular level. Assessment of bacterial gene expression by transcriptomic analysis in response to these antimicrobials would aid prediction of the phenotypic behavior of the bacteria in the presence of antimicrobials. However, before transcriptional profiling approaches can be implemented routinely, it is important to develop an optimal method to consistently recover pathogens from the food surface and ensure optimal quality RNA so that the corresponding gene expression analysis represents the current response of the organism. Another consideration is to confirm that there is no interference from the "background" food or meat matrix that could mask the bacterial response. Findings: Our study involved developing a food model system using chicken breast meat inoculated with mid-log Salmonella cells. First, we tested the optimum number of Salmonella cells required on the poultry meat in order to extract high quality RNA. This was analyzed by inoculating 10-fold dilutions of Salmonella on the chicken samples followed by RNA extraction. Secondly, we tested the effect of two different bacterial cell recovery solutions, namely 0.1% peptone water and RNAprotect (Qiagen Inc.), on the RNA yield and purity. In addition, we compared the efficiency of sonication and bead beater methods to break the cells for RNA extraction. To check chicken nucleic acid interference on downstream Salmonella microarray experiments, both chicken and Salmonella cDNA labeled with different fluorescent dyes were mixed together and hybridized on a single Salmonella array. Results of this experiment did not show any cross-hybridization signal from the chicken nucleic acids. In addition, we demonstrated the

  2. Carbonyl compounds emitted by a diesel engine fuelled with diesel and biodiesel-diesel blends: Sampling optimization and emissions profile

    NASA Astrophysics Data System (ADS)

    Guarieiro, Lílian Lefol Nani; Pereira, Pedro Afonso de Paula; Torres, Ednildo Andrade; da Rocha, Gisele Olimpio; de Andrade, Jailson B.

    Biodiesel is emerging as a renewable fuel, hence becoming a promising alternative to fossil fuels. Biodiesel can form blends with diesel in any ratio, and thus could partially, or even totally, replace diesel fuel in diesel engines, which would bring a number of environmental, economic and social advantages. Although a number of studies are available on regulated substances, there is a lack of studies on unregulated substances, such as carbonyl compounds (CC), emitted during the combustion of biodiesel, biodiesel-diesel and/or ethanol-biodiesel-diesel blends. CC are a class of hazardous pollutants known to participate in photochemical smog formation. In this work a comparison was carried out between the two most widely used CC collection methods: C18 cartridges coated with an acid solution of 2,4-dinitrophenylhydrazine (2,4-DNPH) and impinger bottles filled with 2,4-DNPH solution. Sampling optimization was performed using a 2² factorial design tool. Samples were collected from the exhaust emissions of a diesel engine fuelled with biodiesel and operated on a steady-state dynamometer. In the central body of the factorial design, the average of the sum of CC concentrations collected using impingers was 33.2 ppmV, but it was only 6.5 ppmV for C18 cartridges. In addition, the relative standard deviation (RSD) was 4% for impingers and 37% for C18 cartridges. Clearly, the impinger system is able to collect CC more efficiently, with lower error, than the C18 cartridge system. Furthermore, propionaldehyde was hardly sampled by the C18 system at all. For these reasons, the impinger system was chosen in our study. The optimized sampling conditions applied throughout this study were: two serially connected impingers, each containing 10 mL of 2,4-DNPH solution, at a flow rate of 0.2 L min-1 for 5 min. A profile study of the C1-C4 vapor-phase carbonyl compound emissions was obtained from the exhaust of pure diesel (B0), pure biodiesel (B100) and biodiesel-diesel mixtures (B2, B5, B10, B20, B50, B

  3. Optimizing Frozen Sample Preparation for Laser Microdissection: Assessment of CryoJane Tape-Transfer System®.

    PubMed

    Golubeva, Yelena G; Smith, Roberta M; Sternberg, Lawrence R

    2013-01-01

    Laser microdissection is an invaluable tool in medical research that facilitates collecting specific cell populations for molecular analysis. Diversity of research targets (e.g., cancerous and precancerous lesions in clinical and animal research, cell pellets, rodent embryos, etc.) and varied scientific objectives, however, present challenges toward establishing standard laser microdissection protocols. Sample preparation is crucial for quality RNA, DNA and protein retrieval, and it often determines the feasibility of a laser microdissection project. The majority of microdissection studies in clinical and animal model research are conducted on frozen tissues containing native nucleic acids, unmodified by fixation. However, the variable morphological quality of frozen sections from tissues containing fat, collagen or delicate cell structures can limit or prevent successful harvest of the desired cell population via laser dissection. The CryoJane Tape-Transfer System®, a commercial device that improves cryosectioning outcomes on glass slides, has been reported to be superior for slide preparation and isolation of high-quality osteocyte RNA (frozen bone) during laser dissection. Considering the reported advantages of CryoJane for laser dissection on glass slides, we asked whether the system could also work with the plastic membrane slides used by UV laser based microdissection instruments, as these are better suited for collection of larger target areas. In an attempt to optimize laser microdissection slide preparation for tissues of different RNA stability and cryosectioning difficulty, we evaluated the CryoJane system for use with both glass (laser capture microdissection) and membrane (laser cutting microdissection) slides. We have established a sample preparation protocol for glass and membrane slides including manual coating of membrane slides with CryoJane solutions, cryosectioning, slide staining and dissection procedure, lysis and RNA extraction that facilitated

  4. Optimization of the SPME parameters and its online coupling with HPLC for the analysis of tricyclic antidepressants in plasma samples.

    PubMed

    Alves, Claudete; Fernandes, Christian; Dos Santos Neto, Alvaro José; Rodrigues, José Carlos; Costa Queiroz, Maria Eugênia; Lanças, Fernando Mauro

    2006-07-01

    Solid-phase microextraction (SPME)-liquid chromatography (LC) is used to analyze the tricyclic antidepressant drugs desipramine, imipramine, nortriptyline, amitriptyline, and clomipramine (internal standard) in plasma samples. Extraction conditions are optimized using a 2³ factorial design plus a central point to evaluate the influence of time, temperature, and matrix pH. A polydimethylsiloxane-divinylbenzene (60-microm film thickness) fiber is selected after the assessment of different types of coating. The chromatographic separation is performed using a C(18) column (150 x 4.6 mm, 5-microm particles), ammonium acetate buffer (0.05 mol/L, pH 5.50)-acetonitrile (55:45 v/v) with 0.1% of triethylamine as mobile phase, and UV-vis detection at 214 nm. Among the factorial design conditions evaluated, the best results are obtained at pH 11.0, a temperature of 30 degrees C, and an extraction time of 45 min. The proposed method, using a lab-made SPME-LC interface, allowed the determination of tricyclic antidepressants in plasma at therapeutic concentration levels. PMID:16884589

  5. Optimization and evaluation of metabolite extraction protocols for untargeted metabolic profiling of liver samples by UPLC-MS.

    PubMed

    Masson, Perrine; Alves, Alexessander Couto; Ebbels, Timothy M D; Nicholson, Jeremy K; Want, Elizabeth J

    2010-09-15

    A series of six protocols were evaluated for UPLC-MS based untargeted metabolic profiling of liver extracts in terms of reproducibility and number of metabolite features obtained. These protocols, designed to extract both polar and nonpolar metabolites, were based on (i) a two stage extraction approach or (ii) a simultaneous extraction in a biphasic mixture, employing different volumes and combinations of extraction and resuspension solvents. A multivariate statistical strategy was developed to allow comparison of the multidimensional variation between the methods. The optimal protocol for profiling both polar and nonpolar metabolites was found to be an aqueous extraction with methanol/water followed by an organic extraction with dichloromethane/methanol, with resuspension of the dried extracts in methanol/water before UPLC-MS analysis. This protocol resulted in a median CV of feature intensities among experimental replicates of <20% for aqueous extracts and <30% for organic extracts. These data demonstrate the robustness of the proposed protocol for extracting metabolites from liver samples and make it well suited for untargeted liver profiling in studies exploring xenobiotic hepatotoxicity and clinical investigations of liver disease. The generic nature of this protocol facilitates its application to other tissues, for example, brain or lung, enhancing its utility in clinical and toxicological studies. PMID:20715759

  6. Separation optimization of long porous-layer open-tubular columns for nano-LC-MS of limited proteomic samples.

    PubMed

    Rogeberg, Magnus; Vehus, Tore; Grutle, Lene; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa

    2013-09-01

    The single-run resolving power of current 10 μm id porous-layer open-tubular (PLOT) columns has been optimized. The columns studied had a poly(styrene-co-divinylbenzene) porous layer (~0.75 μm thickness). In contrast to many previous studies that have employed complex plumbing or compromising set-ups, SPE-PLOT-LC-MS was assembled without the use of additional hardware/noncommercial parts, additional valves or sample splitting. A comprehensive study of various flow rates, gradient times, and column length combinations was undertaken. Maximum resolution for <400 bar was achieved using a 40 nL/min flow rate, a 400 min gradient and an 8 m long column. We obtained a 2.3-fold increase in peak capacity compared to previous PLOT studies (950 versus previously obtained 400, when using peak width = 2σ definition). Our system also meets or surpasses peak capacities obtained in recent reports using nano-ultra-performance LC conditions or long silica monolith nanocolumns. Nearly 500 proteins (1958 peptides) could be identified in just one single injection of an extract corresponding to 1000 BxPC3 beta catenin (-/-) cells, and ~1200 and 2500 proteins in extracts of 10,000 and 100,000 cells, respectively, allowing detection of central members and regulators of the Wnt signaling pathway. PMID:23813982

  7. Synthesis of zinc oxide nanoparticles-chitosan for extraction of methyl orange from water samples: Cuckoo optimization algorithm-artificial neural network

    NASA Astrophysics Data System (ADS)

    Khajeh, Mostafa; Golzary, Ali Reza

    2014-10-01

    In this work, a zinc oxide nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L-1 was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.

  8. Synthesis of zinc oxide nanoparticles-chitosan for extraction of methyl orange from water samples: cuckoo optimization algorithm-artificial neural network.

    PubMed

    Khajeh, Mostafa; Golzary, Ali Reza

    2014-10-15

    In this work, a zinc oxide nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L(-1) was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples. PMID:24835725

  9. Optimized sampling strategy of Wireless sensor network for validation of remote sensing products over heterogeneous coarse-resolution pixel

    NASA Astrophysics Data System (ADS)

    Peng, J.; Liu, Q.; Wen, J.; Fan, W.; Dou, B.

    2015-12-01

    Coarse-resolution satellite albedo products are increasingly applied in geographical research because of their capability to characterize the spatio-temporal patterns of land surface parameters. In the long-term validation of coarse-resolution satellite products with ground measurements, the scale effect, i.e., the mismatch between point measurement and pixel observation, becomes the main challenge, particularly over heterogeneous land surfaces. Recent advances in Wireless Sensor Network (WSN) technologies offer an opportunity for validation using multi-point observations instead of single-point observation. The difficulty is to ensure the representativeness of the WSN in heterogeneous areas with a limited number of nodes. In this study, the objective is to develop a ground-based spatial sampling strategy that takes historical prior knowledge into account and avoids information redundancy between different sensor nodes. Taking albedo as an example, we first derive monthly local maps of albedo from 30-m HJ CCD images over a 3-year period. Second, we pick out candidate points from the areas with higher temporal stability, which helps to avoid transition or boundary areas. Then, the representativeness (r) of each candidate point is evaluated through correlation analysis between the point-specific and area-average time-series albedo vectors. The point with the highest r is selected as the new sensor point. Before selecting a new point, the vector component of the already selected points is removed from the vectors used in the subsequent correlation analysis. The selection procedure ceases once the integral representativeness (R) meets the accuracy requirement. The sampling method is adapted to both single-parameter and multi-parameter situations. Finally, it is shown that this sampling method has worked effectively in the optimized layout of the Huailai remote sensing station in China. The coarse resolution pixel covering this station could be
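
    A simplified version of the greedy, correlation-based node selection described above is sketched below. It is not the authors' code: the synthetic albedo time series, the 0.95 stopping threshold, and the choice to deflate only the area-average signal (rather than all candidate vectors) are assumptions made to keep the example short.

```python
# Sketch of greedy, correlation-based selection of sensor nodes from synthetic
# albedo time series; stopping threshold and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_months = 50, 36
seasonal = 0.05 * np.sin(np.linspace(0.0, 6.0 * np.pi, n_months))
series = rng.normal(0.2, 0.02, (n_points, n_months)) + seasonal   # candidate points
area_mean = series.mean(axis=0)                                   # pixel-average series

selected = []
residual = area_mean - area_mean.mean()
explained = 0.0                                # integral representativeness R
while explained < 0.95 and len(selected) < n_points:
    # representativeness r: correlation of each remaining candidate with the residual signal
    r = np.array([np.corrcoef(series[i], residual)[0, 1] if i not in selected else -np.inf
                  for i in range(n_points)])
    best = int(np.argmax(r))
    selected.append(best)
    # remove the component carried by the selected node (least-squares projection)
    x = series[best] - series[best].mean()
    residual = residual - (residual @ x) / (x @ x) * x
    explained = 1.0 - residual.var() / area_mean.var()

print("selected nodes:", selected)
print("integral representativeness R = %.3f" % explained)
```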

  10. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    NASA Technical Reports Server (NTRS)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
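
    The flavor of optimal sampling design for a pharmacokinetic model can be conveyed with a small D-optimality example; this is a hedged illustration, not the ADAPT II SAMPLE module. The one-compartment model, nominal parameter values, candidate time grid, and the choice of four samples are all assumptions.

```python
# Sketch of D-optimal sampling-time selection for a one-compartment IV-bolus model
# C(t) = (Dose/V) * exp(-(CL/V) * t); parameters and time grid are assumptions.
import numpy as np
from itertools import combinations

dose, V, CL, sigma = 1000.0, 20.0, 5.0, 0.1              # nominal values (assumed)
times = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24])      # candidate sampling times (h)

def sensitivities(t):
    ke = CL / V
    c = dose / V * np.exp(-ke * t)
    dc_dV = c * (-1.0 / V + CL * t / V ** 2)               # dC/dV
    dc_dCL = -c * t / V                                    # dC/dCL
    return np.array([dc_dV, dc_dCL])

def d_criterion(subset):
    # Determinant of the Fisher information for constant-variance observations
    J = np.array([sensitivities(t) for t in subset])
    return np.linalg.det(J.T @ J / sigma ** 2)

best = max(combinations(times, 4), key=d_criterion)        # choose 4 of the 9 times
print("D-optimal sampling times (h):", [float(t) for t in best])
```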

  11. Optimization and validation of enzyme-linked immunosorbent assay for the determination of endosulfan residues in food samples.

    PubMed

    Zhang, Yan; Liu, Jun W; Zheng, Wen J; Wang, Lei; Zhang, Hong Y; Fang, Guo Z; Wang, Shuo

    2008-02-01

    In this study, an enzyme-linked immunosorbent assay (ELISA) was optimized and applied to the determination of endosulfan residues in 20 different kinds of food commodities including vegetables, dry fruits, tea and meat. The limit of detection (IC(15)) was 0.8 microg kg(-1) and the sensitivity (IC(50)) was 5.3 microg kg(-1). Three simple extraction methods were developed, including shaking on the rotary shaker at 250 r min(-1) overnight, shaking on the rotary shaker for 1 h and thoroughly mixing for 2 min. Methanol was used as the extraction solvent in this study. The extracts were diluted in 0.5% fish skin gelatin (FG) in phosphate-buffered saline (PBS) at various dilutions in order to remove the matrix interference. For cabbage (purple and green), asparagus, Japanese green, Chinese cabbage, scallion, garland chrysanthemum, spinach and garlic, the extracts were diluted 10-fold; for carrots and tea, the extracts were diluted 15-fold and 900-fold, respectively. The extracts of celery, adzuki beans and chestnuts, were diluted 20-fold to avoid the matrix interference; ginger, vegetable soybean and peanut extracts were diluted 100-fold; mutton and chicken extracts were diluted 10-fold and for eel, the dilution was 40-fold. Average recoveries were 63.13-125.61%. Validation was conducted by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The results of this study will be useful to the wide application of an ELISA for the rapid determination of pesticides in food samples. PMID:18246504

  12. Experimental design applied to the optimization of pyrolysis and atomization temperatures for As measurement in water samples by GFAAS

    NASA Astrophysics Data System (ADS)

    Ávila, Akie K.; Araujo, Thiago O.; Couto, Paulo R. G.; Borges, Renata M. H.

    2005-10-01

    Research experimentation is used mainly when new methodologies are being developed or existing ones are being improved. The characteristics of any method depend on its factors or components. Experimental design and analysis techniques are used to improve the analytical conditions of methods, to reduce experimental labour with a minimum of tests, and to optimize the use of resources (reagents, time of analysis, availability of the equipment, operator time, etc.). These techniques are applied by identifying the variables (control factors) of a process that have the most influence on the response of the parameters of interest, by setting the values of the influential variables so that the variability of the response is minimized or the obtained value (quality parameter) is as close as possible to the nominal value, and by setting values so that the effects of uncontrollable variables are reduced. In this central composite design (CCD), four permanent modifiers (Pd, Ir, W and Rh) and one combined permanent modifier, W+Ir, were studied. The study selected two factors, pyrolysis and atomization temperature, at five different levels for all the possible combinations. The pyrolysis temperatures with different permanent modifiers varied from 600 °C to 1600 °C with hold times of 25 s, while atomization temperatures ranged between 1900 °C and 2280 °C. The characteristic masses for As were in the range of 31 pg to 81 pg. Assuming the best conditions obtained from the CCD, it was possible to estimate the measurement uncertainty of As determination in water samples. The results showed that, considering the main uncertainty sources, such as the repeatability of measurement inherent in the equipment, the calibration curve (which evaluates the adjustment of the mathematical model to the results) and the calibration standard concentrations, the values obtained were similar to international
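
    For readers unfamiliar with the design, a two-factor central composite design of the kind described above can be generated in a few lines. The sketch below is illustrative only; the mapping of the extreme coded levels to the stated temperature ranges and the number of centre points are assumptions, not the authors' exact design.

```python
# Sketch of a rotatable two-factor central composite design; the mapping of the
# axial points to the stated temperature extremes is an assumption.
import numpy as np

alpha = np.sqrt(2.0)                          # rotatable axial distance for 2 factors
factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
center = [(0, 0)] * 3                         # replicated centre points (assumed)

def decode(coded, low, high):
    # Map coded level so that +/-alpha corresponds to the extremes of the range
    mid, half = (low + high) / 2.0, (high - low) / 2.0
    return mid + coded * half / alpha

for x1, x2 in factorial + axial + center:
    pyro = decode(x1, 600.0, 1600.0)          # pyrolysis temperature, deg C
    atom = decode(x2, 1900.0, 2280.0)         # atomization temperature, deg C
    print("pyrolysis %7.1f C   atomization %7.1f C" % (pyro, atom))
```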

  13. A Closed-Loop Optimal Neural-Network Controller to Optimize Rotorcraft Aeromechanical Behaviour. Volume 2; Output from Two Sample Cases

    NASA Technical Reports Server (NTRS)

    Leyland, Jane Anne

    2001-01-01

    A closed-loop optimal neural-network controller technique was developed to optimize rotorcraft aeromechanical behaviour. This technique utilizes a neural-network scheme to provide a general non-linear model of the rotorcraft. A modern constrained optimisation method is used to determine and update the constants in the neural-network plant model as well as to determine the optimal control vector. Current data is read, weighted, and added to a sliding data window. When the specified maximum number of data sets allowed in the data window is exceeded, the oldest data set is discarded and the remaining data sets are re-weighted. This procedure provides at least four additional degrees-of-freedom in addition to the size and geometry of the neural-network itself with which to optimize the overall operation of the controller. These additional degrees-of-freedom are: 1. the maximum length of the sliding data window, 2. the frequency of neural-network updates, 3. the weighting of the individual data sets within the sliding window, and 4. the maximum number of optimisation iterations used for the neural-network updates.

  14. THE NIST-EPA INTERAGENCY AGREEMENT ON MEASUREMENTS AND STANDARDS IN AEROSOL CARBON: SAMPLING REGIONAL PM 2.5 FOR THE CHEMOMETRIC OPTIMIZATION OF THERMAL-OPTICAL ANALYSIS

    EPA Science Inventory

    Results from the NIST-EPA Interagency Agreement on Measurements and Standards in Aerosol Carbon: Sampling Regional PM2.5 for the Chemometric Optimization of Thermal-Optical Analysis Study will be presented at the American Association for Aerosol Research (AAAR) 24th Annual Confer...

  15. Selection, Optimization, and Compensation: The Structure, Reliability, and Validity of Forced-Choice versus Likert-Type Measures in a Sample of Late Adolescents

    ERIC Educational Resources Information Center

    Geldhof, G. John; Gestsdottir, Steinunn; Stefansson, Kristjan; Johnson, Sara K.; Bowers, Edmond P.; Lerner, Richard M.

    2015-01-01

    Intentional self-regulation (ISR) undergoes significant development across the life span. However, our understanding of ISR's development and function remains incomplete, in part because the field's conceptualization and measurement of ISR vary greatly. A key sample case involves how Baltes and colleagues' Selection, Optimization,…

  16. Optimization and calibration of atomic force microscopy sensitivity in terms of tip-sample interactions in high-order dynamic atomic force microscopy

    SciTech Connect

    Liu Yu; Guo Qiuquan; Nie Hengyong; Lau, W. M.; Yang Jun

    2009-12-15

    The mechanism of dynamic force modes has been successfully applied to many atomic force microscopy (AFM) applications, such as tapping mode and phase imaging. The high-order flexural vibration modes are a recent advancement of AFM dynamic force modes. AFM optical lever detection sensitivity plays a major role in dynamic force modes because it determines the accuracy in mapping surface morphology, distinguishing various tip-surface interactions, and measuring the strength of the tip-surface interactions. In this work, we have analyzed the optimization and calibration of the optical lever detection sensitivity for an AFM cantilever-tip ensemble vibrating in high-order flexural modes and simultaneously experiencing a wide range and variety of tip-sample interactions. It is found that the optimal detection sensitivity depends on the vibration mode, the ratio of the force constant of tip-sample interactions to the cantilever stiffness, as well as the incident laser spot size and its location on the cantilever. It is also found that the optimal detection sensitivity is less dependent on the strength of tip-sample interactions for high-order flexural modes relative to the fundamental mode, i.e., tapping mode. When the force constant of tip-sample interactions significantly exceeds the cantilever stiffness, the optimal detection sensitivity occurs only when the laser spot is located at a certain distance from the cantilever-tip end. Thus, in addition to the 'globally optimized detection sensitivity', the 'tip optimized detection sensitivity' is also determined. Finally, we have proposed a calibration method to determine the actual AFM detection sensitivity in high-order flexural vibration modes against the static end-load sensitivity that is obtained traditionally by measuring a force-distance curve on a hard substrate in the contact mode.

  17. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    NASA Astrophysics Data System (ADS)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for the data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best design among a set of candidates, an objective function is commonly evaluated that measures the expected impact of data on prediction confidence prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal. The
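
    The core idea, marginalizing over yet-unknown data values to obtain the expected conditional variance of a prediction for each candidate design, can be sketched with a toy Monte-Carlo example. This is not the CLUE implementation: the one-parameter model, the two hypothetical observation operators, and the noise levels are assumptions chosen only to illustrate the preposterior calculation.

```python
# Sketch of a preposterior (expected conditional variance) comparison of two
# measurement designs; model, observation operators and noise levels are invented.
import numpy as np

rng = np.random.default_rng(2)
n_real = 2000
k = rng.lognormal(mean=0.0, sigma=0.5, size=n_real)   # uncertain parameter realizations
prediction = 1.0 / k                                   # prediction goal (e.g. travel time)

def expected_conditional_variance(obs_operator, noise_sd, n_rep=200):
    """Average posterior variance of the prediction over synthetic data sets."""
    variances = []
    for j in rng.integers(0, n_real, size=n_rep):      # marginalize over unknown data values
        data = obs_operator(k[j]) + rng.normal(0.0, noise_sd)
        w = np.exp(-0.5 * ((obs_operator(k) - data) / noise_sd) ** 2)
        w /= w.sum()                                    # GLUE-style likelihood weights
        mean = np.sum(w * prediction)
        variances.append(np.sum(w * (prediction - mean) ** 2))
    return float(np.mean(variances))

design_a = expected_conditional_variance(np.log, noise_sd=0.3)             # "head-like" data
design_b = expected_conditional_variance(lambda x: 1.0 / np.sqrt(x), 0.1)  # "tracer-like" data
print("expected conditional variance, design A: %.4f" % design_a)
print("expected conditional variance, design B: %.4f" % design_b)
```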

  18. A model for direct laser interference patterning of ZnO:Al - predicting possible sample topographies to optimize light trapping in thin-film silicon solar cells

    NASA Astrophysics Data System (ADS)

    Dyck, Tobias; Haas, Stefan

    2016-04-01

    We present a novel approach to obtaining a quick prediction of a sample's topography after treatment with direct laser interference patterning (DLIP). The underlying model uses the parameters of the experimental setup as input, calculates the laser intensity distribution in the interference volume and determines the corresponding heat intake into the material as well as the subsequent heat diffusion within the material. The resulting heat distribution is used to determine the topography of the sample after the DLIP treatment. This output topography is in good agreement with corresponding experiments. The model can be applied in optimization algorithms in which a sample topography needs to be engineered in order to suit the needs of a given device. A prominent example for such an application is the optimization of the light scattering properties of the textured interfaces in a solar cell.

  19. A critical assessment of hidden markov model sub-optimal sampling strategies applied to the generation of peptide 3D models.

    PubMed

    Lamiable, A; Thevenet, P; Tufféry, P

    2016-08-01

    Hidden Markov Model derived structural alphabets are a probabilistic framework in which the complete conformational space of a peptidic chain is described in terms of probability distributions that can be sampled to identify conformations of largest probabilities. Here, we assess how three strategies to sample sub-optimal conformations (Viterbi k-best, forward backtracking, and a taboo sampling approach) can lead to the efficient generation of peptide conformations. We show that the diversity of sampling is essential to compensate for biases introduced in the estimates of the probabilities, and we find that only the forward backtracking and taboo sampling strategies can efficiently generate native or near-native models. Finally, we also find that such approaches are as efficient as earlier protocols, while being one order of magnitude faster, opening the door to the large-scale de novo modeling of peptides and mini-proteins. © 2016 Wiley Periodicals, Inc. PMID:27317417
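
    One of the sampling ideas mentioned above, stochastic backtracking through the forward variables of an HMM, can be illustrated on a toy three-state model. The transition and emission tables below are invented for the example and have nothing to do with the structural alphabet used by the authors.

```python
# Toy forward-filtering / backward-sampling in a 3-state HMM; transition and
# emission tables are invented and unrelated to the structural alphabet.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])                # state transition probabilities
B = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])                     # emission probabilities (2 symbols)
pi = np.array([1.0, 1.0, 1.0]) / 3.0
obs = [0, 0, 1, 1, 1, 0]

# scaled forward pass
alpha = np.zeros((len(obs), 3))
alpha[0] = pi * B[:, obs[0]]
alpha[0] /= alpha[0].sum()
for t in range(1, len(obs)):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    alpha[t] /= alpha[t].sum()

def sample_path():
    """Draw one state path from the posterior by backtracking through alpha."""
    path = [int(rng.choice(3, p=alpha[-1]))]
    for t in range(len(obs) - 2, -1, -1):
        w = alpha[t] * A[:, path[-1]]          # p(x_t | x_{t+1}, obs_{1:t})
        path.append(int(rng.choice(3, p=w / w.sum())))
    return path[::-1]

print("sampled state paths:", [sample_path() for _ in range(5)])
```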

  20. Optimization of high-reliability-based hydrological design problems by robust automatic sampling of critical model realizations

    NASA Astrophysics Data System (ADS)

    Bayer, Peter; de Paly, Michael; Bürger, Claudius M.

    2010-05-01

    This study demonstrates the high efficiency of the so-called stack-ordering technique for optimizing a groundwater management problem under uncertain conditions. The uncertainty is expressed by multiple equally probable model representations, such as realizations of hydraulic conductivity. During optimization of a well-layout problem for contaminant control, a ranking mechanism is applied that extracts those realizations that appear most critical for the optimization problem. It is shown that this procedure works well for evolutionary optimization algorithms, which are to some extent robust against noisy objective functions. More precisely, differential evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are applied. Stack ordering is comprehensively investigated for a plume management problem at a hypothetical template site, based on parameter values measured at the Lauswiesen study site near Tübingen, Germany, and on a geostatistical model developed for that site. The straightforward procedure yields computational savings above 90% in comparison to always evaluating the full set of realizations. This is confirmed by cross testing with four additional validation cases. The results show that both evolutionary algorithms obtain highly reliable near-optimal solutions. DE appears to be the better choice for cases with significant noise caused by small stack sizes. On the other hand, there seems to be a problem-specific threshold for the evaluation stack size above which the CMA-ES achieves solutions with both better fitness and higher reliability.
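
    The stack-ordering idea, ranking realizations by how often they prove critical and evaluating new candidate solutions only on the top of that stack, can be sketched as follows. The sketch uses plain random search instead of DE or CMA-ES, and a synthetic stand-in objective rather than a groundwater model; stack size, realization count, and the "breakthrough" function are all assumptions.

```python
# Sketch of stack ordering: rank realizations by how often they are critical
# (worst case) and evaluate new candidates only on the top of the stack.
# Random search stands in for DE/CMA-ES; the objective is a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(3)
n_real, stack_size = 200, 20
realizations = rng.normal(0.0, 1.0, (n_real, 2))           # e.g. aquifer parameter sets

def breakthrough(x, real):
    # Hypothetical "contaminant breakthrough" of design x under one realization
    return (x[0] - real[0]) ** 2 + (x[1] - real[1]) ** 2

critical_counts = np.zeros(n_real)
best_x, best_f = None, np.inf
for generation in range(50):
    if generation == 0:
        stack = np.arange(n_real)                           # first pass: full set
    else:
        stack = np.argsort(-critical_counts)[:stack_size]   # most critical realizations
    for x in rng.uniform(-2.0, 2.0, (20, 2)):               # candidate well layouts
        values = [breakthrough(x, realizations[i]) for i in stack]
        f = max(values)                                      # reliability-oriented cost
        critical_counts[stack[int(np.argmax(values))]] += 1  # update the ranking
        if f < best_f:
            best_x, best_f = x, f
print("best design:", np.round(best_x, 3), "  worst-case cost on stack: %.3f" % best_f)
```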

  1. Prediction of elasticity constants in small biomaterial samples such as bone. A comparison between classical optimization techniques and identification with artificial neural networks.

    PubMed

    Lucchinetti, E; Stüssi, E

    2004-01-01

    Measuring the elasticity constants of biological materials often sets important constraints, such as the limited size or the irregular geometry of the samples. In this paper, the identification approach as applied to the specific problem of accurately retrieving the material properties of small bone samples from a measured displacement field is discussed. The identification procedure can be formulated as an optimization problem with the goal of minimizing the difference between computed and measured displacements by searching for an appropriate set of material parameters using dedicated algorithms. Alternatively, the backcalculation of the material properties from displacement maps can be implemented using artificial neural networks. In a practical situation, however, measurement errors strongly affect the identification results, calling for robust optimization approaches in order to accurately retrieve the material properties from error-polluted sample deformation maps. Using a simple model problem, the performances of both classical and neural-network-driven optimization are compared. When performed before the collection of experimental data, this evaluation can be very helpful in pinpointing potential problems with the envisaged experiments, such as the need for a sufficient signal-to-noise ratio, which is particularly important when working with small tissue samples such as specimens cut from rodent bones or single bone trabeculae. PMID:15648663

  2. Optimal unified approach for rare-variant association testing with application to small-sample case-control whole-exome sequencing studies.

    PubMed

    Lee, Seunggeun; Emond, Mary J; Bamshad, Michael J; Barnes, Kathleen C; Rieder, Mark J; Nickerson, Deborah A; Christiani, David C; Wurfel, Mark M; Lin, Xihong

    2012-08-10

    We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193
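
    A schematic numerical illustration of the combination idea follows; the genotype and phenotype data are simulated, variant weights are omitted, and p-values come from a naive permutation scheme rather than the analytic mixture-of-chi-square calculation used by SKAT-O, so this is only a sketch of the Q_rho family, not the published method.

    ```python
    # Schematic illustration of the SKAT-O idea: combine the SKAT and burden
    # statistics as Q_rho = (1 - rho) * Q_SKAT + rho * Q_burden and take the best
    # rho. P-values here come from a simple permutation scheme; the minimum over
    # rho would itself need a further correction in practice.
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 200, 10                                # subjects, rare variants
    G = rng.binomial(2, 0.02, size=(n, m))        # genotype matrix
    y = rng.binomial(1, 0.3, size=n).astype(float)
    rhos = np.linspace(0.0, 1.0, 11)

    def q_family(y, G):
        r = y - y.mean()                          # residuals under the null
        score = G.T @ r                           # per-variant score statistics
        q_skat = np.sum(score ** 2)               # SKAT-like (unit weights for brevity)
        q_burden = np.sum(score) ** 2             # burden: common effect direction
        return (1 - rhos) * q_skat + rhos * q_burden

    obs = q_family(y, G)
    perm = np.array([q_family(rng.permutation(y), G) for _ in range(2000)])
    p_per_rho = (perm >= obs).mean(axis=0)        # permutation p-value for each rho
    print("SKAT-O style p-value (min over rho):", p_per_rho.min())
    ```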

  3. Optimal Unified Approach for Rare-Variant Association Testing with Application to Small-Sample Case-Control Whole-Exome Sequencing Studies

    PubMed Central

    Lee, Seunggeun; Emond, Mary J.; Bamshad, Michael J.; Barnes, Kathleen C.; Rieder, Mark J.; Nickerson, Deborah A.; Christiani, David C.; Wurfel, Mark M.; Lin, Xihong

    2012-01-01

    We propose in this paper a unified approach for testing the association between rare variants and phenotypes in sequencing association studies. This approach maximizes power by adaptively using the data to optimally combine the burden test and the nonburden sequence kernel association test (SKAT). Burden tests are more powerful when most variants in a region are causal and the effects are in the same direction, whereas SKAT is more powerful when a large fraction of the variants in a region are noncausal or the effects of causal variants are in different directions. The proposed unified test maintains the power in both scenarios. We show that the unified test corresponds to the optimal test in an extended family of SKAT tests, which we refer to as SKAT-O. The second goal of this paper is to develop a small-sample adjustment procedure for the proposed methods for the correction of conservative type I error rates of SKAT family tests when the trait of interest is dichotomous and the sample size is small. Both small-sample-adjusted SKAT and the optimal unified test (SKAT-O) are computationally efficient and can easily be applied to genome-wide sequencing association studies. We evaluate the finite sample performance of the proposed methods using extensive simulation studies and illustrate their application using the acute-lung-injury exome-sequencing data of the National Heart, Lung, and Blood Institute Exome Sequencing Project. PMID:22863193

  4. Fluorescence-detected X-ray magnetic circular dichroism of well-defined Mn(II) and Ni(II) doped in MgO crystals: credential evaluation for measurements on biological samples.

    PubMed

    Wang, Hongxin; Bryant, Craig; LeGros, M; Wang, Xin; Cramer, S P

    2012-10-18

    L(2,3)-edge X-ray magnetic circular dichroism (XMCD) spectra have been measured for the well-defined dilute Ni(II) and Mn(II) ions doped into a MgO crystal, with sub-Kelvin dilution refrigerator cooling and 2 T magnetic field magnetization. A 30-element Ge array X-ray detector has been used to measure the XMCD for these dilute ions, whose concentrations are 1400 ppm for Ni(II) and 10,000 ppm for Mn(II). Large XMCD effects have been observed for both Ni(II) and Mn(II), and multiplet simulations reproduce the observed spectra. The fluorescence-detected L-edge absorption spectrum and XMCD of Ni(II) in MgO are comparable with both theoretical calculations and total-electron-yield measurements of ions in similar chemical environments, at least qualitatively validating the use of the sensitive fluorescence detection technique for studying XMCD for dilute 3d metal ions, such as various metalloproteins. Sum rule analyses on the XMCD spectra are also performed. In addition, these XMCD measurements have also been used to obtain the sample's magnetization curve and the beamline's X-ray helicity curve. This study also illustrated that bend-magnet beamlines are still useful in examining XMCD on dilute and paramagnetic metal sites. PMID:22650370

  5. An Improved Transformation and Optimized Sampling Scheme for the Numerical Evaluation of Singular and Near-Singular Potentials

    NASA Technical Reports Server (NTRS)

    Khayat, Michael A.; Wilton, Donald R.; Fink, Patrick W.

    2007-01-01

    Simple and efficient numerical procedures using singularity cancellation methods are presented for evaluating singular and near-singular potential integrals. Four different transformations are compared and the advantages of the Radial-angular transform are demonstrated. A method is then described for optimizing this integration scheme.
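
    As a generic illustration of why singularity cancellation works (plain polar-coordinate cancellation rather than the radial-angular transform of the paper), the 1/R singularity over a disk is removed exactly by the polar Jacobian, while a naive Cartesian rule struggles near the singular point:

    ```python
    # Generic singularity-cancellation example: integrate 1/R over the unit disk.
    # In polar coordinates the Jacobian r cancels the 1/r singularity, leaving a
    # smooth integrand; a Cartesian midpoint rule has no such cancellation.
    import numpy as np

    exact = 2.0 * np.pi                        # analytic value of the integral

    # Polar quadrature: integrand (1/r) * r is identically 1 after cancellation.
    nr = 8
    xi, w = np.polynomial.legendre.leggauss(nr)
    r = 0.5 * (xi + 1.0)                       # map Gauss nodes from [-1, 1] to [0, 1]
    integrand = (1.0 / r) * r                  # smooth after cancellation
    polar = np.sum(0.5 * w * integrand) * 2.0 * np.pi   # angular part done analytically

    # Naive Cartesian midpoint rule on the same disk, for comparison.
    n = 200
    xs = (np.arange(n) + 0.5) / n * 2.0 - 1.0
    X, Y = np.meshgrid(xs, xs)
    R = np.hypot(X, Y)
    inside = R <= 1.0
    cartesian = np.sum(1.0 / R[inside]) * (2.0 / n) ** 2

    print(f"exact {exact:.6f}   polar {polar:.6f}   cartesian {cartesian:.6f}")
    ```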

  6. Optimization of sample preparation for grazing emission X-ray fluorescence in micro- and trace analysis applications

    NASA Astrophysics Data System (ADS)

    Claes, Martine; de Bokx, Pieter; Willard, Nico; Veny, Paul; Van Grieken, René

    1997-07-01

    Grazing emission X-ray fluorescence (GEXRF) is a new development in X-ray fluorescence analysis related to total-reflection XRF. An optical flat carrying the sample is irradiated at an angle of approximately 90° with an uncollimated polychromatic X-ray beam. The emitted fluorescent radiation of the sample elements is measured at very small angles using wavelength dispersive detection. For the application of GEXRF in micro- and trace analysis, a sample preparation procedure for analysis of liquid samples has been developed. Polycarbonate was investigated as a possible material for the sample carrier. Homogeneous distribution of the sample on the support was achieved by special pre-treatment of the carrier. This pre-treatment includes siliconizing the polycarbonate disks with Serva silicone solution, after which the siliconized carriers are placed in an oxygen plasma asher. Finally, to obtain a spot of the same size as the X-ray beam (≈30 mm diameter), a thin silicone layer is placed as a ring on the carriers with an ear pick. Electron microprobe analyses were performed to check the distribution of the liquid sample deposit, and GEXRF measurements were used to check the reproducibility of sample preparation.

  7. Use of information on the manufacture of samples for the optical characterization of multilayers through a global optimization.

    PubMed

    Sancho-Parramon, Jordi; Ferré-Borrull, Josep; Bosch, Salvador; Ferrara, Maria Christina

    2003-03-01

    We present a procedure for the optical characterization of thin-film stacks from spectrophotometric data. The procedure overcomes the intrinsic limitations arising in the numerical determination of many parameters from reflectance or transmittance spectra measurements. The key point is to use all the information available from the manufacturing process in a single global optimization process. The method is illustrated by a case study of sol-gel applications. PMID:12638889

  8. Clear line of sight (CLOS) statistics within cloudy regions and optimal sampling strategies for space-based lidars

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Seze, G.

    1991-01-01

    Simulated cloud/hole fields as well as Landsat imagery are used in a computer model to evaluate several proposed sampling patterns and shot management schemes for pulsed space-based Doppler lidars. Emphasis is placed on two proposed sampling strategies - one obtained from a conically scanned single telescope and the other from four fixed telescopes that are sequentially used by one laser. The question of whether there are any sampling patterns that maximize the number of resolution areas with vertical soundings to the PBL is addressed.

  9. Time-response-based evolutionary optimization

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Goldvard, Alex; Salomon, Shaul

    2015-04-01

    Solutions to engineering problems are often evaluated by considering their time responses; thus, each solution is associated with a function. To avoid optimizing over entire functions, such optimization is usually carried out by setting scalar auxiliary objectives (e.g. minimal overshoot). Therefore, in order to find different optimal solutions, alternative auxiliary optimization objectives may have to be defined prior to optimization. In the current study, a new approach is suggested that avoids the need to define auxiliary objectives. An algorithm is suggested that enables the optimization of solutions according to their transient behaviours. For this optimization, the functions are sampled and the problem is posed as a multi-objective problem. The recently introduced algorithm NSGA-II-PSA is adopted and tailored to solve it. Mathematical as well as engineering problems are utilized to explain and demonstrate the approach and its applicability to real life problems. The results highlight the advantages of avoiding the definition of artificial objectives.
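
    The essence of the formulation can be sketched as follows; the second-order step response, the deviation-from-setpoint objectives, and the plain dominance filter are simplified stand-ins for the paper's test problems and the NSGA-II-PSA machinery.

    ```python
    # Sketch of the core idea under simplified assumptions: sample each solution's
    # time response and treat the sampled values as a vector of objectives, then
    # compare solutions by Pareto dominance instead of a single auxiliary criterion.
    import numpy as np

    t = np.linspace(0.0, 5.0, 25)

    def step_response(zeta, wn=2.0):
        # Textbook under-damped second-order step response.
        wd = wn * np.sqrt(1.0 - zeta ** 2)
        return 1.0 - np.exp(-zeta * wn * t) * (np.cos(wd * t)
                                               + zeta * wn / wd * np.sin(wd * t))

    def objectives(zeta):
        return np.abs(1.0 - step_response(zeta))   # deviation from setpoint at each sample

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    candidates = np.linspace(0.2, 0.9, 8)           # candidate damping ratios
    objs = [objectives(z) for z in candidates]
    pareto = [candidates[i] for i in range(len(candidates))
              if not any(dominates(objs[j], objs[i])
                         for j in range(len(candidates)) if j != i)]
    print("non-dominated damping ratios:", np.round(pareto, 2))
    ```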

  10. Strategies for selecting optimal sampling and work-up procedures for analysing alkylphenol polyethoxylates in effluents from non-activated sludge biofilm reactors.

    PubMed

    Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola

    2012-01-01

    Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires the prior removal of contaminants and preconcentration. In this study, the effects on optimal work-up procedures of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix were investigated for these purposes. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including a box that surrounded the tubing outlet carrying the wastewater, to prevent sedimented sludge contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure for use prior to a high-performance liquid chromatography-fluorescence detection analysis method capable of quantifying nonylphenol polyethoxylates (NPEOs) and poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low microg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range. PMID:22519096

  11. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    PubMed

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  12. Determination of total iodine in serum and urine samples by ion chromatography with pulsed amperometric detection - studies on analyte loss, optimization of sample preparation procedures, and validation of analytical method.

    PubMed

    Błażewicz, Anna; Klatka, Maria; Dolliver, Wojciech; Kocjan, Ryszard

    2014-07-01

    A fast, accurate and precise ion chromatography method with pulsed amperometric detection was applied to evaluate a variety of parameters affecting the determination of total iodine in serum and urine of 81 subjects, including 56 obese and 25 healthy Polish children. The sample pretreatment methods were carried out in a closed system and with the assistance of microwaves. Both alkaline and acidic digestion procedures were developed and optimized to find the simplest combination of reagents and the appropriate parameters for digestion that would allow for the fastest and most cost-effective analysis. A good correlation between the certified and the measured concentrations was achieved. The best recoveries (96.8% for urine and 98.8% for serum samples) were achieved using 1 ml of 25% tetramethylammonium hydroxide solution within 6 min for 0.1 ml of serum/urine samples. Using 0.5 ml of 65% nitric acid solution, the best recovery (95.3%) was obtained when 7 min of effective digestion time was used. Freeze-thaw stability and long-term stability were checked. After 24 weeks, a 14.7% loss of iodine occurred in urine samples and a 10.9% loss in serum samples. For urine samples, better correlation (R(2)=0.9891) of various sample preparation procedures (alkaline digestion and application of OnGuard RP cartridges) was obtained. Significantly lower iodide content was found in samples taken from obese children. Serum iodine content in obese children was markedly variable in comparison with the healthy group, whereas the difference was less evident when urine samples were analyzed. The mean content in serum was 59.12±8.86μg/L, and in urine 98.26±25.93μg/L for obese children when samples were prepared by the use of optimized alkaline digestion reinforced by microwaves. In healthy children the mean content in serum was 82.58±6.01μg/L, and in urine 145.76±31.44μg/L. PMID:24911549

  13. Optimizing Viable Leukocyte Sampling from the Female Genital Tract for Clinical Trials: An International Multi-Site Study

    PubMed Central

    De Rosa, Stephen C.; Martinson, Jeffrey A.; Plants, Jill; Brady, Kirsten E.; Gumbi, Pamela P.; Adams, Devin J.; Vojtech, Lucia; Galloway, Christine G.; Fialkow, Michael; Lentz, Gretchen; Gao, Dayong; Shu, Zhiquan; Nyanga, Billy; Izulla, Preston; Kimani, Joshua; Kimwaki, Steve; Bere, Alfred; Moodie, Zoe; Landay, Alan L.; Passmore, Jo-Ann S.; Kaul, Rupert; Novak, Richard M.; McElrath, M. Juliana; Hladik, Florian

    2014-01-01

    Background Functional analysis of mononuclear leukocytes in the female genital mucosa is essential for understanding the immunologic effects of HIV vaccines and microbicides at the site of HIV exposure. However, the best female genital tract sampling technique is unclear. Methods and Findings We enrolled women from four sites in Africa and the US to compare three genital leukocyte sampling methods: cervicovaginal lavages (CVL), endocervical cytobrushes, and ectocervical biopsies. Absolute yields of mononuclear leukocyte subpopulations were determined by flow cytometric bead-based cell counting. Of the non-invasive sampling types, two combined sequential cytobrushes yielded significantly more viable mononuclear leukocytes than a CVL (p<0.0001). In a subsequent comparison, two cytobrushes yielded as many leukocytes (∼10,000) as one biopsy, with macrophages/monocytes being more prominent in cytobrushes and T lymphocytes in biopsies. Sample yields were consistent between sites. In a subgroup analysis, we observed significant reproducibility between replicate same-day biopsies (r = 0.89, p = 0.0123). Visible red blood cells in cytobrushes increased leukocyte yields more than three-fold (p = 0.0078), but did not change their subpopulation profile, indicating that these leukocytes were still largely derived from the mucosa and not peripheral blood. We also confirmed that many CD4+ T cells in the female genital tract express the α4β7 integrin, an HIV envelope-binding mucosal homing receptor. Conclusions CVL sampling recovered the lowest number of viable mononuclear leukocytes. Two cervical cytobrushes yielded comparable total numbers of viable leukocytes to one biopsy, but cytobrushes and biopsies were biased toward macrophages and T lymphocytes, respectively. Our study also established the feasibility of obtaining consistent flow cytometric analyses of isolated genital cells from four study sites in the US and Africa. These data represent an important step

  14. Towards quantitative metagenomics of wild viruses and other ultra-low concentration DNA samples: a rigorous assessment and optimization of the linker amplification method

    PubMed Central

    Duhaime, Melissa B; Deng, Li; Poulos, Bonnie T; Sullivan, Matthew B

    2012-01-01

    Metagenomics generates and tests hypotheses about dynamics and mechanistic drivers in wild populations, yet commonly suffers from insufficient (< 1 ng) starting genomic material for sequencing. Current solutions for amplifying sufficient DNA for metagenomics analyses include linear amplification for deep sequencing (LADS), which requires more DNA than is normally available, linker-amplified shotgun libraries (LASLs), which is prohibitively low throughput, and whole-genome amplification, which is significantly biased and thus non-quantitative. Here, we adapt the LASL approach to next generation sequencing by offering an alternate polymerase for challenging samples, developing a more efficient sizing step, integrating a ‘reconditioning PCR’ step to increase yield and minimize late-cycle PCR artefacts, and empirically documenting the quantitative capability of the optimized method with both laboratory isolate and wild community viral DNA. Our optimized linker amplification method requires as little as 1 pg of DNA and is the most precise and accurate available, with G + C content amplification biases less than 1.5-fold, even for complex samples as diverse as a wild virus community. While optimized here for 454 sequencing, this linker amplification method can be used to prepare metagenomics libraries for sequencing with next-generation platforms, including Illumina and Ion Torrent, the first of which we tested and present data for here. PMID:22713159

  15. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.
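
    A back-of-the-envelope illustration of why the required rate scales with speed is given below; the formula, sensor length, and minimum sample count are assumptions for illustration only, not the guidelines developed in the study.

    ```python
    # Illustrative rule of thumb (not the paper's result): if a strain pulse lasts
    # roughly L / v seconds for a sensor strip of length L and a vehicle speed v,
    # and at least n_min samples are wanted across the pulse, the required sample
    # rate grows linearly with speed.
    def required_sample_rate_hz(speed_kmh, sensor_length_m=0.6, n_min_samples=20):
        speed_ms = speed_kmh / 3.6
        pulse_duration_s = sensor_length_m / speed_ms
        return n_min_samples / pulse_duration_s

    for v in (30, 60, 90, 120):   # km/h
        print(f"{v:>3} km/h -> about {required_sample_rate_hz(v):.0f} Hz")
    ```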

  16. Optimizing the models for rapid determination of chlorogenic acid, scopoletin and rutin in plant samples by near-infrared diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang

    2014-07-01

    Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated to optimize the models. The results show that spectral preprocessing or variable selection alone has little or no influence on the models, but the combination of the techniques can significantly improve them. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
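
    A hedged sketch of the modeling pipeline (MSC scatter correction followed by PLS regression) is shown below; the CWT background removal and randomization-test variable selection steps are omitted, and the spectra are synthetic stand-ins rather than real NIR data.

    ```python
    # Simplified calibration pipeline: multiplicative scatter correction (MSC)
    # followed by PLS regression on synthetic "spectra".
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_samples, n_wavelengths = 60, 300
    pure = np.sin(np.linspace(0, 6, n_wavelengths))          # fake analyte band shape
    conc = rng.uniform(0.1, 1.0, n_samples)                  # hypothetical analyte content
    scatter = rng.uniform(0.8, 1.2, (n_samples, 1))          # multiplicative scatter
    X = scatter * (conc[:, None] * pure) \
        + 0.01 * rng.standard_normal((n_samples, n_wavelengths))

    def msc(spectra):
        # Multiplicative scatter correction against the mean spectrum.
        ref = spectra.mean(axis=0)
        corrected = np.empty_like(spectra)
        for i, s in enumerate(spectra):
            slope, intercept = np.polyfit(ref, s, 1)
            corrected[i] = (s - intercept) / slope
        return corrected

    model = PLSRegression(n_components=3)
    model.fit(msc(X[:40]), conc[:40])
    pred = model.predict(msc(X[40:])).ravel()
    print("correlation on held-out samples:",
          np.corrcoef(pred, conc[40:])[0, 1].round(3))
    ```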

  17. The development of an optimized sample preparation for trace level detection of 17α-ethinylestradiol and estrone in whole fish tissue.

    PubMed

    Al-Ansari, Ahmed M; Saleem, Ammar; Kimpe, Linda E; Trudeau, Vance L; Blais, Jules M

    2011-11-15

    The purpose of this study was to develop an optimized method for the extraction and determination of 17α-ethinylestradiol (EE2) and estrone (E1) in whole fish tissues at ng/g levels. The optimized procedure for sample preparation includes extraction of tissue by accelerated solvent extraction (ASE-200), lipid removal by gel permeation chromatography (GPC), and a cleanup step by acetonitrile precipitation followed by a hexane wash. Analysis was performed by gas chromatography/mass spectrometry (GC/MS) in negative chemical ionization (NCI) mode after samples were derivatized with pentafluorobenzoyl chloride (PFBCl). The method was developed using high lipid content wild fish that were exposed to the tested analytes. The whole procedure recoveries ranged from 74.5 to 93.7% with relative standard deviation (RSD) of 2.3-6.2% for EE2 and 64.8 to 91.6% with RSD of 9.46-0.18% for E1. The method detection limits were 0.67 ng/g for EE2 and 0.68 ng/g for E1 dry weight. The method was applied to determine EE2 levels in male goldfish (Carrasius auratus) after a 72 h dietary exposure. All samples contained EE2 averaging 1.7ng/g (±0.29 standard deviation, n=5). This is the first optimized protocol for EE2 extraction from whole fish tissue at environmentally relevant concentrations. Due to high sensitivity and recovery, the developed method will improve our knowledge about the environmental fate and uptake of synthetic steroidal estrogens in fish populations. PMID:21982913

  18. Optimization of an enclosed gas analyzer sampling system for measuring eddy covariance fluxes of H2O and CO2

    NASA Astrophysics Data System (ADS)

    Metzger, Stefan; Burba, George; Burns, Sean P.; Blanken, Peter D.; Li, Jiahong; Luo, Hongyan; Zulueta, Rommel C.

    2016-03-01

    Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability to make unbiased ecological inferences across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into ranges of 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for different particulate filters, into ranges of 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis.
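
    The resistor-capacitor picture invoked above can be illustrated with a first-order low-pass response; the cutoff frequency used here is an arbitrary example, not a NEON system specification.

    ```python
    # First-order (resistor-capacitor) low-pass model of high-frequency attenuation:
    # |H(f)| = 1 / sqrt(1 + (f / fc)^2), so 50 % attenuation occurs at f = sqrt(3) * fc.
    import numpy as np

    def amplitude_response(f_hz, fc_hz):
        return 1.0 / np.sqrt(1.0 + (f_hz / fc_hz) ** 2)

    fc = 5.0                                   # example cutoff frequency (Hz), illustrative only
    f_half = np.sqrt(3.0) * fc                 # frequency where attenuation reaches 50 %
    print(f"|H({f_half:.2f} Hz)| = {amplitude_response(f_half, fc):.2f}")
    for f in (1.0, 5.0, 10.0, 20.0):
        print(f"{f:>5.1f} Hz -> response {amplitude_response(f, fc):.2f}")
    ```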

  19. Optimizing sample preparation for anatomical determination in the hippocampus of rodent brain by ToF-SIMS analysis.

    PubMed

    Angerer, Tina B; Mohammadi, Amir Saeid; Fletcher, John S

    2016-06-01

    Lipidomics has been an expanding field since researchers began to recognize the signaling functions of lipids and their involvement in disease. Time-of-flight secondary ion mass spectrometry is a valuable tool for studying the distribution of a wide range of lipids in multiple brain regions, but in order to make valuable scientific contributions, one has to be aware of the influence that sample treatment can have on the results. In this article, the authors discuss different sample treatment protocols for rodent brain sections, focusing on signal from the hippocampus and surrounding areas. The authors compare frozen hydrated analysis to freeze drying, which is the standard in most research facilities, and reactive vapor exposure (trifluoroacetic acid and NH3). The results show that in order to preserve brain chemistry close to a native state, frozen hydrated analysis is the most suitable, but execution can be difficult. Freeze drying is prone to producing artifacts as cholesterol migrates to the surface, masking other signals. This effect can be partially reversed by exposing freeze-dried sections to reactive vapor. When analyzing brain sections in negative ion mode, exposing those sections to NH3 vapor can re-establish the diversity in lipid signal found in frozen-hydrated analyzed sections. This is accomplished by removing cholesterol and uncovering sulfatide signals, allowing more anatomical regions to be visualized. PMID:26856332

  20. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    PubMed Central

    Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn

    2016-01-01

    An important characteristic of a screening tool is its discriminant ability or the measure’s accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children’s mental health using the SDQ were fair at the individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted. PMID:26771673

  1. Optimization of sample pretreatment methods for simultaneous determination of dolasetron and hydrodolasetron in human plasma by HPLC-ESI-MS.

    PubMed

    Hu, Yuming; Chen, Shuo; Chen, Jitao; Liu, Guozhu; Chen, Bo; Yao, Shouzhuo

    2012-10-01

    A high-performance liquid chromatographic method coupled with electrospray mass spectrometry was developed for the simultaneous determination of dolasetron and its major metabolite, hydrodolasetron, in human plasma. A new sample pretreatment method, i.e., salt induced phase separation extraction (SIPSE), was proposed and compared with four other methods, i.e., albumin precipitation, liquid-liquid extraction, hydrophobic solvent-induced phase separation extraction and subzero-temperature induced phase separation extraction. Among these methods, SIPSE showed the highest extraction efficiency and the lowest matrix interferences. The extraction recoveries obtained from the SIPSE method were all more than 96% for dolasetron, hydrodolasetron and ondansetron (internal standard). The SIPSE method is also very fast and easy because protein precipitation, analyte extraction and sample cleanup are combined into one simple process by mixing acetonitrile with plasma and partitioning with 2 mol/L sodium carbonate aqueous solution. The correlation coefficients of the calibration curves were all more than 0.997, in the range of 7.9-4750.0 ng/mL and 4.8-2855.1 ng/mL for dolasetron and hydrodolasetron, respectively. The limits of quantification were 7.9 and 4.8 ng/mL for dolasetron and hydrodolasetron, respectively. The intra-day and inter-day repeatability were all less than 10%. The method was successfully applied to the pharmacokinetic study of dolasetron. PMID:22645289

  2. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    PubMed

    Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn

    2016-01-01

    An important characteristic of a screening tool is its discriminant ability or the measure's accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children's mental health using the SDQ were fair at the individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted. PMID:26771673

  3. Defining Overweight and Obesity

    MedlinePlus


  4. Optimal sample volumes of human trabecular bone in μCT analysis within vertebral body and femoral head

    PubMed Central

    Wen, Xin-Xin; Zong, Chun-Lin; Xu, Chao; Ma, Xiang-Yu; Wang, Fa-Qi; Feng, Ya-Fei; Yan, Ya-Bo; Lei, Wei

    2015-01-01

    Trabecular bones of different skeletal sites have different bone morphologies. How to select an appropriate volume of the region of interest (ROI) to reflect the microarchitecture of trabecular bone at different skeletal sites is therefore an important question. In this study, the optimal ROI volumes within the vertebral body and femoral head were determined, and it was examined whether the relationships between ROI volume and microarchitectural parameters are affected by trabecular bone morphology. Within the vertebral body and femoral head, cubic ROIs of different volumes (from (1 mm)³ to (20 mm)³) were compared with control groups (the whole volume of trabecular bone). Five microarchitectural parameters (BV/TV, Tb.N, Tb.Th, Tb.Sp, and BS/BV) were obtained. Nonlinear curve-fitting functions were used to explore the relationships between the microarchitectural parameters and the ROI volumes. The ROI volume affected the microarchitectural parameters when it was smaller than (8 mm)³ within the vertebral body and smaller than (13 mm)³ within the femoral head. As the volume increased, the trends of BV/TV, Tb.N, and Tb.Sp differed between these two skeletal sites. The curve-fitting functions for the two sites were also different. The relationships between ROI volume and microarchitectural parameters were thus affected by the different trabecular bone morphologies within the lumbar vertebral body and femoral head. When depicting the microarchitecture of human trabecular bone within the lumbar vertebral body and femoral head, the ROI volume should be larger than (8 mm)³ and (13 mm)³, respectively. PMID:26770381

  5. Optimization of a gas sampling system for measuring eddy-covariance fluxes of H2O and CO2

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Burba, G.; Burns, S. P.; Blanken, P. D.; Li, J.; Luo, H.; Zulueta, R. C.

    2015-10-01

    Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) will provide the ability to make unbiased ecological inferences across eco-climatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analysers are widely employed for eddy-covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties, and requires correction. Here, we show that the gas sampling system substantially contributes to high-frequency attenuation, which can be minimized by careful design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into ranges of 2.5-16.5 Hz for CO2 and 2.4-14.3 Hz for H2O; for different particulate filters, into ranges of 8.3-21.8 Hz for CO2 and 1.4-19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyser cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor-capacitor theory, and NEON's final gas sampling system was developed on this basis. The design consists of the stainless steel intake tube, a pleated mesh

  6. CIP10 optimization for 4,4-methylene diphenyl diisocyanate aerosol sampling and field comparison with impinger method.

    PubMed

    Puscasu, Silvia; Aubin, Simon; Cloutier, Yves; Sarazin, Philippe; Tra, Huu V; Gagné, Sébastien

    2015-04-01

    4,4-methylene diphenyl diisocyanate (MDI) aerosol exposure evaluation in spray foam insulation application is known to be a challenge because spray foam application involves a fast-curing process. Available techniques are either not user-friendly, inaccurate, or not validated for this application. To address these issues, a new approach using a CIP10M was developed to appropriately collect MDI aerosol in spray foam insulation while being suitable for personal sampling. The CIP10M is a commercially available personal aerosol sampler that has been validated for the collection of microbial spores into a liquid medium. Tributylphosphate with 1-(2-methoxyphenyl)piperazine (MOPIP) was introduced into the CIP10M to collect and stabilize the MDI aerosols. The limit of detection and limit of quantification of the method were 0.007 and 0.024 μg ml(-1), respectively. The dynamic range was from 0.024 to 0.787 μg ml(-1) (with R(2) ≥ 0.990), which corresponds to concentrations in the air from 0.04 to 1.3 µg m(-3), assuming 60 min of sampling at 10 l min(-1). The intraday and interday analytical precisions were <2% for all of the concentration levels tested, and the accuracy was within an appropriate range of 98 ± 1%. No matrix effect was observed, and a total recovery of 99% was obtained. Parallel sampling was performed in a real MDI foam spraying environment with a CIP10M and impingers containing toluene/MOPIP (reference method). The results obtained show that the CIP10M provides levels of MDI monomer in the same range as the impingers, and higher levels of MDI oligomers. The negative bias observed for MDI monomer was between 2 and 26%, whereas the positive bias observed for MDI oligomers was between 76 and 113%, with both biases calculated with a confidence level of 95%. The CIP10M seems to be a promising approach for MDI aerosol exposure evaluation in spray foam applications. PMID:25452291

  7. Evaluation of optimal conditions for determination of low selenium content in shellfish samples collected at Todos os Santos Bay, Bahia, Brazil using HG-AFS.

    PubMed

    Lopes Dos Santos, Walter Nei; Macedo, Samuel Marques; Teixeira da Rocha, Sofia Negreiros; Souza de Jesus, Caio Niela; Cavalcante, Dannuza Dias; Hatje, Vanessa

    2014-08-01

    This work proposes a procedure for the determination of total selenium content in shellfish after digestion of the samples in a heating block with a cold-finger system and detection by hydride generation atomic fluorescence spectrometry (HG-AFS). The optimal conditions for HG, such as the effect and volume of the 10% (m/v) KBr prereduction solution (1.0 and 2.0 ml) and the concentration of hydrochloric acid (3.0 and 6.0 mol L(-1)), were evaluated. The best results were obtained using 3 mL of HCl (6 mol L(-1)) and 1 mL of KBr 10 % (m/v), followed by 30 min of prereduction for the volume of 1 mL of the digested sample. The precision and accuracy were assessed by the analysis of the Certified Reference Material NIST 1566b. Under the optimized conditions, the detection and quantification limits were 6.06 and 21.21 μg kg(-1), respectively. The developed method was applied to samples of shellfish (oysters, clams, and mussels) collected at Todos os Santos Bay, Bahia, Brazil. Selenium concentrations ranged from 0.23 ± 0.02 to 3.70 ± 0.27 mg kg(-1) for Mytella guyanensis and Anomalocardia brasiliana, respectively. The developed method proved to be accurate, precise, cheap and fast, and could be used for monitoring Se in shellfish samples. PMID:24771464

  8. Interpolation and Definability

    NASA Astrophysics Data System (ADS)

    Gabbay, Dov M.; Maksimova, Larisa L.

    This chapter is on interpolation and definability. This notion is not only central in pure logic, but has significant meaning and applicability in all areas where logic itself is applied, especially in computer science, artificial intelligence, logic programming, philosophy of science and natural language. The notion may sometimes appear to the reader as too technical/mathematical but it does also have a general meaning in terms of expressibility and definability.

  9. Optimization and Comparison of ESI and APCI LC-MS/MS Methods: A Case Study of Irgarol 1051, Diuron, and their Degradation Products in Environmental Samples

    NASA Astrophysics Data System (ADS)

    Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.

    2011-10-01

    A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion source parameters. Comparison of ESI and APCI was performed in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was drawn to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI in positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding RSDs ranged between 4.1% and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels, which were obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.

  10. An approach to optimize sample preparation for MALDI imaging MS of FFPE sections using fractional factorial design of experiments.

    PubMed

    Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter

    2016-09-01

    A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to a fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as an input for the ANOVA. The parameters with the highest impact on the quality were deduced and SOPs recommended. PMID:27485623
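
    The design-of-experiments logic can be illustrated with a small fractional factorial design and main-effect estimates; the factor names and response model below are invented for illustration, not the ten preparation parameters or the quality score used in the study.

    ```python
    # Sketch of a two-level fractional factorial design (2^(4-1) with defining
    # relation D = ABC, i.e. 8 runs for 4 factors) followed by main-effect
    # estimation on a simulated "dataset quality" response.
    import itertools
    import numpy as np

    factors = ["matrix_conc", "spray_cycles", "digestion_time", "washing"]  # hypothetical
    base = np.array(list(itertools.product([-1, 1], repeat=3)))   # full 2^3 design
    design = np.column_stack([base, base.prod(axis=1)])           # generator D = A*B*C

    rng = np.random.default_rng(4)
    # Hypothetical response: factors 0 and 2 matter, plus noise.
    quality = 10 + 3 * design[:, 0] + 1.5 * design[:, 2] + rng.normal(0, 0.5, len(design))

    effects = {name: (quality[design[:, j] == 1].mean()
                      - quality[design[:, j] == -1].mean())
               for j, name in enumerate(factors)}
    for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>15}: estimated main effect {eff:+.2f}")
    ```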

  11. SU-C-207-03: Optimization of a Collimator-Based Sparse Sampling Technique for Low-Dose Cone-Beam CT

    SciTech Connect

    Lee, T; Cho, S; Kim, I; Han, B

    2015-06-15

    Purpose: In computed tomography (CT) imaging, radiation dose delivered to the patient is one of the major concerns. Sparse-view CT takes projections at sparser view angles and provides a viable option for reducing dose. However, fast power switching of an X-ray tube, which is needed for the sparse-view sampling, can be challenging in many CT systems. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. In this study, we investigated the effects of collimator parameters on the image quality and aimed to optimize the collimator design. Methods: We used a bench-top circular cone-beam CT system together with a CatPhan600 phantom, and took 1440 projections from a single rotation. The multi-slit collimator made of tungsten was mounted on the X-ray source for beam blocking. For image reconstruction, we used a total-variation minimization (TV) algorithm and modified the backprojection step so that only the data measured through the collimator slits are used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on the image quality were investigated. We also analyzed the sampling efficiency: the sampling density and data incoherence in each case. We tested three slit sets, with 6, 12 and 18 slits, each at reciprocation frequencies of 10, 30, 50 and 70 Hz/ro. Results: The image-quality results were consistent with the sampling-efficiency analysis, and the optimum condition was found to be 12 slits at 30 Hz/ro. As image quality indices, we used the CNR and the detectability. Conclusion: We conducted an experiment with a moving multi-slit collimator to realize a sparse-sampled cone-beam CT. Effects of collimator parameters on the image quality have been systematically investigated, and the optimum condition has been reached.

  12. Performance of an Optimized Paper-Based Test for Rapid Visual Measurement of Alanine Aminotransferase (ALT) in Fingerstick and Venipuncture Samples

    PubMed Central

    Noubary, Farzad; Coonahan, Erin; Schoeplein, Ryan; Baden, Rachel; Curry, Michael; Afdhal, Nezam; Kumar, Shailendra; Pollock, Nira R.

    2015-01-01

    Background A paper-based, multiplexed, microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, as compared to Abaxis and Roche/Hitachi platforms. To evaluate feasibility of remote results interpretation, we also compared reading cell phone camera images of completed tests to reading the device in real time. Methods 96 ambulatory patients with varied baseline ALT concentration underwent fingerstick testing using the paper device; cell phone images of completed devices were taken and texted to a blinded off-site reader. Venipuncture serum was obtained from 93/96 participants for routine clinical testing (Roche/Hitachi); subsequently, 88/93 serum samples were captured and applied to paper and Abaxis platforms. Paper test and reference standard results were compared by Bland-Altman analysis. Findings For serum, there was excellent agreement between paper test and Abaxis results, with negligible bias (+4.5 U/L). Abaxis results were systematically 8.6% lower than Roche/Hitachi results. ALT values in fingerstick samples tested on paper were systematically lower than values in paired serum tested on paper (bias -23.6 U/L) or Abaxis (bias -18.4 U/L); a correction factor was developed for the paper device to match fingerstick blood to serum. Visual reads of cell phone images closely matched reads made in real time (bias +5.5 U/L). Conclusions The paper ALT test is highly accurate for serum testing, matching the reference method against which it was optimized better than the reference methods matched each other. A systematic difference exists between ALT values in fingerstick and paired
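
    The Bland-Altman comparison used above reduces to a bias and limits-of-agreement calculation; the paired readings in this sketch are made-up numbers, not data from the study.

    ```python
    # Minimal Bland-Altman calculation for two methods measuring the same quantity.
    import numpy as np

    paper  = np.array([32.0, 45.0, 51.0, 78.0, 120.0, 64.0, 39.0, 88.0])   # U/L, hypothetical
    abaxis = np.array([30.0, 41.0, 47.0, 75.0, 112.0, 60.0, 35.0, 83.0])   # U/L, hypothetical

    diff = paper - abaxis
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # 95 % limits of agreement around the bias
    print(f"bias {bias:+.1f} U/L, limits of agreement "
          f"{bias - loa:.1f} to {bias + loa:.1f} U/L")
    ```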

  13. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    SciTech Connect

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-10-15

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in
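
    The quantitative step described above, estimating R2* from images at multiple echo times, amounts to fitting a mono-exponential decay; the echo times and signal values below are synthetic.

    ```python
    # Fit S(TE) = S0 * exp(-R2* * TE) to multi-echo signal samples (synthetic data).
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(te_ms, s0, r2star_per_ms):
        return s0 * np.exp(-r2star_per_ms * te_ms)

    te = np.array([0.05, 0.5, 1.0, 2.0, 4.0])                  # echo times (ms)
    rng = np.random.default_rng(5)
    signal = decay(te, 100.0, 0.25) + rng.normal(0, 1.0, te.size)

    (p_s0, p_r2), _ = curve_fit(decay, te, signal, p0=(90.0, 0.1))
    print(f"estimated R2* = {p_r2:.2f} ms^-1 "
          f"(cortical bone in the study: ~0.2-0.3 ms^-1)")
    ```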

  14. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    SciTech Connect

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1 solver, Templates for First-Order Conic Solvers (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
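
    A highly simplified sketch of fluence-map optimization with a total-variation penalty is given below; a tiny one-dimensional fluence map, a random dose-influence matrix, and plain projected subgradient descent stand in for the clinical geometry and the TFOCS solver used in the paper.

    ```python
    # Toy TV-regularized fluence-map optimization:
    # minimize ||D f - d||^2 + lam * TV(f) subject to f >= 0.
    import numpy as np

    rng = np.random.default_rng(6)
    n_vox, n_beamlets = 40, 20
    D = rng.uniform(0.0, 1.0, (n_vox, n_beamlets))      # dose-influence matrix (made up)
    d_target = np.full(n_vox, 10.0)                     # prescribed voxel doses
    lam, step = 0.5, 1e-3

    f = np.zeros(n_beamlets)                            # fluence map (beamlet weights)
    for _ in range(5000):
        grad_fit = 2.0 * D.T @ (D @ f - d_target)       # gradient of the data-fit term
        tv_sub = np.zeros_like(f)                       # subgradient of sum |f[i+1] - f[i]|
        s = np.sign(np.diff(f))
        tv_sub[:-1] -= s
        tv_sub[1:] += s
        f = np.maximum(f - step * (grad_fit + lam * tv_sub), 0.0)   # project onto f >= 0

    print("dose RMS error:", np.sqrt(np.mean((D @ f - d_target) ** 2)).round(3))
    print("distinct fluence levels:", np.unique(f.round(2)).size)
    ```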

  15. Defining Equality in Education

    ERIC Educational Resources Information Center

    Benson, Ronald E.

    1977-01-01

    Defines equality of education in three areas: 1) by the degree of integration of school systems; 2) by a comparison of material resources and assets in education; and 3) by the effects of schooling as measured by the mean scores of groups on standardized tests. Available from: College of Education, 107 Quadrangle, Iowa State University, Ames, Iowa…

  16. Defining Supports Geometry

    ERIC Educational Resources Information Center

    Stephan, Michelle L.; McManus, George E.; Dickey, Ashley L.; Arb, Maxwell S.

    2012-01-01

    The process of developing definitions is underemphasized in most mathematics instruction. Investing time in constructing meaning is well worth the return in terms of the knowledge it imparts. In this article, the authors present a third approach to "defining," called "constructive." It involves modifying students' previous understanding of a term…

  17. On Defining Mass

    ERIC Educational Resources Information Center

    Hecht, Eugene

    2011-01-01

    Though central to any pedagogical development of physics, the concept of mass is still not well understood. Properly defining mass has proven to be far more daunting than contemporary textbooks would have us believe. And yet today the origin of mass is one of the most aggressively pursued areas of research in all of physics. Much of the excitement…

  18. Defining Faculty Work.

    ERIC Educational Resources Information Center

    Gray, Peter J.; Diamond, Robert M.

    1994-01-01

    A process of planned change is proposed for redefining college faculty work. Legitimate faculty work is defined in broad terms, and information sources and methods for collecting information to support redefinition are identified. The final step in the redefinition process is the development of new mission statements for the institution and its…

  19. Defined by Limitations

    ERIC Educational Resources Information Center

    Arriola, Sonya; Murphy, Katy

    2010-01-01

    Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government issued identification) renders them essentially invisible to the American and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…

  20. Defining structural limit zones

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1978-01-01

    Method for defining limit loads uses probability distribution of largest load occurring during given time intervals. Method is compatible with both deterministic and probabilistic structural design criteria. It also rationally accounts for the fact that the longer a structure is exposed to a random loading environment, the greater the possibility that it will experience an extreme load.

  1. Defining Airflow Obstruction

    PubMed Central

    Eschenbacher, William L.

    2016-01-01

    Airflow obstruction has been defined using spirometric test results when the forced expiratory volume in 1 second (FEV1) to forced vital capacity (FVC) ratio is below a fixed cutoff (<70%) or lower limits of normal (LLN) from reference equations that are based on values from a normal population. However, similar to other positive or abnormal diagnostic test results that are used to identify the presence of disease, perhaps airflow obstruction should be defined based on the values of FEV1/FVC for a population of individuals with known disease such as chronic obstructive pulmonary disease (COPD). Unfortunately, we do not know such a distribution of values of FEV1/FVC for patients with COPD since there is no gold standard for this syndrome or condition. Yet, we have used this physiologic definition of airflow obstruction based on a normal population to identify patients with COPD. In addition, we have defined airflow obstruction as either being present or absent. Instead, we should use a different approach to define airflow obstruction based on the probability or likelihood that the airflow obstruction is present which in turn would give us the probability or likelihood of a disease state such as COPD. PMID:27239557

  2. Optimization of an improved analytical method for the determination of 1-nitropyrene in milligram diesel soot samples by high-performance liquid chromatography-mass spectrometry.

    PubMed

    Barreto, R P; Albuquerque, F C; Netto, Annibal D Pereira

    2007-09-01

    A method for the determination of nitrated polycyclic aromatic hydrocarbons (NPAHs) in diesel soot by high-performance liquid chromatography-mass spectrometry with atmospheric pressure chemical ionization (APCI) and ion-trap detection following ultrasonic extraction is described. The determination of 1-nitropyrene, the predominant NPAH in diesel soot, was emphasized. Vaporization and drying temperatures of the APCI interface, electronic parameters of the MS detector and the analytical conditions in reversed-phase HPLC were optimized. The fragmentation patterns of representative NPAHs were evaluated by single and multiple fragmentation steps, and negative ionization led to the largest signals. The transition (247-->217) was employed for quantitative analysis of 1-nitropyrene. Calibration curves were linear between 1 and 15 microgL(-1) with correlation coefficients better than 0.999. A typical detection limit (DL) of 0.2 microgL(-1) was obtained. Samples of diesel soot and of the reference material (SRM-2975, NIST, USA) were extracted with methylene chloride. Recoveries were estimated by analysis of SRM 2975 and were between 82 and 105%. The DL for 1-nitropyrene was better than 1.5 mg kg(-1), but the inclusion of an evaporation step in the sample processing procedure lowered the DL. The application of the method to diesel soot samples from bench motors showed levels

  3. Optimization of the β LACTA test for the detection of extended-spectrum-β-lactamase-producing bacteria directly in urine samples.

    PubMed

    Amzalag, Jonas; Mizrahi, Assaf; Naouri, Diane; Nguyen, Jean Claude; Ganansia, Olivier; Le Monnier, Alban

    2016-09-01

    The β LACTA™ test (BLT) is a chromogenic test detecting resistance to third-generation cephalosporins (3GC) on bacterial colonies. Some studies have shown that this test can be used directly on urine samples. The aim of this study was to determine the optimal conditions of use of this test for detecting ESBL-producing bacteria directly in urine samples. During a 4-month period, a total of 365 consecutive urine samples were tested with the BLT according to the manufacturer's recommendations. We isolated 56 ESBL-producing bacteria and 17 AmpC-overproducing bacteria. ESBL- and/or AmpC β-lactamase-producing isolates were systematically characterized by disc diffusion antibiotic susceptibility testing interpreted according to the EUCAST guidelines. The sensitivity and specificity for 3GC-resistance detection, regardless of the mechanism of resistance, were 60.3% and 100%, respectively, whereas for ESBL detection they were 75.4% and 99.7%. We then applied a modification of the initial protocol, considering only urines with a bacteriuria >1000/μL, reading the test at 30 min, and scoring any change from the initial colour as positive. With this modification, the overall sensitivity was 81% and the sensitivity for ESBL detection rose to 95.7%. PMID:27225534
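
    The performance figures above reduce to simple confusion-matrix arithmetic; a minimal sketch follows, with counts that are illustrative placeholders rather than the study's raw data.

```python
# Minimal sketch: sensitivity and specificity of a rapid test against a reference method.
# The counts below are illustrative placeholders, not the study's raw data.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives / all samples with the condition
    specificity = tn / (tn + fp)   # true negatives / all samples without the condition
    return sensitivity, specificity

sens, spec = sens_spec(tp=44, fn=12, tn=307, fp=1)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```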

  4. Optimization of solvent bar microextraction combined with gas chromatography for preconcentration and determination of methadone in human urine and plasma samples.

    PubMed

    Ebrahimzadeh, Homeira; Mirbabaei, Fatemeh; Asgharinezhad, Ali Akbar; Shekari, Nafiseh; Mollazadeh, Narges

    2014-02-01

    In this study, solvent bar microextraction combined with gas chromatography with flame ionization detection (GC-FID) was used for the preconcentration and determination of methadone in human body fluids. The target drug was extracted from an aqueous sample at pH 11.5 (source phase) into an organic extracting solvent (1-undecanol) located inside the pores and lumen of a polypropylene hollow fiber as the receiving phase. To obtain high extraction efficiency, the effect of different variables on the extraction efficiency was studied using an experimental design. The variables of interest were the organic phase type, source phase pH, ionic strength, stirring rate, extraction time, concentration of Triton X-100, and extraction temperature, which were first screened by a Plackett-Burman design and subsequently optimized by central composite design (CCD). The optimum experimental conditions were a sodium chloride concentration of 5% (w/v), a stirring rate of 700 rpm, an extraction temperature of 20 °C, an extraction time of 45 min, and an aqueous sample pH of 11.5. Under the optimized conditions, the preconcentration factors were between 275 and 300. The calibration curves were linear in the concentration range of 10-1500 μg L(-1). The limits of detection (LODs) were 2.7-7, and the relative standard deviations (RSDs) of the proposed method were 5.9-7.3%. Ultimately, the applicability of the current method was evaluated by the extraction and determination of methadone in different biological samples. PMID:24412690
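
    A minimal sketch of the central composite design step is given below, assuming a second-order response-surface model in two coded factors whose stationary point is found by setting the gradient to zero; the design points and responses are synthetic placeholders, not the paper's data.

```python
# Minimal sketch: fit a second-order response-surface model to central composite design
# (CCD) data for two coded factors and locate the stationary point. Data are placeholders.
import numpy as np

# coded levels of two factors (e.g. extraction time, temperature) in a small CCD
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41], [0, 0], [0, 0]])
y = np.array([62, 70, 66, 78, 60, 75, 64, 72, 80, 79], dtype=float)  # e.g. recovery (%)

# Design matrix for y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
x1, x2 = X[:, 0], X[:, 1]
D = np.column_stack([np.ones(len(y)), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(D, y, rcond=None)
b0, b1, b2, b11, b22, b12 = b

# Stationary point: solve [2*b11, b12; b12, 2*b22] [x1; x2] = -[b1; b2]
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(H, -np.array([b1, b2]))
print("stationary point (coded units):", np.round(x_opt, 2))
```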

  5. Optimization of Matrix Solid-Phase Dispersion method for simultaneous extraction of aflatoxins and OTA in cereals and its application to commercial samples.

    PubMed

    Rubert, Josep; Soler, Carla; Mañes, Jordi

    2010-07-15

    A method based on Matrix Solid-Phase Dispersion (MSPD) has been developed for the determination of 5 mycotoxins (ochratoxin A and aflatoxins B and G) in different cereals. Several dispersants, eluents and ratios were tested during the optimization of the process in order to obtain the best results. Finally, samples were blended with C(18) and the mycotoxins were extracted with acetonitrile. Regarding matrix effects, the results clearly demonstrated the necessity of using matrix-matched calibration to validate the method. Analyses were performed by liquid chromatography-triple quadrupole-tandem mass spectrometry (LC-QqQ-MS/MS). The recoveries of the extraction process ranged from 64% to 91%, with relative standard deviations lower than 19% in all cases when samples were fortified at two different concentration levels. Limits of detection ranged from 0.3 ng g(-1) for aflatoxins to 0.8 ng g(-1) for OTA, and the limits of quantification ranged from 1 ng g(-1) for aflatoxins to 2 ng g(-1) for OTA, which were below the limits for these mycotoxins set by the European Union in the matrices evaluated. Application of the method to the analysis of several samples purchased in local supermarkets revealed aflatoxin and OTA levels. PMID:20602937

  6. Young black women: defining health.

    PubMed

    Hargrove, H J; Keller, C

    1993-01-01

    The purpose of this study was to elicit a definition of health as described by young Black women and to characterize the factors related to their definitions of health. The research questions were: (a) How do young Black women define health and (b) what factors are related to their definition of health? Using interviews and open-ended questions, an exploratory descriptive design examined the factors which contribute to the definition of health. Twenty-two young Black women between the ages of 21 and 40 comprised the sample. A wide range of incomes, occupations, educational levels, marital status, and family sizes were represented. The informants defined health as comprising those characteristics, behaviors, and/or activities which include: (a) having or avoiding a disease, (b) the presence or absence of obesity, (c) experiencing and reducing stress, (d) good and bad health habits, (e) eating good and bad foods, and (f) engaging (or not) in exercise. PMID:8106873

  7. Comprehensive kinetics of triiodothyronine production, distribution, and metabolism in blood and tissue pools of the rat using optimized blood-sampling protocols.

    PubMed

    DiStefano, J J; Jang, M; Malone, T K; Broutman, M

    1982-01-01

    We have determined estimates for 24 physiological parameters of production, interpool transport, distribution, and metabolism of T3 in the major T3 pools of the unanesthetized male Sprague-Dawley rat, from blood-borne data and a comprehensive model and analysis of this system. Most of these indices have previously been unavailable. Whereas only 3% (2 ng/100 g BW) of the total body T3 pool (74 ng/100 g BW) is in plasma, the composite of slowly equilibrating (slow) tissue pools (e.g. muscle, skin, and brain) appears to contain most of the T3, 76% (57 ng/100 g BW) of the total. The composite of rapidly equilibrating (fast) tissue pools (e.g. liver and kidney) contains the remaining 19% (16 ng/100 g BW). The total body T3 production rate is 0.12 ng/100 g BW·min, and we estimate that about half of this emanates directly from T4 in the slow pools, whereas the remainder is derived from both thyroidal secretion and T4 to T3 conversion in the fast pools. Our results also indicate that T3 molecules spend an average of only 0.5 min in transit each time through plasma, whereas the single pass mean transit times in fast and slow tissue pools (the times available for hormone action) are 10 times and 200 times greater. In contrast, the mean residence time for T3 in the entire system is greater than 12 h despite the extremely rapid early disappearance of injected T3 from plasma. To obtain the required accuracy, we used a novel optimization approach for choosing blood-sampling schedules (1, 4, 44, 202, and 600 min), a remarkably small number of sample times, and each was adjustable by about ±20% without effect on optimized parameter accuracies. PMID:7053984
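
    The flavor of such a compartmental analysis can be sketched with a generic linear three-pool (plasma / fast / slow) model whose steady state is obtained by solving K·q + u = 0; the rate constants and production rates below are made-up placeholders, not the parameter estimates reported in the study.

```python
# Minimal sketch: steady state of a generic linear three-pool model, dq/dt = K q + u.
# All rate constants and production rates are illustrative placeholders.
import numpy as np

# fractional transfer rates (1/min); k_a_b = transfer from pool a to pool b
k_p_fast, k_fast_p = 0.5, 0.06        # plasma -> fast pool, fast pool -> plasma
k_p_slow, k_slow_p = 0.3, 0.01        # plasma -> slow pool, slow pool -> plasma
loss_fast, loss_slow = 0.002, 0.001   # irreversible loss (metabolism) from tissue pools

K = np.array([
    [-(k_p_fast + k_p_slow),  k_fast_p,                k_slow_p               ],  # plasma
    [  k_p_fast,             -(k_fast_p + loss_fast),  0.0                    ],  # fast pool
    [  k_p_slow,              0.0,                    -(k_slow_p + loss_slow) ],  # slow pool
])
u = np.array([0.05, 0.04, 0.03])      # production entering each pool (arbitrary units/min)

q = np.linalg.solve(K, -u)            # steady state: K q + u = 0
print("steady-state pool masses (plasma, fast, slow):", np.round(q, 1))
print("fraction of total in plasma: %.1f%%" % (100 * q[0] / q.sum()))
print("mean plasma transit time: %.2f min" % (1.0 / (k_p_fast + k_p_slow)))
```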

  8. Proteomic study of a model causative agent of harmful red tide, Prorocentrum triestinum I: Optimization of sample preparation methodologies for analyzing with two-dimensional electrophoresis.

    PubMed

    Chan, Leo Lai; Lo, Samuel Chun-Lap; Hodgkiss, Ivor John

    2002-09-01

    A comprehensive study to find the optimal sample preparation conditions for two-dimensional electrophoresis (2-DE) analysis of Prorocentrum triestinum, a model causative agent of harmful algal blooms (HABs), was carried out. The four major sample preparation steps for 2-DE were studied: (a) cell disruption, i.e. sonication and homogenization with glass beads; (b) protein extraction, i.e. sequential and independent extraction procedures; (c) pre-electrophoretic treatment, which included (i) treatment with RNAase/DNAase or benzonase; (ii) ultracentrifugation to sediment large macromolecules such as DNA; (iii) desalting and concentration by ultrafiltration through a Microcon centrifugal filter device (MWCO: 3000 daltons); and (iv) desalting by a micro BioSpin chromatography column (MWCO: 6000 daltons); and (d) rehydration buffers, reducing agents and sample application in first-dimension isoelectric focussing. Our results showed that sonication is easy to perform and resulted in a higher protein yield. Among the four extraction buffers, the urea-containing buffers resulted in the extraction of the highest amount of protein, while tris(hydroxymethyl)aminomethane buffers and trichloroacetic acid (TCA)/acetone precipitation allowed detection of a higher number of protein species (i.e. protein spots). Desalting by BioSpin and ultrafiltration improved the 2-DE resolution of the water-soluble fraction but had less effect on the urea-containing fractions. TCA/acetone precipitation was able to desalt all protein fractions independent of the extraction media; however, extended exposure to this low-pH medium caused protein modification. Introduction of either DNase/RNase or benzonase treatment did not improve the discriminatory power of the 2-DE, but this treatment did yield 2-DE with the clearest background. Proteolytic digestion was inhibited by addition of a protease inhibitor cocktail. Taken overall, a combination of sequential extraction and desalting by Bio

  9. Defining Dynamic Route Structure

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Jastrzebski, Michael

    2011-01-01

    This poster describes a method for defining route structure from flight tracks. Dynamically generated route structures could be useful in guiding dynamic airspace configuration and helping controllers retain situational awareness under dynamically changing traffic conditions. Individual merge and diverge intersections between pairs of flights are identified, clustered, and grouped into nodes of a route structure network. Links are placed between nodes to represent major traffic flows. A parametric analysis determined the algorithm input parameters producing route structures of current day flight plans that are closest to today's airway structure. These parameters are then used to define and analyze the dynamic route structure over the course of a day for current day flight paths. Route structures are also compared between current day flight paths and more user-preferred paths such as great circle and weather avoidance routing.

  10. Defining the paramedic process.

    PubMed

    Carter, Holly; Thompson, James

    2015-01-01

    The use of a 'process of care' is well established in several health professions, most evidently within the field of nursing. Now ingrained within methods of care delivery, it offers a logical approach to problem solving and ensures an appropriate delivery of interventions that are specifically suited to the individual patient. Paramedicine is a rapidly advancing profession despite a wide acknowledgement of limited research provisions. This frequently results in the borrowing of evidence from other disciplines. While this has often been useful, there are many concerns relating to the acceptable limit of evidence transcription between professions. To date, there is no formally recognised 'process of care'-defining activity within the pre-hospital arena. With much current focus on the professional classification of paramedic work, it is considered timely to formally define a formula that underpins other professional roles such as nursing. It is hypothesised that defined processes of care, particularly the nursing process, may have features that would readily translate to pre-hospital practice. The literature analysed was obtained through systematic searches of a range of databases, including Ovid MEDLINE, Cumulative Index to Nursing and Allied Health. The results demonstrated that the defined process of care provides nursing with more than just a structure for practice, but also has implications for education, clinical governance and professional standing. The current nursing process does not directly articulate to the complex and often unstructured role of the paramedic; however, it has many principles that offer value to the paramedic in their practice. Expanding the nursing process model to include the stages of Dispatch Considerations, Scene Assessment, First Impressions, Patient History, Physical Examination, Clinical Decision-Making, Interventions, Re-evaluation, Transport Decisions, Handover and Reflection would provide an appropriate model for pre

  11. Optimal Appearance Model for Visual Tracking

    PubMed Central

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
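
    As a rough illustration of optimizing cue-integration weights for discriminative ability, the sketch below uses a Fisher discriminant as a stand-in for the paper's margin-based objective and optimization algorithms; all feature samples are synthetic placeholders.

```python
# Minimal sketch: choose cue-integration weights that maximize foreground/background
# separability, using a Fisher discriminant as a stand-in for the margin objective.
# All feature data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
# per-sample scores from three cues (e.g. color, texture, motion) for fg and bg samples
fg = rng.normal([0.8, 0.6, 0.5], 0.15, size=(200, 3))
bg = rng.normal([0.4, 0.5, 0.2], 0.20, size=(200, 3))

mu_fg, mu_bg = fg.mean(0), bg.mean(0)
Sw = np.cov(fg, rowvar=False) + np.cov(bg, rowvar=False)   # within-class scatter
w = np.linalg.solve(Sw, mu_fg - mu_bg)                     # Fisher-optimal cue weights
w /= np.abs(w).sum()                                       # normalize the weights

score = lambda x: x @ w
separation = score(fg).mean() - score(bg).mean()
print("cue weights:", np.round(w, 3), " separation:", round(float(separation), 3))
```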

  12. Optimal Appearance Model for Visual Tracking.

    PubMed

    Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao

    2016-01-01

    Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639

  13. Optimization of a validated stability-indicating RP-LC method for the determination of fulvestrant from polymeric based nanoparticle systems, drugs and biological samples.

    PubMed

    Gumustas, Mehmet; Sengel-Turk, Ceyda Tuba; Hascicek, Canan; Ozkan, Sibel A

    2014-10-01

    Fulvestrant is used for the treatment of hormone receptor-positive metastatic breast cancer in postmenopausal women with disease progression following anti-estrogen therapy. Several reversed-phase columns with variable silica materials, diameters, lengths, etc., were tested for the optimization study. A good chromatographic separation was achieved using a Waters X-Terra RP(18) column (250 × 4.6 mm i.d. × 5 µm) and a mobile phase consisting of a mixture of acetonitrile-water (65:35; v/v) containing phosphoric acid (0.1%). The separation was carried out at 40 °C with detection at 215 nm. The calibration curves were linear over the concentration ranges of 1.0-300 and 1.0-200 µg/mL for standard solutions and biological media, respectively. The proposed method is accurate and reproducible. Forced degradation studies were also performed. This fully validated method allows the direct determination of fulvestrant in dosage forms and biological samples. The average recovery of the added fulvestrant amount in the samples was between 98.22 and 104.03%. The proposed method was also applied to the determination of fulvestrant in the polymeric-based nanoparticle systems. No interference from the polymers and other excipients was observed in in vitro drug release studies. Therefore, the incorporation efficiency of fulvestrant-loaded nanoparticles could be determined accurately and specifically. PMID:24861889

  14. [Optimization of sample pretreatment method for the determination of typical artificial sweeteners in soil by high performance liquid chromatography-tandem mass spectrometry].

    PubMed

    Feng, Biting; Gan, Zhiwei; Hu, Hongwei; Sun, Hongwen

    2014-09-01

    The sample pretreatment method for the determination of four typical artificial sweeteners (ASs) including sucralose, saccharin, cyclamate, and acesulfame in soil by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) was optimized. Different conditions of extraction, including four extractants (methanol, acetonitrile, acetone, deionized water), three ionic strengths of sodium acetate solution (0.001, 0.01, 0.1 mol/L), four pH values (3, 4, 5 and 6) of 0.01 mol/L acetate-sodium acetate solution, four durations of extraction (20, 40, 60, 120 min) and the number of extraction cycles (1, 2, 3, 4 times) were compared. The optimal sample pretreatment method was finally set up. The samples were extracted twice with 25 mL 0.01 mol/L sodium acetate solution (pH 4) for 20 min per cycle. The extracts were combined and then purified and concentrated by CNW Poly-Sery PWAX cartridges with methanol containing 1 mmol/L tris(hydroxymethyl)aminomethane (Tris) and 5% (v/v) ammonium hydroxide as eluent. The analytes were determined by HPLC-MS/MS. The recoveries were obtained by spiking soil with the four artificial sweeteners at 1, 10 and 100 μg/kg (dry weight) separately. The average recoveries of the analytes ranged from 86.5% to 105%. The intra-day and inter-day precisions expressed as relative standard deviations (RSDs) were in the range of 2.56%-5.94% and 3.99%-6.53%, respectively. Good linearities (r2 > 0.995) were observed between 1-100 μg/kg (dry weight) for all the compounds. The limits of detection were 0.01-0.21 μg/kg and the limits of quantification were 0.03-0.70 μg/kg for the analytes. The four artificial sweeteners were determined in soil samples from farmland contaminated by wastewater in Tianjin. This method is rapid, reliable, and suitable for the investigation of artificial sweeteners in soil. PMID:25752083

  15. Defining functional dyspepsia.

    PubMed

    Mearin, Fermín; Calleja, José Luis

    2011-12-01

    Dyspepsia and functional dyspepsia represent a highly significant public health issue. A good definition of dyspepsia is key for helping us to better approach symptoms, decision making, and therapy indications. During the last few years many attempts were made to establish a definition of dyspepsia. The results met with little success on most occasions, and clear discrepancies arose on whether symptoms should be associated with digestion, which types of symptoms were to be included, which anatomic location the symptoms should have, etc. The Rome III Committee defined dyspepsia as "a symptom or set of symptoms that most physicians consider to originate from the gastroduodenal area", including the following: postprandial heaviness, early satiety, and epigastric pain or burning. Two new entities were defined: a) food-induced dyspeptic symptoms (postprandial distress syndrome); and b) epigastric pain (epigastric pain syndrome). These and other definitions have shown both strengths and weaknesses. At times they have been much too complex, at times much too simple; furthermore, they have commonly erred on the side of being inaccurate and impractical. On the other hand, some (the most recent ones) are difficult to translate into the Spanish language. In a meeting of gastroenterologists with a special interest in digestive functional disorders, the various aspects of dyspepsia definition were discussed and put to the vote, and the following conclusions were arrived at: dyspepsia is defined as a set of symptoms, either related or unrelated to food ingestion, localized on the upper half of the abdomen. They include: a) epigastric discomfort (as a category of severity) or pain; b) postprandial heaviness; and c) early satiety. Associated complaints include: nausea, belching, bloating, and epigastric burn (heartburn). All these must be scored according to severity and frequency. Furthermore, psychological factors may be involved in the origin of functional dyspepsia. On the other hand

  16. Defining responsibility for screening.

    PubMed

    Sifri, R; Wender, R

    1999-10-01

    Patients commonly receive medical care from multiple providers, and confusion as to who is responsible for cancer screening undoubtedly contributes to inadequate recommendations. Effective screening requires successful implementation of a series of steps that begin with the initial discussion of a screening test and proceed through obtaining results and instituting appropriate follow-up. A clear definition of generalist and specialist physician roles is necessary to optimally screen the public. This article explores the differences in how generalists and specialists approach screening, describes models of care that facilitate shared responsibility for screening, and suggests strategies on how to improve communication between physicians to maximize screening performance. PMID:10452930

  17. Analysis of PCDD/Fs and dioxin-like PCBs in atmospheric deposition samples from the Flemish measurement network: Optimization and validation of a new CALUX bioassay method.

    PubMed

    Croes, K; Van Langenhove, K; Elskens, M; Desmedt, M; Roekens, E; Kotz, A; Denison, M S; Baeyens, W

    2011-01-01

    Since the CALUX (Chemically Activated LUciferase gene eXpression) bioassay is a fast, sensitive and inexpensive tool for the analysis of a high number of samples, validation of new methods is urgently needed. In this study, a new method for the analysis of PCDD/Fs and dioxin-like PCBs in atmospheric deposition samples with the CALUX bioassay was developed, optimized and validated. The method consists of 4 steps: filtration, extraction, clean up and bioassay analysis. To avoid the use of large amounts of toxic solvents, new techniques were used for filtration and extraction: a C18 filter was used instead of a liquid/liquid extraction and an Accelerated Solvent Extractor (ASE) was used instead of the traditional soxhlet extraction. After pre-oxidation of the sample extract, clean up was done using a multi-layer silica gel column coupled to a carbon column. The PCDD/F and PCB fractions were finally analyzed with the H1L7.5c1 and/or the H1L6.1c3 mouse hepatoma cell lines. The limit of quantification was 1.4 pg CALUX-BEQ m(-2) d(-1) for the PCBs and 5.6 pg CALUX-BEQ m(-2) d(-1) for the PCDD/Fs, when using the new sensitive H1L7.5c1 cell line. The GC-HRMS recovery for all PCDD/F congeners was between 55% and 112%, with a mean recovery of 90%. CALUX recoveries of spiked procedural blanks were within the accepted range of 80-120%. Repeatability and reproducibility were satisfactory and no interferences from metals were detected. The first results from the Flemish measurement program showed good correlation between CALUX and GC-HRMS. PMID:21094512

  18. Defining periodontal health

    PubMed Central

    2015-01-01

    Assessment of the periodontium has relied exclusively on a variety of physical measurements (e.g., attachment level, probing depth, bone loss, mobility, recession, degree of inflammation, etc.) in relation to various case definitions of periodontal disease. Periodontal health was often an afterthought and was simply defined as the absence of the signs and symptoms of a periodontal disease. Accordingly, these strict and sometimes disparate definitions of periodontal disease have resulted in an idealistic requirement of a pristine periodontium for periodontal health, which makes us all diseased in one way or another. Furthermore, the consequence of not having a realistic definition of health has resulted in potentially questionable recommendations. The aim of this manuscript was to assess the biological, environmental, sociological, economic, educational and psychological relationships that are germane to constructing a paradigm that defines periodontal health using a modified wellness model. The paradigm includes four cardinal characteristics, i.e., 1) a functional dentition, 2) the painless function of a dentition, 3) the stability of the periodontal attachment apparatus, and 4) the psychological and social well-being of the individual. Finally, strategies and policies that advocate periodontal health were appraised. I'm not sick but I'm not well, and it's a sin to live so well. Flagpole Sitta, Harvey Danger PMID:26390888

  19. TAPERED DEFINING SLOT

    DOEpatents

    Pressey, F.W.

    1959-09-01

    An improvement is reported in the shape and formation of the slot or opening in the collimating slot member which forms part of an ion source of the type wherein a vapor of the material to be ionized is bombarded by electrons in a magnetic field to strike an arc, producing ionization. The defining slot is formed so as to have a substantial taper away from the cathode, causing the electron bombardment from the cathode to be dispersed over a greater area, reducing its temperature, and at the same time bringing the principal concentration of heat from the electron bombardment nearer the anode side of the slot, thus reducing deterioration and prolonging the life of the slot member during operation.

  20. [Determination of 51 carbamate pesticide residues in vegetables by liquid chromatography-tandem mass spectrometry based on optimization of QuEChERS sample preparation method].

    PubMed

    Wang, Lianzhu; Zhou, Yu; Huang, Xiaoyan; Wang, Ruilong; Lin, Zixu; Chen, Yong; Wang, Dengfei; Lin, Dejuan; Xu, Dunming

    2013-12-01

    The raw extracts of six vegetables (tomato, green bean, shallot, broccoli, ginger and carrot) were analyzed using gas chromatography-mass spectrometry (GC-MS) in full scan mode combined with NIST library search to confirm main matrix compounds. The effects of cleanup and adsorption mechanisms of primary secondary amine (PSA) , octadecylsilane (C18) and PSA + C18 on co-extractives were studied by the weight of evaporation residue for extracts before and after cleanup. The suitability of the two versions of QuEChERS method for sample preparation was evaluated for the extraction of 51 carbamate pesticides in the six vegetables. One of the QuEChERS methods was the original un-buffered method published in 2003, and the other was AOAC Official Method 2007.01 using acetate buffer. As a result, the best effects were obtained from using the combination of C18 and PSA for extract cleanup in vegetables. The acetate-buffered version was suitable for the determination of all pesticides except dioxacarb. Un-buffered QuEChERS method gave satisfactory results for determining dioxacarb. Based on these results, the suitable QuEChERS sample preparation method and liquid chromatography-positive electrospray ionization-tandem mass spectrometry under the optimized conditions were applied to determine the 51 carbamate pesticide residues in six vegetables. The analytes were quantified by matrix-matched standard solution. The recoveries at three levels of 10, 20 and 100 microg/kg spiked in six vegetables ranged from 58.4% to 126% with the relative standard deviations of 3.3%-26%. The limits of quantification (LOQ, S/N > or = 10) were 0.2-10 microg/kg except that the LOQs of cartap and thiofanox were 50 microg/kg. The method is highly efficient, sensitive and suitable for monitoring the 51 carbamate pesticide residues in vegetables. PMID:24669707

  1. Defining the Anthropocene

    NASA Astrophysics Data System (ADS)

    Lewis, Simon; Maslin, Mark

    2016-04-01

    Time is divided by geologists according to marked shifts in Earth's state. Recent global environmental changes suggest that Earth may have entered a new human-dominated geological epoch, the Anthropocene. Should the Anthropocene - the idea that human activity is a force acting upon the Earth system in ways that mean that Earth will be altered for millions of years - be defined as a geological time-unit at the level of an Epoch? Here we appraise the data to assess such claims, first in terms of changes to the Earth system, with particular focus on very long-lived impacts, as Epochs typically last millions of years. Can Earth really be said to be in transition from one state to another? Secondly, we then consider the formal criteria used to define geological time-units and move forward through time examining whether currently available evidence passes typical geological time-unit evidence thresholds. We suggest two time periods likely fit the criteria (1) the aftermath of the interlinking of the Old and New Worlds, which moved species across continents and ocean basins worldwide, a geologically unprecedented and permanent change, which is also the globally synchronous coolest part of the Little Ice Age (in Earth system terms), and the beginning of global trade and a new socio-economic "world system" (in historical terms), marked as a golden spike by a temporary drop in atmospheric CO2, centred on 1610 CE; and (2) the aftermath of the Second World War, when many global environmental changes accelerated and novel long-lived materials were increasingly manufactured, known as the Great Acceleration (in Earth system terms) and the beginning of the Cold War (in historical terms), marked as a golden spike by the peak in radionuclide fallout in 1964. We finish by noting that the Anthropocene debate is politically loaded, thus transparency in the presentation of evidence is essential if a formal definition of the Anthropocene is to avoid becoming a debate about bias. The

  2. Orthogonal array design as a chemometric method for the optimization of analytical procedures. Part 5. Three-level design and its application in microwave dissolution of biological samples.

    PubMed

    Lan, W G; Wong, M K; Chen, N; Sin, Y M

    1995-04-01

    The theory and methodology of a three-level orthogonal array design as a chemometric method for the optimization of analytical procedures were developed. In the theoretical section, firstly, the matrix of a three-level orthogonal array design is described and orthogonality is proved by a quadratic regression model. Next, the assignment of experiments in a three-level orthogonal array design and the use of the triangular table associated with the corresponding orthogonal array matrix are illustrated, followed by the data analysis strategy, in which significance of the different factor effects is quantitatively evaluated by the analysis of variance (ANOVA) technique and the percentage contribution method. Then, a quadratic regression equation representing the response surface is established to estimate each factor that has a significant influence. Finally, on the basis of the quadratic regression equation established, the derivative algorithm is used to find the optimum value for each variable considered. In the application section, microwave dissolution for the determination of selenium in biological samples by hydride generation atomic absorption spectrometry is employed, as a practical example, to demonstrate the application of the proposed three-level orthogonal array design in analytical chemistry. PMID:7771675
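
    The final derivative step can be sketched for a single factor: fit a quadratic to the three coded levels and set dy/dx = 0; the levels and responses below are illustrative placeholders, not the selenium data.

```python
# Minimal sketch: fit a quadratic response model to three coded factor levels and set
# the derivative to zero to locate the optimum. Levels and responses are placeholders.
import numpy as np

levels = np.array([-1.0, 0.0, 1.0])          # coded levels of one factor in the design
response = np.array([72.0, 88.0, 81.0])      # e.g. analyte recovery (%) at each level

b2, b1, b0 = np.polyfit(levels, response, 2)  # y = b2 x^2 + b1 x + b0
x_opt = -b1 / (2.0 * b2)                      # dy/dx = 0  ->  optimum (coded units)
print(f"optimum at coded level {x_opt:.2f}, "
      f"predicted response {np.polyval([b2, b1, b0], x_opt):.1f}")
```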

  3. Using Soil Apparent Electrical Conductivity to Optimize Sampling of Soil Penetration Resistance and to Improve the Estimations of Spatial Patterns of Soil Compaction

    PubMed Central

    Siqueira, Glécio Machado; Dafonte, Jorge Dafonte; Bueno Lema, Javier; Valcárcel Armesto, Montserrat; Silva, Ênio Farias França e

    2014-01-01

    This study presents a combined application of an EM38DD for assessing soil apparent electrical conductivity (ECa) and a dual-sensor vertical penetrometer Veris-3000 for measuring soil electrical conductivity (ECveris) and soil resistance to penetration (PR). The measurements were made at a 6 ha field cropped with forage maize under no-tillage after sowing and located in Northwestern Spain. The objective was to use data from ECa for improving the estimation of soil PR. First, data of ECa were used to determine the optimized sampling scheme of the soil PR in 40 points. Then, correlation analysis showed a significant negative relationship between soil PR and ECa, ranging from −0.36 to −0.70 for the studied soil layers. The spatial dependence of soil PR was best described by spherical models in most soil layers. However, below 0.50 m the spatial pattern of soil PR showed pure nugget effect, which could be due to the limited number of PR data used in these layers as the values of this parameter often were above the range measured by our equipment (5.5 MPa). The use of ECa as secondary variable slightly improved the estimation of PR by universal cokriging, when compared with kriging. PMID:25610899
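
    A minimal sketch of the idea, assuming synthetic data: spread the PR sampling points across ECa strata and check the ECa-PR correlation; the quantile-stratified selection below is a simple stand-in for the sampling-scheme optimization used in such studies.

```python
# Minimal sketch: use a dense ECa survey to pick an optimized subset of PR sampling
# points (one per ECa stratum), then check the ECa-PR correlation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
eca = rng.normal(20, 5, size=600)                  # dense ECa survey (mS/m)
pr = 4.0 - 0.08 * eca + rng.normal(0, 0.3, 600)    # PR (MPa), negatively related to ECa

# pick 40 sampling points: one per ECa quantile stratum
edges = np.quantile(eca, np.linspace(0, 1, 41))
idx = [int(np.argmin(np.abs(eca - 0.5 * (lo + hi))))
       for lo, hi in zip(edges[:-1], edges[1:])]

r = np.corrcoef(eca[idx], pr[idx])[0, 1]
print(f"{len(idx)} points selected; ECa-PR correlation at those points: r = {r:.2f}")
```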

  4. Defining an emerging disease.

    PubMed

    Moutou, F; Pastoret, P-P

    2015-04-01

    Defining an emerging disease is not straightforward, as there are several different types of disease emergence. For example, there can be a 'real' emergence of a brand new disease, such as the emergence of bovine spongiform encephalopathy in the 1980s, or a geographic emergence in an area not previously affected, such as the emergence of bluetongue in northern Europe in 2006. In addition, disease can emerge in species formerly not considered affected, e.g. the emergence of bovine tuberculosis in wildlife species since 2000 in France. There can also be an unexpected increase of disease incidence in a known area and a known species, or there may simply be an increase in our knowledge or awareness of a particular disease. What all these emerging diseases have in common is that human activity frequently has a role to play in their emergence. For example, bovine spongiform encephalopathy very probably emerged as a result of changes in the manufacturing of meat-and-bone meal, bluetongue was able to spread to cooler climes as a result of uncontrolled trade in animals, and a relaxation of screening and surveillance for bovine tuberculosis enabled the disease to re-emerge in areas that had been able to drastically reduce the number of cases. Globalisation and population growth will continue to affect the epidemiology of diseases in years to come and ecosystems will continue to evolve. Furthermore, new technologies such as metagenomics and high-throughput sequencing are identifying new microorganisms all the time. Change is the one constant, and diseases will continue to emerge, and we must consider the causes and different types of emergence as we deal with these diseases in the future. PMID:26470448

  5. Defining the Stimulus - A Memoir

    PubMed Central

    Terrace, Herbert

    2010-01-01

    The eminent psychophysicist, S. S. Stevens, once remarked that, “the basic problem of psychology was the definition of the stimulus” (Stevens, 1951, p. 46). By expanding the traditional definition of the stimulus, the study of animal learning has metamorphosed into animal cognition. The main impetus for that change was the recognition that it is often necessary to postulate a representation between the traditional S and R of learning theory. Representations allow a subject to re-present a stimulus it learned previously that is currently absent. Thus, in delayed-matching-to-sample, one has to assume that a subject responds to a representation of the sample during test if it responds correctly. Other examples, to name but a few, include concept formation, spatial memory, serial memory, learning a numerical rule, imitation and metacognition. Whereas a representation used to be regarded as a mentalistic phenomenon that was unworthy of scientific inquiry, it can now be operationally defined. To accommodate representations, the traditional discriminative stimulus has to be expanded to allow for the role of representations. The resulting composite can account for a significantly larger portion of the variance of performance measures than the exteroceptive stimulus could by itself. PMID:19969047

  6. Optimal Fluoridation

    PubMed Central

    Lee, John R.

    1975-01-01

    Optimal fluoridation has been defined as that fluoride exposure which confers maximal cariostasis with minimal toxicity and its values have been previously determined to be 0.5 to 1 mg per day for infants and 1 to 1.5 mg per day for an average child. Total fluoride ingestion and urine excretion were studied in Marin County, California, children in 1973 before municipal water fluoridation. Results showed fluoride exposure to be higher than anticipated and fulfilled previously accepted criteria for optimal fluoridation. Present and future water fluoridation plans need to be reevaluated in light of total environmental fluoride exposure. PMID:1130041

  7. Family Life and Human Development: Sample Units, K-6. Revised.

    ERIC Educational Resources Information Center

    Prince George's County Public Schools, Upper Marlboro, MD.

    Sample unit outlines, designed for kindergarten through grade six, define the content, activities, and assessment tasks appropriate to specific grade levels. The units have been extracted from the Board-approved curriculum, Health Education: The Curricular Approach to Optimal Health. The instructional guidelines for grade one are: describing a…

  8. Optimal control of objects on the micro- and nano-scale by electrokinetic and electromagnetic manipulation: For bio-sample preparation, quantum information devices and magnetic drug delivery

    NASA Astrophysics Data System (ADS)

    Probst, Roland

    In this thesis I show achievements for precision feedback control of objects inside micro-fluidic systems and for magnetically guided ferrofluids. Essentially, this is about doing flow control, but flow control on the microscale, and further even to nanoscale accuracy, to precisely and robustly manipulate micro- and nano-objects (e.g. cells and quantum dots). Target applications include methods to miniaturize the operations of a biological laboratory (lab-on-a-chip), i.e. presenting pathogens to on-chip sensing cells or extracting cells from messy bio-samples such as saliva, urine, or blood; as well as non-biological applications such as deterministically placing quantum dots on photonic crystals to make multi-dot quantum information systems. The particles are steered by creating an electrokinetic fluid flow that carries all the particles from where they are to where they should be at each time step. The control loop comprises sensing, computation, and actuation to steer particles along trajectories. Particle locations are identified in real time by an optical system and transferred to a control algorithm that then determines the electrode voltages necessary to create a flow field to carry all the particles to their next desired locations. The process repeats at the next time instant. I address the following aspects of this technology. First I explain control and vision algorithms for steering single and multiple particles, and show extensions of these algorithms for steering in three-dimensional (3D) spaces. Then I show algorithms for calculating power-minimum paths for steering multiple particles in actuation-constrained environments. With this microfluidic system I steer biological cells and nanoparticles (quantum dots) to nanometer precision. In the last part of the thesis I develop and experimentally demonstrate two-dimensional (2D) manipulation of a single droplet of ferrofluid by feedback control of 4 external electromagnets, with a view towards enabling
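
    The sense-compute-actuate loop described above can be sketched generically as follows, with a proportional controller and a least-squares solve for electrode voltages; the 2x4 actuation matrix and all constants are made-up placeholders, not a model of the actual device.

```python
# Minimal sketch of a sense-compute-actuate particle steering loop: observe position,
# compute the error to the waypoint, and solve for electrode voltages that produce a
# flow toward it. The actuation matrix B and all constants are illustrative placeholders.
import numpy as np

B = np.array([[1.0, -1.0, 0.2, -0.2],       # x-velocity produced per volt on electrodes 1..4
              [0.3, -0.3, 1.0, -1.0]])      # y-velocity produced per volt
dt, gain = 0.05, 1.5                         # time step (s) and proportional gain

pos = np.array([0.0, 0.0])
waypoints = [np.array([5.0, 0.0]), np.array([5.0, 5.0])]

rng = np.random.default_rng(3)
for target in waypoints:
    for _ in range(200):
        error = target - pos                                   # sensing + error computation
        v_des = gain * error                                   # desired velocity (P-control)
        volts = np.linalg.lstsq(B, v_des, rcond=None)[0]       # actuation solve
        pos = pos + B @ volts * dt + rng.normal(0, 0.01, 2)    # particle moves, with noise
    print("reached", np.round(pos, 2), "for target", target)
```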

  9. High performance liquid chromatographic determination of ultra traces of two tricyclic antidepressant drugs imipramine and trimipramine in urine samples after their dispersive liquid-liquid microextraction coupled with response surface optimization.

    PubMed

    Shamsipur, Mojtaba; Mirmohammadi, Mehrosadat

    2014-11-01

    Dispersive liquid-liquid microextraction (DLLME) coupled with high performance liquid chromatography with ultraviolet detection (HPLC-UV), as a fast and inexpensive technique, was applied to the determination of imipramine and trimipramine in urine samples. Response surface methodology (RSM) was used for multivariate optimization of the effects of seven different parameters influencing the extraction efficiency of the proposed method. Under optimized experimental conditions, the enrichment factors and extraction recoveries were between 161.7-186.7 and 97-112%, respectively. The linear range and limit of detection for both analytes were found to be 5-100 ng mL(-1) and 0.6 ng mL(-1), respectively. The relative standard deviations for 5 ng mL(-1) of the drugs in urine samples were in the range of 5.1-6.1% (n=5). The developed method was successfully applied to real urine sample analyses. PMID:25178259
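
    The two figures of merit quoted above follow from simple definitions: the enrichment factor (EF) is the analyte concentration in the settled organic phase over its initial concentration in the sample, and the extraction recovery rescales EF by the phase-volume ratio; the sketch below uses placeholder numbers, not the study's measurements.

```python
# Minimal sketch: enrichment factor (EF) and extraction recovery (ER) as commonly defined
# for microextraction methods. All concentrations and volumes are placeholders.
def enrichment_factor(c_extract, c_initial):
    return c_extract / c_initial

def extraction_recovery(ef, v_extract_uL, v_sample_mL):
    return ef * (v_extract_uL / 1000.0) / v_sample_mL * 100.0   # percent

ef = enrichment_factor(c_extract=1800.0, c_initial=10.0)        # e.g. ng/mL
er = extraction_recovery(ef, v_extract_uL=30.0, v_sample_mL=5.0)
print(f"EF = {ef:.0f}, ER = {er:.0f}%")
```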

  10. GROUND WATER SAMPLING ISSUES

    EPA Science Inventory

    Obtaining representative ground water samples is important for site assessment and
    remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...

  11. GEOTHERMAL EFFLUENT SAMPLING WORKSHOP

    EPA Science Inventory

    This report outlines the major recommendations resulting from a workshop to identify gaps in existing geothermal effluent sampling methodologies, define needed research to fill those gaps, and recommend strategies to lead to a standardized sampling methodology.

  12. Evaluating the interaction of faecal pellet deposition rates and DNA degradation rates to optimize sampling design for DNA-based mark-recapture analysis of Sonoran pronghorn.

    PubMed

    Woodruff, S P; Johnson, T R; Waits, L P

    2015-07-01

    Knowledge of population demographics is important for species management but can be challenging in low-density, wide-ranging species. Population monitoring of the endangered Sonoran pronghorn (Antilocapra americana sonoriensis) is critical for assessing the success of recovery efforts, and noninvasive DNA sampling (NDS) could be more cost-effective and less intrusive than traditional methods. We evaluated faecal pellet deposition rates and faecal DNA degradation rates to maximize sampling efficiency for DNA-based mark-recapture analyses. Deposition data were collected at five watering holes using sampling intervals of 1-7 days and averaged one pellet pile per pronghorn per day. To evaluate nuclear DNA (nDNA) degradation, 20 faecal samples were exposed to local environmental conditions and sampled at eight time points from one to 124 days. Average amplification success rates for six nDNA microsatellite loci were 81% for samples on day one, 63% by day seven, 2% by day 14 and 0% by day 60. We evaluated the efficiency of different sampling intervals (1-10 days) by estimating the number of successful samples, success rate of individual identification and laboratory costs per successful sample. Cost per successful sample increased and success and efficiency declined as the sampling interval increased. Results indicate NDS of faecal pellets is a feasible method for individual identification, population estimation and demographic monitoring of Sonoran pronghorn. We recommend collecting samples <7 days old and estimate that a sampling interval of four to seven days in summer conditions (i.e., extreme heat and exposure to UV light) will achieve desired sample sizes for mark-recapture analysis while also maximizing efficiency [Corrected]. PMID:25522240
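
    The efficiency comparison described above can be sketched by combining a deposition rate with a declining amplification-success curve to estimate cost per successful sample for each candidate interval; the decay curve below is loosely shaped like the day-1/7/14 success rates quoted, while the cost and deposition figures are assumptions.

```python
# Minimal sketch: expected genotyping success and cost per successful sample as a function
# of the sampling interval, given an amplification-success curve that declines with sample
# age. The success curve, lab cost, and deposition rate are illustrative placeholders.
import numpy as np

days = np.array([1, 7, 14])
success = np.array([0.81, 0.63, 0.02])      # amplification success vs sample age (days)
cost_per_sample = 25.0                       # assumed lab cost per sample (currency units)
pellets_per_day = 30.0                       # assumed deposition at monitored sites

for interval in range(1, 11):
    ages = np.arange(interval) + 0.5                         # mean ages of collected samples
    p = np.interp(ages, days, success, left=0.81, right=0.0)
    n_collected = pellets_per_day * interval
    n_success = pellets_per_day * p.sum()                    # expected successful genotypes
    cost_per_success = cost_per_sample * n_collected / n_success
    print(f"interval {interval:2d} d: success rate {n_success / n_collected:.2f}, "
          f"cost per success {cost_per_success:6.1f}")
```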

  13. Considerations and challenges in defining optimal iron utilization in hemodialysis.

    PubMed

    Charytan, David M; Pai, Amy Barton; Chan, Christopher T; Coyne, Daniel W; Hung, Adriana M; Kovesdy, Csaba P; Fishbane, Steven

    2015-06-01

    Trials raising concerns about erythropoiesis-stimulating agents, revisions to their labeling, and changes to practice guidelines and dialysis payment systems have provided strong stimuli to decrease erythropoiesis-stimulating agent use and increase intravenous iron administration in recent years. These factors have been associated with a rise in iron utilization, particularly among hemodialysis patients, and an unprecedented increase in serum ferritin concentrations. The mean serum ferritin concentration among United States dialysis patients in 2013 exceeded 800 ng/ml, with 18% of patients exceeding 1200 ng/ml. Although these changes are broad based, the wisdom of these practices is uncertain. Herein, we examine influences on and trends in intravenous iron utilization and assess the clinical trial, epidemiologic, and experimental evidence relevant to its safety and efficacy in the setting of maintenance dialysis. These data suggest a potential for harm from increasing use of parenteral iron in dialysis-dependent patients. In the absence of well powered, randomized clinical trials, available evidence will remain inadequate for making reliable conclusions about the effect of a ubiquitous therapy on mortality or other outcomes of importance to dialysis patients. Nephrology stakeholders have an urgent obligation to initiate well designed investigations of intravenous iron in order to ensure the safety of the dialysis population. PMID:25542967

  14. Considerations and Challenges in Defining Optimal Iron Utilization in Hemodialysis

    PubMed Central

    Pai, Amy Barton; Chan, Christopher T.; Coyne, Daniel W.; Hung, Adriana M.; Kovesdy, Csaba P.; Fishbane, Steven

    2015-01-01

    Trials raising concerns about erythropoiesis-stimulating agents, revisions to their labeling, and changes to practice guidelines and dialysis payment systems have provided strong stimuli to decrease erythropoiesis-stimulating agent use and increase intravenous iron administration in recent years. These factors have been associated with a rise in iron utilization, particularly among hemodialysis patients, and an unprecedented increase in serum ferritin concentrations. The mean serum ferritin concentration among United States dialysis patients in 2013 exceeded 800 ng/ml, with 18% of patients exceeding 1200 ng/ml. Although these changes are broad based, the wisdom of these practices is uncertain. Herein, we examine influences on and trends in intravenous iron utilization and assess the clinical trial, epidemiologic, and experimental evidence relevant to its safety and efficacy in the setting of maintenance dialysis. These data suggest a potential for harm from increasing use of parenteral iron in dialysis-dependent patients. In the absence of well powered, randomized clinical trials, available evidence will remain inadequate for making reliable conclusions about the effect of a ubiquitous therapy on mortality or other outcomes of importance to dialysis patients. Nephrology stakeholders have an urgent obligation to initiate well designed investigations of intravenous iron in order to ensure the safety of the dialysis population. PMID:25542967

  15. Determining the optimal number of individual samples to pool for quantification of average herd levels of antimicrobial resistance genes in Danish pig herds using high-throughput qPCR.

    PubMed

    Clasen, Julie; Mellerup, Anders; Olsen, John Elmerdahl; Angen, Øystein; Folkesson, Anders; Halasa, Tariq; Toft, Nils; Birkegård, Anna Camilla

    2016-06-30

    The primary objective of this study was to determine the minimum number of individual fecal samples to pool together in order to obtain a representative sample for herd level quantification of antimicrobial resistance (AMR) genes in a Danish pig herd, using a novel high-throughput qPCR assay. The secondary objective was to assess the agreement between different methods of sample pooling. Quantification of AMR was achieved using a high-throughput qPCR method to quantify the levels of seven AMR genes (ermB, ermF, sulI, sulII, tet(M), tet(O) and tet(W)). A large variation in the levels of AMR genes was found between individual samples. As the number of samples in a pool increased, a decrease in sample variation was observed. It was concluded that the optimal pooling size is five samples, as an almost steady state in the variation was observed when pooling this number of samples. Good agreement between different pooling methods was found and the least time-consuming method of pooling, by transferring feces from each individual sample to a tube using a 10μl inoculation loop and adding 3.5ml of PBS, approximating a 10% solution, can therefore be used in future studies. PMID:27259826
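
    The pooling argument can be illustrated by simulating between-animal variation and watching the spread of pooled herd estimates shrink, then flatten, as pool size grows; the distribution parameters below are placeholders, not fitted to the study's qPCR data.

```python
# Minimal sketch: variation of pooled herd-level estimates versus the number of individual
# samples per pool. The between-animal distribution is an illustrative placeholder.
import numpy as np

rng = np.random.default_rng(4)
herd = rng.lognormal(mean=0.0, sigma=1.0, size=2000)   # per-animal gene levels (a.u.)

for k in [1, 2, 3, 5, 10, 20]:
    pools = rng.choice(herd, size=(500, k)).mean(axis=1)   # 500 random pools of k samples
    print(f"pool size {k:2d}: SD of pooled estimate = {pools.std():.3f}")
```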

  16. Optimization and application of a custom microarray for the detection and genotyping of E. coli O157:H7 in fresh meat samples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    DNA microarrays are promising high-throughput tools for multiple pathogen detection. Currently, the performance and cost of this platform have limited its broad application in identifying microbial contaminants in foods. In this study, an optimized custom DNA microarray with flexibility in design and...

  17. Optimization of Methods for Obtaining, Extracting and Detecting Mycobacterium avium subsp. paratuberculosis in Environmental Samples using Quantitative, Real-Time PCR

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Detection of Johne’s disease, an enteric infection of cattle caused by Mycobacterium avium subsp. paratuberculosis (M. paratuberculosis), has been impeded by the lack of rapid, reliable detection methods. The goal of this study was to optimize methodologies for obtaining, extracting and evaluating t...

  18. Highly broad-specific and sensitive enzyme-linked immunosorbent assay for screening sulfonamides: Assay optimization and application to milk samples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A broad-specific and sensitive immunoassay for the detection of sulfonamides was developed by optimizing the conditions of an enzyme-linked immunosorbent assay (ELISA) in regard to different monoclonal antibodies (MAbs), assay format, immunoreagents, and several physicochemical factors (pH, salt, de...

  19. Clarifying and Defining Library Services.

    ERIC Educational Resources Information Center

    Shubert, Joseph F., Ed.; Josey, E. J., Ed.

    1991-01-01

    This issue presents articles which, in some way, help to clarify and define library services. It is hoped that this clarification in library service will serve to secure the resources libraries need to serve the people of New York. The following articles are presented: (1) Introduction: "Clarifying and Defining Library Services" (Joseph F.…

  20. Crack-Defined Electronic Nanogaps.

    PubMed

    Dubois, Valentin; Niklaus, Frank; Stemme, Göran

    2016-03-01

    Achieving near-atomic-scale electronic nanogaps in a reliable and scalable manner will facilitate fundamental advances in molecular detection, plasmonics, and nanoelectronics. Here, a method is shown for realizing crack-defined nanogaps separating TiN electrodes, allowing parallel and scalable fabrication of arrays of sub-10 nm electronic nanogaps featuring individually defined gap widths. PMID:26784270

  1. ThermoPhyl: a software tool for selecting phylogenetically optimized conventional and quantitative-PCR taxon-targeted assays for use with complex samples

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The ability to specifically and sensitively target genotypes of interest is critical for the success of many PCR-based analyses of environmental or clinical samples that contain multiple templates. Next-generation sequence data clearly show that such samples can harbour hundreds to thousands of oper...

  2. The Optimization of Molecular Detection of Clinical Isolates of Brucella in Blood Cultures by eryD Transcriptase Gene for Confirmation of Culture-Negative Samples

    PubMed Central

    Tabibnejad, Mahsa; Alikhani, Mohammad Yousef; Arjomandzadegan, Mohammad; Hashemi, Seyed Hamid; Naseri, Zahra

    2016-01-01

    Background Brucellosis is a zoonotic disease which is widespread across the world. Objectives The aim of the present study is the evaluation of culture-negative blood samples. Materials and Methods A total of 100 patients with suspected brucellosis and positive serological tests were included in this experimental study. Diagnosis was performed on patients with clinical symptoms of the disease, followed by the detection of a titer that was equal to or greater than 1:160 (in endemic areas) by the standard tube agglutination method. Blood samples were cultured in a BACTEC 9050 system, and subsequently on Brucella agar. At the same time, DNA from all blood samples was extracted with a Qiagen kit (QIAamp Mini Kit). A molecular assay of the blood samples was carried out by detection of the eryD transcriptase and bcsp 31 genes in specific double PCR reactions. The specificity of the primers was evaluated with DNA from pure, confirmed Brucella colonies recovered from the blood samples, with DNA from other bacteria, and by ordinary PCR. DNA extraction from the pure colonies was carried out by both the Qiagen kit and Chelex 100 methods, and the two were compared. Results 39 cases (39%) were positive by the BACTEC system, and 61 cases (61%) were negative. 23 culture-positive blood samples were randomly selected for PCR reactions; all showed 491 bp for the eryD gene and 223 bp for the bcsp 31 gene. Interestingly, out of 14 culture-negative blood samples, 13 showed positive bands in PCR. The specificity of the PCR method was equal to 100%. DNA extraction from pure cultures was done by both Chelex 100 and the Qiagen kit; these showed the same results for all samples. Conclusions The results show that the presented double PCR method can be used to detect positive cases among culture-negative blood samples. The Chelex 100 method is simpler and safer than the Qiagen kit for DNA extraction. PMID:27330831

  3. The Problem of Defining Intelligence.

    ERIC Educational Resources Information Center

    Lubar, David

    1981-01-01

    The major philosophical issues surrounding the concept of intelligence are reviewed with respect to the problems surrounding the process of defining and developing artificial intelligence (AI) in computers. Various current definitions and problems with these definitions are presented. (MP)

  4. Sample Optimization and Identification of Signal Patterns of Amino Acid Side Chains in 2D RFDR Spectra of the α-Spectrin SH3 Domain

    NASA Astrophysics Data System (ADS)

    Pauli, Jutta; van Rossum, Barth; Förster, Hans; de Groot, Huub J. M.; Oschkinat, Hartmut

    2000-04-01

    Future structural investigations of proteins by solid-state CPMAS NMR will rely on uniformly labeled protein samples showing spectra with an excellent resolution. NMR samples of the solid α-spectrin SH3 domain were generated in four different ways, and their 13C CPMAS spectra were compared. The spectrum of a [u-13C, 15N]-labeled sample generated by precipitation shows very narrow 13C signals and resolved scalar carbon-carbon couplings. Linewidths of 16-19 Hz were found for the three alanine Cβ signals of a selectively labeled [70% 3-13C]alanine-enriched SH3 sample. The signal patterns of the isoleucine, of all prolines, valines, alanines, and serines, and of three of the four threonines were identified in 2D 13C-13C RFDR spectra of the [u-13C,15N]-labeled SH3 sample. A comparison of the 13C chemical shifts of the identified signal patterns with the 13C assignment obtained in solution shows an intriguing match.

  5. Optimized Heart Sampling and Systematic Evaluation of Cardiac Therapies in Mouse Models of Ischemic Injury: Assessment of Cardiac Remodeling and Semi-Automated Quantification of Myocardial Infarct Size.

    PubMed

    Valente, Mariana; Araújo, Ana; Esteves, Tiago; Laundos, Tiago L; Freire, Ana G; Quelhas, Pedro; Pinto-do-Ó, Perpétua; Nascimento, Diana S

    2015-01-01

    Cardiac therapies are commonly tested preclinically in small-animal models of myocardial infarction. Following functional evaluation, post-mortem histological analysis is essential to assess morphological and molecular alterations underlying the effectiveness of treatment. However, non-methodical and inadequate sampling of the left ventricle often leads to misinterpretations and variability, making direct study comparisons unreliable. Protocols are provided for representative sampling of the ischemic mouse heart followed by morphometric analysis of the left ventricle. Extending the use of this sampling to other types of in situ analysis is also illustrated through the assessment of neovascularization and cellular engraftment in a cell-based therapy setting. This is of interest to the general cardiovascular research community as it details methods for standardization and simplification of histo-morphometric evaluation of emergent heart therapies. © 2015 by John Wiley & Sons, Inc. PMID:26629776

  6. Assessment of estrogenic activity in PM₁₀ air samples with the ERE-CALUX bioassay: Method optimization and implementation at an urban location in Flanders (Belgium).

    PubMed

    Croes, Kim; Debaillie, Pieterjan; Van den Bril, Bo; Staelens, Jeroen; Vandermarken, Tara; Van Langenhove, Kersten; Denison, Michael S; Leermakers, Martine; Elskens, Marc

    2016-02-01

    Endocrine disrupting chemicals represent a broad class of compounds, are widespread in the environment and can cause severe health effects. The objective of this study was to investigate the overall estrogen activating potential of PM10 air samples at an urban location with high traffic incidence in Flanders, using a human in vitro cell bioassay. PM10 samples (n = 36) were collected on glass fiber filters every six days between April 2013 and January 2014 using a high-volume sampler. Extraction was executed with a hexane/acetone mixture before analysis using a recombinant estrogen-responsive human ovarian carcinoma (BG1Luc4E2) cell line. In addition, several samples and procedural blanks were extracted with ultra-pure ethanol or acetonitrile to compare extraction efficiencies. Results were expressed as bioanalytical equivalents (BEQs) in femtogram 17β-estradiol equivalent (fg E2-Eq) per cubic meter of air. High fluctuations in estrogenic activity were observed during the entire sampling period, with mean and median BEQs of 50.7 and 35.9 fg E2-Eq m(-3), respectively. Estrogenic activity was measured in more than 70% of the samples and several sample extracts showed both high BEQs and high cytotoxicity, which could not be related to black carbon, PM10 or heavy metal concentrations. At this moment, it remains unclear which substances cause this toxicity, but comparison of results obtained with different extraction solvents indicated that acetone/hexane extracts contained more cytotoxic and response-suppressive compounds than those extracted using ultra-pure ethanol. Although more research is needed, the use of a more polar extraction solvent seems to be advisable. PMID:26383266
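
    Bioanalytical equivalents of this kind are typically obtained by reading a sample's response off a 17β-estradiol standard curve and normalizing to the sampled air volume. The sketch below assumes a four-parameter logistic standard curve; the dose-response values, the extract fraction per well, and the air volume are hypothetical numbers, not data from this study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(dose, bottom, top, ec50, hill):
        """Four-parameter logistic dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

    # Hypothetical 17beta-estradiol standard curve (fg E2 per well vs. luciferase response).
    e2_dose = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
    response = np.array([1.1, 1.4, 2.5, 5.0, 8.3, 9.6, 10.0])
    params, _ = curve_fit(four_pl, e2_dose, response, p0=[1.0, 10.0, 30.0, 1.0])

    def beq_per_m3(sample_response, extract_fraction_per_well, air_volume_m3):
        """Invert the fitted E2 curve for a sample response and normalize to air volume."""
        bottom, top, ec50, hill = params
        e2_eq_well = ec50 * ((top - bottom) / (sample_response - bottom) - 1.0) ** (-1.0 / hill)
        return e2_eq_well / extract_fraction_per_well / air_volume_m3

    print(f"{beq_per_m3(4.2, extract_fraction_per_well=0.05, air_volume_m3=720):.1f} fg E2-Eq m(-3)")
    ```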

  7. Separation of very hydrophobic analytes by micellar electrokinetic chromatography. I. Optimization of the composition of the sample solution for the determination of the aromatic ingredients of sassafras and other essential oils of forensic interest.

    PubMed

    Huhn, Carolin; Pütz, Michael; Holthausen, Ivie; Pyell, Ute

    2008-01-01

    A micellar electrokinetic chromatographic method using UV and (UV)LIF detection in-line was developed for the determination of aromatic constituents, mainly allylbenzenes, in essential oils. The method optimization included optimizing the composition of the separation electrolyte, using ACN and urea to reduce retention factors and CaCl(2) to widen the migration time window. In addition, it was necessary to optimize the composition of the sample solution, which included the addition of a neutral surfactant at high concentration. With the optimized method, the determination of minor constituents in essential oils was possible despite the presence of a structurally related compound at a molar excess of 1000:1. The use of UV and LIF detection in-line enabled the direct comparison of the two detection traces using an electrophoretic mobility x-axis instead of the normal time-based scale. This simplifies the assignment of signals and enhances repeatability. The method developed was successfully applied to the determination of minor and major constituents in herbal essential oils, some of them forensically relevant as sources of precursors for synthetic drugs. PMID:18064732

  8. Gear optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Chen, Xiang; Zhang, Ning-Tian

    1988-01-01

    The use of formal numerical optimization methods for the design of gears is investigated. To achieve this, computer codes were developed for the analysis of spur gears and spiral bevel gears. These codes calculate the life, dynamic load, bending strength, surface durability, gear weight and size, and various geometric parameters. It is necessary to calculate all such important responses because they all represent competing requirements in the design process. The codes developed here were written in subroutine form and coupled to the COPES/ADS general purpose optimization program. This code allows the user to define the optimization problem at the time of program execution. Typical design variables include face width, number of teeth and diametral pitch. The user is free to choose any calculated response as the design objective to minimize or maximize and may impose lower and upper bounds on any calculated responses. Typical examples include life maximization with limits on dynamic load, stress, weight, etc. or minimization of weight subject to limits on life, dynamic load, etc. The research codes were written in modular form for easy expansion and so that they could be combined to create a multiple reduction optimization capability in the future.
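
    The workflow described above (design variables, an objective chosen from the computed responses, and bounds or limits on other responses) maps directly onto a general-purpose optimizer. The sketch below uses scipy.optimize in place of COPES/ADS; the weight and stress expressions are simplified stand-ins for the gear analysis codes, and the load and limits are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Design variables: x = [face_width (in), number_of_teeth, diametral_pitch (1/in)].
    # The response models below are simplified placeholders, not the NASA analysis codes.
    def weight(x):
        face_width, n_teeth, pitch = x
        diameter = n_teeth / pitch
        return 0.25 * np.pi * diameter ** 2 * face_width        # volume of a solid disk

    def bending_stress(x):
        face_width, n_teeth, pitch = x
        transmitted_load = 500.0                                 # assumed tangential load, lbf
        return transmitted_load * pitch / (face_width * 0.35)    # Lewis-type estimate, Y ~ 0.35

    # Minimize weight subject to an upper bound on bending stress
    # (treating the number of teeth as continuous for this sketch).
    constraints = [{"type": "ineq", "fun": lambda x: 30000.0 - bending_stress(x)}]
    bounds = [(0.5, 4.0), (18, 60), (2.0, 12.0)]

    result = minimize(weight, x0=[1.0, 30.0, 8.0], bounds=bounds,
                      constraints=constraints, method="SLSQP")
    print("optimal design:", np.round(result.x, 2), "weight:", round(weight(result.x), 2))
    ```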

  9. LENGTH-HETEROGENEITY POLYMERASE CHAIN REACTION (LH-PCR) AS AN INDICATOR OF STREAM SANITARY AND ECOLOGICAL CONDITION: OPTIMAL SAMPLE SIZE AND HOLDING CONDITIONS

    EPA Science Inventory

    The use of coliform plate count data to assess stream sanitary and ecological condition is limited by the need to store samples at 4°C and analyze them within a 24-hour period. We are testing LH-PCR as an alternative tool to assess the bacterial load of streams, offering a cost ...

  10. What do we need to measure, how much, and where? A quantitative assessment of terrestrial data needs across North American biomes through data-model fusion and sampling optimization

    NASA Astrophysics Data System (ADS)

    Dietze, M. C.; Davidson, C. D.; Desai, A. R.; Feng, X.; Kelly, R.; Kooper, R.; LeBauer, D. S.; Mantooth, J.; McHenry, K.; Serbin, S. P.; Wang, D.

    2012-12-01

    Ecosystem models are designed to synthesize our current understanding of how ecosystems function and to predict responses to novel conditions, such as climate change. Reducing uncertainties in such models can thus improve both basic scientific understanding and our predictive capacity, but rarely have the models themselves been employed in the design of field campaigns. In the first part of this paper we provide a synthesis of uncertainty analyses conducted using the Predictive Ecosystem Analyzer (PEcAn) ecoinformatics workflow on the Ecosystem Demography model v2 (ED2). This work spans a number of projects synthesizing trait databases and using Bayesian data assimilation techniques to incorporate field data across temperate forests, grasslands, agriculture, short rotation forestry, boreal forests, and tundra. We report on a number of data needs that span a wide array of diverse biomes, such as the need for better constraint on growth respiration. We also identify other data needs that are biome specific, such as reproductive allocation in tundra, leaf dark respiration in forestry and early-successional trees, and root allocation and turnover in mid- and late-successional trees. Future data collection needs to balance the unequal distribution of past measurements across biomes (temperate biased) and processes (aboveground biased) with the sensitivities of different processes. In the second part we present the development of a power analysis and sampling optimization module for the PEcAn system. This module uses the results of variance decomposition analyses to estimate the further reduction in model predictive uncertainty for different sample sizes of different variables. By assigning a cost to each measurement type, we apply basic economic theory to optimize the reduction in model uncertainty for any total expenditure, or to determine the cost required to reduce uncertainty to a given threshold. Using this system we find that sampling switches among multiple
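
    The sampling optimization module described in the second part can be sketched compactly: if each measurement type contributes variance v_i/n_i to the model prediction and costs c_i per sample, a Lagrange-multiplier argument gives the allocation below. The variance contributions, costs, and budget are illustrative values, not PEcAn outputs.

    ```python
    import numpy as np

    # Illustrative inputs: variance contribution v_i (shrinking as v_i / n_i with
    # sample size n_i) and per-sample cost c_i for each measurement type.
    v = np.array([4.0, 2.5, 1.0, 0.5])       # variance contributions (arbitrary units)
    c = np.array([10.0, 50.0, 5.0, 100.0])   # cost per sample
    budget = 2000.0

    # Minimizing sum_i v_i / n_i subject to sum_i c_i * n_i = budget gives
    # n_i proportional to sqrt(v_i / c_i).
    raw = np.sqrt(v / c)
    n = budget * raw / np.sum(c * raw)

    print("optimal samples per measurement type:", np.round(n, 1))
    print("remaining predictive variance:", round(float(np.sum(v / n)), 3))
    ```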

  11. Fenton and Fenton-like oxidation of pesticide acetamiprid in water samples: kinetic study of the degradation and optimization using response surface methodology.

    PubMed

    Mitsika, Elena E; Christophoridis, Christophoros; Fytianos, Konstantinos

    2013-11-01

    The aims of this study were (a) to evaluate the degradation of acetamiprid with the use of Fenton reaction, (b) to investigate the effect of different concentrations of H2O2 and Fe(2+), initial pH and various iron salts, on the degradation of acetamiprid and (c) to apply response surface methodology for the evaluation of degradation kinetics. The kinetic study revealed a two-stage process, described by pseudo-first- and second-order kinetics. Different H2O2:Fe(2+) molar ratios were examined for their effect on acetamiprid degradation kinetics. The ratio of 3 mg L(-1) Fe(2+): 40 mg L(-1) H2O2 was found to completely remove acetamiprid in less than 10 min. Degradation rate was faster at lower pH, with the optimal value at pH 2.9, while Mohr salt appeared to degrade acetamiprid faster. A central composite design was selected in order to observe the effects of Fe(2+) and H2O2 initial concentration on acetamiprid degradation kinetics. A quadratic model fitted the experimental data, with satisfactory regression and fit. The most significant effect on the degradation of acetamiprid was induced by ferrous iron concentration, followed by H2O2. Optimization, aiming to minimize the applied ferrous concentration and the process time, proposed a ratio of 7.76 mg L(-1) Fe(II): 19.78 mg L(-1) H2O2. DOC is reduced much more slowly and requires more than 6 h of processing for 50% degradation. The use of zero-valent iron demonstrated fast kinetic rates, with acetamiprid degradation occurring in 10 min, and effective DOC removal. PMID:23871596
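
    The central composite design and quadratic response-surface model can be reproduced in outline as follows; the coded design points and response values are invented for illustration and are not the data from this study.

    ```python
    import numpy as np

    # Coded factor levels for a two-factor central composite design (illustrative).
    fe   = np.array([-1, -1,  1,  1, -1.41, 1.41,  0,     0,    0, 0, 0])
    h2o2 = np.array([-1,  1, -1,  1,  0,    0,    -1.41,  1.41, 0, 0, 0])
    # Hypothetical responses, e.g. pseudo-first-order rate constants (min^-1).
    k_obs = np.array([0.21, 0.35, 0.44, 0.52, 0.18, 0.49, 0.25, 0.41, 0.55, 0.54, 0.56])

    # Fit the full quadratic model k = b0 + b1*Fe + b2*H2O2 + b3*Fe^2 + b4*H2O2^2 + b5*Fe*H2O2.
    X = np.column_stack([np.ones_like(fe), fe, h2o2, fe**2, h2o2**2, fe * h2o2])
    b, *_ = np.linalg.lstsq(X, k_obs, rcond=None)

    # Locate the stationary point of the fitted surface by solving grad = 0.
    A = np.array([[2 * b[3], b[5]],
                  [b[5], 2 * b[4]]])
    stationary = np.linalg.solve(A, -b[1:3])
    print("coefficients:", np.round(b, 3))
    print("stationary point (coded units):", np.round(stationary, 2))
    ```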

  12. Application and optimization of microwave-assisted extraction and dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for sensitive determination of polyamines in turkey breast meat samples.

    PubMed

    Bashiry, Moein; Mohammadi, Abdorreza; Hosseini, Hedayat; Kamankesh, Marzieh; Aeenehvand, Saeed; Mohammadi, Zaniar

    2016-01-01

    A novel method based on microwave-assisted extraction and dispersive liquid-liquid microextraction (MAE-DLLME) followed by high-performance liquid chromatography (HPLC) was developed for the determination of three polyamines from turkey breast meat samples. Response surface methodology (RSM) based on central composite design (CCD) was used to optimize the effective factors in the DLLME process. The optimum microextraction efficiency was obtained under optimized conditions. The calibration graphs of the proposed method were linear in the range of 20-200 ng g(-1), with coefficients of determination (R(2)) higher than 0.9914. The relative standard deviations were 6.72-7.30% (n = 7). The limits of detection were in the range of 0.8-1.4 ng g(-1). The recoveries of these compounds in spiked turkey breast meat samples were from 95% to 105%. The increased sensitivity of the MAE-DLLME-HPLC-UV method was demonstrated. Compared with previous methods, the proposed method is an accurate, rapid and reliable sample-pretreatment method. PMID:26213091

  13. Characterization of the olfactory impact around a wastewater treatment plant: optimization and validation of a hydrogen sulfide determination procedure based on passive diffusion sampling.

    PubMed

    Colomer, Fernando Llavador; Espinós-Morató, Héctor; Iglesias, Enrique Mantilla; Pérez, Tatiana Gómez; Campos-Candel, Andreu; Lozano, Caterina Coll

    2012-08-01

    A monitoring program based on an indirect method was conducted to assess the approximation of the olfactory impact in several wastewater treatment plants (in the present work, only one is shown). The method uses passive H2S sampling with Palmes-type diffusion tubes impregnated with silver nitrate and fluorometric analysis employing fluorescein mercuric acetate. The analytical procedure was validated in the exposure chamber. Exposure periods of at least 4 days are recommended. The quantification limit of the procedure is 0.61 ppb for a 5-day sampling, which allows the H2S immission (ground concentration) level to be measured within its low odor threshold, from 0.5 to 300 ppb. Experimental results suggest an exposure time greater than 4 days, while the recovery efficiency of the procedure, 93.0 +/- 1.8%, seems not to depend on the amount of H2S collected by the samplers within their application range. The repeatability, expressed as relative standard deviation, is lower than 7%, which is within the limits normally accepted for this type of sampler. Statistical comparison showed that this procedure and the reference method provide analogous accuracy. The proposed procedure was applied in two experimental campaigns, one intensive and the other extensive, and concentrations within the H2S low odor threshold were quantified at each sampling point. From these results, it can be concluded that the procedure shows good potential for monitoring the olfactory impact around facilities where H2S emissions are dominant. PMID:22916433

  14. Defining "Folklore" in the Classroom.

    ERIC Educational Resources Information Center

    Falke, Anne

    Folklore, a body of traditional beliefs of a people conveyed orally or by means of custom, is very much alive, involves all people, and is not the study of popular culture. In studying folklore, the principal tasks of the folklorist have been defined as determining definition, classification, source (the folk), origin (who composed folklore),…

  15. Defined by Word and Space

    ERIC Educational Resources Information Center

    Brisco, Nicole D.

    2010-01-01

    In the author's art class, she found that many of her students in an intro art class have some technical skill, but lack the ability to think conceptually. Her goal was to create an innovative project that combined design, painting, and sculpture into a compact unit that asked students how they define themselves. In the process of answering this…

  16. Sampling and Sample Preparation

    NASA Astrophysics Data System (ADS)

    Morawicki, Rubén O.

    Quality attributes in food products, raw materials, or ingredients are measurable characteristics that need monitoring to ensure that specifications are met. Some quality attributes can be measured online by using specially designed sensors and results obtained in real time (e.g., color of vegetable oil in an oil extraction plant). However, in most cases quality attributes are measured on small portions of material that are taken periodically from continuous processes or on a certain number of small portions taken from a lot. The small portions taken for analysis are referred to as samples, and the entire lot or the entire production for a certain period of time, in the case of continuous processes, is called a population. The process of taking samples from a population is called sampling. If the procedure is done correctly, the measurable characteristics obtained for the samples become a very accurate estimation of the population.

  17. Homotopy optimization methods for global optimization.

    SciTech Connect

    Dunlavy, Daniel M.; O'Leary, Dianne P.

    2005-12-01

    We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
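
    A minimal sketch of the HOM idea as described above: minimize the homotopy H(x, λ) = (1 − λ)g(x) + λf(x) for a sequence of λ values, warm-starting each local minimization from the previous minimizer. The target and template functions below are arbitrary examples, not those used by the authors.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def hom(f, g, x0, n_steps=20):
        """Homotopy Optimization Method: find a minimizer of
        H(x, lam) = (1 - lam) * g(x) + lam * f(x) for each lam from 0 to 1."""
        x = np.asarray(x0, dtype=float)
        for lam in np.linspace(0.0, 1.0, n_steps + 1):
            res = minimize(lambda x: (1 - lam) * g(x) + lam * f(x), x)
            x = res.x  # warm-start the next homotopy step
        return x

    # Target: a multimodal function; easy template problem: a convex quadratic.
    f = lambda x: np.sum(x**2) + 3.0 * np.sin(3.0 * x).sum()
    g = lambda x: np.sum((x - 2.0)**2)

    print(hom(f, g, x0=[2.0, 2.0]))
    ```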

  18. Membership of Defined Responses in Stimulus Classes

    PubMed Central

    Lionello-DeNolf, Karen M.; Braga-Kenyon, Paula

    2012-01-01

    Sidman (2000) has suggested that in addition to conditional and discriminative stimuli, class-consistent defined responses can also become part of an equivalence class. In the current study, this assertion was tested using a mixed-schedule procedure that allowed defined response patterns to be “presented” as samples in the absence of different occasioning stimuli. Four typically developing adults were first trained to make distinct response topographies to two visual color stimuli, and then were taught to match those color stimuli to two different form-sample stimuli in a matching task. Three separate tests were given in order to determine whether training had established two classes each comprised of a response, a color, and a form: a form-response test in which the forms were presented to test if the participants would make differential responses to them; and two response-matching tests to test if the participants would match visual stimulus comparisons to response-pattern samples. Three of the four participants showed class-consistent responding in the tests, although some participants needed additional training prior to passing the tests. In general, the data indicated that the different response patterns had entered into a class with the visual stimuli. These results add to a growing literature on the role of class-consistent responding in stimulus class formation, and provide support for the notion that differential responses themselves can become a part of an equivalence class. PMID:24778458

  19. Defining and managing sustainable yield.

    PubMed

    Maimone, Mark

    2004-01-01

    Ground water resource management programs are paying increasing attention to the integration of ground water and surface water in the planning process. Many plans, however, show a sophistication in approach and presentation that masks a fundamental weakness in the overall analysis. The plans usually discuss issues of demand and yield, yet never directly address a fundamental issue behind the plan--how to define sustainable yield of an aquifer system. This paper points out a number of considerations that must be addressed in defining sustainable yield in order to make the definition more useful in practical water resource planning studies. These include consideration for the spatial and temporal aspects of the problem, the development of a conceptual water balance, the influence of boundaries and changes in technology on the definition, the need to examine water demand as well as available supply, the need for stakeholder involvement, and the issue of uncertainty in our understanding of the components of the hydrologic system. PMID:15584295

  20. Optimization of an enclosed gas analyzer sampling system for measuring eddy covariance fluxes of H2O and CO2

    DOE PAGESBeta

    Metzger, Stefan; Burba, George; Burns, Sean P.; Blanken, Peter D.; Li, Jiahong; Luo, Hongyan; Zulueta, Rommel C.

    2016-03-31

    Several initiatives are currently emerging to observe the exchange of energy and matter between the earth's surface and atmosphere standardized over larger space and time domains. For example, the National Ecological Observatory Network (NEON) and the Integrated Carbon Observing System (ICOS) are set to provide the ability of unbiased ecological inference across ecoclimatic zones and decades by deploying highly scalable and robust instruments and data processing. In the construction of these observatories, enclosed infrared gas analyzers are widely employed for eddy covariance applications. While these sensors represent a substantial improvement compared to their open- and closed-path predecessors, remaining high-frequency attenuation varies with site properties and gas sampling systems, and requires correction. Here, we show that components of the gas sampling system can substantially contribute to such high-frequency attenuation, but their effects can be significantly reduced by careful system design. From laboratory tests we determine the frequency at which signal attenuation reaches 50 % for individual parts of the gas sampling system. For different models of rain caps, this frequency falls into the ranges 2.5–16.5 Hz for CO2 and 2.4–14.3 Hz for H2O; for different particulate filters, into 8.3–21.8 Hz for CO2 and 1.4–19.9 Hz for H2O. A short and thin stainless steel intake tube was found to not limit frequency response, with 50 % attenuation occurring at frequencies well above 10 Hz for both H2O and CO2. From field tests we found that heating the intake tube and particulate filter continuously with 4 W was effective, and reduced the occurrence of problematic relative humidity levels (RH > 60 %) by 50 % in the infrared gas analyzer cell. No further improvement of H2O frequency response was found for heating in excess of 4 W. These laboratory and field tests were reconciled using resistor–capacitor theory, and NEON's final gas sampling system was
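
    The 50 %-attenuation frequencies quoted above can be related to a first-order (resistor–capacitor type) filter response, which is the framework the authors use to reconcile laboratory and field tests. The time constant in the sketch below is an assumed value, not one measured in the study.

    ```python
    import numpy as np

    def first_order_gain(f, tau):
        """Amplitude response of a first-order (RC-type) low-pass filter."""
        return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

    def f_half(tau):
        """Frequency at which the signal is attenuated to 50 % of its amplitude."""
        # |H| = 0.5  ->  2*pi*f*tau = sqrt(3)
        return np.sqrt(3.0) / (2.0 * np.pi * tau)

    tau = 0.02  # assumed time constant of the gas sampling system, in seconds
    freqs = np.array([0.1, 1.0, 5.0, 10.0, 20.0])
    print("gain at test frequencies:", np.round(first_order_gain(freqs, tau), 3))
    print("50 % attenuation frequency: %.1f Hz" % f_half(tau))
    ```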

  1. Defined DNA/nanoparticle conjugates.

    PubMed

    Ackerson, Christopher J; Sykes, Michael T; Kornberg, Roger D

    2005-09-20

    Glutathione monolayer-protected gold clusters were reacted by place exchange with 19- or 20-residue thiolated oligonucleotides. The resulting DNA/nanoparticle conjugates could be separated on the basis of the number of bound oligonucleotides by gel electrophoresis and assembled with one another by DNA-DNA hybridization. This approach overcomes previous limitations of DNA/nanoparticle synthesis and yields conjugates that are precisely defined with respect to both gold and nucleic acid content. PMID:16155122

  2. How do people define moderation?

    PubMed

    vanDellen, Michelle R; Isherwood, Jennifer C; Delose, Julie E

    2016-06-01

    Eating in moderation is considered to be sound and practical advice for weight maintenance or prevention of weight gain. However, the concept of moderation is ambiguous, and the effect of moderation messages on consumption has yet to be empirically examined. The present manuscript examines how people define moderate consumption. We expected that people would define moderate consumption in ways that justified their current or desired consumption rather than view moderation as an objective standard. In Studies 1 and 2, moderate consumption was perceived to involve greater quantities of an unhealthy food (chocolate chip cookies, gummy candies) than perceptions of how much one should consume. In Study 3, participants generally perceived themselves to eat in moderation and defined moderate consumption as greater than their personal consumption. Furthermore, definitions of moderate consumption were related to personal consumption behaviors. Results suggest that the endorsement of moderation messages allows for a wide range of interpretations of moderate consumption. Thus, we conclude that moderation messages are unlikely to be effective messages for helping people maintain or lose weight. PMID:26964691

  3. Optimization of dispersive liquid-liquid microextraction based on the solidification of floating organic droplets using an orthogonal array design and its application for the determination of fungicide concentrations in environmental water samples.

    PubMed

    Yang, Xiaoling; Yang, Miyi; Hou, Bang; Li, Songqing; Zhang, Ying; Lu, Runhua; Zhang, Sanbing

    2014-08-01

    A dispersive liquid-liquid microextraction method based on the solidification of floating organic droplets was developed as a simple and sensitive method for the simultaneous determination of the concentrations of multiple fungicides (triazolone, chlorothalonil, cyprodinil, and trifloxystrobin) in water by high-performance liquid chromatography with variable-wavelength detection. After an approach varying one factor at a time was used, an orthogonal array design [L25 (5(5))] was employed to optimize the method and to determine the interactions between the parameters. The significance of the effects of the different factors was determined using analysis of variance. The results indicated that the extraction solvent volume significantly affects the efficiency of the extraction. Under optimal conditions, the relative standard deviation (n = 5) varied from 2.3 to 5.5% at 0.1 μg/mL for each analyte. Low limits of detection were obtained and ranged from 0.02 to 0.2 ng/mL. In addition, the proposed method was applied to the analysis of fungicides in real water samples. The results show that the dispersive liquid-liquid microextraction based on the solidification of floating organic droplets is a potential method for detecting fungicides in environmental water samples, with recoveries of the target analytes ranging from 70.1 to 102.5%. PMID:24824837

  4. Optimization of magnetic stirring assisted dispersive liquid-liquid microextraction of rhodamine B and rhodamine 6G by response surface methodology: Application in water samples, soft drink, and cosmetic products.

    PubMed

    Ranjbari, Elias; Hadjmohammadi, Mohammad Reza

    2015-07-01

    An exact, rapid and efficient method for the extraction of rhodamine B (RB) and rhodamine 6G (RG) as well as their determination in three different matrices was developed using magnetic stirring assisted dispersive liquid-liquid microextraction (MSA-DLLME) and HPLC-Vis. 1-Octanol and acetone were selected as the extraction and dispersing solvents, respectively. The potential variables were the volume of extraction and disperser solvents, pH of sample solution, salt effect, temperature, stirring rate and vortex time in the optimization process. A methodology based on fractional factorial design (2(7-2)) was carried out to choose the significant variables for the optimization. Then, the significant factors (extraction solvent volume, pH of sample solution, temperature, stirring rate) were optimized using a central composite design (CCD). A quadratic model between dependent and independent variables was built. Under the optimum conditions (extraction solvent volume = 1050 µL, pH = 2, temperature = 35°C and stirring rate = 1500 rpm), the calibration curves showed high levels of linearity (R(2) = 0.9999) for RB and RG in the ranges of 5-1000 ng mL(-1) and 7.5-1000 ng mL(-1), respectively. The obtained extraction recoveries for 100 ng mL(-1) of RB and RG standard solutions were 100% and 97%, and preconcentration factors were 48 and 46, respectively. While the limit of detection was 1.15 ng mL(-1) for RB, it was 1.23 ng mL(-1) for RG. Finally, the MSA-DLLME method was successfully applied for preconcentration and trace determination of RB and RG in different matrices of environmental waters, soft drink and cosmetic products. PMID:25882429

  5. Defining the Ischemic Penumbra using Magnetic Resonance Oxygen Metabolic Index

    PubMed Central

    An, Hongyu; Ford, Andria L.; Chen, Yasheng; Zhu, Hongtu; Ponisio, Rosana; Kumar, Gyanendra; Shanechi, Amirali Modir; Khoury, Naim; Vo, Katie D.; Williams, Jennifer; Derdeyn, Colin P.; Diringer, Michael N.; Panagos, Peter; Powers, William J.; Lee, Jin-Moo; Lin, Weili

    2015-01-01

    Background and Purpose Penumbral biomarkers promise to individualize treatment windows in acute ischemic stroke. We used a novel MRI approach which measures oxygen metabolic index (OMI), a parameter closely related to PET-derived cerebral metabolic rate of oxygen utilization, to derive a pair of ischemic thresholds: (1) an irreversible-injury threshold which differentiates ischemic core from penumbra and (2) a reversible-injury threshold which differentiates penumbra from tissue not-at-risk for infarction. Methods Forty acute ischemic stroke patients underwent MRI at three time-points after stroke onset: < 4.5 hours (for OMI threshold derivation), 6 hours (to determine reperfusion status), and 1 month (for infarct probability determination). A dynamic susceptibility contrast method measured CBF, and an asymmetric spin echo sequence measured OEF, to derive OMI (OMI=CBF*OEF). Putative ischemic threshold pairs were iteratively tested using a computation-intensive method to derive infarct probabilities in three tissue groups defined by the thresholds (core, penumbra, and not-at-risk tissue). An optimal threshold pair was chosen based on its ability to predict: infarction in the core, reperfusion-dependent survival in the penumbra, and survival in not-at-risk tissue. The predictive abilities of the thresholds were then tested within the same cohort using a 10-fold cross-validation method. Results The optimal OMI ischemic thresholds were found to be 0.28 and 0.42 of normal values in the contralateral hemisphere. Using the 10-fold cross-validation method, median infarct probabilities were 90.6% for core, 89.7% for non-reperfused penumbra, 9.95% for reperfused penumbra, and 6.28% for not-at-risk tissue. Conclusions OMI thresholds, derived using voxel-based, reperfusion-dependent infarct probabilities, delineated the ischemic penumbra with high predictive ability. These thresholds will require confirmation in an independent patient sample. PMID:25721017
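
    Applying the derived thresholds amounts to a simple voxel-wise classification of relative OMI (expressed as a fraction of the contralateral-hemisphere value). The sketch below uses the two reported thresholds; the voxel values are synthetic, not patient data.

    ```python
    import numpy as np

    CORE_THRESHOLD = 0.28      # below: irreversible injury (ischemic core)
    PENUMBRA_THRESHOLD = 0.42  # below (but above core): at risk, salvageable if reperfused

    def classify_omi(relative_omi):
        """Label voxels by relative OMI (fraction of the contralateral-hemisphere value)."""
        labels = np.full(relative_omi.shape, "not-at-risk", dtype=object)
        labels[relative_omi < PENUMBRA_THRESHOLD] = "penumbra"
        labels[relative_omi < CORE_THRESHOLD] = "core"
        return labels

    # Synthetic relative OMI values for a handful of voxels (illustration only).
    voxels = np.array([0.15, 0.30, 0.40, 0.55, 0.90])
    print(dict(zip(voxels, classify_omi(voxels))))
    ```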

  6. Defining a genetic ideotype for crop improvement.

    PubMed

    Trethowan, Richard M

    2014-01-01

    While plant breeders traditionally base selection on phenotype, the development of genetic ideotypes can help focus the selection process. This chapter provides a road map for the establishment of a refined genetic ideotype. The first step is an accurate definition of the target environment including the underlying constraints, their probability of occurrence, and impact on phenotype. Once the environmental constraints are established, the wealth of information on plant physiological responses to stresses, known gene information, and knowledge of genotype × environment and gene × environment interaction help refine the target ideotype and form a basis for cross prediction. Once a genetic ideotype is defined, the challenge remains to build the ideotype in a plant breeding program. A number of strategies including marker-assisted recurrent selection and genomic selection can be used that also provide valuable information for the optimization of the genetic ideotype. However, the informatics required to underpin the realization of the genetic ideotype then becomes crucial. The reduced cost of genotyping and the need to combine pedigree, phenotypic, and genetic data in a structured way for analysis and interpretation often become the rate-limiting steps, thus reducing genetic gain. Systems for managing these data and an example of ideotype construction for a defined environment type are discussed. PMID:24816655

  7. Optimizing electrostatic field calculations with the adaptive Poisson-Boltzmann Solver to predict electric fields at protein-protein interfaces. I. Sampling and focusing.

    PubMed

    Ritchie, Andrew W; Webb, Lauren J

    2013-10-01

    Continuum electrostatics methods are commonly used to calculate electrostatic potentials in proteins and at protein-protein interfaces to aid many types of biophysical studies. Despite their ubiquity throughout the biophysical literature, these calculations are difficult to test against experimental data to determine their accuracy and validity. To address this, we have calculated the Boltzmann-weighted electrostatic field at the midpoint of a nitrile bond placed at a variety of locations on the surface of the protein RalGDS, both in its monomeric form as well as when docked to four different constructs of the protein Rap, and compared the computation results to vibrational absorption energy measurements of the nitrile oscillator. This was done by generating a statistical ensemble of protein structures using enhanced molecular dynamics sampling with the Amber03 force field, followed by solving the linear Poisson-Boltzmann equation for each structure using the Applied Poisson-Boltzmann Solver (APBS) software package. Using a two-stage focusing strategy, we examined numerous second stage box dimensions, grid point densities, box locations, and compared the numerical result to the result obtained from the sum of the numeric reaction field and the analytic Coulomb field. It was found that the reaction field method yielded higher correlation with experiment for the absolute calculation of fields, while the numeric solutions yielded higher correlation with experiment for the relative field calculations. Finer grid spacing typically improved the calculation, although this effect was less pronounced in the reaction field method. These sorts of calculations were also very sensitive to the box location, particularly for the numeric calculations of absolute fields using a 10(3) Å(3) box. PMID:24041016

  8. Defined DNA/nanoparticle conjugates

    NASA Astrophysics Data System (ADS)

    Ackerson, Christopher J.; Sykes, Michael T.; Kornberg, Roger D.

    2005-09-01

    Glutathione monolayer-protected gold clusters were reacted by place exchange with 19- or 20-residue thiolated oligonucleotides. The resulting DNA/nanoparticle conjugates could be separated on the basis of the number of bound oligonucleotides by gel electrophoresis and assembled with one another by DNA-DNA hybridization. This approach overcomes previous limitations of DNA/nanoparticle synthesis and yields conjugates that are precisely defined with respect to both gold and nucleic acid content.

  9. Response surface methodology based on central composite design as a chemometric tool for optimization of dispersive-solidification liquid-liquid microextraction for speciation of inorganic arsenic in environmental water samples.

    PubMed

    Asadollahzadeh, Mehdi; Tavakoli, Hamed; Torab-Mostaedi, Meisam; Hosseini, Ghaffar; Hemmati, Alireza

    2014-06-01

    Dispersive-solidification liquid-liquid microextraction (DSLLME) coupled with electrothermal atomic absorption spectrometry (ETAAS) was developed for preconcentration and determination of inorganic arsenic (III, V) in water samples. At pH=1, As(III) formed a complex with ammonium pyrrolidine dithiocarbamate (APDC) and was extracted into the fine droplets of 1-dodecanol (extraction solvent) which were dispersed with ethanol (disperser solvent) into the water sample solution. After extraction, the organic phase was separated by centrifugation, and was solidified by transferring it into an ice bath. The solidified solvent was transferred to a conical vial and melted quickly at room temperature. As(III) was determined in the melted organic phase while As(V) remained in the aqueous layer. Total inorganic As was determined after the reduction of the pentavalent forms of arsenic with sodium thiosulphate and potassium iodide. As(V) was calculated as the difference between the concentration of total inorganic As and As(III). The variables of interest in the DSLLME method, such as the volume of extraction solvent and disperser solvent, pH, concentration of APDC (chelating agent), extraction time and salt effect, were optimized with the aid of chemometric approaches. First, in screening experiments, a fractional factorial design (FFD) was used for selecting the variables which significantly affected the extraction procedure. Afterwards, the significant variables were optimized using response surface methodology (RSM) based on central composite design (CCD). Under the optimum conditions, the proposed method was successfully applied to the determination of inorganic arsenic in different environmental water samples and certified reference material (NIST RSM 1643e). PMID:24725860

  10. Defining Life: The Virus Viewpoint

    NASA Astrophysics Data System (ADS)

    Forterre, Patrick

    2010-04-01

    Are viruses alive? Until very recently, the answer to this question was often negative and viruses were not considered in discussions on the origin and definition of life. This situation is rapidly changing, following several discoveries that have modified our vision of viruses. It has been recognized that viruses have played (and still play) a major innovative role in the evolution of cellular organisms. New definitions of viruses have been proposed and their position in the universal tree of life is actively discussed. Viruses are no longer confused with their virions, but can be viewed as complex living entities that transform the infected cell into a novel organism—the virus—producing virions. I suggest here defining life (an historical process) as the mode of existence of ribosome encoding organisms (cells) and capsid encoding organisms (viruses) and their ancestors. I propose to define an organism as an ensemble of integrated organs (molecular or cellular) producing individuals evolving through natural selection. The origin of life on our planet would correspond to the establishment of the first organism corresponding to this definition.

  11. Critically sampled wavelets with composite dilations.

    PubMed

    Easley, Glenn R; Labate, Demetrio

    2012-02-01

    Wavelets with composite dilations provide a general framework for the construction of waveforms defined not only at various scales and locations, as traditional wavelets, but also at various orientations and with different scaling factors in each coordinate. As a result, they are useful to analyze the geometric information that often dominates multidimensional data much more efficiently than traditional wavelets. The shearlet system, for example, is a particularly well-known realization of this framework, which provides optimally sparse representations of images with edges. In this paper, we further investigate the constructions derived from this approach to develop critically sampled wavelets with composite dilations for the purpose of image coding. Not only do we show that many nonredundant directional constructions recently introduced in the literature can be derived within this setting, but we also introduce new critically sampled discrete transforms that achieve much better nonlinear approximation rates than traditional discrete wavelet transforms and outperform the other critically sampled multiscale transforms recently proposed. PMID:21843993

  12. Defining Characteristics of Creative Women

    ERIC Educational Resources Information Center

    Bender, Sarah White; Nibbelink, BradyLeigh; Towner-Thyrum, Elizabeth; Vredenburg, Debra

    2013-01-01

    This study was an effort to identify correlates of creativity in women. A sample of 447 college students were given the picture completion subtest of the Torrance Test of Creative Thinking, the "How Do You Think Test," the Revised NEO Personality Inventory, the Multidimensional Self-Esteem Inventory, the Family Environment Scale, and the…

  13. The association between combination antiretroviral adherence and AIDS-defining conditions at HIV diagnosis.

    PubMed

    Abara, Winston E; Xu, Junjun; Adekeye, Oluwatoyosi A; Rust, George

    2016-08-01

    Combination antiretroviral therapy (cART) has changed the clinical course of HIV. AIDS-defining conditions (ADC) are suggestive of severe or advanced disease and are a leading cause of HIV-related hospitalizations and death among people living with HIV/AIDS (PLWHA) in the USA. Optimal adherence to cART can mitigate the impact of ADC and disease severity on the health and survivability of PLWHA. The objective of this study was to evaluate the association between ADC at HIV diagnosis and optimal adherence among PLWHA. Using 2008 and 2009 Medicaid data from 29 states, we identified individuals between 18 and 49 years of age, recently infected with HIV and with a cART prescription. Frequencies and descriptive statistics were used to characterize the sample. Univariate and multivariable Poisson regression analyses were employed to evaluate the association between optimal cART adherence (defined as ≥95% of study days covered by cART) and ADC at HIV diagnosis (≥1 ADC). Approximately 17% of respondents with ADC at HIV diagnosis reported optimal cART adherence. After adjusting for covariates, respondents with an ADC at HIV diagnosis were less likely to report optimal cART adherence (adjusted prevalence ratio (APR) = 0.64, 95% confidence intervals (CI), 0.54-0.75). Among the covariates, males (APR = 1.10, 95% CI, 1.02-1.19) were significantly more likely than females to report optimal adherence, while younger respondents (18-29 years: APR = 0.67, 95% CI, 0.57-0.77; 30-39 years: APR = 0.86, 95% CI, 0.79-0.95) were significantly less likely than older respondents to report optimal adherence. PLWHA with ADC at HIV diagnosis are at risk of suboptimal cART adherence. Multiple adherence strategies that include healthcare providers, case managers, and peer navigators should be utilized to improve cART adherence and optimize health outcomes among PLWHA with ADC at HIV diagnosis. Targeted adherence programs and services are required to address
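
    The adjusted prevalence ratios reported above come from multivariable Poisson regression; for orientation, the crude (unadjusted) prevalence ratio and its confidence interval can be computed directly from a 2 × 2 table, as sketched below with hypothetical counts rather than the Medicaid data.

    ```python
    import math

    # Hypothetical counts (not the study data): adherent / total by ADC status at diagnosis.
    adc_adherent, adc_total = 170, 1000
    no_adc_adherent, no_adc_total = 265, 1000

    p1 = adc_adherent / adc_total          # proportion optimally adherent, ADC group
    p0 = no_adc_adherent / no_adc_total    # proportion optimally adherent, no-ADC group
    pr = p1 / p0                           # crude prevalence ratio

    # 95% CI on the log scale for a ratio of two proportions.
    se_log_pr = math.sqrt((1 - p1) / adc_adherent + (1 - p0) / no_adc_adherent)
    lo, hi = (math.exp(math.log(pr) + z * se_log_pr) for z in (-1.96, 1.96))
    print(f"prevalence ratio = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```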

  14. Defining Life: Synthesis and Conclusions

    NASA Astrophysics Data System (ADS)

    Gayon, Jean

    2010-04-01

    The first part of the paper offers philosophical landmarks on the general issue of defining life. §1 defends that the recognition of “life” has always been and remains primarily an intuitive process, for the scientist as for the layperson. However we should not expect, then, to be able to draw a definition from this original experience, because our cognitive apparatus has not been primarily designed for this. §2 is about definitions in general. Two kinds of definition should be carefully distinguished: lexical definitions (based upon current uses of a word), and stipulative or legislative definitions, which deliberately assign a meaning to a word, for the purpose of clarifying scientific or philosophical arguments. The present volume provides examples of these two kinds of definitions. §3 examines three traditional philosophical definitions of life, all of which have been elaborated prior to the emergence of biology as a specific scientific discipline: life as animation (Aristotle), life as mechanism, and life as organization (Kant). All three concepts constitute a common heritage that structures in depth a good deal of our cultural intuitions and vocabulary any time we try to think about “life”. The present volume offers examples of these three concepts in contemporary scientific discourse. The second part of the paper proposes a synthesis of the major debates developed in this volume. Three major questions have been discussed. A first issue (§4) is whether we should define life or not, and why. Most authors are skeptical about the possibility of defining life in a strong way, although all admit that criteria are useful in contexts such as exobiology, artificial life and the origins of life. §5 examines the possible kinds of definitions of life presented in the volume. Those authors who have explicitly defended that a definition of life is needed, can be classified into two categories. The first category (or standard view) refers to two conditions

  15. Defining life: synthesis and conclusions.

    PubMed

    Gayon, Jean

    2010-04-01

    The first part of the paper offers philosophical landmarks on the general issue of defining life. Section 1 defends that the recognition of "life" has always been and remains primarily an intuitive process, for the scientist as for the layperson. However we should not expect, then, to be able to draw a definition from this original experience, because our cognitive apparatus has not been primarily designed for this. Section 2 is about definitions in general. Two kinds of definition should be carefully distinguished: lexical definitions (based upon current uses of a word), and stipulative or legislative definitions, which deliberately assign a meaning to a word, for the purpose of clarifying scientific or philosophical arguments. The present volume provides examples of these two kinds of definitions. Section 3 examines three traditional philosophical definitions of life, all of which have been elaborated prior to the emergence of biology as a specific scientific discipline: life as animation (Aristotle), life as mechanism, and life as organization (Kant). All three concepts constitute a common heritage that structures in depth a good deal of our cultural intuitions and vocabulary any time we try to think about "life". The present volume offers examples of these three concepts in contemporary scientific discourse. The second part of the paper proposes a synthesis of the major debates developed in this volume. Three major questions have been discussed. A first issue (Section 4) is whether we should define life or not, and why. Most authors are skeptical about the possibility of defining life in a strong way, although all admit that criteria are useful in contexts such as exobiology, artificial life and the origins of life. Section 5 examines the possible kinds of definitions of life presented in the volume. Those authors who have explicitly defended that a definition of life is needed, can be classified into two categories. The first category (or standard view) refers

  16. Ionic-liquid-based hollow-fiber liquid-phase microextraction method combined with hybrid artificial neural network-genetic algorithm for speciation and optimized determination of ferro and ferric in environmental water samples.

    PubMed

    Saeidi, Iman; Barfi, Behruz; Asghari, Alireza; Gharahbagh, Abdorreza Alavi; Barfi, Azadeh; Peyrovi, Moazameh; Afsharzadeh, Maryam; Hojatinasab, Mostafa

    2015-10-01

    A novel and environmentally friendly ionic-liquid-based hollow-fiber liquid-phase microextraction method combined with a hybrid artificial neural network (ANN)-genetic algorithm (GA) strategy was developed for the speciation of ferrous and ferric ions as model analytes. Different parameters such as type and volume of extraction solvent, amounts of chelating agent, volume and pH of sample, ionic strength, stirring rate, and extraction time were investigated. The most influential parameters were first examined using a one-variable-at-a-time design, and the results obtained were used to construct an independent model for each parameter. The models were then applied to obtain the best and smallest set of candidate points as inputs for the ANN process. The maximum extraction efficiencies were achieved after 9 min using 22.0 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate ([C6MIM][PF6]) as the acceptor phase and 10 mL of sample at pH = 7.0 containing 64.0 μg L(-1) of benzohydroxamic acid (BHA) as the complexing agent, after the GA process. Once optimized, the analytical performance of the method was studied in terms of linearity (1.3-316 μg L(-1), R(2) = 0.999), accuracy (recovery = 90.1-92.3%), and precision (relative standard deviation (RSD) <3.1). Finally, the method was successfully applied to speciate the iron species in environmental and wastewater samples. PMID:26383736

  17. Defining biocultural approaches to conservation.

    PubMed

    Gavin, Michael C; McCarter, Joe; Mead, Aroha; Berkes, Fikret; Stepp, John Richard; Peterson, Debora; Tang, Ruifei

    2015-03-01

    We contend that biocultural approaches to conservation can achieve effective and just conservation outcomes while addressing erosion of both cultural and biological diversity. Here, we propose a set of guidelines for the adoption of biocultural approaches to conservation. First, we draw lessons from work on biocultural diversity and heritage, social-ecological systems theory, integrated conservation and development, co-management, and community-based conservation to define biocultural approaches to conservation. Second, we describe eight principles that characterize such approaches. Third, we discuss reasons for adopting biocultural approaches and challenges. If used well, biocultural approaches to conservation can be a powerful tool for reducing the global loss of both biological and cultural diversity. PMID:25622889

  18. Miniature EVA Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey

    2012-01-01

    As NASA embarks upon developing the Next-Generation Extra Vehicular Activity (EVA) Radio for deep space exploration, the demands on EVA battery life will substantially increase. The number of modes and frequency bands required will continue to grow in order to enable efficient and complex multi-mode operations including communications, navigation, and tracking applications. Whether supporting astronaut excursions, soldier communications, or first responders reacting to emergency hazards, NASA has developed an innovative, affordable, miniaturized, power-efficient software defined radio that offers unprecedented flexibility. This lightweight, programmable, S-band, multi-service, frequency-agile EVA software defined radio (SDR) supports data, telemetry, voice, and both standard and high-definition video. Features include a modular design and an easily scalable architecture, and the EVA SDR allows for both stationary and mobile battery-powered handheld operations. Currently, the radio is equipped with an S-band RF section. However, its scalable architecture can accommodate multiple RF sections simultaneously to cover multiple frequency bands. The EVA SDR also supports multiple network protocols. It currently implements a Hybrid Mesh Network based on the 802.11s open standard protocol. The radio targets RF channel data rates up to 20 Mbps and can be equipped with a real-time operating system (RTOS) that can be switched off for power-aware applications. The EVA SDR's modular design permits implementation of the same hardware at all network nodes. This approach assures the portability of the same software into any radio in the system. It also brings several benefits to the entire system including reducing system maintenance, system complexity, and development cost.

  19. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
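
    A toy version of the allocation problem described above: longer counts detect a larger fraction of the birds present, but fewer points fit into a fixed field-time budget. The detection model, rates, and times below are illustrative assumptions, not estimates from the paper.

    ```python
    import numpy as np

    def optimal_count_duration(total_time, travel_time, detect_rate, durations):
        """Toy model: choose the per-point count duration that minimizes the variance
        of a mean-abundance estimator under a fixed total time budget.

        Assumptions (illustrative): birds present are detected independently with
        probability p(t) = 1 - exp(-detect_rate * t), and estimator variance
        scales as 1 / (n_points * p(t))."""
        best = None
        for t in durations:
            n_points = total_time / (t + travel_time)
            p = 1.0 - np.exp(-detect_rate * t)
            variance = 1.0 / (n_points * p)
            if best is None or variance < best[1]:
                best = (t, variance)
        return best

    duration, var = optimal_count_duration(total_time=2400.0,   # minutes available
                                           travel_time=15.0,    # minutes between points
                                           detect_rate=0.2,     # per-minute detection rate
                                           durations=np.arange(1, 31))
    print(f"optimal time at a point: {duration} min (relative variance {var:.4f})")
    ```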

  20. Integrating theory and method in the study of positive youth development: the sample case of gender-specificity and longitudinal stability of the dimensions of intention self-regulation (selection, optimization, and compensation).

    PubMed

    von Eye, Alexander; Martel, Michelle M; Lerner, Richard M; Lerner, Jacqueline V; Bowers, Edmond P

    2011-01-01

    The study of positive youth development (PYD) rests on the integration of sound developmental theory with rigorous developmental methods. To illustrate this link, we focused on the Selection (S), Optimization (O), and Compensation (C; SOC) model of intentional self-regulation, a key individual-level component of the individual-context relations involved in the PYD process, and assessed the dimensional structure of the SOC questionnaire, which includes indices of Elective Selection, Loss-Based Selection, Optimization, and Compensation. Using cross-sectional and longitudinal data from Grades 10 and 11 of the 4-H Study of PYD, we estimated three models through bifactor data analysis, a procedure that allows indicators to load both on their specific latent variables and on a superordinate factor that comprises the construct under study. The first model estimated was a standard bifactor model, computed separately for the 10th and 11th graders. In both samples, the same model described the hypothesized structure well. The second model, proposed for the first time in this chapter, compared multiple groups in their bifactor structure. Results indicated only minimal gender differences in SOC structure in Grade 10. The third model, also proposed for the first time in this chapter, involved an autoregression-type model for longitudinal data, and used data from the 609 participants present in both grades. Results suggested that the SOC bifactor structure was temporally stable. PMID:23259198
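
    As a purely illustrative aside, the Python snippet below simulates data with the bifactor structure described above: every indicator loads on a general SOC factor and on one of four specific factors. The loadings, sample size, and item counts are invented; fitting such a model in practice would be done with structural equation modeling software.

      # Toy simulation of a bifactor data-generating model: each item loads on a
      # general SOC factor and on one of four specific factors.  All loadings and
      # sizes are arbitrary illustrative values.
      import numpy as np

      rng = np.random.default_rng(8)
      n_people, items_per_scale = 600, 4
      scales = ["ElectiveSel", "LossBasedSel", "Optimization", "Compensation"]

      general = rng.standard_normal(n_people)
      specifics = {s: rng.standard_normal(n_people) for s in scales}

      data = {}
      for s in scales:
          for j in range(items_per_scale):
              noise = rng.standard_normal(n_people)
              data[f"{s}_{j + 1}"] = 0.6 * general + 0.4 * specifics[s] + 0.5 * noise

      items = np.column_stack(list(data.values()))
      print("simulated item matrix:", items.shape)        # (600 people, 16 indicators)
      mean_r = np.corrcoef(items.T)[np.triu_indices(16, 1)].mean()
      print("mean inter-item correlation:", round(float(mean_r), 2))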

  1. Defining Electron Backscatter Diffraction Resolution

    SciTech Connect

    El-Dasher, B S; Rollett, A D

    2005-02-07

    Automated electron backscatter diffraction (EBSD) mapping systems have existed for more than 10 years [1,2], and due to their versatility in characterizing multiple aspects of microstructure, they have become an important tool in microscale crystallographic studies. Their increasingly widespread use however raises questions about their accuracy in both determining crystallographic orientations, as well as ensuring that the orientation information is spatially correct. The issue of orientation accuracy (as defined by angular resolution) has been addressed previously [3-5]. While the resolution of EBSD systems is typically quoted to be on the order of 1°, it has been shown that by increasing the pattern quality via acquisition parameter adjustment, the angular resolution can be improved to sub-degree levels. Ultimately, the resolution is dependent on how it is identified. In some cases it can be identified as the orientation relative to a known absolute, in others as the misorientation between nearest neighbor points in a scan. Naturally, the resulting values can be significantly different. Therefore, a consistent and universal definition of resolution that can be applied to characterize any EBSD system is necessary, and is the focus of the current study. In this work, a Phillips (FEI) XL-40 FEGSEM coupled to a TexSEM Laboratories OIM system was used. The pattern capturing hardware consisted of both a 512 by 512 pixel SIT CCD camera and a 1300 by 1030 pixel Peltier cooled CCD camera. Automated scans of various sizes, each consisting of 2500 points, were performed on a commercial-grade single crystal silicon wafer used for angular resolution measurements. To adequately quantify angular resolution for all possible EBSD applications we define two angular values. The first is ω_center, the mean of the misorientation angle distribution between all scan points and the scan point coincident to the calibration source (typically the scan center). The ω
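
    The ω_center statistic can be illustrated with a short Python sketch that averages the misorientation angle between each scan point and a reference orientation, using unit quaternions. Crystal-symmetry operators, which a real EBSD analysis must apply, are omitted, and the simulated orientations are placeholders standing in for a single-crystal measurement.

      # Illustrative sketch of the omega_center statistic: the mean misorientation
      # angle between each scan point and a reference point (e.g. the scan centre).
      # Orientations are unit quaternions; crystal-symmetry reduction is omitted.
      import numpy as np

      def misorientation_angle(q1, q2):
          """Rotation angle (degrees) between two orientations given as unit quaternions."""
          dot = abs(np.dot(q1, q2))
          return 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))

      rng = np.random.default_rng(0)
      # Fake scan: 2500 nearly identical orientations around the identity quaternion.
      noise = 1e-3 * rng.standard_normal((2500, 4))
      quats = np.array([1.0, 0.0, 0.0, 0.0]) + noise
      quats /= np.linalg.norm(quats, axis=1, keepdims=True)

      reference = quats[len(quats) // 2]                  # scan-centre point
      omega_center = np.mean([misorientation_angle(q, reference) for q in quats])
      print(f"omega_center ~= {omega_center:.3f} degrees")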

  2. Simultaneous and high-throughput analysis of iodo-trihalomethanes, haloacetonitriles, and halonitromethanes in drinking water using solid-phase microextraction/gas chromatography-mass spectrometry: an optimization of sample preparation.

    PubMed

    Luo, Qian; Chen, Xichao; Wei, Zi; Xu, Xiong; Wang, Donghong; Wang, Zijian

    2014-10-24

    When iodide and natural organic matter are present in raw water, the formation of iodo-trihalomethanes (Iodo-THMs), haloacetonitriles (HANs), and halonitromethanes (HNMs) poses a potential health risk because these compounds have been reported to be more toxic than their brominated or chlorinated analogs. In this work, simultaneous analysis of Iodo-THMs, HANs, and HNMs in drinking water samples in a single cleanup and chromatographic analysis was proposed. The DVB/CAR/PDMS fiber was found to be the most suitable for all target compounds, although 75 μm CAR/PDMS was better for chlorinated HANs and 65 μm PDMS/DVB for brominated HNMs. After optimization of the SPME parameters (DVB/CAR/PDMS fiber, extraction time of 30 min at 40 °C, addition of 40% w/v of salt, (NH4)2SO4 as a quenching agent, and desorption time of 3 min at 170 °C), detection limits ranged from 1 to 50 ng/L for different analogs, with a linear range of at least two orders of magnitude. Good recoveries (78.6-104.7%) were obtained for spiked samples of a wide range of treated drinking waters, demonstrating that the method is applicable for analysis of real drinking water samples. Matrix effects were negligible for treated water samples with a total organic carbon concentration of less than 2.9 mg/L. A survey of two drinking water treatment plants showed that the highest proportion of Iodo-THMs, HANs, and HNMs occurred in treated water, and concentrations of 13 detected compounds ranged between the ng/L and the μg/L levels. PMID:25257930

  3. Endothelial progenitor cells: identity defined?

    PubMed Central

    Timmermans, Frank; Plum, Jean; Yöder, Mervin C; Ingram, David A; Vandekerckhove, Bart; Case, Jamie

    2009-01-01

    In the past decade, researchers have gained important insights on the role of bone marrow (BM)-derived cells in adult neovascularization. A subset of BM-derived cells, called endothelial progenitor cells (EPCs), has been of particular interest, as these cells were suggested to home to sites of neovascularization and neoendothelialization and differentiate into endothelial cells (ECs) in situ, a process referred to as postnatal vasculogenesis. Therefore, EPCs were proposed as a potential regenerative tool for treating human vascular disease and a possible target to restrict vessel growth in tumour pathology. However, conflicting results have been reported in the field, and the identification, characterization, and exact role of EPCs in vascular biology is still a subject of much discussion. The focus of this review is on the controversial issues in the field of EPCs which are related to the lack of a unique EPC marker, identification challenges related to the paucity of EPCs in the circulation, and the important phenotypical and functional overlap between EPCs, haematopoietic cells and mature ECs. We also discuss our recent findings on the origin of endothelial outgrowth cells (EOCs), showing that this in vitro defined EC population does not originate from circulating CD133+ cells or CD45+ haematopoietic cells. PMID:19067770

  4. Optimal sampling in network performance evaluation

    SciTech Connect

    Fedorov, V.; Flanagan, D.; Batsell, S.

    1998-11-01

    Unlike many other experiments, in meteorology and seismology for instance, monitoring measurements on communication networks are cheap and fast. Even the simplest measurement tools, which are usually some interrogating programs, can provide a huge amount of data at almost no expense. The problem is not decreasing the cost of measurements, but rather reducing the amount of stored data and the measurement and analysis time. The authors propose an approach based on the covariances between the measurements for various sites. The corresponding covariance matrix can be constructed either theoretically under some assumptions about the observed random processes, or can be estimated from some preliminary experiments. The authors compare the proposed algorithm with heuristic procedures that are used in other monitoring problems.
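
    A covariance-driven selection of measurement sites might look like the following Python sketch, which greedily picks the site with the largest remaining conditional variance and then conditions the covariance matrix on it. The covariance matrix is simulated here; as noted above, in practice it would be built theoretically or estimated from preliminary experiments. This is an illustration of the general idea, not the authors' algorithm.

      # Minimal sketch of covariance-based selection of monitoring sites: greedily
      # pick the site with the largest remaining conditional variance, then condition
      # the covariance matrix on it (Schur complement).  The covariance is synthetic.
      import numpy as np

      def select_sites(cov, k):
          """Greedy subset selection: repeatedly pick the site with maximal residual variance."""
          cov = cov.copy().astype(float)
          chosen = []
          for _ in range(k):
              i = int(np.argmax(np.diag(cov)))
              chosen.append(i)
              ci = cov[:, i:i + 1]
              cov = cov - ci @ ci.T / cov[i, i]    # condition remaining sites on site i
              cov[i, :] = cov[:, i] = 0.0          # never pick the same site twice
          return chosen

      rng = np.random.default_rng(1)
      A = rng.standard_normal((20, 5))
      covariance = A @ A.T + 0.1 * np.eye(20)      # synthetic 20-site covariance matrix
      print("monitor these sites:", select_sites(covariance, k=4))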

  5. Defining critical thresholds for ensemble flood forecasting and warning

    NASA Astrophysics Data System (ADS)

    Weeink, Werner H. A.; Ramos, Maria-Helena; Booij, Martijn J.; Andréassian, Vazken; Krol, Maarten S.

    2010-05-01

    The use of weather ensemble predictions in ensemble flood forecasting is an acknowledged procedure to include the uncertainty of meteorological forecasts in a probabilistic streamflow prediction system. Operational flood forecasters can thus get an overview of the probability of exceeding a critical discharge or water level, and decide on whether a flood warning should be issued or not. This process offers several challenges to forecasters: 1) how to define critical thresholds along all the rivers under survey? 2) How to link locally defined thresholds to simulated discharges, which result from models with specific spatial and temporal resolutions? 3) How to define the number of ensemble forecasts predicting the exceedance of critical thresholds necessary to launch a warning? This study focuses on this third challenge. We investigate the optimal number of ensemble members exceeding a critical discharge in order to issue a flood warning. The optimal probabilistic threshold is the one that minimizes the number of false alarms and misses, while it optimizes the number of flood events correctly forecasted. Furthermore, in our study, an optimal probabilistic threshold also maximizes flood preparedness: the gain in lead-time compared to a deterministic forecast. Data used to evaluate critical thresholds for ensemble flood forecasting come from a selection of 208 catchments in France, which covers a wide range of the hydroclimatic conditions (including catchment size) encountered in the country. The GRP hydrological forecasting model, a lumped soil-moisture-accounting type rainfall-runoff model, is used. The model is driven by the 10-day ECMWF deterministic and ensemble (51 members) precipitation forecasts for a period of 18 months. A trade-off between the number of hits, misses, false alarms and the gain in lead time is sought to find the optimal number of ensemble members exceeding the critical discharge. These optimal probability thresholds are further explored in
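
    The search for an optimal probabilistic threshold can be sketched in Python as a scan over the number of ensemble members required to exceed the critical discharge, scoring each candidate against observed events with hits, misses, and false alarms (here via the critical success index). The ensemble forecasts, observations, and the scoring choice are placeholders, not elements of the study.

      # Hypothetical sketch: for each candidate number of ensemble members that must
      # exceed the critical discharge, count hits, misses and false alarms against
      # observed events and keep the threshold with the best critical success index.
      import numpy as np

      rng = np.random.default_rng(2)
      n_days, n_members, critical_discharge = 500, 51, 80.0
      forecasts = rng.gamma(shape=2.0, scale=20.0, size=(n_days, n_members))
      observed_flood = rng.gamma(shape=2.0, scale=20.0, size=n_days) > critical_discharge

      members_exceeding = (forecasts > critical_discharge).sum(axis=1)

      best = None
      for threshold in range(1, n_members + 1):
          warn = members_exceeding >= threshold
          hits = np.sum(warn & observed_flood)
          misses = np.sum(~warn & observed_flood)
          false_alarms = np.sum(warn & ~observed_flood)
          csi = hits / max(hits + misses + false_alarms, 1)   # critical success index
          if best is None or csi > best[1]:
              best = (threshold, csi)

      print(f"warn when >= {best[0]} of {n_members} members exceed the threshold "
            f"(CSI = {best[1]:.2f})")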

  6. Superposition Enhanced Nested Sampling

    NASA Astrophysics Data System (ADS)

    Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan

    2014-07-01

    The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
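
    For orientation, the following Python sketch shows the mechanics of a plain nested sampling loop on a toy two-dimensional Gaussian likelihood: a population of live points is maintained, and the worst point is repeatedly replaced by a prior draw at higher likelihood. The naive rejection step and the toy problem are illustrative simplifications and do not represent the superposition-enhanced method itself.

      # Bare-bones nested sampling on a toy 2-D Gaussian likelihood, showing the
      # mechanics that superposition-enhanced nested sampling builds on.  New points
      # are drawn by naive rejection from the prior, workable only for toy problems.
      import numpy as np

      rng = np.random.default_rng(3)

      def log_likelihood(x):
          return -0.5 * np.sum(x ** 2)                 # toy target

      def sample_prior():
          return rng.uniform(-5.0, 5.0, size=2)        # uniform prior box

      n_live, n_iter = 100, 600
      live = np.array([sample_prior() for _ in range(n_live)])
      live_logl = np.array([log_likelihood(x) for x in live])

      log_evidence_terms = []
      for i in range(n_iter):
          worst = int(np.argmin(live_logl))
          threshold = live_logl[worst]
          # prior volume shrinks geometrically: log X_i ~ -i / n_live
          log_weight = threshold + (-i / n_live) + np.log(1.0 - np.exp(-1.0 / n_live))
          log_evidence_terms.append(log_weight)
          # replace the worst live point with a prior draw at higher likelihood
          while True:
              candidate = sample_prior()
              cand_logl = log_likelihood(candidate)
              if cand_logl > threshold:
                  live[worst], live_logl[worst] = candidate, cand_logl
                  break

      # the small remaining live-point contribution is ignored in this sketch
      log_Z = np.logaddexp.reduce(log_evidence_terms)
      print(f"estimated log-evidence: {log_Z:.2f}")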

  7. Nramp defines a family of membrane proteins.

    PubMed Central

    Cellier, M; Privé, G; Belouchi, A; Kwan, T; Rodrigues, V; Chia, W; Gros, P

    1995-01-01

    Nramp (natural resistance-associated macrophage protein) is a newly identified family of integral membrane proteins whose biochemical function is unknown. We report on the identification of Nramp homologs from the fly Drosophila melanogaster, the plant Oryza sativa, and the yeast Saccharomyces cerevisiae. Optimal alignment of protein sequences required insertion of very few gaps and revealed remarkable sequence identity of 28% (yeast), 40% (plant), and 55% (fly) with the mammalian proteins (46%, 58%, and 73% similarity), as well as a common predicted transmembrane topology. This family is defined by a highly conserved hydrophobic core encoding 10 transmembrane segments. Other features of this hydrophobic core include several invariant charged residues, helical periodicity of sequence conservation suggesting conserved and nonconserved faces for several transmembrane helices, a consensus transport signature on the intracytoplasmic face of the membrane, and structural determinants previously described in ion channels. These characteristics suggest that the Nramp polypeptides form part of a group of transporters or channels that act on as yet unidentified substrates. PMID:7479731

  8. Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection

    PubMed Central

    Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi

    2011-01-01

    The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237
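
    The continuous-valued relaxation mentioned above can be illustrated with a small linear program in Python: distribute a fixed sample budget across sensors so that the worst pairwise activity discrimination is maximized, then round the result. The per-sensor discriminability numbers are invented, and the objective is a simplified stand-in for the paper's power-versus-error formulation.

      # Hypothetical continuous relaxation of the sample-allocation problem: split a
      # fixed sample budget across sensors so the worst pairwise discrimination is
      # as good as possible, then round.  Discriminability values are invented.
      import numpy as np
      from scipy.optimize import linprog

      # rows = hypothesis pairs (e.g. sit/walk, walk/run, sit/run), cols = sensors
      discriminability = np.array([[0.8, 0.1, 0.3],
                                   [0.2, 0.9, 0.4],
                                   [0.5, 0.3, 0.7]])
      total_samples = 30
      n_pairs, n_sensors = discriminability.shape

      # variables: [n_1 .. n_J, t];  maximize t  s.t.  D @ n >= t,  sum(n) = budget
      c = np.zeros(n_sensors + 1)
      c[-1] = -1.0                                          # minimize -t
      A_ub = np.hstack([-discriminability, np.ones((n_pairs, 1))])
      b_ub = np.zeros(n_pairs)
      A_eq = np.hstack([np.ones((1, n_sensors)), np.zeros((1, 1))])
      b_eq = np.array([total_samples])
      bounds = [(0, None)] * n_sensors + [(None, None)]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=bounds, method="highs")
      allocation = np.round(res.x[:n_sensors]).astype(int)
      print("samples per sensor:", allocation, "worst-pair margin:", -res.fun)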

  9. Multiobjective genetic approach for optimal control of photoinduced processes

    SciTech Connect

    Bonacina, Luigi; Extermann, Jerome; Rondi, Ariana; Wolf, Jean-Pierre; Boutou, Veronique

    2007-08-15

    We have applied a multiobjective genetic algorithm to the optimization of multiphoton-excited fluorescence. Our study shows the advantages that this approach can offer to experiments based on adaptive shaping of femtosecond pulses. The algorithm outperforms single-objective optimizations, being totally independent of the bias of user-defined parameters and giving simultaneous access to a large set of feasible solutions. The global inspection of their ensemble represents a powerful support to unravel the connections between pulse spectral field features and excitation dynamics of the sample.
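
    The core multiobjective idea, keeping the whole set of non-dominated solutions rather than collapsing objectives into one user-weighted score, can be sketched in a few lines of Python. The candidate solutions and their two objective values below are random placeholders.

      # Sketch of Pareto-front extraction: keep every candidate that is not
      # dominated by another candidate on both (placeholder) objectives.
      import numpy as np

      rng = np.random.default_rng(9)
      objectives = rng.random((200, 2))            # two competing objectives, both maximized

      def pareto_front(points):
          """Indices of points not dominated by any other point (maximization)."""
          front = []
          for i, p in enumerate(points):
              dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
              if not dominated:
                  front.append(i)
          return front

      front = pareto_front(objectives)
      print(f"{len(front)} non-dominated candidates out of {len(objectives)}")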

  10. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning a flight on optimized routes. The routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result and the time taken for the search is relatively long. This paper experiments with a new algorithm to optimize route search that combines the principles of simulated annealing and genetic algorithms. The experimental results of the route search are shown to be computationally fast and accurate compared with timings from a generic algorithm. The new algorithm is optimal for the random routing feature that is highly sought by many regional operators.
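
    As a baseline of the kind the paper compares against, a compact Dijkstra search over a toy route network is sketched below in Python. The airports and edge costs are made up and simply stand in for an airline-defined cost function.

      # Compact Dijkstra baseline: minimum-cost connection through a small, made-up
      # network of waypoints whose edge weights stand in for an airline cost function.
      import heapq

      def dijkstra(graph, start, goal):
          """graph: dict node -> list of (neighbour, cost). Returns (cost, path)."""
          queue = [(0.0, start, [start])]
          best = {}
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == goal:
                  return cost, path
              if node in best and best[node] <= cost:
                  continue
              best[node] = cost
              for neighbour, edge_cost in graph.get(node, []):
                  heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
          return float("inf"), []

      # toy route network: nodes are airports, weights are fuel-driven costs
      network = {
          "SYD": [("MEL", 1.0), ("BNE", 1.2)],
          "MEL": [("PER", 3.4), ("ADL", 0.9)],
          "BNE": [("DRW", 2.8)],
          "ADL": [("PER", 2.6)],
          "DRW": [("PER", 2.9)],
      }
      print(dijkstra(network, "SYD", "PER"))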

  11. Generic sequential sampling for metamodel approximations

    SciTech Connect

    Turner, C. J.; Campbell, M. I.

    2003-01-01

    Metamodels approximate complex multivariate data sets from simulations and experiments. These data sets often are not based on an explicitly defined function. The resulting metamodel represents a complex system's behavior for subsequent analysis or optimization. Often an exhaustive data search to obtain the data for the metamodel is impossible, so an intelligent sampling strategy is necessary. While multiple approaches have been advocated, the majority of these approaches were developed in support of a particular class of metamodel, known as a Kriging. A more generic, commonsense approach to this problem allows sequential sampling techniques to be applied to other types of metamodels. This research compares recent search techniques for Kriging metamodels with a generic, multi-criteria approach combined with a new type of B-spline metamodel. This B-spline metamodel is competitive with prior results obtained with a Kriging metamodel. Furthermore, the results of this research highlight several important features necessary for these techniques to be extended to more complex domains.
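
    A generic sequential-sampling loop in this spirit can be sketched in Python: fit a cheap surrogate to the points sampled so far and add the candidate where the surrogate is least certain. A Gaussian-process surrogate from scikit-learn stands in here for the Kriging and B-spline metamodels discussed above; the test function and sampling budget are arbitrary.

      # Generic sequential sampling: refit a surrogate after each new sample and add
      # the candidate point where the surrogate's predictive uncertainty is largest.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def expensive_simulation(x):
          return np.sin(3 * x) + 0.5 * x                           # stand-in system

      candidates = np.linspace(0.0, 4.0, 400).reshape(-1, 1)

      X = np.array([[0.0], [2.0], [4.0]])                          # initial design
      y = expensive_simulation(X).ravel()

      for _ in range(10):                                          # sequential refinement
          gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
          _, std = gp.predict(candidates, return_std=True)
          x_new = candidates[int(np.argmax(std))]                  # most uncertain point
          X = np.vstack([X, [x_new]])
          y = np.append(y, expensive_simulation(x_new))

      print("sampled inputs:", np.round(X.ravel(), 2))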

  12. Eutectic superalloys by edge-defined, film-fed growth

    NASA Technical Reports Server (NTRS)

    Hurley, G. F.

    1975-01-01

    The feasibility of producing directionally solidified eutectic alloy composites by edge-defined, film-fed growth (EFG) was investigated. The three eutectic alloys studied were gamma + delta, gamma/gamma prime + delta, and a Co-base TaC alloy containing Cr and Ni. Investigations into the compatibility and wettability of these metals with various carbides, borides, nitrides, and oxides disclosed that compounds with the largest (negative) heats of formation were most stable but poorest wetting. Nitrides and carbides had suitable stability and low contact angles, but capillary rise was observed only with carbides. Oxides would not give capillary rise but would probably fulfill the other wetting requirements of EFG. Tantalum carbide was selected for most of the experimental portion of the program based on its exhibiting spontaneous capillary rise and a satisfactorily slow rate of degradation in the liquid metals. Samples of all three alloys were grown by EFG with the major experimental effort restricted to gamma + delta and gamma/gamma prime + delta alloys. In the standard, uncooled EFG apparatus, the thermal gradient was inferred from the growth speed and was 150 to 200 C/cm. This value may be compared to typical gradients of less than 100 C/cm normally achieved in a standard Bridgman-type apparatus. When a stream of helium was directed against the side of the bar during growth, the gradient was found to improve to about 250 C/cm. In comparison, a theoretical gradient of 700 C/cm should be possible under ideal conditions, without the use of chills. Methods for optimizing the gradient in EFG are discussed, and should allow attainment of close to the theoretical value for a particular configuration.

  13. Surface-engineered substrates for improved human pluripotent stem cell culture under fully defined conditions.

    PubMed

    Saha, Krishanu; Mei, Ying; Reisterer, Colin M; Pyzocha, Neena Kenton; Yang, Jing; Muffat, Julien; Davies, Martyn C; Alexander, Morgan R; Langer, Robert; Anderson, Daniel G; Jaenisch, Rudolf

    2011-11-15

    The current gold standard for the culture of human pluripotent stem cells requires the use of a feeder layer of cells. Here, we develop a spatially defined culture system based on UV/ozone radiation modification of typical cell culture plastics to define a favorable surface environment for human pluripotent stem cell culture. Chemical and geometrical optimization of the surfaces enables control of early cell aggregation from fully dissociated cells, as predicted from a numerical model of cell migration, and results in significant increases in cell growth of undifferentiated cells. These chemically defined xeno-free substrates generate more than three times the number of cells than feeder-containing substrates per surface area. Further, reprogramming and typical gene-targeting protocols can be readily performed on these engineered surfaces. These substrates provide an attractive cell culture platform for the production of clinically relevant factor-free reprogrammed cells from patient tissue samples and facilitate the definition of standardized scale-up friendly methods for disease modeling and cell therapeutic applications. PMID:22065768

  14. DEFINED CONTRIBUTION PLANS, DEFINED BENEFIT PLANS, AND THE ACCUMULATION OF RETIREMENT WEALTH.

    PubMed

    Poterba, James; Rauh, Joshua; Venti, Steven; Wise, David

    2007-11-01

    The private pension structure in the United States, once dominated by defined benefit (DB) plans, is currently divided between defined contribution (DC) and DB plans. Wealth accumulation in DC plans depends on the participant's contribution behavior and on financial market returns, while accumulation in DB plans is sensitive to a participant's labor market experience and to plan parameters. This paper simulates the distribution of retirement wealth under representative DB and DC plans. It uses data from the Health and Retirement Study (HRS) to explore how asset returns, earnings histories, and retirement plan characteristics contribute to the variation in retirement wealth outcomes. We simulate DC plan accumulation by randomly assigning individuals a share of wages that they and their employer contribute to the plan. We consider several possible asset allocation strategies, with asset returns drawn from the historical return distribution. Our DB plan simulations draw earnings histories from the HRS, and randomly assign each individual a pension plan drawn from a sample of large private and public defined benefit plans. The simulations yield distributions of both DC and DB wealth at retirement. Average retirement wealth accruals under current DC plans exceed average accruals under private sector DB plans, although DC plans are also more likely to generate very low retirement wealth outcomes. The comparison of current DC plans with more generous public sector DB plans is less definitive, because public sector DB plans are more generous on average than their private sector counterparts. PMID:21057597

  15. Application of multivariate optimization procedures for preconcentration and determination of Au(III) and Pt(IV) in aqueous samples with graphene oxide by X-ray fluorescence spectrometry.

    PubMed

    Rofouei, Mohammad K; Amiri, Nayereh; Ghasemi, Jahan B

    2015-03-01

    A simple method was developed for the determination of Au(III) and Pt(IV) contents in aqueous samples after preconcentration. The method was based on the sorption of the analytes as 2-amino-5-mercapto-1,3,4-thiadiazol complexes onto graphene oxide and subsequent direct determination by wavelength dispersive X-ray fluorescence (WDXRF). The optimization step was carried out using two-level full-factorial and Box-Behnken designs. The effects of four variables (pH, ligand mass, sonication time, and temperature) were studied by a full-factorial design to find significant variables and their interactions. Results of the two-level full-factorial design for Au extraction showed that the factors pH, ligand mass, and sonication temperature, as well as the pH-ligand mass and sonication temperature-ligand mass interactions, were significant. For Pt, the results revealed that pH, ligand mass, sonication time, and the pH-ligand mass interaction were statistically significant. A Box-Behnken design was then applied to determine the optimum levels of the significant parameters for extraction of the two analytes simultaneously. The optimum values of the factors were pH 2.5, 0.9 mL of ligand solution, 56 min sonication time, and 15 °C temperature. The limits of detection (LOD) were found to be 8 ng mL⁻¹ for Au and 6 ng mL⁻¹ for Pt. The adsorption capacities for Au and Pt were 115 and 169 μg mg⁻¹, respectively. The relative standard deviation (RSD) was lower than 1.4% (n = 5), and the extraction percentage was more than 95% for both elements. The method was validated by determination of Au and Pt in spiked water samples and certified reference standard materials. PMID:25720970
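
    For readers unfamiliar with the screening step, the Python snippet below generates a coded two-level full-factorial design for four factors, of the kind used here for variable screening. The factor names follow the abstract, but the coded -1/+1 levels are generic rather than the study's actual settings.

      # Two-level full-factorial screening design for four factors, in coded units.
      from itertools import product

      factors = ["pH", "ligand mass", "sonication time", "temperature"]
      design = list(product([-1, +1], repeat=len(factors)))
      print(f"{len(design)} runs")
      for run in design[:4]:                      # show the first few coded runs
          print(dict(zip(factors, run)))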

  16. Defining nodes in complex brain networks

    PubMed Central

    Stanley, Matthew L.; Moussa, Malaak N.; Paolini, Brielle M.; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.

    2013-01-01

    Network science holds great promise for expanding our understanding of the human brain in health, disease, development, and aging. Network analyses are quickly becoming the method of choice for analyzing functional MRI data. However, many technical issues have yet to be confronted in order to optimize results. One particular issue that remains controversial in functional brain network analyses is the definition of a network node. In functional brain networks a node represents some predefined collection of brain tissue, and an edge measures the functional connectivity between pairs of nodes. The characteristics of a node, chosen by the researcher, vary considerably in the literature. This manuscript reviews the current state of the art based on published manuscripts and highlights the strengths and weaknesses of three main methods for defining nodes. Voxel-wise networks are constructed by assigning a node to each, equally sized brain area (voxel). The fMRI time-series recorded from each voxel is then used to create the functional network. Anatomical methods utilize atlases to define the nodes based on brain structure. The fMRI time-series from all voxels within the anatomical area are averaged and subsequently used to generate the network. Functional activation methods rely on data from traditional fMRI activation studies, often from databases, to identify network nodes. Such methods identify the peaks or centers of mass from activation maps to determine the location of the nodes. Small (~10–20 millimeter diameter) spheres located at the coordinates of the activation foci are then applied to the data being used in the network analysis. The fMRI time-series from all voxels in the sphere are then averaged, and the resultant time series is used to generate the network. We attempt to clarify the discussion and move the study of complex brain networks forward. While the “correct” method to be used remains an open, possibly unsolvable question that deserves
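
    The node-and-edge construction described above can be illustrated with a short Python sketch: voxel time-series are averaged within each (here, fake) anatomical region to form node time-series, and pairwise correlations between nodes, thresholded, become edges. All data and the threshold are placeholders.

      # Sketch of building a functional network from an anatomical node definition:
      # average voxel time-series inside each region, correlate node time-series,
      # and threshold the correlations to obtain edges.  Data are random placeholders.
      import numpy as np

      rng = np.random.default_rng(4)
      n_timepoints, n_voxels = 200, 600
      voxel_ts = rng.standard_normal((n_timepoints, n_voxels))

      # a fake anatomical atlas: assign each voxel to one of 10 regions (nodes)
      atlas = rng.integers(0, 10, size=n_voxels)
      node_ts = np.column_stack([voxel_ts[:, atlas == r].mean(axis=1) for r in range(10)])

      # edges: pairwise correlation between node time-series, thresholded
      connectivity = np.corrcoef(node_ts.T)
      edges = (np.abs(connectivity) > 0.2) & ~np.eye(10, dtype=bool)
      print("node-pair edges retained:", int(edges.sum() // 2))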

  17. Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2013-01-01

    The design priority for manned space exploration missions is almost always placed on human safety. Proposed manned surface exploration tasks (lunar, asteroid sample returns, Mars) have the possibility of astronauts traveling several kilometers away from a home base. Deviations from preplanned paths are expected while exploring. In a time-critical emergency situation, there is a need to develop an optimal home base return path. The return path may or may not be similar to the outbound path, and what defines optimal may change with, and even within, each mission. A novel path planning algorithm and prototype program was developed using biologically inspired particle swarm optimization (PSO) that generates an optimal path of traversal while avoiding obstacles. Applications include emergency path planning on lunar, Martian, and/or asteroid surfaces, generating multiple scenarios for outbound missions, Earth-based search and rescue, as well as human manual traversal and/or path integration into robotic control systems. The strategy allows for a changing environment, and can be re-tasked at will and run in real-time situations. Given a random extraterrestrial planetary or small body surface position, the goal was to find the fastest (or shortest) path to an arbitrary position such as a safe zone or geographic objective, subject to possibly varying constraints. The problem requires a workable solution 100% of the time, though it does not require the absolute theoretical optimum. Obstacles should be avoided, but if they cannot be, then the algorithm needs to be smart enough to recognize this and deal with it. With some modifications, it works with non-stationary error topologies as well.
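
    A minimal particle swarm optimization loop of the kind such a planner builds on is sketched below in Python: particles track personal and global bests and are pulled toward both on a toy cost surface with an obstacle-like bump. The cost function and PSO coefficients are illustrative assumptions, not the prototype's actual implementation.

      # Minimal PSO over a 2-D toy cost surface: distance to a goal plus a bump that
      # crudely stands in for an obstacle.  Coefficients are standard textbook values.
      import numpy as np

      rng = np.random.default_rng(5)
      goal = np.array([8.0, 8.0])

      def cost(p):
          return np.linalg.norm(p - goal) + 5.0 * np.exp(-np.sum((p - [4.0, 4.0]) ** 2))

      n_particles, n_steps, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      pos = rng.uniform(0.0, 10.0, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_cost = np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      for _ in range(n_steps):
          r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          costs = np.array([cost(p) for p in pos])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("best position found:", np.round(gbest, 2),
            "cost:", round(float(min(pbest_cost)), 3))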

  18. Defining moments in leadership character development.

    PubMed

    Bleich, Michael R

    2015-06-01

    Critical moments in life define one's character and clarify true values. Reflective leadership is espoused as an important practice for transformational leaders. Professional development educators can help surface and explore defining moments, strengthen leadership behavior with defining moments as a catalyst for change, and create safe spaces for leaders to expand their leadership capacity. PMID:26057159

  19. Defined contribution: a part of our future.

    PubMed Central

    Baugh, Reginald F.

    2003-01-01

    Rising employer health care costs and consumer backlash against managed care are trends fostering the development of defined contribution plans. Defined contribution plans limit employer responsibility to a fixed financial contribution rather than a benefit program and dramatically increase consumer responsibility for health care decision making. Possible outcomes of widespread adoption of defined contribution plans are presented. PMID:12934869

  20. 7 CFR 29.9201 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Terms defined. 29.9201 Section 29.9201 Agriculture... Tobacco Produced and Marketed in a Quota Area Definitions § 29.9201 Terms defined. As used in this subpart... hereinafter defined shall have the indicated meanings so assigned....

  1. 7 CFR 1206.200 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1206.200 Section 1206.200 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... INFORMATION Rules and Regulations § 1206.200 Terms defined. Unless otherwise defined in this subpart,...

  2. 7 CFR 1210.500 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1210.500 Section 1210.500 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... PLAN Rules and Regulations Definitions § 1210.500 Terms defined. Unless otherwise defined in...

  3. 7 CFR 29.12 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Terms defined. 29.12 Section 29.12 Agriculture... INSPECTION Regulations Definitions § 29.12 Terms defined. As used in this subpart and in all instructions, forms, and documents in connection therewith, the words and phrases hereinafter defined shall have...

  4. 16 CFR 502.2 - Terms defined.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Terms defined. 502.2 Section 502.2... FAIR PACKAGING AND LABELING ACT Definitions § 502.2 Terms defined. As used in this part, unless the... those terms are defined under part 500 of this chapter. (b) The term packager and labeler means...

  5. 20 CFR 725.703 - Physician defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Physician defined. 725.703 Section 725.703... defined. The term “physician” includes only doctors of medicine (MD) and osteopathic practitioners within the scope of their practices as defined by State law. No treatment or medical services performed...

  6. 29 CFR 779.107 - Goods defined.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Goods defined. 779.107 Section 779.107 Labor Regulations... Engaged in Commerce Or in the Production of Goods for Commerce § 779.107 Goods defined. The term goods is defined in section 3(i) of the Act and has a well established meaning under the Act since it has...

  7. 20 CFR 404.429 - Earnings; defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Earnings; defined. 404.429 Section 404.429...- ) Deductions; Reductions; and Nonpayments of Benefits § 404.429 Earnings; defined. (a) General. The term... purpose of the earnings test under this subpart: (i) If you reach full retirement age, as defined in §...

  8. 29 CFR 779.107 - Goods defined.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Goods defined. 779.107 Section 779.107 Labor Regulations... Engaged in Commerce Or in the Production of Goods for Commerce § 779.107 Goods defined. The term goods is defined in section 3(i) of the Act and has a well established meaning under the Act since it has...

  9. 20 CFR 725.703 - Physician defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Physician defined. 725.703 Section 725.703... AND HEALTH ACT, AS AMENDED Medical Benefits and Vocational Rehabilitation § 725.703 Physician defined... scope of their practices as defined by State law. No treatment or medical services performed by...

  10. Taking Stock of Unrealistic Optimism

    PubMed Central

    Shepperd, James A.; Klein, William M. P.; Waters, Erika A.; Weinstein, Neil D.

    2015-01-01

    Researchers have used terms such as unrealistic optimism and optimistic bias to refer to concepts that are similar but not synonymous. Drawing from three decades of research, we critically discuss how researchers define unrealistic optimism and we identify four types that reflect different measurement approaches: unrealistic absolute optimism at the individual and group level and unrealistic comparative optimism at the individual and group level. In addition, we discuss methodological criticisms leveled against research on unrealistic optimism and note that the criticisms are primarily relevant to only one type—the group form of unrealistic comparative optimism. We further clarify how the criticisms are not nearly as problematic even for unrealistic comparative optimism as they might seem. Finally, we note boundary conditions on the different types of unrealistic optimism and reflect on five broad questions that deserve further attention. PMID:26045714

  11. Sampling functions for geophysics

    NASA Technical Reports Server (NTRS)

    Giacaglia, G. E. O.; Lunquist, C. A.

    1972-01-01

    A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N+1)² sampling points on a sphere is given, for which the (N+1)² spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.

  12. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable for the NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and, human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluidaltered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups each with a sample from a specific location. We considered two sample cups sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and, (2) a larger cup for 100 mm rock cores [4] and pebble sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping them frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference fitted heat activated memory

  13. Oscillator metrology with software defined radio.

    PubMed

    Sherman, Jeff A; Jördens, Robert

    2016-05-01

    Analog electrical elements such as mixers, filters, transfer oscillators, isolating buffers, dividers, and even transmission lines contribute technical noise and unwanted environmental coupling in time and frequency measurements. Software defined radio (SDR) techniques replace many of these analog components with digital signal processing (DSP) on rapidly sampled signals. We demonstrate that, generically, commercially available multi-channel SDRs are capable of time and frequency metrology, outperforming purpose-built devices by as much as an order-of-magnitude. For example, for signals at 10 MHz and 6 GHz, we observe SDR time deviation noise floors of about 20 fs and 1 fs, respectively, in under 10 ms of averaging. Examining the other complex signal component, we find a relative amplitude measurement instability of 3 × 10⁻⁷ at 5 MHz. We discuss the scalability of a SDR-based system for simultaneous measurement of many clocks. SDR's frequency agility allows for comparison of oscillators at widely different frequencies. We demonstrate a novel and extreme example with optical clock frequencies differing by many terahertz: using a femtosecond-laser frequency comb and SDR, we show femtosecond-level time comparisons of ultra-stable lasers with zero measurement dead-time. PMID:27250455
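
    As an indication of the kind of stability statistic quoted above, the Python sketch below computes an ordinary (non-overlapping) Allan deviation from simulated white-frequency noise. The time deviation reported by the authors is a related but distinct statistic, and real data would of course come from the SDR samples rather than a random generator.

      # Non-overlapping Allan deviation from fractional-frequency samples: average the
      # data in blocks of m, difference adjacent block means, take the RMS over 2.
      import numpy as np

      def allan_deviation(y, m):
          """Allan deviation at averaging factor m (tau = m * tau0)."""
          n = len(y) // m
          averaged = y[:n * m].reshape(n, m).mean(axis=1)
          diffs = np.diff(averaged)
          return np.sqrt(0.5 * np.mean(diffs ** 2))

      rng = np.random.default_rng(6)
      white_fm = 1e-12 * rng.standard_normal(100_000)       # toy white-frequency noise
      tau0 = 1e-3                                           # 1 ms between samples
      for m in (1, 10, 100, 1000):
          print(f"tau = {m * tau0:.3f} s   ADEV = {allan_deviation(white_fm, m):.2e}")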

  14. Capillary sample

    MedlinePlus

    ... using capillary blood sampling. Disadvantages to capillary blood sampling include: Only a limited amount of blood can be drawn using this method. The procedure has some risks (see below). Capillary ...

  15. General shape optimization capability

    NASA Technical Reports Server (NTRS)

    Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson

    1991-01-01

    A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.

  16. Optimal Scaling of Digital Transcriptomes

    PubMed Central

    Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy

    2013-01-01

    Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) low Spearman correlation between normalized expression profiles of gene pairs. We also define four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compared the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126
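
    Total-count scaling and the first evaluation metric, the number of "uniform" genes, can be sketched in a few lines of Python. The simulated counts, the counts-per-million scaling, and the coefficient-of-variation threshold are illustrative choices, not the paper's data or exact algorithms.

      # Sketch of total-count scaling and the "uniform genes" metric: count genes
      # whose normalized expression has a low coefficient of variation across samples.
      import numpy as np

      rng = np.random.default_rng(7)
      n_genes, n_samples = 2000, 12
      true_expression = rng.gamma(shape=2.0, scale=50.0, size=(n_genes, 1))
      depth = rng.uniform(0.5, 2.0, size=n_samples)              # unequal sequencing depth
      counts = rng.poisson(true_expression * depth)

      # normalize each sample to a common total (counts per million)
      cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6

      def n_uniform_genes(expr, cv_threshold=0.25):
          means = expr.mean(axis=1)
          stds = expr.std(axis=1)
          cv = np.divide(stds, means, out=np.ones_like(means), where=means > 0)
          return int(np.sum(cv < cv_threshold))

      print("uniform genes, raw counts:", n_uniform_genes(counts.astype(float)))
      print("uniform genes, CPM-scaled:", n_uniform_genes(cpm))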

  17. Organic solvent-free air-assisted liquid-liquid microextraction for optimized extraction of illegal azo-based dyes and their main metabolite from spices, cosmetics and human bio-fluid samples in one step.

    PubMed

    Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Sabzalian, Sedigheh

    2015-08-15

    Air-assisted liquid-liquid microextraction (AALLME) has unique capabilities to be developed as an organic solvent-free and one-step microextraction method, applying ionic liquids as the extraction solvent and avoiding a centrifugation step. Herein, a novel and simple eco-friendly method, termed one-step air-assisted liquid-liquid microextraction (OS-AALLME), was developed to extract some illegal azo-based dyes (including Sudan I to IV, and Orange G) from food and cosmetic products. A series of experiments was performed to find the most favorable conditions (extraction solvent: 77 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate; sample pH 6.3, without salt addition; and extraction cycles: 25 during 100 s of sonication) using a central composite design strategy. Under these conditions, limits of detection, linear dynamic ranges, enrichment factors and consumptive indices were in the range of 3.9-84.8 ng mL⁻¹, 0.013-3.1 μg mL⁻¹, 33-39, and 0.13-0.15, respectively. The results showed that, as well as being simple and fast and using no hazardous disperser or extraction solvents, OS-AALLME is a sufficiently sensitive and efficient method for the extraction of these dyes from complex matrices. After optimization and validation, OS-AALLME was applied to estimate the concentration of 1-amino-2-naphthol in human bio-fluids as a main reductive metabolite of the selected dyes. Levels of 1-amino-2-naphthol in plasma and urinary excretion suggested that this compound may be used as a new potential biomarker of these dyes in the human body. PMID:26149246

  18. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.

  19. Depth-discrete sampling port

    DOEpatents

    Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.

    1998-07-07

    A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication through a bore with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.

  20. Depth-discrete sampling port

    DOEpatents

    Pemberton, Bradley E.; May, Christopher P.; Rossabi, Joseph; Riha, Brian D.; Nichols, Ralph L.

    1999-01-01

    A sampling port is provided which has threaded ends for incorporating the port into a length of subsurface pipe. The port defines an internal receptacle which is in communication with subsurface fluids through a series of fine filtering slits. The receptacle is in further communication through a bore with a fitting carrying a length of tubing through which samples are transported to the surface. Each port further defines an additional bore through which tubing, cables, or similar components of adjacent ports may pass.

  1. Sampling Development

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of the enterprise. This article discusses how to sample development in order to accurately discern the shape of developmental change. The ideal solution is daunting: to summarize behavior over 24-hour intervals and collect daily samples over the critical periods of change. We discuss the magnitude of errors due to undersampling, and the risks associated with oversampling. When daily sampling is not feasible, we offer suggestions for sampling methods that can provide preliminary reference points and provisional sketches of the general shape of a developmental trajectory. Denser sampling then can be applied strategically during periods of enhanced variability, inflections in the rate of developmental change, or in relation to key events or processes that may affect the course of change. Despite the challenges of dense repeated sampling, researchers must take seriously the problem of sampling on a developmental time scale if we are to know the true shape of developmental change. PMID:22140355

  2. 47 CFR 2.908 - Identical defined.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Identical defined. 2.908 Section 2.908 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.908 Identical defined. As used in this subpart, the...

  3. 47 CFR 2.908 - Identical defined.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Identical defined. 2.908 Section 2.908 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.908 Identical defined. As used in this subpart, the...

  4. 47 CFR 2.908 - Identical defined.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Identical defined. 2.908 Section 2.908 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.908 Identical defined. As used in this subpart, the...

  5. 47 CFR 2.908 - Identical defined.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Identical defined. 2.908 Section 2.908 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.908 Identical defined. As used in this subpart, the...

  6. 47 CFR 2.908 - Identical defined.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Identical defined. 2.908 Section 2.908 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL RULES AND REGULATIONS Equipment Authorization Procedures General Provisions § 2.908 Identical defined. As used in this subpart, the...

  7. 16 CFR 301.1 - Terms defined.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Terms defined. 301.1 Section 301.1 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS UNDER SPECIFIC ACTS OF CONGRESS RULES AND REGULATIONS UNDER FUR PRODUCTS LABELING ACT Regulations § 301.1 Terms defined. (a) As used in this part, unless the context otherwise specifically requires:...

  8. 20 CFR 702.404 - Physician defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... § 702.404 Physician defined. The term physician includes doctors of medicine (MD), surgeons, podiatrists, dentists, clinical psychologists, optometrists, chiropractors, and osteopathic practitioners within the... correct a subluxation shown by X-ray or clinical findings. Physicians defined in this part may...

  9. Dilution Confusion: Conventions for Defining a Dilution

    ERIC Educational Resources Information Center

    Fishel, Laurence A.

    2010-01-01

    Two conventions for preparing dilutions are used in clinical laboratories. The first convention defines an "a:b" dilution as "a" volumes of solution A plus "b" volumes of solution B. The second convention defines an "a:b" dilution as "a" volumes of solution A diluted into a final volume of "b". Use of the incorrect dilution convention could affect…
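
    A tiny worked comparison in Python makes the difference between the two conventions explicit for a nominal "1:4" dilution of solution A.

      # Worked comparison of the two dilution conventions described above.
      def dilution_factor_additive(a, b):
          """Convention 1: a volumes of A plus b volumes of diluent."""
          return a / (a + b)

      def dilution_factor_final_volume(a, b):
          """Convention 2: a volumes of A brought to a final volume of b."""
          return a / b

      print("1:4, additive convention:     A is", dilution_factor_additive(1, 4), "of the mixture")
      print("1:4, final-volume convention: A is", dilution_factor_final_volume(1, 4), "of the mixture")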

  10. 7 CFR 1280.401 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1280.401 Section 1280.401 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... INFORMATION ORDER Rules and Regulations § 1280.401 Terms defined. As used throughout this subpart, unless...

  11. 7 CFR 1260.301 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1260.301 Section 1260.301 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... and Regulations § 1260.301 Terms defined. As used throughout this subpart, unless the...

  12. 42 CFR 422.580 - Reconsideration defined.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Reconsideration defined. 422.580 Section 422.580 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 422.580 Reconsideration defined. A reconsideration consists of a review of an adverse...

  13. 16 CFR 1608.1 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Terms defined. 1608.1 Section 1608.1 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT REGULATIONS GENERAL RULES AND REGULATIONS UNDER THE FLAMMABLE FABRICS ACT § 1608.1 Terms defined. As used in the rules and regulations...

  14. 16 CFR 304.1 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Terms defined. 304.1 Section 304.1 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS UNDER SPECIFIC ACTS OF CONGRESS RULES AND REGULATIONS UNDER THE HOBBY PROTECTION ACT § 304.1 Terms defined. (a) Act means the Hobby Protection...

  15. 22 CFR 92.36 - Authentication defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Authentication defined. 92.36 Section 92.36 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES NOTARIAL AND RELATED SERVICES Specific Notarial Acts § 92.36 Authentication defined. An authentication is a certification of the genuineness...

  16. 42 CFR 422.580 - Reconsideration defined.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Reconsideration defined. 422.580 Section 422.580 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 422.580 Reconsideration defined. A reconsideration consists of a review of an adverse...

  17. 45 CFR 504.1 - Claim defined.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Claim defined. 504.1 Section 504.1 Public Welfare Regulations Relating to Public Welfare (Continued) FOREIGN CLAIMS SETTLEMENT COMMISSION OF THE UNITED STATES... 1948, AS AMENDED FILING OF CLAIMS AND PROCEDURES THEREFOR § 504.1 Claim defined. (a) This subchapter...

  18. 9 CFR 592.2 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Terms defined. 592.2 Section 592.2 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Definitions § 592.2 Terms defined. For the purpose of...

  19. 22 CFR 92.30 - Acknowledgment defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Acknowledgment defined. 92.30 Section 92.30 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES NOTARIAL AND RELATED SERVICES Specific Notarial Acts § 92.30 Acknowledgment defined. An acknowledgment is a proceeding by which a person who...

  20. 7 CFR 1215.100 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1215.100 Section 1215.100 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION Rules and Regulations Definitions § 1215.100 Terms defined. Unless otherwise...

  1. 7 CFR 1230.100 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1230.100 Section 1230.100 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION Rules and Regulations Definitions § 1230.100 Terms defined. As used throughout...

  2. 7 CFR 75.2 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Terms defined. 75.2 Section 75.2 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... AND CERTIFICATION OF QUALITY OF AGRICULTURAL AND VEGETABLE SEEDS Definitions § 75.2 Terms defined....

  3. 7 CFR 28.950 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Terms defined. 28.950 Section 28.950 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing..., TESTING, AND STANDARDS Cotton Fiber and Processing Tests Definitions § 28.950 Terms defined. As...

  4. 22 CFR 92.36 - Authentication defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Authentication defined. 92.36 Section 92.36 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES NOTARIAL AND RELATED SERVICES Specific Notarial Acts § 92.36 Authentication defined. An authentication is a certification of the genuineness...

  5. 7 CFR 1280.601 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1280.601 Section 1280.601 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... INFORMATION ORDER Procedures To Request a Referendum Definitions § 1280.601 Terms defined. As used...

  6. 45 CFR 504.1 - Claim defined.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Claim defined. 504.1 Section 504.1 Public Welfare Regulations Relating to Public Welfare (Continued) FOREIGN CLAIMS SETTLEMENT COMMISSION OF THE UNITED STATES... 1948, AS AMENDED FILING OF CLAIMS AND PROCEDURES THEREFOR § 504.1 Claim defined. (a) This subchapter...

  7. 16 CFR 1608.1 - Terms defined.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Terms defined. 1608.1 Section 1608.1 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT REGULATIONS GENERAL RULES AND REGULATIONS UNDER THE FLAMMABLE FABRICS ACT § 1608.1 Terms defined. As used in the rules and regulations...

  8. 7 CFR 160.3 - Rosin defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Rosin defined. 160.3 Section 160.3 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... STANDARDS FOR NAVAL STORES General § 160.3 Rosin defined. Except as provided in § 160.15, rosin is...

  9. 20 CFR 725.491 - Operator defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Operator defined. 725.491 Section 725.491 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS, DEPARTMENT OF LABOR FEDERAL COAL MINE HEALTH... SAFETY AND HEALTH ACT, AS AMENDED Responsible Coal Mine Operators § 725.491 Operator defined. (a)...

  10. 22 CFR 92.30 - Acknowledgment defined.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Acknowledgment defined. 92.30 Section 92.30 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES NOTARIAL AND RELATED SERVICES Specific Notarial Acts § 92.30 Acknowledgment defined. An acknowledgment is a proceeding by which a person who...

  11. 20 CFR 401.25 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Terms defined. 401.25 Section 401.25... INFORMATION General § 401.25 Terms defined. Access means making a record available to a subject individual... means communication to an individual whether he is a subject individual. (Subject individual is...

  12. 7 CFR 1205.500 - Terms defined.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Terms defined. 1205.500 Section 1205.500 Agriculture... Board Rules and Regulations Definitions § 1205.500 Terms defined. As used throughout this... accordance with 7 CFR 713.55. (o) Importer means any person who enters, or withdraws from warehouse, cotton...

  13. 16 CFR 304.1 - Terms defined.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Terms defined. 304.1 Section 304.1 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS UNDER SPECIFIC ACTS OF CONGRESS RULES AND REGULATIONS UNDER THE HOBBY PROTECTION ACT § 304.1 Terms defined. (a) Act means the Hobby Protection...

  14. Sample holder with optical features

    DOEpatents

    Milas, Mirko; Zhu, Yimei; Rameau, Jonathan David

    2013-07-30

    A sample holder for holding a sample to be observed for research purposes, particularly in a transmission electron microscope (TEM), generally includes an external alignment part for directing a light beam in a predetermined beam direction, a sample holder body in optical communication with the external alignment part, and a sample support member disposed at a distal end of the sample holder body opposite the external alignment part for holding a sample to be analyzed. The sample holder body defines an internal conduit for the light beam, and the sample support member includes a light beam positioner for directing the light beam between the sample holder body and the sample held by the sample support member.

  15. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and was developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected with ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimated integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
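
    For uncorrelated samples, the variance model above reduces to the familiar scaling Var(mean discharge) = sigma^2 / N, which is already enough to choose an exposure time for a target uncertainty. The Python sketch below illustrates this reduced form only; the function name, the example numbers, and the assumption of fully uncorrelated single-sample discharges are ours, not the authors' formulation.

        import math

        def exposure_time_for_target_error(sigma_q, mean_q, dt, target_rel_err):
            """Smallest exposure time T so that the standard error of the
            time-averaged discharge falls below target_rel_err * mean_q.

            sigma_q        -- standard deviation of a single-sample discharge (m^3/s)
            mean_q         -- mean discharge (m^3/s)
            dt             -- sampling interval of the instrument (s)
            target_rel_err -- desired relative standard error (e.g. 0.02 for 2 %)
            """
            # For N uncorrelated samples, Var(mean) = sigma_q**2 / N, so the
            # standard error is sigma_q / sqrt(N); require it to be at most
            # target_rel_err * mean_q.
            n_required = (sigma_q / (target_rel_err * mean_q)) ** 2
            return math.ceil(n_required) * dt

        # Hypothetical numbers: 10 % single-sample scatter, 1 s sampling interval,
        # 2 % target relative error -> 25 s exposure time.
        print(exposure_time_for_target_error(10.0, 100.0, 1.0, 0.02))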

  16. Optimal experience among teachers: new insights into the work paradox.

    PubMed

    Bassi, Marta; Delle Fave, Antonella

    2012-01-01

    Several studies have highlighted that individuals perceive work as an opportunity for flow or optimal experience, but not as desirable and pleasant. This finding has been termed the work paradox. The present study addressed this issue among teachers from the perspective of self-determination theory, investigating work-related intrinsic and extrinsic motivation as well as autonomous and controlled behavior regulation. In Study 1, 14 teachers were longitudinally monitored with the Experience Sampling Method for one work week. In Study 2, 184 teachers were administered the Flow Questionnaire and the Work Preference Inventory, investigating opportunities for optimal experience and motivational orientations at work, respectively. Results showed that work-related optimal experiences were associated with both autonomous and controlled regulation. Moreover, teachers reported both intrinsic and extrinsic motivation at work, with a prevailing intrinsic orientation. The findings provide novel insights into the work paradox and suggestions for promoting teachers' well-being. PMID:22931008

  17. Design Optimization of a Centrifugal Fan with Splitter Blades

    NASA Astrophysics Data System (ADS)

    Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong

    2015-05-01

    Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise, using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and the pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
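
    As a rough illustration of the sampling step described above, the following Python sketch draws Latin hypercube training points over a two-variable design space; the variable bounds and sample count are placeholder values, not figures from the study.

        import numpy as np

        def latin_hypercube(n_samples, bounds, seed=None):
            """Return n_samples points, one per stratum in each dimension."""
            rng = np.random.default_rng(seed)
            dim = len(bounds)
            # One random point inside each of the n_samples equal-width strata.
            u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
            # Shuffle strata independently per dimension to decorrelate the columns.
            for j in range(dim):
                rng.shuffle(u[:, j])
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        # Example: 20 training designs over (splitter location, height ratio),
        # with made-up bounds.
        points = latin_hypercube(20, bounds=[(0.2, 0.8), (0.9, 1.3)], seed=0)

    Each of the 20 designs would then be evaluated with the flow solver, and the resulting objective values used to fit the response surface surrogates.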

  18. Sampling Development

    ERIC Educational Resources Information Center

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  19. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors.

    PubMed

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the classification of a set of ginsengs using a homemade electronic nose based on metal-oxide sensors and linear discriminant analysis. A total of 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212
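
    The minimum-sensor search described above can be sketched as an exhaustive subset search scored by cross-validated LDA accuracy. The Python code below is a schematic version under assumed array shapes (a 315 x 12 feature matrix X and class labels y); it is not the authors' implementation and omits their information-capacity estimate.

        from itertools import combinations
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def minimum_sensor_subset(X, y, tol=1e-3, cv=5):
            """X: (n_samples, n_sensors) NumPy array; y: ginseng class labels."""
            n_sensors = X.shape[1]
            # Baseline: cross-validated accuracy when every sensor is used.
            full_score = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv).mean()
            for k in range(1, n_sensors + 1):
                # Best k-sensor combination by mean cross-validated accuracy.
                best_score, best_idx = max(
                    (cross_val_score(LinearDiscriminantAnalysis(),
                                     X[:, list(idx)], y, cv=cv).mean(), idx)
                    for idx in combinations(range(n_sensors), k)
                )
                if best_score >= full_score - tol:  # as good as using all sensors
                    return best_idx, best_score
            return tuple(range(n_sensors)), full_score

    Called as minimum_sensor_subset(X, y), it returns the smallest sensor index set whose accuracy matches the full 12-sensor array within the tolerance.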

  20. Optimal Sensor Selection for Classifying a Set of Ginsengs Using Metal-Oxide Sensors

    PubMed Central

    Miao, Jiacheng; Zhang, Tinglin; Wang, You; Li, Guang

    2015-01-01

    The sensor selection problem was investigated for the classification of a set of ginsengs using a homemade electronic nose based on metal-oxide sensors and linear discriminant analysis. A total of 315 samples of nine kinds of ginseng were measured using 12 sensors. We investigated the classification performance of combinations of the 12 sensors for the overall discrimination of combinations of the nine ginsengs. The minimum number of sensors needed to discriminate each sample set with optimal classification performance was defined. The relation between the minimum number of sensors and the number of samples in the sample set was revealed. The results showed that as the number of samples increased, the average minimum number of sensors increased, while the increment decreased gradually and the average optimal classification rate decreased gradually. Moreover, a new approach to sensor selection was proposed to estimate and compare the effective information capacity of each sensor. PMID:26151212