Sample records for optimal sample size

  1. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  3. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    NASA Astrophysics Data System (ADS)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect the confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that the plot size needed to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
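
    A minimal sketch of the kind of Monte Carlo resampling described above, useful for seeing how the potential error of the stand transpiration estimate E = AS_stand × JS shrinks with sample size. The per-tree sapwood areas and sap flux densities are synthetic placeholders, not the study's 58-tree census.

```python
# Synthetic illustration (not the study's data): potential error of the stand
# transpiration estimate E = AS_stand * JS as a function of subsample size.
import numpy as np

rng = np.random.default_rng(0)
n_trees = 58
as_tree = rng.lognormal(mean=4.0, sigma=0.4, size=n_trees)   # sapwood area per tree
fd = rng.lognormal(mean=2.0, sigma=0.5, size=n_trees)        # sap flux density per tree

e_true = as_tree.sum() * fd.mean()     # "true" plot-level value from the full census

def potential_error(sample_size, n_rep=5000):
    """95th percentile of |relative error| of E for a given subsample size."""
    errs = np.empty(n_rep)
    for i in range(n_rep):
        idx = rng.choice(n_trees, size=sample_size, replace=False)
        as_stand_hat = as_tree[idx].mean() * n_trees    # scale mean AS_tree to the plot
        errs[i] = abs(as_stand_hat * fd[idx].mean() - e_true) / e_true
    return np.quantile(errs, 0.95)

for n in (5, 10, 15, 20, 30, 58):
    print(f"sample size {n:2d}: 95% potential error = {potential_error(n):.3f}")
```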

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N1/2) or O(N∗1/2). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
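
    The O(N^(1/2)) scaling can be illustrated numerically with a deliberately stylized model: a single-arm Bernoulli trial, a Beta prior, and a naive adopt-if-better rule standing in for the paper's utility function and test. None of these choices come from the paper; the point of the sketch is only that the ratio n_opt / sqrt(N) stays roughly constant as N grows.

```python
# Stylized illustration of the O(sqrt(N)) theme (not the paper's utility model):
# a single-arm Bernoulli trial of size n drawn from a population of size N, with
# a Beta prior on the response rate p. The treatment is adopted for the remaining
# N - n patients when the observed rate beats a standard rate p0, and the gain is
# (N - n) * (p - p0) if adopted. The utility-maximizing n grows roughly like sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
p0, a, b = 0.3, 2.0, 4.0                      # standard rate and Beta(a, b) prior
p = rng.beta(a, b, size=200_000)              # common prior draws reused for every n

def expected_utility(n, N):
    x = rng.binomial(n, p)                    # trial outcome given each prior draw
    adopt = x / n > p0                        # naive adoption rule (stand-in for a test)
    return np.mean((N - n) * (p - p0) * adopt)

for N in (1_000, 10_000, 100_000):
    grid = np.arange(10, min(N // 2, 2_000), 10)
    utils = [expected_utility(n, N) for n in grid]
    n_opt = int(grid[int(np.argmax(utils))])
    print(f"N = {N:6d}: optimal n ~ {n_opt:4d}, n / sqrt(N) ~ {n_opt / np.sqrt(N):.2f}")
```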

  5. Conditional Optimal Design in Three- and Four-Level Experiments

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Borenstein, Michael

    2014-01-01

    The precision of estimates of treatment effects in multilevel experiments depends on the sample sizes chosen at each level. It is often desirable to choose sample sizes at each level to obtain the smallest variance for a fixed total cost, that is, to obtain optimal sample allocation. This article extends previous results on optimal allocation to…
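
    The abstract is truncated here, but the optimal-allocation problem it refers to has a well-known two-level special case: with a cost per cluster, a cost per subject, and an intraclass correlation rho, the variance-minimizing number of subjects per cluster is sqrt((c_cluster / c_subject) × (1 − rho) / rho). The sketch below applies that classic result with placeholder costs; it is not the three- and four-level extension developed in the article.

```python
# Classic two-level special case of optimal allocation (not the three-/four-level
# extension of the article): choose the number of subjects per cluster that
# minimizes the variance of the treatment-effect estimate for a fixed budget.
import math

def optimal_cluster_size(cost_per_cluster, cost_per_subject, icc):
    """Variance-minimizing subjects per cluster for a two-level cluster design."""
    return math.sqrt((cost_per_cluster / cost_per_subject) * (1 - icc) / icc)

def clusters_for_budget(budget, cost_per_cluster, cost_per_subject, n_per_cluster):
    return budget // (cost_per_cluster + cost_per_subject * n_per_cluster)

n = optimal_cluster_size(cost_per_cluster=300, cost_per_subject=10, icc=0.05)
print(f"optimal cluster size ~ {n:.1f} subjects")
print("clusters affordable within a 30000 budget:", clusters_for_budget(30000, 300, 10, round(n)))
```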

  6. Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.

    PubMed

    Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T

    2015-03-01

    It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs rich data. This review is intended to educate the clinician, as well as the basic research scientist, who plans to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.

  7. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
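
    Conclusion (3) above corresponds to stratified sampling with optimal (Neyman-type) allocation. The sketch below shows that allocation with a per-stratum cost adjustment; the stratum sizes, standard deviations, and costs are placeholders, not values from the study.

```python
# Sketch of stratified sampling with optimal (Neyman) allocation, the kind of
# allocation recommended in conclusion (3). Strata could be soil-depth layers;
# the sizes, standard deviations, and costs below are placeholders.
import numpy as np

N_h = np.array([40, 40, 40])        # stratum sizes (e.g., grid cells per depth layer)
S_h = np.array([6.0, 4.0, 2.5])     # stratum standard deviations of soil moisture
c_h = np.array([1.0, 1.5, 2.0])     # relative cost of one sample in each stratum
n_total = 30

# Neyman allocation with costs: n_h proportional to N_h * S_h / sqrt(c_h)
weights = N_h * S_h / np.sqrt(c_h)
n_h = np.maximum(1, np.round(n_total * weights / weights.sum())).astype(int)
print("samples per stratum (rounded, so the sum may differ slightly):", n_h)

# Variance of the stratified mean under this allocation (finite-population
# correction omitted for brevity)
var_strat = np.sum((N_h / N_h.sum()) ** 2 * S_h ** 2 / n_h)
print(f"variance of stratified mean ~ {var_strat:.3f}")
```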

  8. Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples

    NASA Astrophysics Data System (ADS)

    Petit, Johan; Lallemant, Lucile

    2017-05-01

    In the transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the drying step optimization, large-size spinel samples were obtained.

  9. Accounting for between-study variation in incremental net benefit in value of information methodology.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  10. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
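
    A hedged sketch of the hybrid idea described above: the regulator applies a fixed-level one-sided z-test, the company averages the probability of a significant result over its prior on the true effect (an assurance), and the Phase III sample size is chosen to maximize expected profit. All economic figures and the prior below are made up for illustration.

```python
# Hedged sketch of the hybrid Bayesian/frequentist idea: expected profit =
# (value if approved) * (prior-averaged probability of a significant z-test)
# minus per-patient trial costs. All figures are hypothetical.
import numpy as np
from scipy import stats

alpha = 0.025                      # one-sided regulatory significance level
prior_mean, prior_sd = 0.25, 0.10  # company prior on the standardized effect
value_if_approved = 500e6          # net revenue if the drug is approved
cost_per_patient = 50e3

def assurance(n_per_arm, n_draws=100_000, rng=np.random.default_rng(2)):
    delta = rng.normal(prior_mean, prior_sd, n_draws)
    se = np.sqrt(2.0 / n_per_arm)                    # SE of the standardized difference
    power = 1 - stats.norm.cdf(stats.norm.ppf(1 - alpha) - delta / se)
    return power.mean()

def expected_profit(n_per_arm):
    return value_if_approved * assurance(n_per_arm) - cost_per_patient * 2 * n_per_arm

grid = np.arange(50, 1500, 25)
profits = np.array([expected_profit(n) for n in grid])
best = grid[profits.argmax()]
print(f"profit-maximizing size ~ {best} per arm, expected profit ~ ${profits.max() / 1e6:.0f}M")
```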

  11. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
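
    A Monte Carlo sketch of the cost trade-off the article formalizes, assuming perfect ranking and a made-up cost model (cheap ranking of the m × m candidate units per cycle, expensive measurement of the m quantified units). The set size that minimizes the variance of the RSS mean for a fixed budget depends entirely on these assumed costs.

```python
# Illustrative only: variance of the RSS estimate of a normal mean per fixed
# budget, as a function of set size m, under perfect ranking and made-up costs.
import numpy as np

rng = np.random.default_rng(3)
c_rank, c_measure, budget = 2.0, 10.0, 600.0   # hypothetical unit costs and budget

def rss_mean(m, cycles):
    """One RSS estimate: each cycle ranks m sets of m units and measures one order statistic per set."""
    vals = []
    for _ in range(cycles):
        for i in range(m):
            ranked = np.sort(rng.normal(size=m))   # perfect ranking of a fresh set
            vals.append(ranked[i])                 # quantify its i-th order statistic
    return np.mean(vals)

for m in (1, 2, 3, 4, 5, 6):                       # m = 1 is simple random sampling
    cost_per_cycle = c_rank * m * m + c_measure * m
    cycles = max(1, int(budget // cost_per_cycle))
    var = np.var([rss_mean(m, cycles) for _ in range(4000)])
    print(f"set size {m}: {cycles:2d} cycles within budget, Var(estimate) = {var:.4f}")
```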

  12. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
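
    A toy rendering of the sampling paradigm, not the paper's formal decision-theoretic treatment: two Bernoulli payoff options, n free draws from each, a choice of the option with the larger sample mean for one real payoff, and a small hypothetical per-draw cost. The expected net payoff then peaks at a finite n, which is the sense of "optimal sample size" being illustrated.

```python
# Toy version of the sampling paradigm: sample n outcomes from each of two
# Bernoulli options, pick the option with the larger sample mean for a single
# real payoff, and charge a small per-draw cost. The expected net payoff has an
# interior optimum in n. Payoffs and cost are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
p_a, p_b = 0.45, 0.55      # true success probabilities of the two options
payoff, cost_per_draw = 1.0, 0.002

def expected_net_payoff(n, n_sim=200_000):
    xa = rng.binomial(n, p_a, n_sim) / n
    xb = rng.binomial(n, p_b, n_sim) / n
    tie = xa == xb
    choose_b = (xb > xa) | (tie & (rng.random(n_sim) < 0.5))  # break ties at random
    p_chosen = np.where(choose_b, p_b, p_a)
    return payoff * p_chosen.mean() - cost_per_draw * 2 * n

for n in (1, 2, 5, 10, 20, 40, 80):
    print(f"n = {n:3d}  expected net payoff = {expected_net_payoff(n):.4f}")
```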

  13. Minimizing the Maximum Expected Sample Size in Two-Stage Phase II Clinical Trials with Continuous Outcomes

    PubMed Central

    Wason, James M. S.; Mander, Adrian P.

    2012-01-01

    Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118

  14. Dimensions of design space: a decision-theoretic approach to optimal research design.

    PubMed

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocations between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.

  15. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.

  16. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    PubMed

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; in particular, a systematic assessment of methodological parameters such as optimal plot sizes is needed. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m2 to 10,000 m2 over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  17. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
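
    A simplified sketch of the simulation idea (random core placement over simulated random vs. clumped item patterns), with placeholder plot size, density, and core area rather than the study's GIS setup. It shows how the relative error of the density estimate responds to the number of cores and to clumping.

```python
# Placeholder parameters, not the study's GIS simulation: density estimation
# from circular core samples when benthic items are random vs. clumped.
import numpy as np

rng = np.random.default_rng(5)
plot, true_density = 5.0, 500          # 5 m x 5 m plot, items per m^2

def scatter_items(clumped):
    n_items = int(true_density * plot * plot)
    if not clumped:
        return rng.uniform(0, plot, size=(n_items, 2))
    centres = rng.uniform(0, plot, size=(15, 2))              # 15 clump centres
    which = rng.integers(0, 15, n_items)
    return np.clip(centres[which] + rng.normal(0, 0.3, (n_items, 2)), 0, plot)

def density_estimate(items, n_cores, core_area_cm2=45.0):
    r = np.sqrt(core_area_cm2 / 1e4 / np.pi)                  # core radius in metres
    centres = rng.uniform(r, plot - r, size=(n_cores, 2))
    counts = [np.sum(np.hypot(*(items - c).T) <= r) for c in centres]
    return np.mean(counts) / (np.pi * r ** 2)

for clumped in (False, True):
    items = scatter_items(clumped)
    for n_cores in (5, 10, 20, 40):
        est = [density_estimate(items, n_cores) for _ in range(200)]
        bias = (np.mean(est) - true_density) / true_density
        cv = np.std(est) / true_density
        print(f"clumped={clumped!s:5}  cores={n_cores:2d}  rel. bias={bias:+.3f}  CV={cv:.3f}")
```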

  18. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study is considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/ e-dougherty@ee.tamu.edu.
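
    A small-scale illustration of the peaking phenomenon described above, using scikit-learn's LDA on a toy Gaussian model in which later features carry progressively less signal. It is nowhere near the scale of the paper's massively parallel simulations, and the optimal feature counts it prints are specific to this toy setup.

```python
# Toy illustration of peaking: for a fixed training-sample size, the test error
# of a designed LDA classifier first falls and then rises as more (weakly
# informative) Gaussian features are added.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
d_max = 30
mu = 1.5 / np.arange(1, d_max + 1)      # later features carry almost no signal

def sample(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(size=(n, d_max)) + np.outer(2 * y - 1, mu / 2)
    return x, y

def mean_test_error(n_train, n_test=2000, n_rep=100):
    errors = np.zeros(d_max)
    for _ in range(n_rep):
        x_tr, y_tr = sample(n_train)
        x_te, y_te = sample(n_test)
        for d in range(1, d_max + 1):
            clf = LinearDiscriminantAnalysis().fit(x_tr[:, :d], y_tr)
            errors[d - 1] += 1 - clf.score(x_te[:, :d], y_te)
    return errors / n_rep

for n_train in (40, 80, 160):
    err = mean_test_error(n_train)
    print(f"n_train = {n_train:3d}: optimal number of features ~ {int(np.argmin(err)) + 1}")
```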

  19. A general approach for sample size calculation for the three-arm 'gold standard' non-inferiority design.

    PubMed

    Stucke, Kathrin; Kieser, Meinhard

    2012-12-10

    In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  1. Numerical study of ultra-low field nuclear magnetic resonance relaxometry utilizing a single axis magnetometer for signal detection.

    PubMed

    Vogel, Michael W; Vegh, Viktor; Reutens, David C

    2013-05-01

    This paper investigates optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry in view of various sample shapes and sizes. The authors used the finite element method for the numerical analysis to determine the sample magnetic field environment and evaluate the optimal location of the single-axis magnetometer. Given the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor to maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a glass vial with a flat bottom and 10 ml volume is the best structure to achieve the highest signal out of the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters for signal generation prior to designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size, sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.

  2. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  3. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is the subject of controversial public debate. Thus, from a biometrical point of view, an optimal sample size should be targeted for such projects. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
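
    For reference, the formal calculation the article critiques typically takes the following form for a continuous endpoint (normal approximation, two-sided test). The effect size and standard deviation entered here are exactly the planning quantities that are often not reliably known before an animal experiment starts.

```python
# Standard two-group sample size calculation (normal approximation) for a
# continuous endpoint; delta and sd are the planning inputs the article
# identifies as frequently unavailable or unreliable in advance.
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sided two-sample comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

print(f"n per group ~ {n_per_group(delta=1.0, sd=1.2):.1f}")   # about 23 animals per group
```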

  4. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

  5. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

    The Box-Jenkins-GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10,200 observations). Each sample is tested using different combinations of the hybrid Box-Jenkins-GARCH model. Our study shows that the optimal sample size for forecasting the gold price using the framework of the hybrid model is 1250 observations (a 5-year sample). Hence, the empirical results of the model selection criteria and the 1-step-ahead forecasting evaluations suggest that the latest 12.25% (5 years) of the 10,200 observations is sufficient for the Box-Jenkins-GARCH model, with forecasting performance similar to that obtained using the 41-year data.

  6. Design of gefitinib-loaded poly (l-lactic acid) microspheres via a supercritical anti-solvent process for dry powder inhalation.

    PubMed

    Lin, Qing; Liu, Guijin; Zhao, Ziyi; Wei, Dongwei; Pang, Jiafeng; Jiang, Yanbin

    2017-10-30

    To develop a safer, more stable and potent formulation of gefitinib (GFB), microspheres of GFB encapsulated into poly (l-lactic acid) (PLLA) have been prepared by supercritical anti-solvent (SAS) technology in this study. Operating factors were optimized using a selected OA16(4^5) orthogonal array design, and the properties of the raw material and SAS processed samples were characterized by different methods. The results show that the GFB-loaded PLLA particles prepared were spherical, having a smaller and narrower particle size compared with raw GFB. The optimal GFB-loaded PLLA sample was prepared with less aggregation, the highest GFB loading (15.82%) and smaller size (D50 = 2.48 μm, which meets the size requirement for dry powder inhalers). The results of XRD and DSC indicate that GFB is encapsulated into the PLLA matrix in a polymorphic form different from raw GFB. FT-IR results show that the chemical structure of GFB does not change after the SAS process. The results of in vitro release show that release of the optimal sample was slower than that of raw GFB particles. Moreover, the results of in vitro anti-cancer trials show that the optimal sample had a higher cytotoxicity than raw GFB. After blending with sieved lactose, the flowability and aerosolization performance of the optimal sample for DPI were improved, with the angle of repose, emitted dose and fine particle fraction improving from 38.4° to 23°, from 63.21% to >90%, and from 23.37% to >30%, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    NASA Astrophysics Data System (ADS)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m, and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data, and clip-harvest data from two different clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated in order to estimate the mean to within a standardized level of uncertainty that will be used to guide sampling effort across all vegetation types (i.e., estimated to within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
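
    The standardized uncertainty target mentioned above (estimating the mean to within ±10% with 95% confidence) maps onto a routine sample-size calculation, n ≈ (t × CV / 0.10)², iterated because the t quantile depends on n. The coefficients of variation below are placeholders, not NEON biomass or LAI data.

```python
# Routine calculation implied by the "+/-10% of the mean with 95% confidence"
# criterion; the CV values are placeholders, not NEON data.
import numpy as np
from scipy.stats import t

def sample_size_for_relative_error(cv, rel_error=0.10, conf=0.95):
    n = 2
    for _ in range(100):                               # fixed-point iteration on n
        t_crit = t.ppf(1 - (1 - conf) / 2, df=n - 1)
        n_new = max(2, int(np.ceil((t_crit * cv / rel_error) ** 2)))
        if n_new == n:
            break
        n = n_new
    return n

for cv in (0.2, 0.4, 0.8):
    print(f"CV = {cv:.1f}: n = {sample_size_for_relative_error(cv)}")
```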

  8. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Treesearch

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observation (n), for predicting the soil bulk density with a...

  9. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  10. A comparative review of methods for comparing means using partially paired data.

    PubMed

    Guo, Beibei; Yuan, Ying

    2017-06-01

    In medical experiments with the objective of testing the equality of two means, data are often partially paired by design or because of missing data. The partially paired data represent a combination of paired and unpaired observations. In this article, we review and compare nine methods for analyzing partially paired data, including the two-sample t-test, paired t-test, corrected z-test, weighted t-test, pooled t-test, optimal pooled t-test, multiple imputation method, mixed model approach, and the test based on a modified maximum likelihood estimate. We compare the performance of these methods through extensive simulation studies that cover a wide range of scenarios with different effect sizes, sample sizes, and correlations between the paired variables, as well as true underlying distributions. The simulation results suggest that when the sample size is moderate, the test based on the modified maximum likelihood estimator is generally superior to the other approaches when the data is normally distributed and the optimal pooled t-test performs the best when the data is not normally distributed, with well-controlled type I error rates and high statistical power; when the sample size is small, the optimal pooled t-test is to be recommended when both variables have missing data and the paired t-test is to be recommended when only one variable has missing data.

  11. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amount of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect traits variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analysis involving different traits.

  12. Application of ultra-high energy hollow cathode helium-silver laser (224.3 nm) as Jc's, grain size surface's promoter for Ir-optimally doped-Mg0.94Ir0.06B2 superconductors

    NASA Astrophysics Data System (ADS)

    Elsabawy, Khaled M.; Fallatah, Ahmed M.; Alharthi, Salman S.

    2018-07-01

    For the first time, a high-energy helium-silver laser, which belongs to the category of metal-vapor lasers, was applied as a microstructure promoter for an optimally Ir-doped MgB2 sample. The Ir-optimally doped Mg0.94Ir0.06B2 superconducting sample was selected from an article previously published by one of the authors. The samples were irradiated with three different doses (1, 2 and 3 h) from an ultrahigh-energy He-Ag laser with an average power of 103 W/cm2 at a distance of 3 cm. Superconducting measurements and micro-structural features were investigated as a function of He-Ag laser irradiation dose. The results indicated that irradiation with the ultrahigh-energy He-Ag laser reduced the grain sizes, and consequently the measured Jc values were enhanced and increased. Furthermore, the Tc-offsets of all irradiated samples are better than that of the non-irradiated Mg0.94Ir0.06B2.

  13. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    USGS Publications Warehouse

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  14. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  15. Optimization of Scat Detection Methods for a Social Ungulate, the Wild Pig, and Experimental Evaluation of Factors Affecting Detection of Scat.

    PubMed

    Keiter, David A; Cunningham, Fred L; Rhodes, Olin E; Irwin, Brian J; Beasley, James C

    2016-01-01

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  16. Optimization of scat detection methods for a social ungulate, the wild pig, and experimental evaluation of factors affecting detection of scat

    DOE PAGES

    Keiter, David A.; Cunningham, Fred L.; Rhodes, Jr., Olin E.; ...

    2016-05-25

    Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. In conclusion, knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.

  17. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
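
    For the original dichotomous-endpoint case, the design idea reduces to a small search: find the smallest total sample size whose anticipated standard errors for each arm's event rate and for their difference all meet pre-specified bounds. The planning rates and precision bounds below are placeholders.

```python
# Binomial special case of the design described in the abstract: smallest total
# sample size meeting anticipated standard-error constraints on each arm's event
# rate and on their difference. Planning values are placeholders.
import math

def min_total_n(p1, p2, se1_max, se2_max, se_diff_max):
    """Smallest (n1, n2) meeting all three anticipated-SE constraints."""
    n1_min = math.ceil(p1 * (1 - p1) / se1_max ** 2)
    n2_min = math.ceil(p2 * (1 - p2) / se2_max ** 2)
    best = None
    for n1 in range(n1_min, n1_min + 500):
        v1 = p1 * (1 - p1) / n1
        if v1 >= se_diff_max ** 2:          # arm 1 alone already exceeds the bound
            continue
        n2 = max(n2_min, math.ceil(p2 * (1 - p2) / (se_diff_max ** 2 - v1)))
        if best is None or n1 + n2 < best[0]:
            best = (n1 + n2, n1, n2)
    return best

total, n1, n2 = min_total_n(p1=0.3, p2=0.5, se1_max=0.08, se2_max=0.08, se_diff_max=0.10)
print(f"n1 = {n1}, n2 = {n2}, total = {total}")
```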

  18. Sampling bee communities using pan traps: alternative methods increase sample size

    USDA-ARS?s Scientific Manuscript database

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  19. Purification of complex samples: Implementation of a modular and reconfigurable droplet-based microfluidic platform with cascaded deterministic lateral displacement separation modules

    PubMed Central

    Pudda, Catherine; Boizot, François; Verplanck, Nicolas; Revol-Cavalier, Frédéric; Berthier, Jean; Thuaire, Aurélie

    2018-01-01

    Particle separation in microfluidic devices is a common problem in sample preparation for biology. Deterministic lateral displacement (DLD) is efficiently implemented as a size-based fractionation technique to separate two populations of particles around a specific size. However, real biological samples contain components of many different sizes and a single DLD separation step is not sufficient to purify these complex samples. When connecting several DLD modules in series, pressure balancing at the DLD outlets of each step becomes critical to ensure an optimal separation efficiency. A generic microfluidic platform is presented in this paper to optimize pressure balancing, when DLD separation is connected either to another DLD module or to a different microfluidic function. This is made possible by generating droplets at T-junctions connected to the DLD outlets. Droplets act as pressure controllers, which perform at the same time the encapsulation of DLD sorted particles and the balance of output pressures. The optimized pressures to apply on DLD modules and on T-junctions are determined by a general model that ensures the equilibrium of the entire platform. The proposed separation platform is completely modular and reconfigurable since the same predictive model applies to any cascaded DLD modules of the droplet-based cartridge. PMID:29768490

  20. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
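
    The allocation rule quoted above has a closed form: the optimal ratio of group sizes equals the ratio of population standard deviations divided by the square root of the ratio of unit sampling costs. Below is a minimal sketch of that classical rule under a fixed budget; the function name, budget, and example values are illustrative assumptions, and the paper's own optimization-plus-screening procedure is not reproduced here.

```python
# A minimal sketch (not the authors' algorithm) of the classical cost-optimal
# allocation rule: n1/n2 = (sigma1/sigma2) * sqrt(c2/c1). All values are
# illustrative assumptions.
import math

def classical_allocation(budget, sigma1, sigma2, c1, c2):
    """Split a sampling budget between two groups so that
    n1/n2 = (sigma1/sigma2) * sqrt(c2/c1), then round down."""
    ratio = (sigma1 / sigma2) * math.sqrt(c2 / c1)   # n1 relative to n2
    n2 = budget / (c2 + c1 * ratio)                  # solve c1*n1 + c2*n2 = budget
    n1 = ratio * n2
    return int(n1), int(n2)

if __name__ == "__main__":
    # Heteroscedastic groups (SD 8 vs 4) with unequal unit costs (2 vs 1).
    print(classical_allocation(budget=600, sigma1=8.0, sigma2=4.0, c1=2.0, c2=1.0))
```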

  1. Guidelines for sampling aboveground biomass and carbon in mature central hardwood forests

    Treesearch

    Martin A. Spetich; Stephen R. Shifley

    2017-01-01

    As impacts of climate change expand, determining accurate measures of forest biomass and associated carbon storage in forests is critical. We present sampling guidance for 12 combinations of percent error, plot size, and alpha levels by disturbance regime to help determine the optimal size of plots to estimate aboveground biomass and carbon in an old-growth Central...

  2. Development of a magnetic lab-on-a-chip for point-of-care sepsis diagnosis

    NASA Astrophysics Data System (ADS)

    Schotter, Joerg; Shoshi, Astrit; Brueckl, Hubert

    2009-05-01

    We present design criteria, operation principles and experimental examples of magnetic marker manipulation for our magnetic lab-on-a-chip prototype. It incorporates both magnetic sample preparation and detection by embedded GMR-type magnetoresistive sensors and is optimized for the automated point-of-care detection of four different sepsis-indicative cytokines directly from about 5 μl of whole blood. The sample volume, magnetic particle size and cytokine concentration determine the microfluidic volume, sensor size and dimensioning of the magnetic gradient field generators. By optimizing these parameters to the specific diagnostic task, best performance is expected with respect to sensitivity, analysis time and reproducibility.

  3. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm, respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
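
    As a rough illustration of the approach described above, the sketch below runs Bayesian optimization over voxel size and smoothing kernel using scikit-optimize's gp_minimize, assuming that library is available. The objective is a synthetic stand-in for the cross-validated MAE of an age-prediction model; the search bounds and all names are assumptions, not the paper's pipeline.

```python
# A minimal sketch of Bayesian optimization over resampling parameters,
# assuming scikit-optimize is installed. The objective below is purely
# synthetic; in practice it would resample the scans to `voxel` mm, smooth
# with `fwhm` mm, refit the age model, and return the cross-validated MAE.
from skopt import gp_minimize
from skopt.space import Real

def cv_mae(params):
    voxel, fwhm = params
    # Synthetic surrogate for cross-validated MAE in years (illustrative only).
    return 5.0 + 0.02 * (voxel - 3.7) ** 2 + 0.03 * (fwhm - 3.7) ** 2

space = [Real(1.0, 12.0, name="voxel_size_mm"),
         Real(0.0, 8.0, name="smoothing_fwhm_mm")]

result = gp_minimize(cv_mae, space, n_calls=30, random_state=0)
print("best (voxel, fwhm):", result.x, "best MAE:", round(result.fun, 3))
```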

  4. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. So an important aspect of the paper is showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.

  5. Malaria prevalence metrics in low- and middle-income countries: an assessment of precision in nationally-representative surveys.

    PubMed

    Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M

    2017-11-21

    One pillar of monitoring progress towards the Sustainable Development Goals is investment in high-quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators in nationally-representative household surveys remains unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed nets (ITNs) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost effective sampling.
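
    For orientation, the sketch below shows the standard design-effect calculation often used to size clustered prevalence surveys; it is not the Bayesian model used in the study, and all input values are illustrative assumptions.

```python
# A minimal sketch of the textbook design-effect calculation for a clustered
# prevalence survey (not the Bayesian model used in the paper). Inputs are
# illustrative assumptions.
from math import ceil
from scipy.stats import norm

def cluster_sample_size(p, rel_precision, icc, cluster_size, alpha=0.05):
    """Children needed to estimate prevalence p within +/- rel_precision*p,
    inflating the simple-random-sampling size by the design effect."""
    z = norm.ppf(1 - alpha / 2)
    d = rel_precision * p                        # absolute margin of error
    n_srs = z ** 2 * p * (1 - p) / d ** 2        # simple random sampling size
    deff = 1 + (cluster_size - 1) * icc          # design effect
    return ceil(n_srs * deff)

# Example: 8% parasite prevalence, +/-20% relative precision, ICC 0.15,
# 20 children sampled per cluster.
print(cluster_sample_size(p=0.08, rel_precision=0.20, icc=0.15, cluster_size=20))
```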

  6. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously compromise the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
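
    As a point of reference for the stratified case, the sketch below implements the textbook cost-aware optimal (Neyman-type) allocation, n_h proportional to N_h * S_h / sqrt(c_h); it is not the paper's IST-specific derivation, and the stratum values are assumptions.

```python
# A minimal sketch of cost-aware optimal (Neyman-type) allocation across
# strata: n_h proportional to N_h * S_h / sqrt(c_h). Values are illustrative
# assumptions, not taken from the surveys analysed in the paper.
def optimal_allocation(total_n, stratum_sizes, stratum_sds, unit_costs):
    weights = [N * S / c ** 0.5
               for N, S, c in zip(stratum_sizes, stratum_sds, unit_costs)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# Three strata of students with different sizes, response SDs, and costs.
print(optimal_allocation(total_n=500,
                         stratum_sizes=[4000, 2500, 1500],
                         stratum_sds=[12.0, 8.0, 15.0],
                         unit_costs=[1.0, 1.0, 2.0]))
```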

  7. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.

  8. Optimally estimating the sample mean from the sample size, median, mid-range, and/or mid-quartile range.

    PubMed

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-06-01

    The era of big data is coming, and evidence-based medicine is attracting increasing attention to improve decision making in medical practice via integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of treatment effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials use the median, the minimum and maximum values, or sometimes the first and third quartiles to report the results. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g., in the well-known method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach optimal estimation. Our proposed estimators not only improve the existing ones significantly but also retain the virtue of simplicity. The real data application indicates that our proposed estimators can serve as "rules of thumb" and will be widely applied in evidence-based medicine.
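
    For context, the sketch below implements the simple quantile-based estimators that this line of work builds on, plus a smoothly weighted variant that illustrates the idea of a sample-size-dependent weight; the particular weight function shown is an assumption for illustration, not the authors' formula.

```python
# A minimal sketch of estimating the sample mean from reported quantiles.
# The first two estimators are the simple ones the paper improves upon;
# `weighted_mean_minmax` illustrates the idea of a smooth, sample-size
# dependent weight, but the default weight here is an assumption, not the
# authors' formula.
def mean_from_min_med_max(a, m, b):
    return (a + 2 * m + b) / 4.0              # simple three-point estimator

def mean_from_quartiles(q1, m, q3):
    return (q1 + m + q3) / 3.0                # simple quartile-based estimator

def weighted_mean_minmax(a, m, b, n, weight=None):
    # Weight on the mid-range shrinks smoothly as n grows (illustrative).
    w = weight if weight is not None else 4.0 / (4.0 + n ** 0.75)
    return w * (a + b) / 2.0 + (1 - w) * m

print(mean_from_min_med_max(2.0, 10.0, 30.0))
print(mean_from_quartiles(6.0, 10.0, 18.0))
print(weighted_mean_minmax(2.0, 10.0, 30.0, n=50))
```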

  9. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    PubMed

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. A cluster-randomised trial with baseline observations always requires a larger sample size than the optimised stepped wedge trial. The hybrid design can always give an equally or more efficient design, but will be at most 5% more efficient. We provide a strategy for selecting a design if the optimal number of sequences is unfeasible. For a non-optimal number of sequences, the sample size may be reduced by allowing a proportion of observations before the first or after the final sequence has switched. Conclusion The standard stepped wedge trial is inefficient. To reduce sample sizes when a hybrid design is unfeasible, stepped wedge trial designs should have no observations before the first sequence switches or after the final sequence switches.

  10. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  11. Designing a multiple dependent state sampling plan based on the coefficient of variation.

    PubMed

    Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan

    2016-01-01

    A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic, which follows a normal distribution with unknown mean and variance. The optimal parameters of the proposed plan are obtained by solving a nonlinear optimization model that satisfies the given producer's and consumer's risks simultaneously while minimizing the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally, an example is given to illustrate the proposed plan.

  12. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    NASA Astrophysics Data System (ADS)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10⁸ ≤ M* ≤ 3 × 10¹¹ M⊙ h⁻² and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  13. RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.

    PubMed

    Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu

    2018-05-30

    One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests, widely distributed read counts and dispersions for different genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments such as the Cancer Genome Atlas (TCGA) can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way for power and sample size estimation for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.

  14. Energy Storage Sizing Taking Into Account Forecast Uncertainties and Receding Horizon Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Hug, Gabriela; Li, Xin

    Energy storage systems (ESS) have the potential to be very beneficial for applications such as reducing the ramping of generators, peak shaving, and balancing not only the variability introduced by renewable energy sources, but also the uncertainty introduced by errors in their forecasts. Optimal usage of storage may result in reduced generation costs and an increased use of renewable energy. However, optimally sizing these devices is a challenging problem. This paper aims to provide the tools to optimally size an ESS under the assumption that it will be operated under a model predictive control scheme and that the forecasts of the renewable energy resources include prediction errors. A two-stage stochastic model predictive control is formulated and solved, where the optimal usage of the storage is simultaneously determined along with the optimal generation outputs and size of the storage. Wind forecast errors are taken into account in the optimization problem via probabilistic constraints for which an analytical form is derived. This allows for the stochastic optimization problem to be solved directly, without using sampling-based approaches, and sizing the storage to account not only for a wide range of potential scenarios, but also for a wide range of potential forecast errors. In the proposed formulation, we account for the fact that errors in the forecast affect how the device is operated later in the horizon and that a receding horizon scheme is used in operation to optimally use the available storage.

  15. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in a finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  16. Evaluating information content of SNPs for sample-tagging in re-sequencing projects.

    PubMed

    Hu, Hao; Liu, Xiang; Jin, Wenfei; Hilger Ropers, H; Wienker, Thomas F

    2015-05-15

    Sample-tagging is designed for identification of accidental sample mix-up, which is a major issue in re-sequencing studies. In this work, we develop a model to measure the information content of SNPs, so that we can optimize a panel of SNPs that approaches the maximal information for discrimination. The analysis shows that as few as 60 optimized SNPs can differentiate the individuals in a population as large as the present world, and only 30 optimized SNPs are in practice sufficient for labeling up to 100 thousand individuals. In the simulated populations of 100 thousand individuals, the average Hamming distances generated by the optimized set of 30 SNPs are larger than 18, and the duality frequency is lower than 1 in 10 thousand. This strategy of sample discrimination proves robust for large sample sizes and across different datasets. The optimized sets of SNPs are designed for Whole Exome Sequencing, and a program is provided for SNP selection, allowing for customized SNP numbers and genes of interest. The sample-tagging plan based on this framework will improve re-sequencing projects in terms of reliability and cost-effectiveness.

  17. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  18. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.

  19. [Comparison study on sampling methods of Oncomelania hupensis snail survey in marshland schistosomiasis epidemic areas in China].

    PubMed

    An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang

    2016-06-29

    To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat experimental field was selected in Chayegang marshland near Henghu farm in the Poyang Lake region, and a full-coverage method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatially stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
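
    As a rough companion to the minimum sample sizes reported above, the sketch below shows the textbook minimum sample size under simple random sampling for a relative-error target, with a finite population correction; the input values are assumptions, not the survey's data.

```python
# A minimal sketch of the textbook minimum sample size under simple random
# sampling for a relative-error target, with a finite population correction.
# Inputs are illustrative assumptions, not the Poyang Lake survey values.
from math import ceil
from scipy.stats import norm

def srs_min_sample_size(mean, sd, rel_error, n_frames, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n0 = (z * sd / (rel_error * mean)) ** 2      # infinite-population size
    return ceil(n0 / (1 + n0 / n_frames))        # finite population correction

# Snail density with mean 1.2 and SD 2.5 per frame, 15% relative error,
# 2500 possible 1 m x 1 m frames in the 50 m x 50 m field.
print(srs_min_sample_size(mean=1.2, sd=2.5, rel_error=0.15, n_frames=2500))
```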

  20. Effect of kernel size and mill type on protein, milling yield, and baking quality of hard red spring wheat

    USDA-ARS?s Scientific Manuscript database

    Optimization of flour yield and quality is important in the milling industry. The objective of this study was to determine the effect of kernel size and mill type on flour yield and end-use quality. A hard red spring wheat composite sample was segregated, based on kernel size, into large, medium, ...

  1. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    ERIC Educational Resources Information Center

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  2. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    PubMed

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silica of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized condition, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in the whole blood were 76-108% with relative standard deviation of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Optimal Inspection of Imports to Prevent Invasive Pest Introduction.

    PubMed

    Chen, Cuicui; Epanchin-Niell, Rebecca S; Haight, Robert G

    2018-03-01

    The United States imports more than 1 billion live plants annually, an important and growing pathway for introduction of damaging nonnative invertebrates and pathogens. Inspection of imports is one safeguard for reducing pest introductions, but capacity constraints limit inspection effort. We develop an optimal sampling strategy to minimize the costs of pest introductions from trade by posing inspection as an acceptance sampling problem that incorporates key features of the decision context, including (i) simultaneous inspection of many heterogeneous lots, (ii) a lot-specific sampling effort, (iii) a budget constraint that limits total inspection effort, (iv) inspection error, and (v) an objective of minimizing cost from accepted defective units. We derive a formula for expected number of accepted infested units (expected slippage) given lot size, sample size, infestation rate, and detection rate, and we formulate and analyze the inspector's optimization problem of allocating a sampling budget among incoming lots to minimize the cost of slippage. We conduct an empirical analysis of live plant inspection, including estimation of plant infestation rates from historical data, and find that inspections optimally target the largest lots with the highest plant infestation rates, leaving some lots unsampled. We also consider that USDA-APHIS, which administers inspections, may want to continue inspecting all lots at a baseline level; we find that allocating any additional capacity, beyond a comprehensive baseline inspection, to the largest lots with the highest infestation rates allows inspectors to meet the dual goals of minimizing the costs of slippage and maintaining baseline sampling without substantial compromise. © 2017 Society for Risk Analysis.
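
    The paper derives its own slippage formula; the sketch below gives one simple formulation under stated simplifying assumptions (each sampled unit is inspected independently, an infested unit is detected with probability d, and detected units are removed before the lot is released), plus a greedy budget allocation that mirrors the qualitative finding that effort should go to large, high-infestation lots. All values and names are illustrative, and this is not the authors' derivation.

```python
# One simple formulation of expected slippage under the stated assumptions;
# an illustration of the quantities involved, not the formula derived in the
# paper.
def expected_slippage(lot_size, sample_size, infestation_rate, detection_rate):
    missed_in_sample = sample_size * infestation_rate * (1 - detection_rate)
    uninspected = (lot_size - sample_size) * infestation_rate
    return missed_in_sample + uninspected

def allocate_budget(lots, budget):
    """Greedy illustration (unit inspection cost assumed): spend effort where
    the marginal reduction in expected slippage (p * d per unit) is largest."""
    lots = sorted(lots, key=lambda L: L["p"] * L["d"], reverse=True)
    plan = []
    for L in lots:
        n = min(L["N"], budget)
        budget -= n
        plan.append((L["name"], n, expected_slippage(L["N"], n, L["p"], L["d"])))
    return plan

lots = [{"name": "A", "N": 500, "p": 0.08, "d": 0.7},
        {"name": "B", "N": 200, "p": 0.02, "d": 0.7},
        {"name": "C", "N": 800, "p": 0.05, "d": 0.7}]
print(allocate_budget(lots, budget=600))
```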

  4. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is about half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.

  5. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.

  6. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
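
    For reference, the sketch below computes a commonly used form of the required ("optimal") information size for a binary outcome: a standard two-group sample size calculation, optionally inflated for anticipated heterogeneity. The inputs are illustrative assumptions, not the simulation settings of the study.

```python
# A minimal sketch of a commonly used required ("optimal") information size
# for a meta-analysis with a binary outcome. Inputs are illustrative.
from math import ceil
from scipy.stats import norm

def optimal_information_size(control_risk, rrr, alpha=0.05, power=0.90,
                             i_squared=0.0):
    p1 = control_risk
    p2 = control_risk * (1 - rrr)                  # intervention group risk
    p_bar = (p1 + p2) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_total = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
    return ceil(n_total / (1 - i_squared))         # heterogeneity adjustment

# 10% control group risk, RRR = 20%, 50% anticipated heterogeneity.
print(optimal_information_size(control_risk=0.10, rrr=0.20, i_squared=0.50))
```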

  7. Sample allocation balancing overall representativeness and stratum precision.

    PubMed

    Diaz-Quijano, Fredi Alexander

    2018-05-07

    In large-scale surveys, it is often necessary to distribute a preset sample size among a number of strata. Researchers must decide between prioritizing overall representativeness and precision of stratum estimates. Hence, I evaluated different sample allocation strategies based on stratum size. The strategies evaluated herein included allocation proportional to stratum population; equal sample for all strata; and proportional to the natural logarithm, cubic root, and square root of the stratum population. This study considered that, for a preset sample size, the dispersion index of the stratum sampling fractions is correlated with the population estimator error, while the dispersion index of the stratum-specific sampling errors measures the inequality in the distribution of precision. Identification of a balanced and efficient strategy was based on comparing these two dispersion indices. Balance and efficiency of the strategies changed depending on overall sample size. As the sample to be distributed increased, the most efficient allocation strategies were, in turn: equal sample for each stratum; proportional to the logarithm, the cubic root, and the square root of the stratum population; and proportional to the stratum population. Depending on sample size, each of the strategies evaluated could be considered in optimizing the sample to keep both overall representativeness and stratum-specific precision. Copyright © 2018 Elsevier Inc. All rights reserved.
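
    A minimal sketch of the five allocation rules compared above is given below, together with a simple dispersion measure (coefficient of variation) of the stratum sampling fractions; the paper's exact dispersion index may differ, and the stratum populations are assumptions.

```python
# A minimal sketch of the five allocation strategies compared in the paper,
# with a simple dispersion index (coefficient of variation) of the stratum
# sampling fractions; the paper's exact index may differ.
import math
import statistics as st

def allocate(total_n, stratum_pops, rule):
    transforms = {
        "proportional": lambda N: N,
        "equal":        lambda N: 1.0,
        "log":          lambda N: math.log(N),
        "cubic_root":   lambda N: N ** (1 / 3),
        "square_root":  lambda N: math.sqrt(N),
    }
    w = [transforms[rule](N) for N in stratum_pops]
    return [total_n * wi / sum(w) for wi in w]

def cv_of_sampling_fractions(samples, stratum_pops):
    fractions = [n / N for n, N in zip(samples, stratum_pops)]
    return st.pstdev(fractions) / st.mean(fractions)

pops = [500_000, 120_000, 60_000, 15_000, 4_000]
for rule in ["proportional", "equal", "log", "cubic_root", "square_root"]:
    n_h = allocate(5_000, pops, rule)
    print(f"{rule:12s} CV of sampling fractions = "
          f"{cv_of_sampling_fractions(n_h, pops):.2f}")
```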

  8. Comparative study of soft thermal printing and lamination of dry thick photoresist films for the uniform fabrication of polymer MOEMS on small-sized samples

    NASA Astrophysics Data System (ADS)

    Abada, S.; Salvi, L.; Courson, R.; Daran, E.; Reig, B.; Doucet, J. B.; Camps, T.; Bardinal, V.

    2017-05-01

    A method called ‘soft thermal printing’ (STP) was developed to ensure the optimal transfer of 50 µm-thick dry epoxy resist films (DF-1050) onto small-sized samples. The aim was the uniform fabrication of high aspect ratio polymer-based MOEMS (micro-optical-electrical-mechanical system) on small and/or fragile samples, such as GaAs. The printing conditions were optimized, and the resulting thickness uniformity profiles were compared to those obtained via lamination and SU-8 standard spin-coating. Under the best conditions tested, STP and lamination produced similar results, with a maximum deviation from the central thickness of 3% along the sample surface, compared to greater than 40% for SU-8 spin-coating. Both methods were successfully applied to the collective fabrication of DF1050-based MOEMS designed for the dynamic focusing of VCSELs (vertical-cavity surface-emitting lasers). Similar, efficient electro-thermo-mechanical behaviour was obtained in both cases.

  9. Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun

    2011-10-01

    To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
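
    To illustrate the two nonconformity models named above, the sketch below computes lot acceptance probabilities with a hypergeometric model for small lots and a Poisson model for large lots, and searches for the smallest ordinary single sampling plan meeting producer's and consumer's risk targets. This is not the TRASP optimization itself, and the numbers are assumptions.

```python
# A minimal sketch of acceptance probabilities for a single sampling plan
# (n, c): hypergeometric for small lots, Poisson for large lots. The two-risk
# search designs an ordinary single sampling plan for illustration only.
from scipy.stats import hypergeom, poisson

def p_accept_small_lot(lot_size, defects_in_lot, n, c):
    # Accept if at most c nonconforming items appear in a sample of n.
    return hypergeom(lot_size, defects_in_lot, n).cdf(c)

def p_accept_large_lot(defect_rate, n, c):
    return poisson(defect_rate * n).cdf(c)

def smallest_plan(p_aql, p_lql, alpha=0.05, beta=0.10, n_max=2000):
    """Smallest (n, c) with producer's risk <= alpha at the AQL and
    consumer's risk <= beta at the limiting quality level (Poisson model)."""
    for n in range(1, n_max + 1):
        for c in range(0, n):
            if (p_accept_large_lot(p_aql, n, c) >= 1 - alpha and
                    p_accept_large_lot(p_lql, n, c) <= beta):
                return n, c
    return None

print(p_accept_small_lot(lot_size=200, defects_in_lot=8, n=32, c=1))
print(smallest_plan(p_aql=0.01, p_lql=0.05))
```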

  10. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  11. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
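
    A minimal sketch of the exact approach is shown below: power for testing H0: rho = rho0 in the one-way random-effects model follows from the F distribution, and the required number of subjects can be found by a simple search. This is the standard construction rather than the paper's specific algorithm, and the example values are assumptions.

```python
# A minimal sketch of exact power for testing H0: rho = rho0 against
# rho > rho0 in the one-way random-effects model, plus a search for the
# number of subjects. Standard F-distribution construction, illustrative only.
from scipy.stats import f as f_dist

def icc_test_power(n_subjects, k_raters, rho0, rho1, alpha=0.05):
    df1, df2 = n_subjects - 1, n_subjects * (k_raters - 1)
    theta0 = 1 + k_raters * rho0 / (1 - rho0)
    theta1 = 1 + k_raters * rho1 / (1 - rho1)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    # Reject when MSB/MSW > theta0 * f_crit; under rho1, MSB/MSW ~ theta1 * F.
    return f_dist.sf(theta0 * f_crit / theta1, df1, df2)

def min_subjects(k_raters, rho0, rho1, power=0.80, alpha=0.05, n_max=5000):
    for n in range(3, n_max):
        if icc_test_power(n, k_raters, rho0, rho1, alpha) >= power:
            return n
    return None

print(min_subjects(k_raters=3, rho0=0.4, rho1=0.6))
```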

  12. Preparation of Salicylic Acid Loaded Nanostructured Lipid Carriers Using Box-Behnken Design: Optimization, Characterization and Physicochemical Stability.

    PubMed

    Pantub, Ketrawee; Wongtrakul, Paveena; Janwitayanuchit, Wicharn

    2017-01-01

    Salicylic acid-loaded nanostructured lipid carriers (NLCs-SA) were developed and optimized using design of experiments (DOE). A 3-factor, 3-level Box-Behnken experimental design was applied to optimize nanostructured lipid carriers prepared by an emulsification method. The independent variables were total lipid concentration (X1), stearic acid to Lexol® GT-865 ratio (X2) and Tween® 80 concentration (X3), while particle size was the dependent variable (Y). The Box-Behnken design generated 15 runs, with the response optimizer set to minimize particle size. The optimized formulation consists of 10% total lipid, a mixture of stearic acid and capric/caprylic triglyceride at a 4:1 ratio, and 25% Tween® 80; this formulation was used to prepare both salicylic acid-loaded and unloaded carriers. Twenty-four hours after preparation, the particle sizes of the loaded and unloaded carriers were 189.62±1.82 nm and 369.00±3.37 nm, respectively. Response surface analysis revealed that the amount of total lipid is the main factor affecting the particle size of the lipid carriers. In addition, the stability studies showed a significant change in particle size over time. Compared with unloaded nanoparticles, the addition of salicylic acid into the particles resulted in a physically stable dispersion. After 30 days, sedimentation of the unloaded lipid carriers was clearly observed. Absolute values of the zeta potential of both systems were in the range of 3 to 18 mV, since the non-ionic surfactant Tween® 80, which provides a steric barrier, was used. Differential thermograms indicated a shift of the endothermic peak from 55°C for the α-crystal form in freshly prepared samples to 60°C for the β′-crystal form in stored samples. The presence of capric/caprylic triglyceride oil was found to enhance encapsulation efficiency up to 80% and to improve the stability of the particles.

  13. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    To represent complex artificial terrains such as the loess terraces of Shanxi Province in northwest China, the authors propose a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM). A total of 406 elevation points and 14 enclosed constrained lines are sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs are converted to a Grid-based DEM (G-DEM) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method can visualize the terrace steps very well when the combination of cell size and EIV is reasonable. The optimal combination is a cell size of 1 m and an EIV of 6 m. The case study also shows that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps in order to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of significance for the highly refined visualization of artificial terrains such as loess terraces.

  14. The relationship between offspring size and fitness: integrating theory and empiricism.

    PubMed

    Rollinson, Njal; Hutchings, Jeffrey A

    2013-02-01

    How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.

  15. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.).

    PubMed

    Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L

    2012-10-01

    Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
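    The CDmean idea can be sketched as follows for an intercept-only model: for a candidate calibration set, compute the expected reliability (CD) of the contrast between each selection candidate and the population mean, then average. The relationship matrix, heritability and set sizes below are made up, and this simplified sketch is not the authors' script.

    ```python
    import numpy as np

    def cd_mean(A, calib_idx, h2):
        """Mean expected reliability (CD) of the contrasts between each candidate and
        the population mean, for a given calibration set.  Simplified sketch assuming
        a single intercept as the only fixed effect."""
        N = A.shape[0]
        n = len(calib_idx)
        lam = (1.0 - h2) / h2                        # sigma_e^2 / sigma_g^2
        Z = np.zeros((n, N)); Z[np.arange(n), calib_idx] = 1.0
        M = np.eye(n) - np.ones((n, n)) / n          # projects out the intercept
        Cinv = np.linalg.inv(Z.T @ M @ Z + lam * np.linalg.inv(A))
        contrasts = np.eye(N) - np.ones((N, N)) / N  # candidate i vs population mean
        num = np.einsum('ij,jk,ik->i', contrasts, A - lam * Cinv, contrasts)
        den = np.einsum('ij,jk,ik->i', contrasts, A, contrasts)
        return np.mean(num / den)

    # Toy example: compare two random calibration sets of size 30 out of 100 candidates
    rng = np.random.default_rng(0)
    G = rng.normal(size=(100, 500))                  # hypothetical marker matrix
    A = G @ G.T / 500 + 1e-6 * np.eye(100)           # realized relationship matrix
    sets = [rng.choice(100, 30, replace=False) for _ in range(2)]
    print([round(cd_mean(A, s, h2=0.5), 3) for s in sets])
    ```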

  16. Optimal design of a plot cluster for monitoring

    Treesearch

    Charles T. Scott

    1993-01-01

    Traveling costs incurred during extensive forest surveys make cluster sampling cost-effective. Clusters are specified by the type of plots, plot size, number of plots, and the distance between plots within the cluster. A method to determine the optimal cluster design when different plot types are used for different forest resource attributes is described. The method...

  17. Behavior and sensitivity of an optimal tree diameter growth model under data uncertainty

    Treesearch

    Don C. Bragg

    2005-01-01

    Using loblolly pine, shortleaf pine, white oak, and northern red oak as examples, this paper considers the behavior of potential relative increment (PRI) models of optimal tree diameter growth under data uncertainty. Recommendations on initial sample size and the PRI iterative curve fitting process are provided. Combining different state inventories prior to PRI model...

  18. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
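    Independently of the code supplied with the paper, the beta-binomial risk calculation it describes can be sketched as follows: the count of errors in a lot is given extra-binomial variation through an intra-cluster correlation, and an acceptance threshold is evaluated against user-specified prevalence levels. The thresholds, prevalences, ICC and the ICC-to-beta-parameter conversion below are illustrative assumptions, and collapsing the cluster effect into a single beta draw per lot is a simplification of the two-stage C-LQAS design.

    ```python
    from scipy.stats import betabinom

    def beta_params(p, icc):
        """Convert a prevalence p and intra-cluster correlation into beta parameters."""
        a = p * (1.0 - icc) / icc
        b = (1.0 - p) * (1.0 - icc) / icc
        return a, b

    def lqas_risks(n, d, p_lo, p_hi, icc):
        """Misclassification risks for a rule that 'accepts' a lot when the number of
        erroneous observations among n is <= d, with clustering approximated by a
        beta-binomial count."""
        a_hi, b_hi = beta_params(p_hi, icc)
        a_lo, b_lo = beta_params(p_lo, icc)
        alpha = betabinom.cdf(d, n, a_hi, b_hi)        # accept although quality is poor
        beta = 1.0 - betabinom.cdf(d, n, a_lo, b_lo)   # reject although quality is good
        return alpha, beta

    # Hypothetical design: n = 60 records, decision rule d = 12 errors,
    # 10% vs 30% error prevalence, ICC = 0.1
    print(lqas_risks(n=60, d=12, p_lo=0.10, p_hi=0.30, icc=0.10))
    ```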

  19. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  20. Design of pilot studies to inform the construction of composite outcome measures.

    PubMed

    Edland, Steven D; Ard, M Colin; Li, Weiwei; Jiang, Lingjing

    2017-06-01

    Composite scales have recently been proposed as outcome measures for clinical trials. For example, the Prodromal Alzheimer's Cognitive Composite (PACC) is the sum of z-score normed component measures assessing episodic memory, timed executive function, and global cognition. Alternative methods of calculating composite total scores using the weighted sum of the component measures that maximize signal-to-noise of the resulting composite score have been proposed. Optimal weights can be estimated from pilot data, but it is an open question how large a pilot trial is required to reliably estimate optimal weights. In this manuscript, we describe the calculation of optimal weights, and use large-scale computer simulations to investigate the question of how large a pilot study sample is required to inform the calculation of optimal weights. The simulations are informed by the pattern of decline observed in cognitively normal subjects enrolled in the Alzheimer's Disease Cooperative Study (ADCS) Prevention Instrument cohort study, restricting to n=75 subjects aged 75 and over with an ApoE E4 risk allele and therefore likely to have an underlying Alzheimer neurodegenerative process. In the context of secondary prevention trials in Alzheimer's disease, and using the components of the PACC, we found that pilot studies as small as 100 subjects are sufficient to meaningfully inform weighting parameters. Regardless of the pilot study sample size used to inform weights, the optimally weighted PACC consistently outperformed the standard PACC in terms of statistical power to detect treatment effects in a clinical trial. Pilot studies of size 300 produced weights that achieved near-optimal statistical power, and reduced the required sample size relative to the standard PACC by more than half. These simulations suggest that modestly sized pilot studies, comparable in size to a phase 2 clinical trial, are sufficient to inform the construction of composite outcome measures. Although these findings apply only to the PACC in the context of prodromal AD, the observation that weights only have to approximate the optimal weights to achieve near-optimal performance should generalize. Performing a pilot study or phase 2 trial to inform the weighting of proposed composite outcome measures is highly cost-effective. The net effect of more efficient outcome measures is that smaller trials will be required to test novel treatments. Alternatively, second generation trials can use prior clinical trial data to inform weighting, so that greater efficiency can be achieved as we move forward.
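    The weighting that maximizes the signal-to-noise ratio of a weighted-sum composite (mean change divided by the standard deviation of the weighted sum) is proportional to Σ⁻¹μ, where μ and Σ are the mean change vector and covariance matrix of the components estimated from pilot data. A minimal sketch with made-up pilot estimates:

    ```python
    import numpy as np

    def optimal_weights(mean_change, cov_change):
        """Weights proportional to Sigma^{-1} mu maximize the signal-to-noise ratio
        (w' mu) / sqrt(w' Sigma w) of a weighted-sum composite."""
        w = np.linalg.solve(cov_change, mean_change)
        return w / np.abs(w).sum()        # scale is arbitrary; normalize for readability

    def snr(w, mu, sigma):
        return (w @ mu) / np.sqrt(w @ sigma @ w)

    # Hypothetical pilot estimates for three z-scored components
    mu = np.array([0.20, 0.10, 0.15])                 # mean decline over the trial
    sigma = np.array([[1.0, 0.3, 0.2],
                      [0.3, 1.0, 0.4],
                      [0.2, 0.4, 1.0]])

    w_opt = optimal_weights(mu, sigma)
    w_std = np.ones(3) / 3                            # standard equally weighted composite
    print("optimal weights:", np.round(w_opt, 3))
    print("SNR optimal vs standard:", round(snr(w_opt, mu, sigma), 3),
          round(snr(w_std, mu, sigma), 3))
    ```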

  1. Nanometer-sized alumina packed microcolumn solid-phase extraction combined with field-amplified sample stacking-capillary electrophoresis for the speciation analysis of inorganic selenium in environmental water samples.

    PubMed

    Duan, Jiankuan; Hu, Bin; He, Man

    2012-10-01

    In this paper, a new method of nanometer-sized alumina packed microcolumn SPE combined with field-amplified sample stacking (FASS)-CE-UV detection was developed for the speciation analysis of inorganic selenium in environmental water samples. Self-synthesized nanometer-sized alumina was packed in a microcolumn as the SPE adsorbent to retain Se(IV) and Se(VI) simultaneously at pH 6, and the retained inorganic selenium was eluted by concentrated ammonia. The eluent was used for FASS-CE-UV analysis after NH₃ evaporation. The factors affecting the preconcentration of both Se(IV) and Se(VI) by SPE and FASS were studied, and the optimal CE separation conditions for Se(IV) and Se(VI) were obtained. Under the optimal conditions, LODs of 57 ng L⁻¹ for Se(IV) and 71 ng L⁻¹ for Se(VI) were obtained. The developed method was validated by the analysis of the certified reference material GBW(E)080395 (environmental water), and the determined value was in good agreement with the certified value. It was also successfully applied to the speciation analysis of inorganic selenium in environmental water samples, including Yangtze River water, spring water, and tap water. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

    The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
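    The divergence can be illustrated with a small Monte Carlo: estimate minimum-variance portfolio weights from T observations of N uncorrelated, unit-variance assets and compare the true risk of the estimated portfolio with that of the optimum, which asymptotically behaves like 1/(1 - N/T). The sketch below assumes i.i.d. Gaussian returns and is only meant to show the qualitative blow-up near N/T = 1.

    ```python
    import numpy as np

    def risk_ratio(N, T, n_rep=100, seed=0):
        """True variance of the estimated minimum-variance portfolio divided by the
        variance of the true optimum, for N i.i.d. unit-variance assets and T samples."""
        rng = np.random.default_rng(seed)
        ratios = []
        for _ in range(n_rep):
            R = rng.standard_normal((T, N))          # returns; true covariance = identity
            S = np.cov(R, rowvar=False)              # sample covariance
            w = np.linalg.solve(S, np.ones(N))
            w /= w.sum()                             # estimated minimum-variance weights
            ratios.append((w @ w) / (1.0 / N))       # true risk w'Iw vs optimum 1/N
        return np.mean(ratios)

    for q in (0.2, 0.5, 0.8, 0.9):                   # q = N / T
        T = 400
        N = int(T * q)
        print(f"N/T = {q:.1f}: simulated risk ratio {risk_ratio(N, T):5.2f}, "
              f"asymptotic theory {1.0 / (1.0 - q):5.2f}")
    ```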

  3. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable, we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
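    For the continuous case, the within-subject coefficient of variation can be estimated from a balanced one-way random-effects layout as the square root of the within-subject mean square divided by the grand mean. The simulated data below are illustrative, and the sketch omits the paper's variance-stabilized confidence interval.

    ```python
    import numpy as np

    def wscv(data):
        """Within-subject coefficient of variation from a balanced one-way
        random-effects layout; data has shape (subjects, replicates)."""
        k, n = data.shape
        subject_means = data.mean(axis=1)
        ms_within = ((data - subject_means[:, None]) ** 2).sum() / (k * (n - 1))
        return np.sqrt(ms_within) / data.mean()

    # Simulated reliability study: 40 subjects, 3 repeated measurements each
    rng = np.random.default_rng(7)
    true_subject = rng.normal(100.0, 10.0, size=(40, 1))    # between-subject variation
    measurements = true_subject + rng.normal(0.0, 5.0, size=(40, 3))
    print("estimated WSCV:", round(wscv(measurements), 3))  # roughly 5/100 = 0.05
    ```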

  4. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
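    Under the first (one-way random-effects) model, the variance of the estimated group mean with k workers and n repeats per worker is σ²_B/k + σ²_W/(k·n), which is why a fixed budget is best spent on many workers with only two or three repeats each. The cost figures and variance components in the sketch below are made up.

    ```python
    def var_mean(k, n, var_between, var_within):
        """Variance of the estimated group mean under a one-way random-effects model."""
        return var_between / k + var_within / (k * n)

    def best_allocation(budget, cost_worker, cost_measure, var_between, var_within):
        """Enumerate (workers, repeats) pairs affordable within the budget and return
        the one minimizing the variance of the estimated mean."""
        best = None
        for n in range(2, 11):                       # at least 2 repeats per worker
            k = int(budget // (cost_worker + n * cost_measure))
            if k < 2:
                continue
            v = var_mean(k, n, var_between, var_within)
            if best is None or v < best[2]:
                best = (k, n, v)
        return best

    # Hypothetical costs and log-scale variance components
    print(best_allocation(budget=5000, cost_worker=100, cost_measure=50,
                          var_between=0.4, var_within=0.8))
    ```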

  5. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
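    The core selection problem can be conveyed with a toy example: choose at most one design option (for example, a sample size) per candidate drug so that total expected net present value is maximized within a phase 3 budget. The authors solve this with integer programming; the sketch below simply brute-forces a tiny hypothetical portfolio.

    ```python
    from itertools import product

    # Hypothetical candidates: for each drug, a list of (design, cost, expected NPV)
    # options; choosing None means the drug is not developed.
    drugs = {
        "A": [("small trial", 40, 90), ("large trial", 70, 130)],
        "B": [("small trial", 30, 50), ("large trial", 55, 85)],
        "C": [("single trial", 60, 100)],
    }
    BUDGET = 150

    def optimize_portfolio(drugs, budget):
        """Pick at most one design per drug maximizing total expected NPV within budget."""
        best_value, best_plan = 0.0, {}
        choices = [[None] + list(opts) for opts in drugs.values()]
        for combo in product(*choices):
            cost = sum(o[1] for o in combo if o)
            if cost > budget:
                continue
            value = sum(o[2] for o in combo if o)
            if value > best_value:
                best_value = value
                best_plan = {d: o[0] for d, o in zip(drugs, combo) if o}
        return best_value, best_plan

    print(optimize_portfolio(drugs, BUDGET))
    ```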

  6. OLT-centralized sampling frequency offset compensation scheme for OFDM-PON.

    PubMed

    Chen, Ming; Zhou, Hui; Zheng, Zhiwei; Deng, Rui; Chen, Qinghui; Peng, Miao; Liu, Cuiwei; He, Jing; Chen, Lin; Tang, Xionggui

    2017-08-07

    We propose an optical line terminal (OLT)-centralized sampling frequency offset (SFO) compensation scheme for adaptively-modulated OFDM-PON systems. By using the proposed SFO scheme, the phase rotation and inter-symbol interference (ISI) caused by SFOs between OLT and multiple optical network units (ONUs) can be centrally compensated in the OLT, which reduces the complexity of ONUs. Firstly, the optimal fast Fourier transform (FFT) size is identified in the intensity-modulated and direct-detection (IMDD) OFDM system in the presence of SFO. Then, the proposed SFO compensation scheme including phase rotation modulation (PRM) and length-adaptive OFDM frame has been experimentally demonstrated in the downlink transmission of an adaptively modulated optical OFDM with the optimal FFT size. The experimental results show that up to ± 300 ppm SFO can be successfully compensated without introducing any receiver performance penalties.

  7. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  8. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  9. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  10. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with the required sample size decreasing rapidly as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited by confounds between study length and number of waves and between error variance and GRR, and by parameter values that fall largely outside the range of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  11. Development of a magnetic solid-phase extraction coupled with high-performance liquid chromatography method for the analysis of polyaromatic hydrocarbons.

    PubMed

    Ma, Yan; Xie, Jiawen; Jin, Jing; Wang, Wei; Yao, Zhijian; Zhou, Qing; Li, Aimin; Liang, Ying

    2015-07-01

    A novel magnetic solid phase extraction coupled with high-performance liquid chromatography method was established to analyze polyaromatic hydrocarbons in environmental water samples. The extraction conditions, including the amount of extraction agent, extraction time, pH and the surface structure of the magnetic extraction agent, were optimized. The results showed that the amount of extraction agent and extraction time significantly influenced the extraction performance. The increase in the specific surface area, the enlargement of pore size, and the reduction of particle size could enhance the extraction performance of the magnetic microsphere. The optimized magnetic extraction agent possessed a high surface area of 1311 m²/g, a large pore size of 6-9 nm, and a small particle size of 6-9 μm. The limit of detection for phenanthrene and benzo[g,h,i]perylene in the developed analysis method was 3.2 and 10.5 ng/L, respectively. When applied to river water samples, the spiked recoveries of phenanthrene and benzo[g,h,i]perylene ranged from 89.5% to 98.6% and from 82.9% to 89.1%, respectively. Phenanthrene was detected over a concentration range of 89-117 ng/L in three water samples withdrawn from the midstream of the Huai River, and benzo[g,h,i]perylene was below the detection limit. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2015-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10⁹ to 1×10¹¹ M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a 'Color-Enhanced' sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  13. MaNGA: Target selection and Optimization

    NASA Astrophysics Data System (ADS)

    Wake, David

    2016-01-01

    The 6-year SDSS-IV MaNGA survey will measure spatially resolved spectroscopy for 10,000 nearby galaxies using the Sloan 2.5m telescope and the BOSS spectrographs with a new fiber arrangement consisting of 17 individually deployable IFUs. We present the simultaneous design of the target selection and IFU size distribution to optimally meet our targeting requirements. The requirements for the main samples were to use simple cuts in redshift and magnitude to produce an approximately flat number density of targets as a function of stellar mass, ranging from 1×10⁹ to 1×10¹¹ M⊙, and radial coverage to either 1.5 (Primary sample) or 2.5 (Secondary sample) effective radii, while maximizing S/N and spatial resolution. In addition we constructed a "Color-Enhanced" sample where we required 25% of the targets to have an approximately flat number density in the color and mass plane. We show how these requirements are met using simple absolute magnitude (and color) dependent redshift cuts applied to an extended version of the NASA Sloan Atlas (NSA), how this determines the distribution of IFU sizes and the resulting properties of the MaNGA sample.

  14. Sample size for positive and negative predictive value in diagnostic research using case–control designs

    PubMed Central

    Steinberg, David M.; Fine, Jason; Chappell, Rick

    2009-01-01

    Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
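    Given case-control estimates of sensitivity and specificity, PPV and NPV in a target population follow from Bayes' rule once the prevalence is specified. A minimal sketch with illustrative numbers (the allocation and sample size formulas of the paper are not reproduced here):

    ```python
    def predictive_values(sensitivity, specificity, prevalence):
        """PPV and NPV implied by sensitivity, specificity and disease prevalence."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1.0 - specificity) * (1.0 - prevalence))
        npv = (specificity * (1.0 - prevalence)) / (
            specificity * (1.0 - prevalence) + (1.0 - sensitivity) * prevalence)
        return ppv, npv

    # Example: a test with 90% sensitivity and 95% specificity applied in
    # populations with 2% and 20% prevalence
    for prev in (0.02, 0.20):
        ppv, npv = predictive_values(0.90, 0.95, prev)
        print(f"prevalence {prev:.0%}: PPV = {ppv:.2f}, NPV = {npv:.3f}")
    ```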

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fong, Erika J.; Huang, Chao; Hamilton, Julie

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world to chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  16. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can achieve greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
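    For a two-arm comparison of means with known arm variances, the randomization rate that minimizes the variance of the treatment-difference estimate allocates patients in proportion to the arm standard deviations (Neyman allocation); in a Bayesian sequential design these would be updated from the accruing posterior. A minimal fixed-variance sketch:

    ```python
    def neyman_allocation(sd_a, sd_b, n_total):
        """Allocation minimizing Var(mean_A - mean_B) = sd_a^2/n_a + sd_b^2/n_b
        subject to n_a + n_b = n_total: allocate proportionally to the SDs."""
        rate_a = sd_a / (sd_a + sd_b)
        n_a = int(round(n_total * rate_a))
        return n_a, n_total - n_a

    def var_diff(sd_a, sd_b, n_a, n_b):
        return sd_a**2 / n_a + sd_b**2 / n_b

    n_a, n_b = neyman_allocation(sd_a=2.0, sd_b=1.0, n_total=120)
    print("optimal split:", n_a, n_b)
    print("variance (optimal vs equal):",
          round(var_diff(2.0, 1.0, n_a, n_b), 4),
          round(var_diff(2.0, 1.0, 60, 60), 4))
    ```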

  17. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    PubMed

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    PubMed

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  19. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  20. PAVENET OS: A Compact Hard Real-Time Operating System for Precise Sampling in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki

    The paper shows a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both types of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that the hybrid multithreading realizes compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).

  1. Optimizing variable radius plot size and LiDAR resolution to model standing volume in conifer forests

    Treesearch

    Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak

    2016-01-01

    The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....

  2. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  3. Effect of Pore Size and Porosity on the Biomechanical Properties and Cytocompatibility of Porous NiTi Alloys

    PubMed Central

    Jian, Yu-Tao; Yang, Yue; Tian, Tian; Stanford, Clark; Zhang, Xin-Ping; Zhao, Ke

    2015-01-01

    Five types of porous Nickel-Titanium (NiTi) alloy samples of different porosities and pore sizes were fabricated. According to compressive and fracture strengths, three groups of porous NiTi alloy samples underwent further cytocompatibility experiments. Porous NiTi alloys exhibited a lower Young’s modulus (2.0 GPa ~ 0.8 GPa). Both compressive strength (108.8 MPa ~ 56.2 MPa) and fracture strength (64.6 MPa ~ 41.6 MPa) decreased gradually with increasing mean pore size (MPS). Cells grew and spread well on all porous NiTi alloy samples. Cells attached more strongly on control group and blank group than on all porous NiTi alloy samples (p < 0.05). Cell adhesion on porous NiTi alloys was correlated negatively to MPS (277.2 μm ~ 566.5 μm; p < 0.05). More cells proliferated on control group and blank group than on all porous NiTi alloy samples (p < 0.05). Cellular ALP activity on all porous NiTi alloy samples was higher than on control group and blank group (p < 0.05). The porous NiTi alloys with optimized pore size could be a potential orthopedic material. PMID:26047515

  4. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π_N, is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π_C, π_E}. We show that setting π_N equal to the value of π_C that holds under H_0 underestimates the overall sample size requirement. Given {π_C, ψ} or {π_C, δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π_N, the corresponding minimum asymptotic sample size, N = n_E + n_C, and the optimal allocation ratio, γ = n_E/n_C. We find that optimal allocations are increasingly imbalanced as ψ increases, with γ_ψ < 1 and γ_δ ≈ 1/γ_ψ, and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ > 2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
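    The paper's design parameters come from constrained maximum likelihood; purely as a point of reference, the familiar unconstrained normal-approximation sample size for a risk-difference noninferiority margin can be sketched as below. This simpler calculation is not the paper's method and may understate the requirement.

    ```python
    from math import ceil
    from scipy.stats import norm

    def n_noninferiority_rd(p_c, p_e, delta, alpha=0.025, power=0.9, ratio=1.0):
        """Per-group sample sizes for a noninferiority test on the risk-difference
        scale, H0: p_e - p_c <= -delta, using the unconstrained normal approximation
        with variances evaluated at the anticipated rates.  ratio = n_e / n_c."""
        z_a, z_b = norm.ppf(1.0 - alpha), norm.ppf(power)
        var = p_e * (1.0 - p_e) / ratio + p_c * (1.0 - p_c)
        n_c = (z_a + z_b) ** 2 * var / (p_e - p_c + delta) ** 2
        return ceil(n_c), ceil(ratio * n_c)

    # Example: both arms expected at 85% response, margin delta = 0.10, 1:1 allocation
    print(n_noninferiority_rd(p_c=0.85, p_e=0.85, delta=0.10))
    ```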

  5. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning

    PubMed Central

    Ichnowski, Jeffrey; Prins, Jan F.; Alterovitz, Ron

    2014-01-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU’s cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot’s configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot. PMID:25419474

  6. Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning.

    PubMed

    Ichnowski, Jeffrey; Prins, Jan F; Alterovitz, Ron

    2014-05-01

    We present CARRT* (Cache-Aware Rapidly Exploring Random Tree*), an asymptotically optimal sampling-based motion planner that significantly reduces motion planning computation time by effectively utilizing the cache memory hierarchy of modern central processing units (CPUs). CARRT* can account for the CPU's cache size in a manner that keeps its working dataset in the cache. The motion planner progressively subdivides the robot's configuration space into smaller regions as the number of configuration samples rises. By focusing configuration exploration in a region for periods of time, nearest neighbor searching is accelerated since the working dataset is small enough to fit in the cache. CARRT* also rewires the motion planning graph in a manner that complements the cache-aware subdivision strategy to more quickly refine the motion planning graph toward optimality. We demonstrate the performance benefit of our cache-aware motion planning approach for scenarios involving a point robot as well as the Rethink Robotics Baxter robot.

  7. Optimization and determination of Cd (II) in different environmental water samples with dispersive liquid-liquid microextraction preconcentration combined with inductively coupled plasma optical emission spectrometry.

    PubMed

    Salahinejad, Maryam; Aflaki, Fereydoon

    2011-06-01

    Dispersive liquid-liquid microextraction followed by inductively coupled plasma-optical emission spectrometry has been investigated for determination of Cd(II) ions in water samples. Ammonium pyrrolidine dithiocarbamate was used as chelating agent. Several factors influencing the microextraction efficiency of Cd(II) ions, such as extracting and dispersing solvent type and their volumes, pH, sample volume, and salting effect, were optimized. The optimization was performed both via one variable at a time and central composite design methods, and the optimum conditions were selected. Both optimization methods showed nearly the same results: sample volume, 5 mL; dispersive solvent, ethanol; dispersive solvent volume, 2 mL; extracting solvent, chloroform; extracting solvent volume, 200 μL; pH and salt amount do not significantly affect the microextraction efficiency. The limits of detection and quantification were 0.8 and 2.5 ng L⁻¹, respectively. The relative standard deviation for five replicate measurements of 0.50 mg L⁻¹ of Cd(II) was 4.4%. The recoveries for the spiked real samples from tap, mineral, river, dam, and sea water samples ranged from 92.2% to 104.5%.

  8. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    DTIC Science & Technology

    2016-03-01

    NNLS problems arise when the Energy Conserving Sampling and Weighting (ECSW) hyper-reduction procedure is used in constructing a reduced-order model. Implementations based on ScaLAPACK and performance results are presented, including the reduced mesh sizes produced by each solver in the ECSW hyper-reduction step. Keywords: nonnegative least squares, model order reduction, hyper-reduction, Energy Conserving Sampling and Weighting.
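    The serial building block of the ECSW hyper-reduction step is a single nonnegative least squares solve; SciPy's dense Lawson-Hanson routine gives a convenient reference point (the parallel ScaLAPACK-based solvers of the report are not reproduced here). The matrix sizes and sparsity below are made up.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical ECSW-style problem: find sparse nonnegative element weights x
    # such that G @ x reproduces the training quantities b.
    rng = np.random.default_rng(0)
    G = rng.random((50, 200))            # rows: training snapshots, cols: mesh elements
    x_true = np.zeros(200)
    x_true[rng.choice(200, 15, replace=False)] = rng.random(15) * 5.0
    b = G @ x_true

    x, residual = nnls(G, b)
    print("nonzero weights:", np.count_nonzero(x > 1e-8), "residual:", round(residual, 6))
    ```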

  9. Optimization of the two-sample rank Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of the relations between x and the interference variable y. Attention is given to conditions in the absence of a signal, the probability of detection of an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
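    The flavour of a two-sample rank detector under a Neyman-Pearson false-alarm constraint can be conveyed with a short simulation: the statistic is the sum of the ranks of the n channel observations within the pooled observation-plus-reference-noise sample, and the threshold is set from the noise-only distribution to meet the prescribed false-alarm probability. This is an illustrative sketch, not the detector structure derived in the paper; the sample sizes, signal level and false-alarm rate are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def rank_sum(x, y_ref):
        """Sum of the ranks of the observations x within the pooled sample (x, y_ref)."""
        pooled = np.concatenate([x, y_ref])
        ranks = pooled.argsort().argsort() + 1
        return ranks[: len(x)].sum()

    n, m, pfa = 16, 64, 0.01        # observations, reference noise samples, false-alarm rate

    # Threshold from the noise-only (H0) distribution, estimated by Monte Carlo
    h0_stats = np.array([rank_sum(rng.normal(size=n), rng.normal(size=m))
                         for _ in range(20000)])
    threshold = np.quantile(h0_stats, 1.0 - pfa)

    # Detection probability for a weak constant signal in Gaussian noise
    h1_stats = np.array([rank_sum(rng.normal(loc=0.7, size=n), rng.normal(size=m))
                         for _ in range(5000)])
    print("threshold:", threshold, " Pd:", round(np.mean(h1_stats > threshold), 3))
    ```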

  10. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-to-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that employment of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, the linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.

  11. Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size

    PubMed Central

    Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa

    2016-01-01

    Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce seizures in a neonate. Current AEDs exhibit sub-optimal efficacy and several randomized control trials (RCT) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of a RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td) and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size with median differences of 30.7 fold (IQR: 13.7–40.0) across a range of AED protocols, Td and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median fold increase of 3.2 (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1 fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median fold increase in sample size of 2.6 (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, appropriate outcome measure, and control for differences in Td between groups in analysis will be valid and minimise sample size. PMID:27824913

  12. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739

  13. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between the SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling and the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. For each sampling strategy, sample sizes were gradually reduced, and each sample size was evaluated with 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and of the correlation coefficients between the SWC and soil/terrain properties were calculated to quantify accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤ 10%, whereas at least 72 randomly sampled sites were needed to keep the REs and CVs of the estimated correlation coefficients ≤ 10%. Compared with the other sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to keep the REs and CVs of the estimated correlation coefficients ≤ 10%. This suggests that, when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. These findings will be useful for optimal SWC sampling design.
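    The resampling logic used in the study (drawing many random subsets of a given size and summarizing the relative error and coefficient of variation of the resulting correlation estimates) can be sketched as follows; the synthetic SWC and terrain data, the site count and the exact RE/CV definitions are illustrative assumptions.

    ```python
    import numpy as np

    def subsample_uncertainty(x, y, sample_size, n_rep=3000, seed=0):
        """Relative error and coefficient of variation of the correlation between
        x and y estimated from random subsets of a given size."""
        rng = np.random.default_rng(seed)
        r_full = np.corrcoef(x, y)[0, 1]
        r_sub = np.empty(n_rep)
        for i in range(n_rep):
            idx = rng.choice(len(x), size=sample_size, replace=False)
            r_sub[i] = np.corrcoef(x[idx], y[idx])[0, 1]
        re = np.mean(np.abs(r_sub - r_full)) / abs(r_full)
        cv = np.std(r_sub) / abs(np.mean(r_sub))
        return re, cv

    # Synthetic hillslope: 100 sites, SWC correlated with a terrain attribute
    rng = np.random.default_rng(1)
    terrain = rng.normal(size=100)
    swc = 0.6 * terrain + rng.normal(scale=0.8, size=100)

    for n in (12, 36, 72):
        re, cv = subsample_uncertainty(swc, terrain, n)
        print(f"n = {n:3d}: RE = {re:.2%}, CV = {cv:.2%}")
    ```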

  14. Topology synthesis and size optimization of morphing wing structures

    NASA Astrophysics Data System (ADS)

    Inoyama, Daisaku

This research demonstrates a novel topology and size optimization methodology for synthesis of distributed actuation systems with specific applications to morphing air vehicle structures. The main emphasis is placed on the topology and size optimization problem formulations and the development of computational modeling concepts. The analysis model is developed to meet several important criteria: It must allow a rigid-body displacement, as well as a variation in planform area, with minimum strain on structural members while retaining acceptable numerical stability for finite element analysis. Topology optimization is performed on a semi-ground structure with design variables that control the system configuration. In effect, the optimization process assigns morphing members as "soft" elements, non-morphing load-bearing members as "stiff" elements, and non-existent members as "voids." The optimization process also determines the optimum actuator placement, where each actuator is represented computationally by equal and opposite nodal forces with soft axial stiffness. In addition, the configuration of attachments that connect the morphing structure to a non-morphing structure is determined simultaneously. Several different optimization problem formulations are investigated to understand their potential benefits in solution quality, as well as the meaningfulness of the formulations. Extensions and enhancements to the initial concept and problem formulations are made to accommodate multiple-configuration definitions. In addition, the principal issues of external-load dependency and the reversibility of a design, as well as the appropriate selection of a reference configuration, are addressed in the research. The methodology to control actuator distributions and concentrations is also discussed. Finally, the strategy to transfer the topology solution to the sizing optimization is developed, and cross-sectional areas of existent structural members are optimized under applied aerodynamic loads. That is, the optimization process is implemented in sequential order: The actuation system layout is first determined through a multi-disciplinary topology optimization process, and then the thickness or cross-sectional area of each existent member is optimized under given constraints and boundary conditions. Sample problems are solved to demonstrate the potential capabilities of the presented methodology. The research demonstrates an innovative structural design procedure from a computational perspective and opens new insights into the potential design requirements and characteristics of morphing structures.

  15. Microwave resonances in dielectric samples probed in Corbino geometry: simulation and experiment.

    PubMed

    Felger, M Maximilian; Dressel, Martin; Scheffler, Marc

    2013-11-01

    The Corbino approach, where the sample of interest terminates a coaxial cable, is a well-established method for microwave spectroscopy. If the sample is dielectric and if the probe geometry basically forms a conductive cavity, this combination can sustain well-defined microwave resonances that are detrimental for broadband measurements. Here, we present detailed simulations and measurements to investigate the resonance frequencies as a function of sample and probe size and of sample permittivity. This allows a quantitative optimization to increase the frequency of the lowest-lying resonance.

  16. R software package based statistical optimization of process components to simultaneously enhance the bacterial growth, laccase production and textile dye decolorization with cytotoxicity study

    PubMed Central

    Dudhagara, Pravin; Tank, Shantilal

    2018-01-01

The thermophilic bacterium Bacillus licheniformis U1 was used in the present study for the optimization of bacterial growth (R1), laccase production (R2), and synthetic disperse blue DBR textile dye decolorization (R3). Preliminary optimization was performed by the one-variable-at-a-time (OVAT) approach using four media components: dye concentration, copper sulphate concentration, pH, and inoculum size. Based on the OVAT results, further statistical optimization of R1, R2 and R3 was performed with a Box–Behnken design (BBD) using response surface methodology (RSM) in R software with the R Commander package. A total of 29 experimental runs were conducted to construct a quadratic model. The model indicated that a dye concentration of 110 ppm, copper sulphate of 0.2 mM, pH 7.5 and an inoculum size of 6% v/v were optimal for maximizing laccase production and bacterial growth, whereas maximum dye decolorization was achieved in media containing 110 ppm dye, 0.6 mM copper sulphate, pH 6 and a 6% v/v inoculum. The R-software-predicted R² values for R1, R2 and R3 were 0.9917, 0.9831 and 0.9703, respectively, compared with Design-Expert (Stat-Ease) (DOE)-predicted R² values of 0.9893, 0.9822 and 0.8442, respectively. The values obtained by the R software were more precise, reliable and reproducible than those from the DOE model. Laccase production increased 1.80-fold, and dye decolorization was enhanced 2.24-fold, using the optimized medium compared with the initial experiments. Moreover, the laccase-treated sample demonstrated a weaker cytotoxic effect on L132 and MCF-7 cell lines than the untreated sample in the MTT assay. The higher cell viability and lower cytotoxicity observed in the laccase-treated sample suggest a potential application of bacterial laccase for reducing dye toxicity in the design of rapid biodegradation processes. PMID:29718934
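The response-surface step can be sketched generically as fitting a full second-order model to coded design points and then searching the fitted surface for its optimum. The design matrix, factor names, and response below are hypothetical stand-ins and do not reproduce the study's BBD data or its R Commander workflow.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quad_features(X):
    """Full second-order model: intercept, linear, squared, and two-way interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
         + [X[:, i] ** 2 for i in range(k)] \
         + [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Hypothetical coded design points (e.g. from a Box-Behnken design) and responses.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(29, 4))            # 4 coded factors, 29 runs
y = 5 - (X[:, 0] - 0.3) ** 2 - 2 * (X[:, 1] + 0.2) ** 2 + 0.5 * X[:, 2] + rng.normal(0, 0.1, 29)

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def predicted(x):
    return quad_features(np.atleast_2d(x)) @ beta

# Maximize the fitted response inside the coded region [-1, 1]^4.
res = minimize(lambda x: -predicted(x)[0], x0=np.zeros(4), bounds=[(-1, 1)] * 4)
print("optimal coded factor settings:", np.round(res.x, 2))
print("predicted response at optimum:", round(-res.fun, 3))
```

The coded optimum would then be decoded back to physical units (ppm, mM, pH, % v/v) for a confirmation run, which is the role the confirmation experiments play in the study above.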

  17. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  18. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    NASA Astrophysics Data System (ADS)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  19. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

Background Even for moderate-size inputs, there is a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore, giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space; a small number of samples is then sufficient for statistical inference. Contribution In this paper, we give a mini-review of the state of the art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. Beyond that, we also give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  20. Fully automatic characterization and data collection from crystals of biological macromolecules.

    PubMed

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier; Bowler, Matthew W

    2015-08-01

    Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  1. Simplex optimization of headspace factors for headspace gas chromatography determination of residual solvents in pharmaceutical products.

    PubMed

    Grodowska, Katarzyna; Parczewski, Andrzej

    2013-01-01

The purpose of the present work was to find the optimum conditions for headspace gas chromatography (HS-GC) determination of residual solvents that usually appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination. Group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were taken into consideration in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion and was considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in several ways simultaneously. The optimum HS conditions turned out to be different for the two groups of solvents tested, which shows that the influence of the experimental conditions (factors) depends on the analyte properties. The optimization resulted in a significant signal increase (from seven- to fifteen-fold).
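The variable-size simplex strategy is closely related to the Nelder-Mead method; the sketch below runs SciPy's Nelder-Mead implementation on a hypothetical smooth response standing in for the measured relative peak area of one analyte. The factor ranges and response shape are assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

def neg_peak_area(factors):
    """Hypothetical response surface standing in for the measured relative GC peak area.
    factors = [sample volume (mL), equilibration time (min), temperature (deg C), NaCl (% w/v)]."""
    vol, time, temp, nacl = factors
    area = (np.exp(-((vol - 2.0) / 1.5) ** 2)
            * np.exp(-((time - 20.0) / 10.0) ** 2)
            * np.exp(-((temp - 80.0) / 15.0) ** 2)
            * (1.0 + 0.3 * np.tanh((nacl - 10.0) / 5.0)))
    return -area  # the simplex minimizes, so negate the optimization criterion

start = np.array([1.0, 10.0, 60.0, 5.0])         # initial headspace conditions
result = minimize(neg_peak_area, start, method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 500})
print("optimum HS conditions:", np.round(result.x, 2))
print("relative peak area at optimum:", round(-result.fun, 3))
```

In practice each simplex vertex would correspond to an actual HS-GC run, so the "function evaluation" is an experiment rather than a formula; the search logic is the same.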

  2. LENGTH-HETEROGENEITY POLYMERASE CHAIN REACTION (LH-PCR) AS AN INDICATOR OF STREAM SANITARY AND ECOLOGICAL CONDITION: OPTIMAL SAMPLE SIZE AND HOLDING CONDITIONS

    EPA Science Inventory

The use of coliform plate count data to assess stream sanitary and ecological condition is limited by the need to store samples at 4°C and analyze them within a 24-hour period. We are testing LH-PCR as an alternative tool to assess the bacterial load of streams, offering a cost ...

  3. Thermomechanical treatment for improved neutron irradiation resistance of austenitic alloy (Fe-21Cr-32Ni)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. Tan; J. T. Busby; H. J. M. Chichester

    2013-06-01

An optimized thermomechanical treatment (TMT) applied to austenitic alloy 800H (Fe-21Cr-32Ni) had shown significant improvements in corrosion resistance and basic mechanical properties. This study examined its effect on radiation resistance by irradiating both solution-annealed (SA) and TMT samples at 500 °C to 3 dpa. Microstructural characterization using transmission electron microscopy revealed that the radiation-induced Frank loops, voids, and γ′-Ni3(Ti,Al) precipitates had similar sizes in the SA and TMT samples. The amounts of radiation-induced defects and, more significantly, of γ′ precipitates were, however, reduced in the TMT samples. These reductions are estimated to lower the radiation hardening by approximately 40.9% compared with the SA samples. This study indicates that the optimized TMT is an economical approach for effective overall property improvement.

  4. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    PubMed Central

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; Coe, Jesse; Conrad, Chelsie E.; Dörner, Katerina; Sierra, Raymond G.; Stevenson, Hilary P.; Camacho-Alanis, Fernanda; Grant, Thomas D.; Nelson, Garrett; James, Daniel; Calero, Guillermo; Wachter, Rebekka M.; Spence, John C. H.; Weierstall, Uwe; Fromme, Petra; Ros, Alexandra

    2015-01-01

    The advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ∼4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. This method will also permit an analysis of the dependence of crystal quality on crystal size. PMID:26798818

  5. Microfluidic sorting of protein nanocrystals by size for X-ray free-electron laser diffraction

    DOE PAGES

    Abdallah, Bahige G.; Zatsepin, Nadia A.; Roy-Chowdhury, Shatabdi; ...

    2015-08-19

We report that the advent and application of the X-ray free-electron laser (XFEL) has uncovered the structures of proteins that could not previously be solved using traditional crystallography. While this new technology is powerful, optimization of the process is still needed to improve data quality and analysis efficiency. One area is sample heterogeneity, where variations in crystal size (among other factors) lead to the requirement of large data sets (and thus 10–100 mg of protein) for determining accurate structure factors. To decrease sample dispersity, we developed a high-throughput microfluidic sorter operating on the principle of dielectrophoresis, whereby polydisperse particles can be transported into various fluid streams for size fractionation. Using this microsorter, we isolated several milliliters of photosystem I nanocrystal fractions ranging from 200 to 600 nm in size as characterized by dynamic light scattering, nanoparticle tracking, and electron microscopy. Sorted nanocrystals were delivered in a liquid jet via the gas dynamic virtual nozzle into the path of the XFEL at the Linac Coherent Light Source. We obtained diffraction to ~4 Å resolution, indicating that the small crystals were not damaged by the sorting process. We also observed the shape transforms of photosystem I nanocrystals, demonstrating that our device can optimize data collection for the shape transform-based phasing method. Using simulations, we show that narrow crystal size distributions can significantly improve merged data quality in serial crystallography. From this proof-of-concept work, we expect that the automated size-sorting of protein crystals will become an important step for sample production by reducing the amount of protein needed for a high quality final structure and the development of novel phasing methods that exploit inter-Bragg reflection intensities or use variations in beam intensity for radiation damage-induced phasing. Ultimately, this method will also permit an analysis of the dependence of crystal quality on crystal size.

  6. A gold nanoparticle-based immunochromatographic assay: the influence of nanoparticulate size.

    PubMed

    Lou, Sha; Ye, Jia-ying; Li, Ke-qiang; Wu, Aiguo

    2012-03-07

Four different-sized gold nanoparticles (14 nm, 16 nm, 35 nm and 38 nm) were prepared and conjugated with an antibody for a gold nanoparticle-based immunochromatographic assay, which has many applications in both basic research and clinical diagnosis. This study focuses on the conjugation efficiency of the antibody with the different-sized gold nanoparticles. The effects of factors such as pH value and antibody concentration were quantitatively assessed using spectroscopic methods after adding 1 wt% NaCl, which induces gold nanoparticle aggregation. It was found that different-sized gold nanoparticles had different conjugation efficiencies under different pH values and antibody concentrations. Among the four sizes, the 16 nm gold nanoparticles require the lowest antibody concentration to avoid aggregation but are less sensitive for detecting real samples than the 38 nm gold nanoparticles. Consequently, each size of gold nanoparticle should be labeled with antibody at its own optimal pH value and antibody concentration. These findings will be helpful for the future application of antibody-labeled gold nanoparticles in fields such as clinical diagnosis and environmental analysis.

  7. Fractionating power and outlet stream polydispersity in asymmetrical flow field-flow fractionation. Part I: isocratic operation.

    PubMed

    Williams, P Stephen

    2016-05-01

    Asymmetrical flow field-flow fractionation (As-FlFFF) has become the most commonly used of the field-flow fractionation techniques. However, because of the interdependence of the channel flow and the cross flow through the accumulation wall, it is the most difficult of the techniques to optimize, particularly for programmed cross flow operation. For the analysis of polydisperse samples, the optimization should ideally be guided by the predicted fractionating power. Many experimentalists, however, neglect fractionating power and rely on light scattering detection simply to confirm apparent selectivity across the breadth of the eluted peak. The size information returned by the light scattering software is assumed to dispense with any reliance on theory to predict retention, and any departure of theoretical predictions from experimental observations is therefore considered of no importance. Separation depends on efficiency as well as selectivity, however, and efficiency can be a strong function of retention. The fractionation of a polydisperse sample by field-flow fractionation never provides a perfectly separated series of monodisperse fractions at the channel outlet. The outlet stream has some residual polydispersity, and it will be shown in this manuscript that the residual polydispersity is inversely related to the fractionating power. Due to the strong dependence of light scattering intensity and its angular distribution on the size of the scattering species, the outlet polydispersity must be minimized if reliable size data are to be obtained from the light scattering detector signal. It is shown that light scattering detection should be used with careful control of fractionating power to obtain optimized analysis of polydisperse samples. Part I is concerned with isocratic operation of As-FlFFF, and part II with programmed operation.

  8. Around Marshall

    NASA Image and Video Library

    1996-06-10

The dart and associated launching system was developed by engineers at MSFC to collect a sample of the aluminum oxide particles during the static fire testing of the Shuttle's solid rocket motor. The dart is launched through the exhaust and recovered post test. The particles are collected on sticky copper tapes affixed to a cylindrical shaft in the dart. A protective sleeve draws over the tape after the sample is collected to prevent contamination. The sample is analyzed under a scanning electron microscope at high magnification and a particle size distribution is determined. This size distribution is input into the analytical model to predict the radiative heating rates from the motor exhaust. Good prediction models are essential to optimizing the development of the thermal protection system for the Shuttle.

  9. Optimization of metals and plastics recovery from electric cable wastes using a plate-type electrostatic separator.

    PubMed

    Richard, Gontran; Touhami, Seddik; Zeghloul, Thami; Dascalescu, Lucien

    2017-02-01

    Plate-type electrostatic separators are commonly employed for the selective sorting of conductive and non-conductive granular materials. The aim of this work is to identify the optimal operating conditions of such equipment, when employed for separating copper and plastics from either flexible or rigid electric wire wastes. The experiments are performed according to the response surface methodology, on samples composed of either "calibrated" particles, obtained by manually cutting of electric wires at a predefined length (4mm), or actual machine-grinded scraps, characterized by a relatively-wide size distribution (1-4mm). The results point out the effect of particle size and shape on the effectiveness of the electrostatic separation. Different optimal operating conditions are found for flexible and rigid wires. A separate processing of the two classes of wire wastes is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  11. Molecular Level Design Principle behind Optimal Sizes of Photosynthetic LH2 Complex: Taming Disorder through Cooperation of Hydrogen Bonding and Quantum Delocalization.

    PubMed

    Jang, Seogjoo; Rivera, Eva; Montemayor, Daniel

    2015-03-19

    The light harvesting 2 (LH2) antenna complex from purple photosynthetic bacteria is an efficient natural excitation energy carrier with well-known symmetric structure, but the molecular level design principle governing its structure-function relationship is unknown. Our all-atomistic simulations of nonnatural analogues of LH2 as well as those of a natural LH2 suggest that nonnatural sizes of LH2-like complexes could be built. However, stable and consistent hydrogen bonding (HB) between bacteriochlorophyll and the protein is shown to be possible only near naturally occurring sizes, leading to significantly smaller disorder than for nonnatural ones. Extensive quantum calculations of intercomplex exciton transfer dynamics, sampled for a large set of disorder, reveal that taming the negative effect of disorder through a reliable HB as well as quantum delocalization of the exciton is a critical mechanism that makes LH2 highly functional, which also explains why the natural sizes of LH2 are indeed optimal.

  12. Grain Size and Phase Purity Characterization of U 3Si 2 Pellet Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoggan, Rita E.; Tolman, Kevin R.; Cappia, Fabiola

Characterization of U3Si2 fresh fuel pellets is important for quality assurance and validation of the finished product. Grain size measurement methods, phase identification methods using scanning electron microscopes equipped with energy dispersive spectroscopy and x-ray diffraction, and phase quantification methods via image analysis have been developed and implemented on U3Si2 pellet samples. A wide variety of samples have been characterized, including representative pellets from an initial irradiation experiment and samples produced using optimized methods to enhance phase purity from an extended fabrication effort. The average grain size for initial pellets was between 16 and 18 µm. The typical average grain size for pellets from the extended fabrication was between 20 and 30 µm, with some samples exhibiting irregular grain growth. Pellets from the latter half of the extended fabrication had a bimodal grain size distribution consisting of coarsened grains (>80 µm) surrounded by the typical (20-30 µm) grain structure around the surface. Phases identified in the initial uranium silicide pellets included U3Si2 as the main phase composing about 80 vol.%, Si-rich phases (USi and U5Si4) composing about 13 vol.%, and UO2 composing about 5 vol.%. Initial batches from the extended U3Si2 pellet fabrication had similar phases and phase quantities. The latter half of the extended-fabrication pellet batches did not contain Si-rich phases and had between 1-5% UO2, achieving U3Si2 phase purity between 95 vol.% and 98 vol.%. The amount of UO2 in sintered U3Si2 pellets is correlated to the length of time between U3Si2 powder fabrication and pellet formation. These measurements provide information necessary to optimize fabrication efforts and a baseline for future work on this fuel compound.

  13. Exploiting Size-Dependent Drag and Magnetic Forces for Size-Specific Separation of Magnetic Nanoparticles

    PubMed Central

    Rogers, Hunter B.; Anani, Tareq; Choi, Young Suk; Beyers, Ronald J.; David, Allan E.

    2015-01-01

    Realizing the full potential of magnetic nanoparticles (MNPs) in nanomedicine requires the optimization of their physical and chemical properties. Elucidation of the effects of these properties on clinical diagnostic or therapeutic properties, however, requires the synthesis or purification of homogenous samples, which has proved to be difficult. While initial simulations indicated that size-selective separation could be achieved by flowing magnetic nanoparticles through a magnetic field, subsequent in vitro experiments were unable to reproduce the predicted results. Magnetic field-flow fractionation, however, was found to be an effective method for the separation of polydisperse suspensions of iron oxide nanoparticles with diameters greater than 20 nm. While similar methods have been used to separate magnetic nanoparticles before, no previous work has been done with magnetic nanoparticles between 20 and 200 nm. Both transmission electron microscopy (TEM) and dynamic light scattering (DLS) analysis were used to confirm the size of the MNPs. Further development of this work could lead to MNPs with the narrow size distributions necessary for their in vitro and in vivo optimization. PMID:26307980

  14. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
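As a hedged illustration of L2-regularized expected-shortfall portfolio optimization under a budget constraint, the sketch below uses the Rockafellar-Uryasev formulation and the cvxpy modeling library on simulated returns; this is one standard way of solving such a problem, not the support-vector-regression route discussed in the paper, and all parameter values are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_assets, n_obs = 20, 250
returns = rng.normal(0.0005, 0.01, size=(n_obs, n_assets))   # simulated daily returns

alpha = 0.05      # tail probability for the expected shortfall
lam = 5.0         # L2 regularization strength ("diversification pressure")

w = cp.Variable(n_assets)   # portfolio weights
t = cp.Variable()           # VaR-like auxiliary variable (Rockafellar-Uryasev)

losses = -returns @ w
es = t + cp.sum(cp.pos(losses - t)) / (alpha * n_obs)         # sample expected shortfall
objective = cp.Minimize(es + lam * cp.sum_squares(w))
constraints = [cp.sum(w) == 1]                                # budget constraint
cp.Problem(objective, constraints).solve()

print("weights:", np.round(w.value, 3))
```

Increasing `lam` pulls the solution toward the equal-weight portfolio, which is one way of reading the "diversification pressure" interpretation given above.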

  15. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size that yields optimal local convergence rates is bounded from below by a number that always lies between 1 and 2.
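The steepest-ascent procedures themselves are not reproduced here. As a loose illustration of the "step size between 0 and 2" idea, the sketch below applies an over-relaxation factor omega to an ordinary EM update for a two-component normal mixture (omega = 1 recovers plain EM); the data are simulated, and the partially identified samples of the paper are not modeled.

```python
import numpy as np
from scipy.stats import norm

def relaxed_em(x, omega=1.0, n_iter=200):
    """Two-component normal mixture fitted by EM with an over-relaxation factor omega
    applied to each parameter update (omega = 1 is plain EM; 0 < omega < 2 assumed)."""
    theta = np.array([0.5, np.percentile(x, 25), np.percentile(x, 75), x.std(), x.std()])
    for _ in range(n_iter):
        p, mu1, mu2, s1, s2 = theta
        # E-step: responsibilities of component 1
        d1 = p * norm.pdf(x, mu1, s1)
        d2 = (1 - p) * norm.pdf(x, mu2, s2)
        r = d1 / (d1 + d2)
        # M-step: the usual EM update
        mu1_new = np.sum(r * x) / r.sum()
        mu2_new = np.sum((1 - r) * x) / (1 - r).sum()
        em = np.array([
            r.mean(),
            mu1_new,
            mu2_new,
            np.sqrt(np.sum(r * (x - mu1_new) ** 2) / r.sum()),
            np.sqrt(np.sum((1 - r) * (x - mu2_new) ** 2) / (1 - r).sum()),
        ])
        theta = theta + omega * (em - theta)             # relaxed (step-size) update
        theta[0] = np.clip(theta[0], 1e-3, 1 - 1e-3)     # keep the mixing weight valid
        theta[3:] = np.maximum(theta[3:], 1e-3)          # keep standard deviations positive
    return theta

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 0.7, 200)])
print("omega = 1.0:", np.round(relaxed_em(x, omega=1.0), 3))
print("omega = 1.5:", np.round(relaxed_em(x, omega=1.5), 3))
```

Values of omega above 1 can accelerate local convergence, mirroring the observation above that the optimal step size lies between 1 and 2.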

  16. Phase-contrast x-ray computed tomography for biological imaging

    NASA Astrophysics Data System (ADS)

    Momose, Atsushi; Takeda, Tohoru; Itai, Yuji

    1997-10-01

We have shown so far that 3D structures in biological soft tissues, such as cancer, can be revealed by phase-contrast x-ray computed tomography using an x-ray interferometer. As a next step, we aim at applications of this technique to in vivo observation, including radiographic applications. For this purpose, the field of view should be larger than a few centimeters. Therefore, a larger x-ray interferometer should be used with x-rays of higher energy. We have evaluated the optimal x-ray energy from the aspect of dose as a function of sample size. Moreover, the spatial resolution required of the image sensor is discussed as a function of x-ray energy and sample size, based on the requirements of interference-fringe analysis.

  17. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
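A hedged sketch of the pattern-optimization step: simulated annealing that moves one site at a time to minimize the mean ordinary-kriging variance over a prediction grid, for an assumed exponential covariance model with made-up parameters. It illustrates the idea only; the study used kriging with external drift and a weighted shortest-distance criterion in addition to the kriging variance.

```python
import numpy as np

rng = np.random.default_rng(7)
sill, corr_range = 1.0, 30.0                    # assumed exponential covariance parameters
cov = lambda h: sill * np.exp(-h / corr_range)

# Prediction grid over a hypothetical 100 m x 100 m study area
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def mean_ok_variance(sites):
    """Mean ordinary-kriging variance over the grid for a given sample pattern."""
    n = len(sites)
    d_ss = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    K = np.ones((n + 1, n + 1))                 # kriging system with Lagrange multiplier
    K[:n, :n] = cov(d_ss)
    K[n, n] = 0.0
    d_sg = np.linalg.norm(sites[:, None] - grid[None, :], axis=-1)
    rhs = np.vstack([cov(d_sg), np.ones(len(grid))])
    lam = np.linalg.solve(K, rhs)
    return np.mean(sill - np.sum(lam * rhs, axis=0))

def anneal(n_sites=15, n_iter=3000, t0=0.1):
    """Simulated annealing: perturb one site at a time, accept by the Metropolis rule."""
    sites = rng.uniform(0, 100, size=(n_sites, 2))
    best = cur = mean_ok_variance(sites)
    best_sites = sites.copy()
    for k in range(n_iter):
        temp = t0 * (1 - k / n_iter) + 1e-6
        cand = sites.copy()
        i = rng.integers(n_sites)
        cand[i] = np.clip(cand[i] + rng.normal(0, 10, 2), 0, 100)
        val = mean_ok_variance(cand)
        if val < cur or rng.random() < np.exp((cur - val) / temp):
            sites, cur = cand, val
            if val < best:
                best, best_sites = val, cand.copy()
    return best_sites, best

sites, crit = anneal()
print("optimized mean kriging variance:", round(crit, 4))
```

Systematic (grid) patterns can be evaluated with the same `mean_ok_variance` criterion, which is how optimized and systematic designs were compared above.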

  18. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    PubMed Central

    Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.

    2016-01-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation. PMID:27313569

  19. Advantages of Unfair Quantum Ground-State Sampling.

    PubMed

    Zhang, Brian Hu; Wagenbreth, Gene; Martin-Mayor, Victor; Hen, Itay

    2017-04-21

The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive albeit coveted goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground-states of spin glasses differently than thermal samplers. We demonstrate that (i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers, (ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution, and (iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms.

  20. Integration of Microdialysis Sampling and Microchip Electrophoresis with Electrochemical Detection

    PubMed Central

    Mecker, Laura C.; Martin, R. Scott

    2009-01-01

    Here we describe the fabrication, optimization, and application of a microfluidic device that integrates microdialysis (MD) sampling, microchip electrophoresis (ME), and electrochemical detection (EC). The manner in which the chip is produced is reproducible and enables the fixed alignment of the MD/ME and ME/EC interfaces. Poly(dimethylsiloxane) (PDMS) -based valves were used for the discrete injection of sample from the hydrodynamic MD dialysate stream into a separation channel for analysis with ME. To enable the integration of ME with EC detection, a palladium decoupler was used to isolate the high voltages associated with electrophoresis from micron-sized carbon ink detection electrodes. Optimization of the ME/EC interface was needed to allow the use of biologically appropriate perfusate buffers containing high salt content. This optimization included changes in the fabrication procedure, increases in the decoupler surface area, and a programmed voltage shutoff. The ability of the MD/ME/EC system to sample a biological system was demonstrated by using a linear probe to monitor the stimulated release of dopamine from a confluent layer of PC 12 cells. To our knowledge, this is the first report of a microchip-based system that couples microdialysis sampling with microchip electrophoresis and electrochemical detection. PMID:19551945

  1. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.
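The sample-average-approximation (SAA) recipe with statistical optimality bounds can be illustrated on a much simpler stochastic program than the scheduling problem above. The newsvendor sketch below (all parameters assumed) replicates the sampled problem to estimate an upper bound on the true optimal expected profit and evaluates the best candidate on a large independent sample for a lower bound, mirroring the bound computation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(11)
price, cost = 5.0, 3.0                         # sell price and unit cost (assumed)
demand = lambda size: rng.lognormal(mean=3.0, sigma=0.5, size=size)

def profit(x, d):
    return price * np.minimum(x, d) - cost * x

def solve_saa(n_scen):
    """Approximately solve the sampled problem via the newsvendor quantile rule."""
    d = demand(n_scen)
    x_hat = np.quantile(d, (price - cost) / price)   # critical-fractile order quantity
    return x_hat, profit(x_hat, d).mean()

# Replicate the sampled problem to estimate a statistical upper bound on the true optimum ...
M, N = 20, 200
solutions, obj_values = zip(*(solve_saa(N) for _ in range(M)))
upper = np.mean(obj_values)

# ... and evaluate the best candidate on a large independent sample for a lower-bound estimate.
x_cand = solutions[int(np.argmax(obj_values))]
lower = profit(x_cand, demand(50_000)).mean()

print(f"candidate order quantity: {x_cand:.1f}")
print(f"estimated optimality gap: {upper - lower:.3f}")
```

The gap between the two bound estimates shrinks as the per-replication sample size N grows, which is the kind of scenario-count effect evaluated statistically in the paper.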

  2. A laser-deposition approach to compositional-spread discovery of materials on conventional sample sizes

    NASA Astrophysics Data System (ADS)

    Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  3. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance.

    PubMed

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; Ep Mundhofir, Farmaditya; Mh Faradz, Sultana; Hisatome, Ichiro

    2017-03-01

    High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1) which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by looking at the melting curve. Five sets of primers covering the translated region of APOA1 exons were designed with expected PCR product size of 100-400 bps. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C at 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C at 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C at 50 PCR cycles. In addition to the suitable procedures of DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that appropriate annealing temperature and PCR cycles were important factors in optimization of HRM technique for variant screening in APOA1 .

  4. Optimization of PCR Condition: The First Study of High Resolution Melting Technique for Screening of APOA1 Variance

    PubMed Central

    Wahyuningsih, Hesty; K Cayami, Ferdy; Bahrudin, Udin; A Sobirin, Mochamad; EP Mundhofir, Farmaditya; MH Faradz, Sultana; Hisatome, Ichiro

    2017-01-01

    Background High resolution melting (HRM) is a post-PCR technique for variant screening and genotyping based on the different melting points of DNA fragments. The advantages of this technique are that it is fast, simple, and efficient and has a high output, particularly for screening of a large number of samples. APOA1 encodes apolipoprotein A1 (apoA1) which is a major component of high density lipoprotein cholesterol (HDL-C). This study aimed to obtain an optimal quantitative polymerase chain reaction (qPCR)-HRM condition for screening of APOA1 variance. Methods Genomic DNA was isolated from a peripheral blood sample using the salting out method. APOA1 was amplified using the RotorGeneQ 5Plex HRM. The PCR product was visualized with the HRM amplification curve and confirmed using gel electrophoresis. The melting profile was confirmed by looking at the melting curve. Results Five sets of primers covering the translated region of APOA1 exons were designed with expected PCR product size of 100–400 bps. The amplified segments of DNA were amplicons 2, 3, 4A, 4B, and 4C. Amplicons 2, 3 and 4B were optimized at an annealing temperature of 60 °C at 40 PCR cycles. Amplicon 4A was optimized at an annealing temperature of 62 °C at 45 PCR cycles. Amplicon 4C was optimized at an annealing temperature of 63 °C at 50 PCR cycles. Conclusion In addition to the suitable procedures of DNA isolation and quantification, primer design and an estimated PCR product size, the data of this study showed that appropriate annealing temperature and PCR cycles were important factors in optimization of HRM technique for variant screening in APOA1. PMID:28331418

  5. The material from Lampung as coarse aggregate to substitute andesite for concrete-making

    NASA Astrophysics Data System (ADS)

    Amin, M.; Supriyatna, Y. I.; Sumardi, S.

    2018-01-01

    Andesite stone is usually used for split stone material in the concrete making. However, its availability is decreasing. Lampung province has natural resources that can be used for coarse aggregate materials to substitute andesite stone. These natural materials include limestone, feldspar stone, basalt, granite, and slags from iron processing waste. Therefore, a research on optimizing natural materials in Lampung to substitute andesite stone for concrete making is required. This research used laboratory experiment method. The research activities included making cubical object samples of 150 x 150 x 150 mm with material composition referring to a standard of K.200 and w/c 0.61. Concrete making by using varying types of aggregates (basalt, limestone, slag) and aggregate sizes (A = 5-15 mm, B = 15-25 mm, and 25-50 mm) was followed by compressive strength test. The results showed that the obtained optimal compressive strengths for basalt were 24.47 MPa for 50-150 mm aggregate sizes, 21.2 MPa for 15-25 mm aggregate sizes, and 20.7 MPa for 25-50 mm aggregate sizes. These results of basalt compressive strength values were higher than the same result for andesite (19.69 MPa for 50-150 mm aggregate sizes), slag (22.72 MPa for 50-150 mm aggregate sizes), and limestone (19.69 Mpa for 50-150 mm aggregate sizes). These results indicated that basalt, limestone, and slag aggregates were good enough to substitute andesite as materials for concrete making. Therefore, natural resources in Lampung can be optimized as construction materials in concrete making.

  6. Design and analysis of three-arm trials with negative binomially distributed endpoints.

    PubMed

    Mütze, Tobias; Munk, Axel; Friede, Tim

    2016-02-20

    A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
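The following heavily simplified sketch illustrates a Monte Carlo power assessment for a Wald-type non-inferiority test on the log rate ratio between the experimental and reference arms with negative binomially distributed endpoints. It uses a sample-based delta-method variance rather than the restricted maximum-likelihood variance estimator of the paper, ignores the placebo arm and the retention-of-effect hypothesis, and all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def nb_sample(mean, shape, size):
    """Negative binomial draws parameterized by mean and shape (dispersion) parameter."""
    p = shape / (shape + mean)
    return rng.negative_binomial(shape, p, size=size)

def wald_noninferiority_power(n, mu_exp=1.0, mu_ref=1.0, shape=2.0,
                              margin=np.log(1.3), alpha=0.025, n_sim=5000):
    """Power of a Wald test of H0: log(mu_exp / mu_ref) >= margin (lower rates are better)."""
    rejections = 0
    for _ in range(n_sim):
        x_e, x_r = nb_sample(mu_exp, shape, n), nb_sample(mu_ref, shape, n)
        m_e, m_r = x_e.mean() + 1e-9, x_r.mean() + 1e-9
        # delta-method variance of the log sample means
        var = x_e.var(ddof=1) / (n * m_e ** 2) + x_r.var(ddof=1) / (n * m_r ** 2)
        z = (np.log(m_e / m_r) - margin) / np.sqrt(var)
        rejections += z < norm.ppf(alpha)
    return rejections / n_sim

for n in (100, 200, 300):
    print(n, round(wald_noninferiority_power(n), 3))
```

Repeating such a simulation over a grid of per-arm sizes and allocation ratios is one simple way to explore sample size and allocation questions of the kind the paper answers analytically.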

  7. Hybrid Optimal Design of the Eco-Hydrological Wireless Sensor Network in the Middle Reach of the Heihe River Basin, China

    PubMed Central

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-01-01

The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, and another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations show that this hybrid sampling method is effective for both objectives when the characteristics of the variable to be optimized are not known in advance. PMID:25317762

  8. Hybrid optimal design of the eco-hydrological wireless sensor network in the middle reach of the Heihe River Basin, China.

    PubMed

    Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao

    2014-10-14

The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture the spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions is developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling to represent the variability, and another for improving the spatial prediction used to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy, using 15 types of simulation fields generated with unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted. As the scale increases, the estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations show that this hybrid sampling method is effective for both objectives when the characteristics of the variable to be optimized are not known in advance.

  9. Using lod scores to detect sex differences in male-female recombination fractions.

    PubMed

    Feenstra, B; Greenberg, D A; Hodge, S E

    2004-01-01

Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect an RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined ΔELOD (defined as the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p̂ as that value of p that maximizes ΔELOD. We determined that, surprisingly, p̂ does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. (3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) to the maximum likelihood estimates of θ_female and θ_male, even though ELOD is reduced (see point 2). This fact is important because often investigators cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
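The lod-score machinery for detecting an RF sex difference can be sketched for the simplest phase-known case: maximize the lod separately for paternal and maternal meioses, compare with the constrained fit θ_female = θ_male, and refer twice the natural-log likelihood ratio to a chi-square distribution with one degree of freedom. The counts below are hypothetical, and this uses observed rather than expected lod scores.

```python
import numpy as np

def lod(theta, rec, non_rec):
    """lod score for a recombination fraction theta given phase-known meiosis counts."""
    theta = np.clip(theta, 1e-9, 0.5)
    return rec * np.log10(theta) + non_rec * np.log10(1 - theta) \
        - (rec + non_rec) * np.log10(0.5)

def sex_difference_test(rec_m, non_m, rec_f, non_f):
    """Compare unconstrained (sex-specific) vs constrained (theta_f = theta_m) fits."""
    th_m = rec_m / (rec_m + non_m)                       # ML estimates
    th_f = rec_f / (rec_f + non_f)
    th_c = (rec_m + rec_f) / (rec_m + non_m + rec_f + non_f)
    unconstrained = lod(th_m, rec_m, non_m) + lod(th_f, rec_f, non_f)
    constrained = lod(th_c, rec_m, non_m) + lod(th_c, rec_f, non_f)
    delta_lod = unconstrained - constrained
    chi2 = 2 * np.log(10) * delta_lod                    # ~ chi-square, 1 df, under H0
    return th_m, th_f, delta_lod, chi2

# Hypothetical counts: 30 paternal meioses with 3 recombinants, 30 maternal with 9.
print(sex_difference_test(rec_m=3, non_m=27, rec_f=9, non_f=21))
```

Averaging the delta-lod over simulated datasets for given true θ_female and θ_male would give an ELOD-style quantity of the kind used above for sample size guidance.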

  10. Optimization of MR fluid Yield stress using Taguchi Method and Response Surface Methodology Techniques

    NASA Astrophysics Data System (ADS)

    Mangal, S. K.; Sharma, Vivek

    2018-02-01

    Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the optimization of MR fluid constituents is carried out with on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are tested experimentally on a purpose-developed and fabricated electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The optimal combination of input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response matched the numerically obtained value to within 1%.
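
    For readers unfamiliar with the Taguchi analysis mentioned above, the sketch below computes the "larger-the-better" signal-to-noise ratio that is typically used to rank factor levels when yield stress is the response. The replicate values and factor levels are invented; they do not correspond to the paper's L-18 runs.

        # Illustrative sketch of a Taguchi "larger-the-better" analysis for on-state
        # yield stress. The replicate yield-stress values (kPa) below are invented.
        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi S/N ratio (dB) when larger responses are better."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # Hypothetical replicates for three volume fractions of iron powder
        runs = {"28 vol%": [35.1, 36.4, 34.8],
                "32 vol%": [47.2, 48.9, 46.5],
                "36 vol%": [44.0, 45.3, 43.1]}

        for level, y in runs.items():
            print(level, f"S/N = {sn_larger_is_better(y):.2f} dB")
        # The level with the highest mean S/N ratio across the orthogonal-array runs
        # would be selected as the optimal setting for that factor.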

  11. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
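
    The following toy calculation illustrates the decision-theoretic idea in miniature: a grid search over the per-arm sample size and critical value that maximizes an expected societal utility under a normal prior on the treatment effect. The prior, population size, and cost figures are invented, and the utility is far simpler than the authors' model; it is a sketch of the approach, not their implementation.

        # Toy value-of-information sketch: pick the per-arm sample size n and critical
        # value z_crit that maximize expected societal utility for a two-arm trial with
        # a normally distributed endpoint. All numbers below are hypothetical.
        import numpy as np
        from scipy.stats import norm

        sigma = 1.0                      # known SD of the endpoint
        prior_mean, prior_sd = 0.2, 0.3  # prior on the treatment effect delta
        N_pop = 2000                     # size of the future patient population
        value_per_unit = 1.0             # utility per unit improvement per future patient
        cost_per_patient = 0.5           # cost (in utility units) per trial patient

        nodes, weights = np.polynomial.hermite_e.hermegauss(40)  # Gauss-Hermite quadrature
        deltas = prior_mean + prior_sd * nodes                   # prior support points
        weights = weights / weights.sum()                        # normal prior weights

        def expected_utility(n, z_crit):
            se = sigma * np.sqrt(2.0 / n)                 # SE of the estimated effect
            power = norm.sf(z_crit - deltas / se)         # P(reject H0 | delta)
            gain = N_pop * value_per_unit * deltas        # benefit to future patients if approved
            return np.sum(weights * power * gain) - 2 * n * cost_per_patient

        grid_n = np.arange(10, 400, 10)
        grid_z = np.linspace(0.5, 3.0, 26)
        best = max(((expected_utility(n, z), n, z) for n in grid_n for z in grid_z))
        print("max expected utility %.1f at n per arm = %d, z_crit = %.2f" % best)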

  12. Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.; Dye, Charles

    2016-01-01

    An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…

  13. Construction of pore network models for Berea and Fontainebleau sandstones using non-linear programming and optimization techniques

    NASA Astrophysics Data System (ADS)

    Sharqawy, Mostafa H.

    2016-12-01

    Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are digital representations of the rock samples, built by matching the macroscopic properties of the porous media, and are used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consist of cubic networks with randomly distributed pore and throat sizes and various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method rather than guessed, which reduces the optimization computation time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and the primary drainage curve. The pore networks were optimized so that the simulated macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging is not readily available.
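
    The sketch below shows the general shape of such an optimization loop: Nelder-Mead (via scipy) adjusts the lower and upper pore-radius bounds until simulated macroscopic properties match measured targets. The simulate_properties function is a hypothetical, analytic stand-in for an actual pore network simulation, and the targets and parameterization are illustrative only.

        # Sketch of the optimization loop: tune lower/upper pore-radius bounds with
        # Nelder-Mead so simulated macroscopic properties match measured targets.
        # `simulate_properties` is a hypothetical stand-in for a real pore network
        # simulation (e.g., one built with an open-source PNM framework).
        import numpy as np
        from scipy.optimize import minimize

        target = {"porosity": 0.30, "permeability_mD": 300.0}

        def simulate_properties(r_min, r_max):
            # Placeholder: a real implementation would build the network and run
            # single-phase flow; a smooth toy response keeps the sketch runnable.
            porosity = 0.5 * (r_min + r_max) / 100.0
            permeability = 0.5 * r_min * r_max
            return porosity, permeability

        def mismatch(x):
            r_min, r_max = np.exp(x)          # log-parameterization keeps radii positive
            if r_min >= r_max:
                return 1e6                    # penalize inverted bounds
            phi, k = simulate_properties(r_min, r_max)
            return ((phi - target["porosity"]) / target["porosity"]) ** 2 + \
                   ((k - target["permeability_mD"]) / target["permeability_mD"]) ** 2

        res = minimize(mismatch, x0=np.log([5.0, 50.0]), method="Nelder-Mead")
        print("optimized pore-radius bounds (um):", np.exp(res.x), "error:", res.fun)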

  14. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
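
    A minimal sketch of the two-stage idea on synthetic data is given below: the first stage regresses each covariate on the instruments with a lasso, and the second stage regresses the outcome on the fitted covariates, again with a lasso. This uses scikit-learn's LassoCV in both stages and is only an illustration of the framework, not the paper's estimator, penalties, or tuning scheme.

        # Minimal two-stage regularized IV sketch on synthetic data: stage 1 regresses
        # each endogenous covariate on the instruments with a lasso; stage 2 regresses
        # the outcome on the fitted covariates, again with a lasso.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(1)
        n, p_z, p_x = 200, 100, 20          # sample size, #instruments, #covariates
        Z = rng.standard_normal((n, p_z))
        Gamma = np.zeros((p_z, p_x)); Gamma[:5, :3] = 1.0       # sparse first stage
        U = rng.standard_normal((n, 1))                         # unobserved confounder
        X = Z @ Gamma + U + 0.5 * rng.standard_normal((n, p_x))
        beta = np.zeros(p_x); beta[:3] = [1.0, -2.0, 1.5]       # sparse true effects
        y = X @ beta + 2.0 * U.ravel() + rng.standard_normal(n)

        # Stage 1: predict each covariate from the instruments
        X_hat = np.column_stack([
            LassoCV(cv=5).fit(Z, X[:, j]).predict(Z) for j in range(p_x)
        ])

        # Stage 2: sparse regression of y on the instrumented covariates
        stage2 = LassoCV(cv=5).fit(X_hat, y)
        print("selected covariates:", np.flatnonzero(stage2.coef_))
        print("estimated nonzero effects:", np.round(stage2.coef_[:3], 2))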

  15. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies.

    PubMed

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
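
    The sketch below illustrates the basic (uncorrelated) Lancaster combination: each gene-level p-value is converted to a chi-square deviate with gene-specific degrees of freedom, the deviates are summed, and the sum is referred to a chi-square null. The p-values and weights are hypothetical, and the correlation adjustment used in the paper's correlated Lancaster procedure is not included.

        # Sketch of Lancaster's procedure for combining gene-level p-values within a
        # pathway. Weights and p-values are hypothetical; independence is assumed.
        import numpy as np
        from scipy.stats import chi2

        p_values = np.array([0.003, 0.20, 0.04, 0.55])   # SKAT p-values for 4 genes
        weights = np.array([6.0, 2.0, 4.0, 2.0])          # e.g., proportional to gene size

        t = chi2.isf(p_values, df=weights).sum()          # Lancaster statistic
        p_pathway = chi2.sf(t, df=weights.sum())          # null: chi-square, df = sum of weights
        print(f"Lancaster statistic = {t:.2f}, pathway p-value = {p_pathway:.4g}")
        # With all weights equal to 2 this reduces to Fisher's combined p-value method.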

  16. MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant

    2014-01-01

    Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump, and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge, and heat transfer coefficient in moving from conventional channel sizes (~9 mm) to smaller channel sizes (<5 mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameters ranging from 0.5 to 2.0 mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of the air-side performance of heat exchangers, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit a 50 percent size reduction, a 75 percent decrease in air-side pressure drop, and doubled air-side heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
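
    As an illustration of the metamodeling step, the sketch below fits a Kriging (Gaussian-process) surrogate to a toy air-side response over two design variables using scikit-learn. The analytic "CFD" response, the variable ranges, and the random sampling plan are assumptions; the paper's Maximum Entropy Design sampling and full design code are not reproduced.

        # Sketch of a Kriging (Gaussian-process) metamodel of an air-side response as a
        # function of two design variables. The "CFD" response here is an analytic toy.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(3)

        def cfd_heat_transfer(d_tube_mm, v_air):
            # Toy stand-in for a CFD-computed heat transfer coefficient (W m^-2 K^-1)
            return 400.0 * v_air**0.6 / d_tube_mm**0.4

        X = np.column_stack([rng.uniform(0.5, 2.0, 30),    # tube diameter, mm
                             rng.uniform(1.0, 5.0, 30)])   # air velocity, m/s
        y = cfd_heat_transfer(X[:, 0], X[:, 1])

        kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 1.0])
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        x_new = np.array([[1.0, 3.0]])
        mean, std = gp.predict(x_new, return_std=True)
        print(f"predicted h = {mean[0]:.1f} +/- {std[0]:.1f}")
        # The fitted metamodel can then be queried thousands of times inside a
        # multi-objective genetic algorithm at negligible cost compared with CFD.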

  17. Optimizing ultrafast illumination for multiphoton-excited fluorescence imaging

    PubMed Central

    Stoltzfus, Caleb R.; Rebane, Aleksander

    2016-01-01

    We study the optimal conditions for high throughput two-photon excited fluorescence (2PEF) and three-photon excited fluorescence (3PEF) imaging using femtosecond lasers. We derive relations that allow maximization of the rate of imaging depending on the average power, pulse repetition rate, and noise characteristics of the laser, as well as on the size and structure of the sample. We perform our analysis using ~100 MHz, ~1 MHz and 1 kHz pulse rates and using both a tightly focused illumination beam with diffraction-limited image resolution, as well as loosely focused illumination with a relatively low image resolution, where the latter utilizes separate illumination and fluorescence detection beam paths. Our theoretical estimates agree with the experiments, which makes our approach especially useful for optimizing high throughput imaging of large samples with a field of view up to 10x10 cm2. PMID:27231620

  18. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
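
    A basic version of the kind of power and sample-size calculation discussed above can be scripted with statsmodels, as sketched below; the effect size, significance level, power, and group size are illustrative values only.

        # Sketch of a basic two-group power calculation of the kind discussed in the
        # paper, using statsmodels rather than the SAS programs it presents.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        # Animals per group needed to detect a standardized effect of 0.8 SD
        n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.9,
                                           alternative="two-sided")
        print(f"required n per group: {n_per_group:.1f}")

        # Achieved power for a fixed group size of 12 with a one-sided alternative
        power = analysis.power(effect_size=0.8, nobs1=12, alpha=0.05,
                               ratio=1.0, alternative="larger")
        print(f"power with n = 12 per group: {power:.2f}")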

  19. Genome-Wide Chromosomal Targets of Oncogenic Transcription Factors

    DTIC Science & Technology

    2008-04-01

    [Figure caption fragments only: (a) comparison between STAGE and ChIP-chip when the same sample was analyzed by both methods, with a gray line indicating all predicted STAGE targets; numbers of single-hit tags (y-axis) plotted against the frequencies of those tags in random (gray bars) and experimental (black bars) tag sets; a window size of 500 bp gave an optimal separation between random and real data.]

  20. On sample size and different interpretations of snow stability datasets

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences in aspect and elevation. The question arises of how capable such stability interpretations are of supporting conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that had been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) Assuming a nominal data scale, the sample size was determined for a given test, significance level and power, by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset itself constitutes an adequate sample size. (ii) Smaller subsets were created with aspect distributions similar to the large dataset. We used 100 different subsets for each sample size. Statistical variations obtained in the complete dataset were also tested on the smaller subsets using the Mann-Whitney or the Kruskal-Wallis test. For each subset size, the number of subsets in which the significance level was reached was counted. For these tests no nominal data scale was assumed. (iii) For the same subsets described above, the distribution of the aspect median was determined, and we counted how often this distribution differed substantially from the distribution obtained with the complete dataset. Since two valid stability interpretations were available (an objective and a subjective interpretation, as described above), the effect of the arbitrary choice of interpretation on spatial variability results was also tested. In over one third of the cases the two interpretations came to different results. The effect of these differences was studied with a method similar to that described in (iii): the distribution of the aspect median was determined for subsets of the complete dataset using both interpretations and compared against each other as well as against the results of the complete dataset. For the complete dataset the two interpretations showed mainly identical results. Therefore the subset size was determined from the point at which the results of the two interpretations converged.
A universal result for the optimal subset size cannot be given, since results differed between the situations contained in the dataset. The optimal subset size thus depends on the stability variation in a given situation, which is unknown initially. There are indications that for some situations even the complete dataset might not be large enough. At a subset size of approximately 25, the significant differences between aspect groups (as determined using the whole dataset) were only obtained in one out of five situations. In some situations, up to 20% of the subsets showed a substantially different distribution of the aspect median. Thus, in most cases, 25 measurements (which can be achieved by six two-person teams in one day) did not allow reliable conclusions to be drawn.
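
    The subset-resampling idea described above can be sketched as follows: draw many random subsets of a given size from a dataset, test the aspect difference with a Mann-Whitney test in each subset, and record how often the difference found in the complete dataset remains significant. The synthetic stability scores and group labels below are invented and only mimic the structure of such a dataset.

        # Sketch of the subset-resampling idea: draw many random subsets of a given
        # size from a (synthetic) stability dataset and count how often the aspect
        # difference remains significant. All data below are simulated.
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(42)
        north = rng.normal(loc=3.0, scale=1.2, size=400)   # stability scores, north aspects
        south = rng.normal(loc=3.6, scale=1.2, size=400)   # stability scores, south aspects

        def detection_rate(subset_size, n_rep=100, alpha=0.05):
            hits = 0
            for _ in range(n_rep):
                a = rng.choice(north, subset_size // 2, replace=False)
                b = rng.choice(south, subset_size // 2, replace=False)
                if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
                    hits += 1
            return hits / n_rep

        for size in (15, 25, 50, 100):
            print(f"subset size {size:3d}: significant in {detection_rate(size):.0%} of subsets")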

  1. Proof of concept demonstration of optimal composite MRI endpoints for clinical trials.

    PubMed

    Edland, Steven D; Ard, M Colin; Sridhar, Jaiashre; Cobia, Derin; Martersteck, Adam; Mesulam, M Marsel; Rogalski, Emily J

    2016-09-01

    Atrophy measures derived from structural MRI are promising outcome measures for early phase clinical trials, especially for rare diseases such as primary progressive aphasia (PPA), where the small available subject pool limits our ability to perform meaningfully powered trials with traditional cognitive and functional outcome measures. We investigated a composite atrophy index in 26 PPA participants with longitudinal MRIs separated by two years. Rogalski et al. [Neurology 2014;83:1184-1191] previously demonstrated that atrophy of the left perisylvian temporal cortex (PSTC) is a highly sensitive measure of disease progression in this population and a promising endpoint for clinical trials. Using methods described by Ard et al. [Pharmaceutical Statistics 2015;14:418-426], we constructed a composite atrophy index consisting of a weighted sum of volumetric measures of 10 regions of interest within the left perisylvian cortex, using weights that maximize the signal-to-noise ratio and minimize the sample size required of trials using the resulting score. The sample size required to detect a fixed percentage slowing in atrophy in a two-year clinical trial with equal allocation of subjects across arms and 90% power was calculated for the PSTC and the optimal composite surrogate biomarker endpoints. The optimal composite endpoint required 38% fewer subjects to detect the same percent slowing in atrophy than required by the left PSTC endpoint. Optimal composites can increase the power of clinical trials and increase the probability that smaller trials are informative, an observation especially relevant for PPA, but also for related neurodegenerative disorders including Alzheimer's disease.
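
    A hedged sketch of the composite-weighting idea follows: one standard choice, consistent with maximizing the signal-to-noise ratio of a weighted sum of regional change scores, is to take weights proportional to the inverse covariance of the changes times their mean, and then compare required sample sizes for a single region versus the composite. The synthetic ROI changes, the simple normal-approximation sample-size formula, and the 30% slowing are illustrative assumptions, not the study's actual data or pipeline.

        # Hedged sketch: weights proportional to inverse-covariance times mean change
        # maximize the mean-to-SD ratio of the composite; synthetic data throughout.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        n_subj, n_roi = 26, 10
        true_mean = rng.uniform(0.5, 2.0, n_roi)               # mean atrophy per ROI
        cov = 0.5 * np.eye(n_roi) + 0.5                        # correlated ROI changes
        changes = rng.multivariate_normal(true_mean, cov, size=n_subj)

        mu = changes.mean(axis=0)
        sigma = np.cov(changes, rowvar=False)
        w = np.linalg.solve(sigma, mu)
        w /= np.abs(w).sum()                                   # scale is arbitrary

        def n_per_arm(scores, pct_slowing=0.3, alpha=0.05, power=0.9):
            # two-arm sample size to detect a pct_slowing reduction in mean change
            effect = pct_slowing * scores.mean() / scores.std(ddof=1)
            z = norm.isf(alpha / 2) + norm.isf(1 - power)
            return 2 * (z / effect) ** 2

        composite = changes @ w
        single_roi = changes[:, 0]
        print(f"n per arm, single ROI: {n_per_arm(single_roi):.0f}")
        print(f"n per arm, optimal composite: {n_per_arm(composite):.0f}")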

  2. Structural diversity requires individual optimization of ethanol concentration in polysaccharide precipitation.

    PubMed

    Xu, Jun; Yue, Rui-Qi; Liu, Jing; Ho, Hing-Man; Yi, Tao; Chen, Hu-Biao; Han, Quan-Bin

    2014-06-01

    Ethanol precipitation is one of the most widely used methods for preparing natural polysaccharides, in which the ethanol concentration significantly affects the precipitate yield yet is usually set at 70-80%. Whether this standardization of ethanol concentration is appropriate has not been investigated. In the present study, the precipitation yields produced at varied ethanol concentrations (10-90%) were qualitatively and quantitatively evaluated by HPGPC (high-performance gel-permeation chromatography), using two series of standard glucans, namely dextrans and pullulans, as reference samples, and then eight natural samples. The results indicated that the response of a polysaccharide's chemical structure, with its diversity in structural features and molecular sizes, to ethanol concentration is the decisive factor in the precipitation of these glucans. Polysaccharides with different structural features, even though they have similar molecular weights, exhibit significantly different precipitation behaviors. For a specific glucan, the smaller its molecular size, the higher the ethanol concentration needed for complete precipitation. The precipitate yield varied from 10% to 100% in 80% ethanol as the molecular size increased from 1 kDa to 270 kDa. This paper aims to draw scientists' attention to the fact that, in extracting natural polysaccharides by ethanol precipitation, the ethanol concentration must be individually optimized for each type of material. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. The assessment of data sources for influenza virologic surveillance in New York State.

    PubMed

    Escuyer, Kay L; Waters, Christine L; Gowie, Donna L; Maxted, Angie M; Farrell, Gregory M; Fuschino, Meghan E; St George, Kirsten

    2017-03-01

    Following the 2013 USA release of the Influenza Virologic Surveillance Right Size Roadmap, the New York State Department of Health (NYSDOH) embarked on an evaluation of data sources for influenza virologic surveillance. The objective was to assess NYS data sources, in addition to data generated by the state public health laboratory (PHL), that could enhance influenza surveillance at the state and national levels. Potential sources of laboratory test data for influenza were analyzed for quantity and quality. Computer models, designed to assess sample sizes and the confidence of data for statistical representation of influenza activity, were used to compare PHL test data to results from clinical and commercial laboratories reported between June 8, 2013 and May 31, 2014. The number of samples tested for influenza at the state PHL was sufficient for situational awareness surveillance with optimal confidence levels only during the peak weeks of the influenza season. Influenza data pooled from NYS PHLs and clinical laboratories generated optimal confidence levels for situational awareness throughout the influenza season. For novel influenza virus detection in NYS, combined real-time (rt) RT-PCR data from state and regional PHLs achieved ≥85% confidence during peak influenza activity, and ≥95% confidence for most of the low season and all of the off-season. In NYS, combined data from clinical, commercial, and public health laboratories generated optimal influenza surveillance for situational awareness throughout the season. Statistical confidence for novel virus detection, which relies on PHL data only, was achieved for most of the year. © 2016 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  4. Sparse targets in hydroacoustic surveys: Balancing quantity and quality of in situ target strength data

    USGS Publications Warehouse

    DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher

    2017-01-01

    Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.

  5. Inverse Statistics and Asset Allocation Efficiency

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam

    In this paper, the effect of the investment horizon on the efficiency of portfolio selection is examined using inverse statistics analysis. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used to determine the distribution of the time at which a stochastic process first exits a given zone. This analysis was used in Refs. 1 and 2 for studying financial return time series. The distribution provides an optimal investment horizon, i.e., the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to greater efficiency for large portfolios, whereas for stocks selected from the S&P 500, regardless of portfolio size, this strategy not only fails to produce more efficient portfolios, but longer investment horizons actually provide greater efficiency.
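
    The exit-time (inverse statistics) calculation can be sketched as follows: for a chosen return level, record how many days the cumulative log-return needs, from each starting day, to first reach that level, and take the mode of the resulting distribution as the optimal investment horizon. The simulated random-walk returns and the 5% target below are illustrative; no TSE or S&P 500 data are used.

        # Sketch of an inverse-statistics calculation: for a target return level rho,
        # record how many days it takes the cumulative log-return, measured from each
        # starting day, to first exceed rho. The price series is simulated.
        import numpy as np

        rng = np.random.default_rng(5)
        log_returns = rng.normal(0.0003, 0.01, 5000)     # daily log-returns (synthetic)
        rho = 0.05                                       # target gain: +5%

        exit_times = []
        for start in range(len(log_returns) - 1):
            cum = np.cumsum(log_returns[start:])
            hit = np.nonzero(cum >= rho)[0]
            if hit.size:
                exit_times.append(hit[0] + 1)            # waiting time in days

        exit_times = np.array(exit_times)
        # The mode of this distribution is the "optimal investment horizon" for rho
        hist, edges = np.histogram(exit_times, bins=50)
        mode_days = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
        print(f"most likely horizon for a {rho:.0%} gain: about {mode_days:.0f} days")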

  6. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

    Therapeutic trials of disease-modifying agents for Alzheimer's disease (AD) require novel designs and analyses involving a switch of treatments for at least a portion of the subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as the sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and the time of treatment switch, as well as optimum statistical tests of treatment efficacy, for clinical trials of disease-modifying agents for AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents for AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed treatment arm or early withdrawal arm, and incorporates the potential correlation in the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch in such trials, as well as the optimum test statistic for treatment efficacy, are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is halfway through the trial. With the optimum test statistic for treatment efficacy and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed treatment or early withdrawal arm, and the placebo arm. The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of the treatment switch is on the rate of cognitive change. Crucial design parameters for clinical trials of disease-modifying agents for AD can thus be optimally chosen. Government and industry officials as well as academic researchers should consider the optimum use of this clinical trial design for disease-modifying agents in their effort to find treatments with the potential to modify the underlying pathophysiology of AD.

  7. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    DOE PAGES

    Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; ...

    2016-06-02

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.

  8. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation.

  9. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    PubMed

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.

  10. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    PubMed Central

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated. PMID:28334046

  11. Study on detection geometry and detector shielding for portable PGNAA system using PHITS

    NASA Astrophysics Data System (ADS)

    Ithnin, H.; Dahing, L. N. S.; Lip, N. M.; Rashid, I. Q. Abd; Mohamad, E. J.

    2018-01-01

    Prompt gamma-ray neutron activation analysis (PGNAA) measurements require efficient detectors for gamma-ray detection. Apart from experimental studies, the Monte Carlo (MC) method has become one of the most popular tools in detector studies. The absolute efficiency of a 2 × 2 inch cylindrical sodium iodide (NaI) detector has been modelled using the PHITS software and compared with previous studies in the literature. In the present work, the PHITS code is used for optimization of a portable PGNAA system using the validated NaI detector. The detection geometry is optimized by moving the detector along the sample to find the highest intensity of the prompt gamma rays generated from the sample. Shielding material for the validated NaI detector is also studied to find the best option for the PGNAA system setup. The results show that the optimum detector position is on the surface of the sample, around 15 cm from the source. They also indicate that this procedure can be followed to determine the best PGNAA system setup for different sample sizes and detector types. It can be concluded that the PHITS code is a strong tool not only for efficiency studies but also for the optimization of PGNAA systems.

  12. Miniaturization and Optimization of Nanoscale Resonant Oscillators

    DTIC Science & Technology

    2013-09-07

    …carried out over a range of core sizes. Using a double 4-f imaging system in conjunction with a pump filter (Semrock RazorEdge long-wavelength pass), the samples are imaged onto either an…

  13. Optimized mtDNA Control Region Primer Extension Capture Analysis for Forensically Relevant Samples and Highly Compromised mtDNA of Different Age and Origin

    PubMed Central

    Eduardoff, Mayra; Xavier, Catarina; Strobl, Christina; Casas-Vargas, Andrea; Parson, Walther

    2017-01-01

    The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method for final implementation in forensic genetic laboratories. PMID:28934125

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
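
    A compact sketch of the step-size-modified successive-approximations procedure for a two-component univariate normal mixture is given below: each iteration computes the usual EM-type target and then moves only a fraction `step` of the way toward it, with step = 1 recovering the familiar update and values between 0 and 2 corresponding to the range discussed above. The data, starting values, and chosen step are synthetic illustrations.

        # Sketch of the step-size-modified successive-approximations update for a
        # two-component univariate normal mixture. Data and starting values are synthetic.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(11)
        x = np.concatenate([rng.normal(-1.0, 0.7, 400), rng.normal(2.0, 1.0, 600)])

        def em_step(x, pi, mu, sd, step=1.0):
            # E-step: posterior membership probabilities
            dens = np.stack([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sd)])
            resp = dens / dens.sum(axis=0)
            # M-step targets (the usual EM update)
            nk = resp.sum(axis=1)
            pi_new = nk / x.size
            mu_new = (resp * x).sum(axis=1) / nk
            sd_new = np.sqrt((resp * (x - mu_new[:, None])**2).sum(axis=1) / nk)
            # Relaxed update: move a fraction `step` of the way toward the EM target
            blend = lambda old, new: old + step * (new - old)
            return blend(pi, pi_new), blend(mu, mu_new), blend(sd, sd_new)

        pi, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
        for _ in range(200):
            pi, mu, sd = em_step(x, pi, mu, sd, step=1.2)   # 0 < step < 2
        print("weights:", np.round(pi, 3), "means:", np.round(mu, 2), "sds:", np.round(sd, 2))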

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  16. Identification of usual interstitial pneumonia pattern using RNA-Seq and machine learning: challenges and solutions.

    PubMed

    Choi, Yoonha; Liu, Tiffany Ting; Pankratz, Daniel G; Colby, Thomas V; Barth, Neil M; Lynch, David A; Walsh, P Sean; Raghu, Ganesh; Kennedy, Giulia C; Huang, Jing

    2018-05-09

    We developed a classifier using RNA sequencing data that identifies the usual interstitial pneumonia (UIP) pattern for the diagnosis of idiopathic pulmonary fibrosis. We addressed significant challenges, including limited sample size, biological and technical sample heterogeneity, and reagent and assay batch effects. We identified inter- and intra-patient heterogeneity, particularly within the non-UIP group. The models classified UIP on transbronchial biopsy samples with a receiver-operating characteristic area under the curve of ~0.9 in cross-validation. Using in silico mixed samples in training, we prospectively defined a decision boundary to optimize specificity at ≥85%. The penalized logistic regression model showed greater reproducibility across technical replicates and was chosen as the final model. The final model showed a sensitivity of 70% and a specificity of 88% in the test set. We demonstrated that the suggested methodologies appropriately addressed the challenges of sample size, disease heterogeneity and technical batch effects, and developed a highly accurate and robust classifier leveraging RNA sequencing for the classification of UIP.
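
    The two ingredients highlighted above, a penalized logistic regression classifier and a decision boundary tuned to a target specificity, can be sketched on simulated data as follows. The simulated expression matrix, the L2 penalty strength, and the thresholding rule are assumptions for illustration; they do not reproduce the actual assay, feature set, or in silico mixing procedure.

        # Sketch: penalized logistic regression on high-dimensional features plus a
        # decision threshold chosen to reach a target specificity (here >= 85%).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n, p = 300, 1000
        X = rng.standard_normal((n, p))
        signal = X[:, :20].sum(axis=1)                           # 20 informative "genes"
        y = (signal + rng.standard_normal(n) > 0).astype(int)    # 1 = UIP-like, 0 = non-UIP

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                                  stratify=y)
        clf = LogisticRegression(penalty="l2", C=0.1, max_iter=5000).fit(X_tr, y_tr)

        # Smallest score threshold giving >= 85% specificity on the training data
        scores_tr = clf.predict_proba(X_tr)[:, 1]
        neg_scores = np.sort(scores_tr[y_tr == 0])
        threshold = neg_scores[int(np.ceil(0.85 * neg_scores.size)) - 1]

        scores_te = clf.predict_proba(X_te)[:, 1]
        pred = (scores_te > threshold).astype(int)
        sens = (pred[y_te == 1] == 1).mean()
        spec = (pred[y_te == 0] == 0).mean()
        print(f"threshold={threshold:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")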

  17. DoE optimization of a mercury isotope ratio determination method for environmental studies.

    PubMed

    Berni, Alex; Baschieri, Carlo; Covelli, Stefano; Emili, Andrea; Marchetti, Andrea; Manzini, Daniela; Berto, Daniela; Rampazzo, Federico

    2016-05-15

    By using the experimental design (DoE) technique, we optimized an analytical method for the determination of mercury isotope ratios by means of cold-vapor multicollector ICP-MS (CV-MC-ICP-MS) to provide absolute Hg isotopic ratio measurements with suitable internal precision. By running 32 experiments, the influence of mercury and thallium internal standard concentrations, total measuring time and sample flow rate was evaluated. The method was optimized by varying the Hg concentration between 2 and 20 ng g(-1). The model identifies correlations among the parameters that affect measurement precision and predicts suitable measurement precision for Hg concentrations of 5 ng g(-1) and above. The method was successfully applied to samples of Manila clams (Ruditapes philippinarum) coming from the Marano and Grado lagoon (NE Italy), a coastal environment affected by long-term mercury contamination mainly due to mining activity. Results show different extents of both mass-dependent fractionation (MDF) and mass-independent fractionation (MIF) in clams according to their size and sampling sites in the lagoon. The method is fit for determinations on real samples, allowing the use of Hg isotopic ratios to study mercury biogeochemical cycles in complex ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars) and the MPL thermal conductivity was found between 0.13 and 0.17 W m-1 K-1. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  19. Substrate milling pretreatment as a key parameter for Solid-State Anaerobic Digestion optimization.

    PubMed

    Motte, J-C; Escudié, R; Hamelin, J; Steyer, J-P; Bernet, N; Delgenes, J-P; Dumas, C

    2014-12-01

    The effect of milling pretreatment on the performance of solid-state anaerobic digestion (SS-AD) of raw lignocellulosic residue is still debated. Three batch reactors treating different straw particle sizes (milled to 0.25 mm, 1 mm and 10 mm) were followed for 62 days (6 sampling dates). Although fine milling improves substrate accessibility and conversion rate (up to 30% compared to coarse milling), it also increases the risk of acidification of the medium because of the rapid and high acid production during fermentation of the soluble substrate fraction. Meanwhile, a gradual adaptation of the microbial communities was observed, in line with both reaction progress and methanogenic performance. The study concluded that particle size reduction strongly affected the performance of the reaction owing to an increase in substrate bioaccessibility. Optimization of SS-AD processes through particle size reduction could therefore be applied at farm or industrial scale only if specific management of the soluble compounds is established. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Optimal tumor sampling for immunostaining of biomarkers in breast carcinoma

    PubMed Central

    2011-01-01

    Introduction Biomarkers, such as Estrogen Receptor, are used to determine therapy and prognosis in breast carcinoma. Immunostaining assays of biomarker expression have a high rate of inaccuracy; for example, estimates are as high as 20% for Estrogen Receptor. Biomarkers have been shown to be heterogeneously expressed in breast tumors and this heterogeneity may contribute to the inaccuracy of immunostaining assays. Currently, no evidence-based standards exist for the amount of tumor that must be sampled in order to correct for biomarker heterogeneity. The aim of this study was to determine the optimal number of 20X fields necessary to estimate a representative measurement of expression in a whole tissue section for selected biomarkers: ER, HER-2, AKT, ERK, S6K1, GAPDH, Cytokeratin, and MAP-Tau. Methods Two collections of whole tissue sections of breast carcinoma were immunostained for the biomarkers. Expression was quantified using the Automated Quantitative Analysis (AQUA) method of quantitative immunofluorescence. Simulated sampling of various numbers of fields (ranging from one to thirty-five) was performed for each marker. The optimal number was selected for each marker via resampling techniques and minimization of prediction error over an independent test set. Results The optimal number of 20X fields varied by biomarker, ranging from three to fourteen fields. More heterogeneous markers, such as MAP-Tau protein, required a larger sample of 20X fields to produce a representative measurement. Conclusions The optimal number of 20X fields that must be sampled to produce a representative measurement of biomarker expression varies by marker, with more heterogeneous markers requiring a larger number. The clinical implication of these findings is that breast biopsies consisting of a small number of fields may be inadequate to represent whole-tumor biomarker expression for many markers. Additionally, for biomarkers newly introduced into clinical use, especially if therapeutic response is dictated by level of expression, the optimal size of tissue sample must be determined on a marker-by-marker basis. PMID:21592345

  1. Modeling the transport of engineered nanoparticles in saturated porous media - an experimental setup

    NASA Astrophysics Data System (ADS)

    Braun, A.; Neukum, C.; Azzam, R.

    2011-12-01

    The accelerating production and application of engineered nanoparticles are causing concerns regarding their release and fate in the environment. For assessing the risk posed to drinking water resources it is important to understand the transport and retention mechanisms of engineered nanoparticles in soil and groundwater. In this study an experimental setup for analyzing the mobility of silver and titanium dioxide nanoparticles in saturated porous media is presented. Batch and column experiments with glass beads and two different soils as matrices are carried out under varied conditions to study the impact of electrolyte concentration and pore water velocities. The analysis of nanoparticles poses several challenges, such as detection and characterization and the preparation of a well-dispersed sample with defined properties, as nanoparticles tend to form agglomerates when suspended in an aqueous medium. The analytical part of the experiments is mainly undertaken with Flow Field-Flow Fractionation (FlFFF). This chromatography-like technique separates a particulate sample according to size. It is coupled to a UV/Vis and a light scattering detector for analyzing the concentration and size distribution of the sample. The advantages of this technique are its ability to analyze complex environmental samples, such as the effluent of column experiments including soil components, and its gentle sample treatment. To optimize sample preparation and to get a first idea of the aggregation behavior in soil solutions, sedimentation experiments were used to investigate the effect of ionic strength, sample concentration and surfactant addition on particle or aggregate size and on the temporal stability of the dispersion. In general, the lower the particle concentration, the more stable the samples. For TiO2 nanoparticles, the addition of a surfactant yielded the most stable samples with the smallest aggregate sizes. Furthermore, the suspension stability increases with electrolyte concentration. Depending on the dispersing medium, the results show that TiO2 nanoparticles tend to form aggregates between 100 and 200 nm in diameter, while the primary particle size is given as 21 nm by the manufacturer. Aggregate sizes increase with time. The particle size distribution of the silver nanoparticle samples is quite uniform in each medium. The fresh samples show aggregate sizes between 40 and 45 nm, while the primary particle size is 15 nm according to the manufacturer. Aggregate size increases only slightly with time during the sedimentation experiments. These results are used as a reference when analyzing the effluent of the column experiments.

  2. The effects of relative food item size on optimal tooth cusp sharpness during brittle food item processing

    PubMed Central

    Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.

    2014-01-01

    Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068

  3. Sample preparation techniques for the determination of trace residues and contaminants in foods.

    PubMed

    Ridgway, Kathy; Lalljie, Sam P D; Smith, Roger M

    2007-06-15

    The determination of trace residues and contaminants in complex matrices, such as food, often requires extensive sample extraction and preparation prior to instrumental analysis. Sample preparation is often the bottleneck in analysis, and there is a need to minimise the number of steps to reduce both time and sources of error. There is also a move towards more environmentally friendly techniques, which use less solvent and smaller sample sizes. Smaller sample size becomes important when dealing with real-life problems, such as consumer complaints and alleged chemical contamination. Optimal sample preparation can reduce analysis time and sources of error, enhance sensitivity, and enable unequivocal identification, confirmation and quantification. This review considers all aspects of sample preparation, covering general extraction techniques, such as Soxhlet and pressurised liquid extraction, microextraction techniques such as liquid phase microextraction (LPME), and more selective techniques, such as solid phase extraction (SPE), solid phase microextraction (SPME) and stir bar sorptive extraction (SBSE). The applicability of each technique in food analysis, particularly for the determination of trace organic contaminants in foods, is discussed.

  4. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and of the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
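
    A minimal Python/NumPy sketch of the pooling step discussed above (the array shapes and variable names are illustrative assumptions, not taken from the paper): it shows how max-, average-, and standard-deviation pooling each collapse frame-wise feature activations into one clip-level feature vector.

    ```python
    import numpy as np

    def pool_activations(activations, method="std"):
        """Summarize frame-wise feature activations over time.

        activations: array of shape (n_frames, n_features), e.g. sparse codes
        computed for each analysis frame of a recording.
        Returns a single vector of length n_features.
        """
        if method == "max":
            return activations.max(axis=0)    # max pooling
        if method == "mean":
            return activations.mean(axis=0)   # average pooling
        if method == "std":
            return activations.std(axis=0)    # standard deviation pooling
        raise ValueError(f"unknown pooling method: {method}")

    # Toy example: 100 analysis frames, 64 learned dictionary atoms.
    rng = np.random.default_rng(0)
    acts = np.abs(rng.normal(size=(100, 64)))
    clip_feature = pool_activations(acts, method="std")
    print(clip_feature.shape)   # (64,)
    ```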

  5. Online submicron particle sizing by dynamic light scattering using autodilution

    NASA Technical Reports Server (NTRS)

    Nicoli, David F.; Elings, V. B.

    1989-01-01

    Efficient production of a wide range of commercial products based on submicron colloidal dispersions would benefit from instrumentation for online particle sizing, permitting real-time monitoring and control of the particle size distribution. Recent advances in the technology of dynamic light scattering (DLS), especially improvements in algorithms for inversion of the intensity autocorrelation function, have made it ideally suited to the measurement of simple particle size distributions in the difficult submicron region. Crucial to the success of an online DLS-based instrument is a simple mechanism for automatically sampling and diluting the starting concentrated sample suspension, yielding a final concentration that is optimal for the light scattering measurement. A proprietary method and apparatus were developed for performing this function, designed to be used with a DLS-based particle sizing instrument. A PC/AT computer is used as a smart controller for the valves in the sampler-diluter, as well as an input-output communicator, video display, and data storage device. Quantitative results are presented for a latex suspension and an oil-in-water emulsion.
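
    DLS converts the diffusion coefficient recovered from the autocorrelation function into a hydrodynamic diameter through the Stokes-Einstein relation, d = k_B*T / (3*pi*eta*D). A minimal sketch of that conversion follows; the solvent properties and the example diffusion coefficient are assumptions, not values from the paper.

    ```python
    import math

    def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
        """Stokes-Einstein: particle diameter (m) from diffusion coefficient D (m^2/s).

        T   : absolute temperature in kelvin (default 25 C)
        eta : dynamic viscosity of the solvent in Pa*s (default: water at 25 C)
        """
        k_B = 1.380649e-23  # Boltzmann constant, J/K
        return k_B * T / (3.0 * math.pi * eta * D)

    # Example: D = 4.3e-12 m^2/s in water corresponds to a diameter of ~114 nm.
    print(f"{hydrodynamic_diameter(4.3e-12) * 1e9:.1f} nm")
    ```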

  6. Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research

    NASA Astrophysics Data System (ADS)

    Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.

    2010-12-01

    Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize the scanning, image reconstruction/segmentation, and pore-space analysis capabilities of a new-generation benchtop MCT scanner and its associated software package. To demonstrate applicability to soil research, the project focused on determining porosities and pore size distributions of two Brazilian Oxisols from segmented MCT data. Effects of metal filters and various acquisition parameters (e.g., total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
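
    As a rough illustration of how porosity and a simple pore-size proxy can be derived from a segmented (binary) MCT volume, the sketch below runs NumPy/SciPy on a synthetic volume; it is not the software package evaluated in the study, and the voxel size and labeling choices are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def porosity(binary_volume):
        """Fraction of void voxels in a segmented volume (1 = pore, 0 = solid)."""
        return binary_volume.mean()

    def equivalent_pore_diameters(binary_volume, voxel_size_um):
        """Crude pore-size proxy: equivalent spherical diameter (um) of each
        connected pore cluster in the segmented volume."""
        labels, n_clusters = ndimage.label(binary_volume)
        voxel_counts = np.bincount(labels.ravel())[1:]      # drop the solid background
        volumes_um3 = voxel_counts * voxel_size_um ** 3
        return (6.0 * volumes_um3 / np.pi) ** (1.0 / 3.0)

    # Toy segmented volume: 100^3 voxels at 5 um resolution, ~30% random porosity.
    rng = np.random.default_rng(1)
    vol = (rng.random((100, 100, 100)) < 0.3).astype(np.uint8)
    print(f"porosity = {porosity(vol):.3f}")
    print(f"largest equivalent pore diameter = {equivalent_pore_diameters(vol, 5.0).max():.0f} um")
    ```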

  7. Modeling and Simulation of A Microchannel Cooling System for Vitrification of Cells and Tissues.

    PubMed

    Wang, Y; Zhou, X M; Jiang, C J; Yu, Y T

    The microchannel heat exchange system has several advantages and can be used to enhance heat transfer for vitrification. To evaluate the microchannel cooling method and to analyze the effects of key parameters such as channel structure, flow rate, and sample size, a computational fluid dynamics model is applied to study the two-phase flow in microchannels and the related heat transfer process. The fluid-solid coupling problem is solved with a whole-field solution method (i.e., the flow profile in the channels and the temperature distribution in the system are simulated simultaneously). Simulation indicates that a cooling rate >10^4 °C/min is easily achievable using the microchannel method at high flow rates for a broad range of sample sizes. Channel size and the material used have a significant impact on cooling performance. Computational fluid dynamics is useful for optimizing the design and operation of the microchannel system.

  8. A critical evaluation of an asymmetrical flow field-flow fractionation system for colloidal size characterization of natural organic matter.

    PubMed

    Zhou, Zhengzhen; Guo, Laodong

    2015-06-19

    Colloidal retention characteristics, recovery and size distribution of model macromolecules and natural dissolved organic matter (DOM) were systematically examined using an asymmetrical flow field-flow fractionation (AFlFFF) system under various membrane size cutoffs and carrier solutions. Polystyrene sulfonate (PSS) standards with known molecular weights (MW) were used to determine their permeation and recovery rates by membranes with different nominal MW cutoffs (NMWCO) within the AFlFFF system. Based on a ≥90% recovery rate for PSS standards by the AFlFFF system, the actual NMWCOs were determined to be 1.9 kDa for the 0.3 kDa membrane, 2.7 kDa for the 1 kDa membrane, and 33 kDa for the 10 kDa membrane, respectively. After membrane calibration, natural DOM samples were analyzed with the AFlFFF system to determine their colloidal size distribution and the influence from membrane NMWCOs and carrier solutions. Size partitioning of DOM samples showed a predominant colloidal size fraction in the <5 nm or <10 kDa size range, consistent with the size characteristics of humic substances as the main terrestrial DOM component. Recovery of DOM by the AFlFFF system, as determined by UV-absorbance at 254 nm, decreased significantly with increasing membrane NMWCO, from 45% by the 0.3 kDa membrane to 2-3% by the 10 kDa membrane. Since natural DOM is mostly composed of lower MW substances (<10 kDa) and the actual membrane cutoffs are normally larger than their manufacturer ratings, a 0.3 kDa membrane (with an actual NMWCO of 1.9 kDa) is highly recommended for colloidal size characterization of natural DOM. Among the three carrier solutions, borate buffer seemed to provide the highest recovery and optimal separation of DOM. Rigorous calibration with macromolecular standards and optimization of system conditions are a prerequisite for quantifying colloidal size distribution using the flow field-flow fractionation technique. In addition, the coupling of AFlFFF with fluorescence EEMs could provide new insights into DOM heterogeneity in different colloidal size fractions. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Fully automatic characterization and data collection from crystals of biological macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  10. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.
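
    The sketch below illustrates the general sampling idea in Python/SciPy: disperse candidate points in the search space, evaluate a cost at each, build a Voronoi diagram over the samples, and take the lowest-cost sample (and its cell) as the current estimate of the optimum. It omits the paper's barycentric interpolation and iterative refinement, and the toy cost function is an assumption.

    ```python
    import numpy as np
    from scipy.spatial import Voronoi

    def sample_based_minimization(cost, bounds, n_samples=200, seed=0):
        """Disperse sample points in the search space, evaluate the cost at each,
        build a Voronoi diagram over the samples, and return the lowest-cost
        sample together with its Voronoi cell (as vertex indices)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        points = lo + rng.random((n_samples, len(lo))) * (hi - lo)
        costs = np.array([cost(p) for p in points])
        vor = Voronoi(points)                       # partition of the sampled space
        best = int(np.argmin(costs))
        return points[best], costs[best], vor.regions[vor.point_region[best]]

    # Toy 2-D cost surface standing in for the var-sizing objective.
    cost = lambda x: (x[0] - 30.0) ** 2 + (x[1] - 70.0) ** 2
    x_best, c_best, cell = sample_based_minimization(cost, bounds=[(0, 100), (0, 100)])
    print(x_best, c_best)
    ```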

  11. Optimization of sintering conditions for cerium-doped yttrium aluminum garnet

    NASA Astrophysics Data System (ADS)

    Cranston, Robert Wesley McEachern

    YAG:Ce phosphors have become widely used as blue/yellow light converters in camera projectors, white light emitting diodes (WLEDs) and general lighting applications. Many studies have been published on the production, characterization, and analysis of this optical ceramic but few have been done on determining optimal synthesis conditions. In this work, YAG:Ce phosphors were synthesized through solid state mixing and sintering. The synthesized powders and the highest quality commercially available powders were pressed and sintered to high densities and their photoluminescence (PL) intensity measured. The optimization process involved the sintering temperature, sintering time, annealing temperature and the level of Ce concentration. In addition to the PL intensity, samples were also characterized using particle size analysis, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The PL data was compared with data produced from a YAG:Ce phosphor sample provided by Christie Digital. The peak intensities of the samples were converted to a relative percentage of this industry product. The highest value for the intensity of the commercial powder was measured for a Ce concentration of 0.3 mole% with a sintering temperature of 1540°C and a sintering dwell time of 7 hours. The optimal processing parameters for the in-house synthesized powder were slightly different from those of commercial powders. The optimal Ce concentration was 0.4 mole% Ce, sintering temperature was 1560°C and sintering dwell time was 10 hours. These optimal conditions produced a relative intensity of 94.20% and 95.28% for the in-house and commercial powders respectively. Polishing of these samples resulted in an increase of 5% in the PL intensity.

  12. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    PubMed

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the treatment-effect estimator under equal cluster sizes to that under unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss. Additionally, we propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameters (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
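
    For a continuous outcome under an exchangeable correlation structure, the efficiency comparison can be sketched directly from the GLS information contributed by each cluster, m/(1 + (m − 1)ρ). The snippet below is a minimal illustration under those assumptions only; it is not the paper's derivations for binary or count outcomes, nor its budget-constrained optimal sample sizes.

    ```python
    import numpy as np

    def var_group_mean(cluster_sizes, rho, sigma2=1.0):
        """GLS variance of a group mean under an exchangeable correlation
        structure: a cluster of size m contributes m / (1 + (m - 1) * rho)
        units of information (continuous outcome)."""
        m = np.asarray(cluster_sizes, dtype=float)
        info = np.sum(m / (1.0 + (m - 1.0) * rho))
        return sigma2 / info

    def relative_efficiency(cluster_sizes, rho):
        """Variance under equal cluster sizes (same total n, same number of
        clusters) divided by the variance under the given unequal sizes."""
        k = len(cluster_sizes)
        m_bar = np.mean(cluster_sizes)
        return var_group_mean([m_bar] * k, rho) / var_group_mean(cluster_sizes, rho)

    # Example: 10 clusters averaging 20 subjects, but with varying sizes.
    unequal = [5, 10, 10, 15, 20, 20, 25, 30, 30, 35]
    print(round(relative_efficiency(unequal, rho=0.05), 3))   # < 1: efficiency loss
    ```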

  13. Pharmacogenomics in neurology: current state and future steps.

    PubMed

    Chan, Andrew; Pirmohamed, Munir; Comabella, Manuel

    2011-11-01

    In neurology, as in any other clinical specialty, there is a need to develop treatment strategies that allow stratification of therapies to optimize efficacy and minimize toxicity. Pharmacogenomics is one such method for therapy optimization: it aims to elucidate the relationship between human genome sequence variation and differential drug responses. Approaches have focused on candidate genes involved in absorption, distribution, metabolism, and elimination (ADME) (pharmacokinetic pathways) and on potential drug targets (pharmacodynamic pathways). To date, however, only a few genetic variants have been incorporated into clinical algorithms. Unfortunately, a large number of studies have produced contradictory results because of deficiencies such as small sample sizes and inadequate phenotyping and genotyping strategies. Thus, there still exists an urgent need to establish biomarkers that could help to select patients with an optimal benefit-to-risk relationship. Here we review recent advances, and limitations, in pharmacogenomics for agents used in neuroimmunology, neurodegenerative diseases, ischemic stroke, epilepsy, and primary headaches. Further work is still required in all of these areas and needs to progress on several fronts, including better standardized phenotyping, appropriate sample sizes through multicenter collaborations, and judicious use of new technological advances such as genome-wide approaches, next-generation sequencing, and systems biology. In time, this is likely to lead to improvements in the benefit-harm balance of neurological therapies, cost efficiency, and the identification of new drugs. Copyright © 2011 American Neurological Association.

  14. Positivity in healthcare: relation of optimism to performance.

    PubMed

    Luthans, Kyle W; Lebsack, Sandra A; Lebsack, Richard R

    2008-01-01

    The purpose of this paper is to explore the linkage between nurses' levels of optimism and performance outcomes. The study sample consisted of 78 nurses in all areas of a large healthcare facility (hospital) in the Midwestern United States. The participants completed surveys to determine their current state of optimism. Supervisory performance appraisal data were gathered in order to measure performance outcomes. Spearman correlations and a one-way ANOVA were used to analyze the data. The results indicated a highly significant positive relationship between the nurses' measured state of optimism and their supervisors' ratings of their commitment to the mission of the hospital, a measure of contribution to increasing customer satisfaction, and an overall measure of work performance. This was an exploratory study. Larger sample sizes and longitudinal data would be beneficial, because state optimism levels are likely to vary and it might be more accurate to measure state optimism at several points over time in order to better predict performance outcomes. Finally, the study design does not imply causation. Suggestions for effectively developing and managing nurses' optimism to positively impact their performance are provided. To date, there has been very little empirical evidence assessing the impact that positive psychological capacities, such as the optimism of key healthcare professionals, may have on performance. This paper was designed to help begin to fill this void by examining the relationship between nurses' self-reported optimism and their supervisors' evaluations of their performance.

  15. Procedures for analysis of debris relative to Space Shuttle systems

    NASA Technical Reports Server (NTRS)

    Kim, Hae Soo; Cummings, Virginia J.

    1993-01-01

    Debris samples collected from various Space Shuttle systems have been submitted to the Microchemical Analysis Branch. This investigation was initiated to develop optimal techniques for the analysis of debris. Optical microscopy provides information about the morphology and size of crystallites, particle sizes, amorphous phases, glass phases, and poorly crystallized materials. Scanning electron microscopy with energy dispersive spectrometry is utilized for information on surface morphology and qualitative elemental content of debris. Analytical electron microscopy with wavelength dispersive spectrometry provides information on the quantitative elemental content of debris.

  16. Optimal Sample Size Determinations for the Heteroscedastic Two One-Sided Tests of Mean Equivalence: Design Schemes and Software Implementations

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2017-01-01

    Equivalence assessment is becoming an increasingly important topic in many application areas including behavioral and social sciences research. Although there exist more powerful tests, the two one-sided tests (TOST) procedure is a technically transparent and widely accepted method for establishing statistical equivalence. Alternatively, a direct…

  17. Robust Approximations to the Non-Null Distribution of the Product Moment Correlation Coefficient I: The Phi Coefficient.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Meyers, Sarah A.

    Correlation coefficients are frequently reported in educational and psychological research. The robustness properties and optimality among practical approximations when phi does not equal 0 with moderate sample sizes are not well documented. Three major approximations and their variations are examined: (1) a normal approximation of Fisher's Z,…

  18. Communicating to Learn: Infants' Pointing Gestures Result in Optimal Learning

    ERIC Educational Resources Information Center

    Lucca, Kelsey; Wilbourn, Makeba Parramore

    2018-01-01

    Infants' pointing gestures are a critical predictor of early vocabulary size. However, it remains unknown precisely how pointing relates to word learning. The current study addressed this question in a sample of 108 infants, testing one mechanism by which infants' pointing may influence their learning. In Study 1, 18-month-olds, but not…

  19. Minimax Estimation of Functionals of Discrete Distributions

    PubMed Central

    Jiao, Jiantao; Venkat, Kartik; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We propose a general methodology for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions, where the support size S is unknown and may be comparable with or even much larger than the number of observations n. We treat the respective regions where the functional is nonsmooth and smooth separately. In the nonsmooth regime, we apply an unbiased estimator for the best polynomial approximation of the functional whereas, in the smooth regime, we apply a bias-corrected version of the maximum likelihood estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing the performance of the resulting schemes for estimating two important information measures: 1) the entropy H(P) = ∑_{i=1}^{S} −p_i ln p_i and 2) F_α(P) = ∑_{i=1}^{S} p_i^α, α > 0. We obtain the minimax L2 rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity n ≍ S/ln S for entropy estimation. We also demonstrate that the sample complexity for estimating F_α(P), 0 < α < 1, is n ≍ S^{1/α}/ln S, which can be achieved by our estimator but not the MLE. For 1 < α < 3/2, we show the minimax L2 rate for estimating F_α(P) is (n ln n)^{−2(α−1)} for infinite support size, while the maximum L2 rate for the MLE is n^{−2(α−1)}. For all the above cases, the behavior of the minimax rate-optimal estimators with n samples is essentially that of the MLE (plug-in rule) with n ln n samples, which we term "effective sample size enlargement." We highlight the practical advantages of our schemes for the estimation of entropy and mutual information. We compare our performance with various existing approaches, and demonstrate that our approach reduces running time and boosts the accuracy. Moreover, we show that the minimax rate-optimal mutual information estimator yielded by our framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models. The wide use of information measure estimation suggests that the insights and estimators obtained in this paper could be broadly applicable. PMID:29375152
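
    The plug-in (MLE) estimator referred to above simply evaluates the entropy at the empirical distribution. A minimal sketch follows, with the classical Miller-Madow bias correction added only for comparison; the paper's bias-corrected and best-polynomial-approximation estimators are more sophisticated than either.

    ```python
    import numpy as np

    def entropy_mle(samples):
        """Plug-in (MLE) entropy estimate in nats: H(p_hat) = -sum p_i ln p_i."""
        counts = np.bincount(samples)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p))

    def entropy_miller_madow(samples):
        """MLE plus the classical first-order bias correction (S_hat - 1) / (2n),
        where S_hat is the number of distinct symbols observed."""
        n = len(samples)
        s_observed = len(np.unique(samples))
        return entropy_mle(samples) + (s_observed - 1) / (2.0 * n)

    # Uniform distribution on S = 1000 symbols observed with only n = 500 samples
    # (the undersampled regime n < S, where the plug-in estimate is badly biased).
    rng = np.random.default_rng(0)
    x = rng.integers(0, 1000, size=500)
    print(entropy_mle(x), entropy_miller_madow(x), np.log(1000))
    ```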

  20. Agile convolutional neural network for pulmonary nodule classification using CT images.

    PubMed

    Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei

    2018-04-01

    To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
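
    A hedged Keras sketch of a small LeNet/AlexNet-style classifier wired up with the reported optimal hyperparameters (learning rate 0.005, batch size 32, dropout, Gaussian weight initialization). The layer layout, the input patch size, and the 3x3 kernel are placeholders rather than the authors' exact architecture; the optimal kernel size is elided in the abstract ("[Formula: see text]").

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_nodule_cnn(input_shape=(64, 64, 1), kernel_size=(3, 3)):
        """Small CNN for benign-vs-malignant nodule patches (illustrative layout)."""
        init = tf.keras.initializers.RandomNormal(stddev=0.01)     # Gaussian init
        model = models.Sequential([
            tf.keras.Input(shape=input_shape),
            layers.Conv2D(32, kernel_size, activation="relu", kernel_initializer=init),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, kernel_size, activation="relu", kernel_initializer=init),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu", kernel_initializer=init),
            layers.Dropout(0.5),                                    # dropout, as reported
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.005),
                      loss="binary_crossentropy",
                      metrics=["accuracy", tf.keras.metrics.AUC()])
        return model

    model = build_nodule_cnn()
    # model.fit(x_train, y_train, batch_size=32, epochs=50, validation_split=0.1)
    ```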

  1. Salmonella enteritidis surveillance by egg immunology: impact of the sampling scheme on the release of contaminated table eggs.

    PubMed

    Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie

    2011-08-01

    Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we have simulated outbreaks with various sampling schemes, and with the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs each 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.

  2. Applying information theory to small groups assessment: emotions and well-being at work.

    PubMed

    García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano

    2010-05-01

    This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, specifically passenger crew (flight attendants). There is increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, as they can act as facilitators or shock absorbers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since the available sample is small, the most common situation in European enterprises, we used the maximum entropy principle to explore the emotions that are involved in psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees an optimal estimation, with results that are consistent with theoretical models and numerous empirical studies of emotions and well-being.

  3. Headspace Single-Drop Microextraction Gas Chromatography Mass Spectrometry for the Analysis of Volatile Compounds from Herba Asari

    PubMed Central

    Wang, Guan-Jie; Tian, Li; Fan, Yu-Ming; Qi, Mei-Ling

    2013-01-01

    A rapid headspace single-drop microextraction gas chromatography-mass spectrometry (SDME-GC-MS) method for the analysis of the volatile compounds in Herba Asari was developed in this study. The extraction solvent, extraction temperature and time, sample amount, and particle size were optimized. A mixed solvent of n-tridecane and butyl acetate (1 : 1) was finally used for the extraction, with a sample amount of 0.750 g and a 100-mesh particle size, at 70°C for 15 min. Under the determined conditions, pounded samples of Herba Asari were analyzed directly. The results showed that the SDME-GC-MS method is a simple, effective, and inexpensive way to measure the volatile compounds in Herba Asari and could be used for the analysis of volatile compounds in Chinese medicine. PMID:23607049

  4. Variable criteria sequential stopping rule: Validity and power with repeated measures ANOVA, multiple correlation, MANOVA and relation to Chi-square distribution.

    PubMed

    Fitts, Douglas A

    2017-09-21

    The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, the α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.

  5. Hollow Au-Ag Nanoparticles Labeled Immunochromatography Strip for Highly Sensitive Detection of Clenbuterol

    NASA Astrophysics Data System (ADS)

    Wang, Jingyun; Zhang, Lei; Huang, Youju; Dandapat, Anirban; Dai, Liwei; Zhang, Ganggang; Lu, Xuefei; Zhang, Jiawei; Lai, Weihua; Chen, Tao

    2017-01-01

    The probe materials play a significant role in improving the detection efficiency and sensitivity of the lateral-flow immunochromatographic test strip (ICTS). Unlike conventional ICTS assays, which usually use single-component solid gold nanoparticles as labeled probes, in the present study a bimetallic, hollow Au-Ag nanoparticle (NP) labeled ICTS was successfully developed for the detection of clenbuterol (CLE). The hollow Au-Ag NPs with different Au/Ag mole ratios and tunable size were synthesized by varying the volume ratio of [HAuCl4]:[Ag NPs] via the galvanic replacement reaction. The surface of the hollow Au-Ag NPs was functionalized with 11-mercaptoundecanoic acid (MUA) for subsequent covalent bonding with the anti-CLE monoclonal antibody. The overall size of the Au-Ag NPs, the size of the holes within individual NPs, and the Au/Ag mole ratio were systematically optimized to amplify both the visual inspection signals and the quantitative data. The optimized hollow Au-Ag NP probes achieved a sensitivity as low as 2 ppb in a short time (within 15 min), which is superior to the detection performance of a conventional test strip using Au NPs. The optimized hollow Au-Ag NP labeled test strip can be used as an ideal candidate for the rapid screening of CLE in food samples.

  6. New polymorphs of 9-nitro-camptothecin prepared using a supercritical anti-solvent process.

    PubMed

    Huang, Yinxia; Wang, Hongdi; Liu, Guijin; Jiang, Yanbin

    2015-12-30

    Recrystallization and micronization of 9-nitro-camptothecin (9-NC) have been investigated using the supercritical anti-solvent (SAS) technology in this study. Five operating factors, i.e., the type of organic solvent, the concentration of 9-NC in the solution, the flow rate of the 9-NC solution, the precipitation pressure, and the temperature, were optimized using a selected OA16 (4^5) orthogonal array design, and a series of characterizations were performed for all samples. The results showed that the processed 9-NC particles exhibited a smaller particle size and a narrower particle size distribution compared with the 9-NC raw material (Form I), and the optimum micronization conditions for preparing 9-NC with minimum particle size were determined by variance analysis, where the solvent plays the most important role in the formation and transformation of polymorphs. Three new polymorphic forms (Form II, III and IV) of 9-NC, which present different physicochemical properties, were generated after the SAS process. The structures of the 9-NC crystals were predicted from their experimental XRD data by the direct space approach using the Reflex module of Materials Studio, and the predictions were consistent with the experiments. Meanwhile, the optimal sample (Form III) was shown to have higher cytotoxicity against the cancer cells, which suggests that the therapeutic efficacy of 9-NC is polymorph-dependent. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 KeV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways; the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.

  8. Size exclusion chromatography for analyses of fibroin in silk: optimization of sampling and separation conditions

    NASA Astrophysics Data System (ADS)

    Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna

    2014-02-01

    A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and it is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool that would give insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which would otherwise damage chromatographic columns and prevent reproducible analyses. Salt-rich (NaCl) water solutions of fibroin improved the quality of the chromatograms.

  9. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into colonies. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200. Published by Elsevier B.V.
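
    A heuristic sketch of the plate-selection idea only (not the authors' estimator): choose the dilution step whose expected colony count comes closest to a crowding-limited target that scales with the plate-to-colony area ratio. The coverage threshold and all numerical inputs below are assumptions.

    ```python
    def best_dilution_plate(c0, dilution_factor, n_plates, plated_volume_ml,
                            plate_diam_mm=90.0, colony_diam_mm=3.0,
                            max_coverage=0.2):
        """Pick the dilution step whose expected colony count is closest to the
        largest count that still keeps colonies well separated.

        c0 : rough guess of the stock concentration in CFU/mL.
        max_coverage : fraction of the plate area allowed to be covered by colonies.
        """
        area_ratio = (plate_diam_mm / colony_diam_mm) ** 2
        target = max_coverage * area_ratio            # crowding-limited target count
        best_step, best_gap = None, float("inf")
        for k in range(n_plates):
            expected = c0 * plated_volume_ml / dilution_factor ** k
            gap = abs(expected - target)
            if gap < best_gap:
                best_step, best_gap = k, gap
        return best_step, target

    step, target = best_dilution_plate(c0=1e8, dilution_factor=10, n_plates=8,
                                       plated_volume_ml=0.1)
    print(step, round(target))
    ```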

  10. Improved capacitance characteristics of electrospun ACFs by pore size control and vanadium catalyst.

    PubMed

    Im, Ji Sun; Woo, Sang-Wook; Jung, Min-Jung; Lee, Young-Seak

    2008-11-01

    Nano-sized carbon fibers were prepared by electrospinning, and their electrochemical properties were investigated as a possible electrode material for use in an electric double-layer capacitor (EDLC). To improve the electrode capacitance of the EDLC, we implemented a three-step optimization. First, a metal catalyst was introduced into the carbon fibers because of the excellent conductivity of metals. Vanadium pentoxide was used because it could be converted to vanadium for improved conductivity as the pore structure develops during the carbonization step. The vanadium catalyst was well dispersed in the carbon fibers, improving the capacitance of the electrode. Second, pore-size development was manipulated to obtain small mesopore sizes ranging from 2 to 5 nm. Through chemical activation, carbon fibers with controlled pore sizes were prepared with a high specific surface area and pore volume, and their pore structure was investigated using a BET apparatus. Finally, polyacrylonitrile was used as a carbon precursor to enrich the nitrogen content in the final product, because nitrogen is known to improve electrode capacitance. Ultimately, the electrospun activated carbon fibers containing vanadium show improved performance in charge/discharge, cyclic voltammetry, and specific capacitance compared with other samples because of an optimal combination of vanadium, nitrogen, and fixed pore structures.

  11. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

    Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the key variables affecting delivery efficiency, since size is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and the efficiency of cell targeting. In this work, we demonstrate accessible and reliable detection and size distribution measurement employing a glass nanopore device based on the resistive pulse method. This novel method enables us to investigate how the size distribution depends on the pH difference across the vesicle membrane, using very small sample volumes and at rapid speed. This provides useful information for optimizing the efficiency of drug delivery in a pH-sensitive environment.

  12. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provide a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and a life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
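
    The energetic optimal-size argument can be illustrated numerically: if intake scales as a*M^b and cost as c*M^d with b < d, net energy is maximized at M = (a*b/(c*d))^(1/(d−b)). The coefficients and exponents below are invented for illustration only, not values from the review.

    ```python
    import numpy as np

    def energetic_optimum(a=10.0, b=0.67, c=2.0, d=1.0):
        """Net energy = intake - cost, with intake = a*M**b and cost = c*M**d (b < d).
        Returns the body mass M that maximizes net energy over a coarse grid."""
        masses = np.linspace(0.01, 100.0, 10_000)
        net = a * masses ** b - c * masses ** d
        return masses[np.argmax(net)]

    print(energetic_optimum())                                   # grid search, ~39
    print((10.0 * 0.67 / (2.0 * 1.0)) ** (1.0 / (1.0 - 0.67)))   # closed form, ~39
    ```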

  13. Optimizing a Test Method to Evaluate Resistance of Pervious Concrete to Cycles of Freezing and Thawing in the Presence of Different Deicing Salts

    PubMed Central

    Tsang, Chehong; Shehata, Medhat H.; Lotfy, Abdurrahmaan

    2016-01-01

    The lack of a standard test method for evaluating the resistance of pervious concrete to cycles of freezing and thawing in the presence of deicing salts is the motive behind this study. Different sample sizes and geometries, cycle durations, and levels of submersion in brine solutions were investigated to achieve an optimized test method. The optimized test method was able to produce different levels of damage when different types of deicing salts were used. The optimized duration of one cycle was found to be 24 h, with twelve hours of freezing at −18 °C and twelve hours of thawing at +21 °C, and with the bottom 10 mm of the sample submerged in the brine solution. Cylinder samples with a diameter of 100 mm and a height of 150 mm were used and found to produce results similar to those of 150 mm cubes. Based on the obtained results, a mass loss of 3%–5% is proposed as a failure criterion for cylindrical samples. For the materials and within the cycles of freezing/thawing investigated here, the deicers that caused the most damage were NaCl, CaCl2 and urea, followed by MgCl2, potassium acetate, sodium acetate and calcium-magnesium acetate. More testing is needed to validate the effects of different deicers under long-term exposures and different temperature ranges. PMID:28773998

  14. Optimizing adaptive design for Phase 2 dose-finding trials incorporating long-term success and financial considerations: A case study for neuropathic pain.

    PubMed

    Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin

    2017-06-01

    In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize the Net Present Value (NPV) of an experimental drug. We replace the traditional fixed sample size design (Patel et al., 2012) with this new design to see whether the NPV from the original paper can be improved. Comparison of the proposed design to the previous design is made via simulations using a hypothetical example based on a Diabetic Neuropathic Pain Study. Copyright © 2017 Elsevier Inc. All rights reserved.
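
    As a reminder of the financial criterion being optimized, NPV discounts each projected cash flow back to time zero: NPV = Σ_t CF_t/(1 + r)^t. The cash flows and discount rate below are purely illustrative, not taken from the case study.

    ```python
    def npv(cash_flows, discount_rate):
        """Net Present Value: sum of cash flows discounted back to time zero.
        cash_flows[t] is the (possibly negative) cash flow in year t."""
        return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

    # Illustrative drug program: Phase 2/3 spending followed by revenues (in $M).
    flows = [-50, -80, -120, 0, 150, 300, 400, 400, 350]
    print(round(npv(flows, discount_rate=0.10), 1))
    ```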

  15. Detecting recurrence domains of dynamical systems by symbolic dynamics.

    PubMed

    beim Graben, Peter; Hutt, Axel

    2013-04-12

    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots. In phase space, recurrence plots yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
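
    The starting point of the method, the recurrence plot R_ij = 1 if ||x_i − x_j|| < ε, is easy to sketch; the rewriting grammar and the maximum-entropy choice of ε are beyond this snippet, and the toy trajectory is an assumption.

    ```python
    import numpy as np

    def recurrence_matrix(trajectory, eps):
        """Binary recurrence plot: R[i, j] = 1 when states i and j lie within eps."""
        x = np.asarray(trajectory)
        dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return (dists < eps).astype(np.uint8)

    # Toy phase-space trajectory: a noisy circle revisits the same neighbourhoods,
    # producing block/checkerboard structure in the recurrence plot.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 6.0 * np.pi, 300)
    traj = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(300, 2))
    R = recurrence_matrix(traj, eps=0.3)
    print(R.shape, R.mean())    # matrix size and recurrence rate
    ```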

  16. Optimal decision making modeling for copper-matte Peirce-Smith converting process by means of data mining

    NASA Astrophysics Data System (ADS)

    Song, Yanpo; Peng, Xiaoqi; Tang, Ying; Hu, Zhikun

    2013-07-01

    To improve the operation level of the copper converter, an approach to optimal decision-making modeling for the copper-matte converting process based on data mining is studied. In view of the characteristics of the process data, such as noise and small sample size, a new robust improved ANN (artificial neural network) modeling method is proposed. Taking into account the application purpose of the decision-making model, three new evaluation indexes, named support, confidence, and relative confidence, are proposed. Using real production data and the methods mentioned above, an optimal decision-making model for the blowing time of the S1 period (the 1st slag-producing period) is developed. Simulation results show that this model can significantly improve the converting quality of the S1 period, increasing the optimal probability from about 70% to about 85%.

  17. Design of focused and restrained subsets from extremely large virtual libraries.

    PubMed

    Jamois, Eric A; Lin, Chien T; Waldman, Marvin

    2003-11-01

    With the current and ever-growing offering of reagents, along with the vast palette of organic reactions, virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting subsets of practical size for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be impractical for such large libraries. This study describes a new approach, termed on-the-fly optimization (OTFO), in which descriptors are computed as needed within the subset optimization cycle and without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may not only be impractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
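
    A toy illustration of the general idea: select a fixed-size subset by simulated annealing while computing (and caching) descriptors only for products the optimizer actually visits, rather than enumerating the whole virtual library. The descriptor function, scoring criterion, and all parameters are invented stand-ins, not the OTFO implementation.

    ```python
    import math
    import random
    from functools import lru_cache

    LIBRARY_SIZE = 1_000_000            # virtual products, never enumerated up front

    @lru_cache(maxsize=None)
    def descriptor(idx):
        """Stand-in for an on-the-fly descriptor calculation for product idx;
        it is only ever called for products the optimizer actually visits."""
        return random.Random(idx).gauss(400.0, 60.0)   # e.g. a molecular-weight-like value

    def score(subset, target=350.0):
        """Lower is better: mean absolute deviation of the subset from a target value."""
        return sum(abs(descriptor(i) - target) for i in subset) / len(subset)

    def anneal(subset_size=50, steps=5000, t0=10.0, seed=0):
        rng = random.Random(seed)
        current = rng.sample(range(LIBRARY_SIZE), subset_size)
        best, best_score = list(current), score(current)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-9
            candidate = list(current)
            candidate[rng.randrange(subset_size)] = rng.randrange(LIBRARY_SIZE)
            delta = score(candidate) - score(current)
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                current = candidate
                if score(current) < best_score:
                    best, best_score = list(current), score(current)
        return best, best_score

    subset, s = anneal()
    print(len(subset), round(s, 2), "descriptors computed:", descriptor.cache_info().currsize)
    ```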

  18. A low-volume cavity ring-down spectrometer for sample-limited applications

    NASA Astrophysics Data System (ADS)

    Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.

    2014-08-01

    In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.
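
    The quoted effective internal volumes follow from scaling the 9.6 ml geometric cell volume by the ratio of cell pressure to ambient pressure; a quick check of that arithmetic, where the 760 Torr reference pressure is an assumption.

    ```python
    def effective_volume(cell_volume_ml, cell_pressure_torr, ambient_torr=760.0):
        """Volume of gas at ambient pressure needed to fill the cell once."""
        return cell_volume_ml * cell_pressure_torr / ambient_torr

    for p in (140, 45, 20):
        print(p, "Torr ->", round(effective_volume(9.6, p), 2), "ml")
    # 1.77, 0.57 and 0.25 ml, consistent with the 1.8, 0.57 and 0.25 ml quoted above.
    ```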

  19. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Standard NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  20. Effect of plot and sample size on timing and precision of urban forest assessments

    Treesearch

    David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn

    2008-01-01

    Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...

  1. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  2. The Role of Pubertal Timing in What Adolescent Boys Do Online

    ERIC Educational Resources Information Center

    Skoog, Therese; Stattin, Hakan; Kerr, Margaret

    2009-01-01

    The aim of this study was to investigate associations between pubertal timing and boys' Internet use, particularly their viewing of pornography. We used a sample comprising of 97 boys in grade 8 (M age, 14.22 years) from two schools in a medium-sized Swedish town. This age should be optimal for differentiating early, on-time, and later-maturing…

  3. Human breath metabolomics using an optimized noninvasive exhaled breath condensate sampler

    PubMed Central

    Zamuruyev, Konstantin O.; Aksenov, Alexander A.; Pasamontes, Alberto; Brown, Joshua F.; Pettit, Dayna R.; Foutouhi, Soraya; Weimer, Bart C.; Schivo, Michael; Kenyon, Nicholas J.; Delplanque, Jean-Pierre; Davis, Cristina E.

    2017-01-01

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially-available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present a design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The significance of the biological content of breath samples, collected with each device, is evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017). PMID:28004639

  4. Human breath metabolomics using an optimized non-invasive exhaled breath condensate sampler.

    PubMed

    Zamuruyev, Konstantin O; Aksenov, Alexander A; Pasamontes, Alberto; Brown, Joshua F; Pettit, Dayna R; Foutouhi, Soraya; Weimer, Bart C; Schivo, Michael; Kenyon, Nicholas J; Delplanque, Jean-Pierre; Davis, Cristina E

    2016-12-22

    Exhaled breath condensate (EBC) analysis is a developing field with tremendous promise to advance personalized, non-invasive health diagnostics as new analytical instrumentation platforms and detection methods are developed. Multiple commercially-available and researcher-built experimental samplers are reported in the literature. However, there is very limited information available to determine an effective breath sampling approach, especially regarding the dependence of breath sample metabolomic content on the collection device design and sampling methodology. This lack of an optimal standard procedure results in a range of reported results that are sometimes contradictory. Here, we present a design of a portable human EBC sampler optimized for collection and preservation of the rich metabolomic content of breath. The performance of the engineered device is compared to two commercially available breath collection devices: the RTube™ and TurboDECCS. A number of design and performance parameters are considered, including: condenser temperature stability during sampling, collection efficiency, condenser material choice, and saliva contamination in the collected breath samples. The significance of the biological content of breath samples, collected with each device, is evaluated with a set of mass spectrometry methods and was the primary factor for evaluating device performance. The design includes an adjustable mass-size threshold for aerodynamic filtering of saliva droplets from the breath flow. Engineering an inexpensive device that allows efficient collection of metabolomic-rich breath samples is intended to aid further advancement in the field of breath analysis for non-invasive health diagnostics. EBC sampling from human volunteers was performed under UC Davis IRB protocol 63701-3 (09/30/2014-07/07/2017).

  5. Status report: Implementation of gas measurements at the MAMS 14C AMS facility in Mannheim, Germany

    NASA Astrophysics Data System (ADS)

    Hoffmann, Helene; Friedrich, Ronny; Kromer, Bernd; Fahrni, Simon

    2017-11-01

    By implementing a Gas Interface System (GIS), CO2 gas measurements for radiocarbon dating of small environmental samples (<100 μgC) have been established at the MICADAS (Mini Carbon Dating System) AMS instrument in Mannheim, Germany. The system performance has been optimized and tested with respect to stability and ion yield by repeated blank and standard measurements for sample sizes down to 3 μgC. The highest 12C- low-energy (LE) ion currents, typically reaching 8-15 μA, could be achieved for a mixing ratio of 4% CO2 in Helium, resulting in relative counting errors of 1-2% for samples larger than 10 μgC and 3-7% for sample sizes below 10 μgC. The average count rate was ca. 500 counts per microgram C for OxII standard material. The blank is on the order of 35,000-40,000 radiocarbon years, which is comparable to similar systems. The complete setup thus enables reliable dating for most environmental samples (>3 μgC).
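
    As a rough consistency check (an illustration added here, not taken from the record), Poisson counting statistics tie the quoted count rate to the reported relative errors: with about 500 counts per microgram of carbon, a sample of mass m (in μgC) gives roughly

        \sigma_{rel} \approx \frac{1}{\sqrt{N}} \approx \frac{1}{\sqrt{500\,m}}, \qquad \sigma_{rel}(10\ \mu gC) \approx 1.4\%, \qquad \sigma_{rel}(3\ \mu gC) \approx 2.6\%,

    broadly consistent with the 1-2% reported above 10 μgC and the lower end of the 3-7% reported below it (reduced ion yield for the smallest samples plausibly accounts for the remainder).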

  6. Go big or … don't? A field-based diet evaluation of freshwater piscivore and prey fish size relationships

    PubMed Central

    Ahrenstorff, Tyler D.; Diana, James S.; Fetzer, William W.; Jones, Thomas S.; Lawson, Zach J.; McInerny, Michael C.; Santucci, Victor J.; Vander Zanden, M. Jake

    2018-01-01

    Body size governs predator-prey interactions, which in turn structure populations, communities, and food webs. Understanding predator-prey size relationships is valuable from a theoretical perspective, in basic research, and for management applications. However, predator-prey size data are limited and costly to acquire. We quantified predator-prey total length and mass relationships for several freshwater piscivorous taxa: crappie (Pomoxis spp.), largemouth bass (Micropterus salmoides), muskellunge (Esox masquinongy), northern pike (Esox lucius), rock bass (Ambloplites rupestris), smallmouth bass (Micropterus dolomieu), and walleye (Sander vitreus). The range of prey total lengths increased with predator total length. The median and maximum ingested prey total length varied with predator taxon and length, but generally ranged from 10–20% and 32–46% of predator total length, respectively. Predators tended to consume larger fusiform prey than laterally compressed prey. With the exception of large muskellunge, predators most commonly consumed prey between 16 and 73 mm. A sensitivity analysis indicated estimates can be very accurate at sample sizes greater than 1,000 diet items and fairly accurate at sample sizes greater than 100. However, sample sizes less than 50 should be evaluated with caution. Furthermore, median log10 predator-prey body mass ratios ranged from 1.9–2.5, nearly 50% lower than values previously reported for freshwater fishes. Managers, researchers, and modelers could use our findings as a tool for numerous predator-prey evaluations from stocking size optimization to individual-based bioenergetics analyses identifying prey size structure. To this end, we have developed a web-based user interface to maximize the utility of our models that can be found at www.LakeEcologyLab.org/pred_prey. PMID:29543856

  7. Go big or … don't? A field-based diet evaluation of freshwater piscivore and prey fish size relationships.

    PubMed

    Gaeta, Jereme W; Ahrenstorff, Tyler D; Diana, James S; Fetzer, William W; Jones, Thomas S; Lawson, Zach J; McInerny, Michael C; Santucci, Victor J; Vander Zanden, M Jake

    2018-01-01

    Body size governs predator-prey interactions, which in turn structure populations, communities, and food webs. Understanding predator-prey size relationships is valuable from a theoretical perspective, in basic research, and for management applications. However, predator-prey size data are limited and costly to acquire. We quantified predator-prey total length and mass relationships for several freshwater piscivorous taxa: crappie (Pomoxis spp.), largemouth bass (Micropterus salmoides), muskellunge (Esox masquinongy), northern pike (Esox lucius), rock bass (Ambloplites rupestris), smallmouth bass (Micropterus dolomieu), and walleye (Sander vitreus). The range of prey total lengths increased with predator total length. The median and maximum ingested prey total length varied with predator taxon and length, but generally ranged from 10-20% and 32-46% of predator total length, respectively. Predators tended to consume larger fusiform prey than laterally compressed prey. With the exception of large muskellunge, predators most commonly consumed prey between 16 and 73 mm. A sensitivity analysis indicated estimates can be very accurate at sample sizes greater than 1,000 diet items and fairly accurate at sample sizes greater than 100. However, sample sizes less than 50 should be evaluated with caution. Furthermore, median log10 predator-prey body mass ratios ranged from 1.9-2.5, nearly 50% lower than values previously reported for freshwater fishes. Managers, researchers, and modelers could use our findings as a tool for numerous predator-prey evaluations from stocking size optimization to individual-based bioenergetics analyses identifying prey size structure. To this end, we have developed a web-based user interface to maximize the utility of our models that can be found at www.LakeEcologyLab.org/pred_prey.

  8. Optimizing cyanobacteria growth conditions in a sealed environment to enable chemical inhibition tests with volatile chemicals.

    PubMed

    Johnson, Tylor J; Zahler, Jacob D; Baldwin, Emily L; Zhou, Ruanbao; Gibbons, William R

    2016-07-01

    Cyanobacteria are currently being engineered to photosynthetically produce next-generation biofuels and high-value chemicals. Many of these chemicals are highly toxic to cyanobacteria, thus strains with increased tolerance need to be developed. The volatility of these chemicals may necessitate that experiments be conducted in a sealed environment to maintain chemical concentrations. Therefore, carbon sources such as NaHCO3 must be used for supporting cyanobacterial growth instead of CO2 sparging. The primary goal of this study was to determine the optimal initial concentration of NaHCO3 for use in growth trials, as well as if daily supplementation of NaHCO3 would allow for increased growth. The secondary goal was to determine the most accurate method to assess growth of Anabaena sp. PCC 7120 in a sealed environment with low biomass titers and small sample volumes. An initial concentration of 0.5g/L NaHCO3 was found to be optimal for cyanobacteria growth, and fed-batch additions of NaHCO3 marginally improved growth. A separate study determined that a sealed test tube environment is necessary to maintain stable titers of volatile chemicals in solution. This study also showed that a SYTO® 9 fluorescence-based assay for cell viability was superior for monitoring filamentous cyanobacterial growth compared to absorbance, chlorophyll α (chl a) content, and biomass content due to its accuracy, small sampling size (100μL), and high throughput capabilities. Therefore, in future chemical inhibition trials, it is recommended that 0.5g/L NaHCO3 is used as the carbon source, and that culture viability is monitored via the SYTO® 9 fluorescence-based assay that requires minimum sample size. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Improving Classification of Cancer and Mining Biomarkers from Gene Expression Profiles Using Hybrid Optimization Algorithms and Fuzzy Support Vector Machine

    PubMed Central

    Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud

    2018-01-01

    Background: Gene expression data are characteristically high dimensional with a small sample size in contrast to the feature size and variability inherent in biological processes that contribute to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and decrease the outlier sensitivity of the system to increase the ability to generalize the classifier. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the abilities of the algorithm by finding the best parameters for the classifier during the training phase without the need for trial-and-error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results have been demonstrated for the proposed algorithm. The classification accuracy for leukemia data is 100%, for colon cancer is 96.67% and for breast cancer is 98%. The results show that the best kernel used in training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier with no user interface. PMID:29535919
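
    The wrapper idea at the core of this record can be sketched in a few lines. The following is a simplified, hypothetical illustration (genetic-algorithm gene selection scored by an RBF-SVM) that omits the particle swarm component, the fuzzy sample weighting, and the decision-tree rule extraction described above, and uses synthetic data in place of the benchmark expression profiles:

        # Minimal GA-based gene (feature) selection scored by an RBF-SVM,
        # a simplified sketch of the wrapper approach described above.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def fitness(mask, X, y):
            # Fitness = cross-validated accuracy on the selected gene subset,
            # lightly penalized by subset size to favour compact signatures.
            if mask.sum() == 0:
                return 0.0
            acc = cross_val_score(SVC(kernel="rbf", C=1.0), X[:, mask], y, cv=3).mean()
            return acc - 0.001 * mask.sum()

        def ga_select(X, y, pop_size=30, n_gen=20, p_mut=0.01):
            n_genes = X.shape[1]
            pop = rng.random((pop_size, n_genes)) < 0.05       # sparse initial subsets
            for _ in range(n_gen):
                scores = np.array([fitness(ind, X, y) for ind in pop])
                parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_genes)             # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child ^= rng.random(n_genes) < p_mut       # bit-flip mutation
                    children.append(child)
                pop = np.vstack([parents, children])
            scores = np.array([fitness(ind, X, y) for ind in pop])
            return pop[int(np.argmax(scores))]

        # Synthetic data standing in for an expression matrix (60 samples x 200 genes).
        X = rng.normal(size=(60, 200))
        y = (X[:, 3] + X[:, 17] > 0).astype(int)               # two informative "genes"
        best_mask = ga_select(X, y)
        print("selected genes:", np.flatnonzero(best_mask))

    The size penalty in the fitness function is one simple way to bias the search toward compact gene signatures; the published method instead tunes classifier parameters and sample weights jointly during training.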

  10. Tensile behavior of porous scaffolds made from poly(para phenylene) - biomed 2013.

    PubMed

    Dirienzo, Amy L; Yakacki, Christopher M; Safranski, David L; Frick, Carl P

    2013-01-01

    The goal of this study was to fabricate and mechanically characterize a high-strength porous polymer scaffold for potential use as an orthopedic device. Poly(para-phenylene) (PPP) is an excellent candidate due to its exceptional strength and stiffness and relative inertness, but has never been explicitly investigated for use as a biomedical device. PPP has strength values 3 to 10 times higher and an elastic modulus nearly an order of magnitude higher than traditional polymers such as poly(methyl methacrylate) (PMMA), polycaprolactone (PCL), ultra-high molecular weight polyethylene (UHMWPE), and polyurethane (PU) and is significantly stronger and stiffer than polyetheretherketone (PEEK). By utilizing PPP we can overcome the mechanical limitations of traditional porous polymeric scaffolds since the outstanding stiffness of PPP allows for a highly porous structure appropriate for osteointegration that can match the stiffness of bone (100-250 MPa), while maintaining suitable mechanical properties for soft-tissue fixation. Porous samples were manufactured by powder sintering followed by particle leaching. The pore volume fraction was systematically varied from 50–80 vol% for a pore sizes from150-500 µm, as indicated by previous studies for optimal osteointegration. The tensile modulus of the porous samples was compared to the rule of mixtures, and closely matches foam theory up to 70 vol%. The experimental modulus for 70 vol% porous samples matches the stiffness of bone and contains pore sizes optimal for osteointegration.

  11. Electrofishing effort requirements for estimating species richness in the Kootenai River, Idaho

    USGS Publications Warehouse

    Watkins, Carson J.; Quist, Michael C.; Shepard, Bradley B.; Ireland, Susan C.

    2016-01-01

    This study was conducted on the Kootenai River, Idaho to provide insight on sampling requirements to optimize future monitoring efforts associated with the response of fish assemblages to habitat rehabilitation. Our objective was to define the electrofishing effort (m) needed to have a 95% probability of sampling 50, 75, and 100% of the observed species richness and to evaluate the relative influence of depth, velocity, and instream woody cover on sample size requirements. Side-channel habitats required more sampling effort to achieve 75 and 100% of the total species richness than main-channel habitats. The sampling effort required to have a 95% probability of sampling 100% of the species richness was 1100 m for main-channel sites and 1400 m for side-channel sites. We hypothesized that the difference in sampling requirements between main- and side-channel habitats was largely due to differences in habitat characteristics and species richness between main- and side-channel habitats. In general, main-channel habitats had lower species richness than side-channel habitats. Habitat characteristics (i.e., depth, current velocity, and woody instream cover) were not related to sample size requirements. Our guidelines will improve sampling efficiency during monitoring efforts in the Kootenai River and provide insight on sampling designs for other large western river systems where electrofishing is used to assess fish assemblages.

  12. Sample types applied for molecular diagnosis of therapeutic management of advanced non-small cell lung cancer in the precision medicine.

    PubMed

    Han, Yanxi; Li, Jinming

    2017-10-26

    In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternatives when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to the disparity of sampling, preparation and preservation procedures, inter- or intratumor heterogeneity, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them individually offers a "one-size-fits-all" solution. Therefore, in this review, we summarized the strengths and weaknesses of different sample types that are widely used in clinical practice, offered solutions to reduce the negative impact of the samples and proposed an optimized strategy for choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and formulate individually tailored treatment plans for NSCLC patients in advanced stages.

  13. Construction of Core Collections Suitable for Association Mapping to Optimize Use of Mediterranean Olive (Olea europaea L.) Genetic Resources

    PubMed Central

    El Bakkali, Ahmed; Haouane, Hicham; Moukhli, Abdelmajid; Costes, Evelyne; Van Damme, Patrick; Khadari, Bouchaib

    2013-01-01

    Phenotypic characterisation of germplasm collections is a decisive step towards association mapping analyses, but it is particularly expensive and tedious for woody perennial plant species. Characterisation could be more efficient if focused on a reasonably sized subset of accessions, or so-called core collection (CC), reflecting the geographic origin and variability of the germplasm. The questions that arise concern the sample size to use and genetic parameters that should be optimized in a core collection to make it suitable for association mapping. Here we investigated these questions in olive (Olea europaea L.), a perennial fruit species. By testing different sampling methods and sizes in a worldwide olive germplasm bank (OWGB Marrakech, Morocco) containing 502 unique genotypes characterized by nuclear and plastid loci, a two-step sampling method was proposed. The Shannon-Weaver diversity index was found to be the best criterion to be maximized in the first step using the Core Hunter program. A primary core collection of 50 entries (CC50) was defined that captured more than 80% of the diversity. The latter was subsequently used as a kernel with the Mstrat program to capture the remaining diversity. A total of 200 core collections of 94 entries (CC94) were thus built for flexibility in the choice of varieties to be studied. Most entries of both core collections (CC50 and CC94) were revealed to be unrelated due to the low kinship coefficient, whereas a genetic structure spanning the eastern and western/central Mediterranean regions was noted. Linkage disequilibrium was observed in CC94 which was mainly explained by a genetic structure effect as noted for OWGB Marrakech. Since they reflect the geographic origin and diversity of olive germplasm and are of reasonable size, both core collections will be of major interest to develop long-term association studies and thus enhance genomic selection in olive species. PMID:23667437
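
    For reference, the Shannon-Weaver diversity index maximized in the first sampling step is the usual entropy-type measure over allele (or genotype) frequencies p_i at a locus, typically averaged over loci:

        H' = -\sum_i p_i \ln p_i .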

  14. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  15. Effects of particle size on magnetostrictive properties of magnetostrictive composites with low particulate volume fraction

    NASA Astrophysics Data System (ADS)

    Dong, Xufeng; Guan, Xinchun; Ou, Jinping

    2009-03-01

    Over the past ten years, several investigations have examined the effects of particle size on the magnetostrictive properties of polymer-bonded Terfenol-D composites, but they did not reach agreement. To resolve this conflict, Terfenol-D/unsaturated polyester resin composite samples were prepared from Tb0.3Dy0.7Fe2 powder at 20% volume fraction in six particle-size ranges (30-53, 53-150, 150-300, 300-450, 450-500 and 30-500μm), and their magnetostrictive properties were tested. The results indicate that the 53-150μm distribution presents the largest static and dynamic magnetostriction among the five monodispersed distributions, but the 30-500μm (polydispersed) distribution shows an even larger response than the 53-150μm distribution. This indicates that particle size acts as a double-edged sword for the magnetostrictive properties of magnetostrictive composites. The existence of an optimal particle size for preparing polymer-bonded Terfenol-D, whose composition is Tb0.3Dy0.7Fe2, results from the competition between the positive and negative effects of increasing particle size. At small particle sizes, the voids and the demagnetization effect decrease significantly with increasing particle size, leading to an increase of magnetostriction; at larger particle sizes, the percentage of single-crystal particles and the packing density become increasingly smaller with increasing particle size, resulting in a decrease of magnetostriction. The reason why previous studies obtained different results is also analyzed.

  16. Some comments on Anderson and Pospahala's correction of bias in line transect sampling

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Chain, B.R.

    1980-01-01

    ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.

  17. In Vitro Cell Proliferation and Mechanical Behaviors Observed in Porous Zirconia Ceramics

    PubMed Central

    Li, Jing; Wang, Xiaobei; Lin, Yuanhua; Deng, Xuliang; Li, Ming; Nan, Cewen

    2016-01-01

    Zirconia ceramics with porous structure have been prepared by solid-state reaction using yttria-stabilized zirconia and stearic acid powders. Analysis of its microstructure and phase composition revealed that a pure zirconia phase can be obtained. Our results indicated that its porosity and pore size as well as the mechanical characteristics can be tuned by changing the content of stearic acid powder. The optimal porosity and pore size of zirconia ceramic samples can be effective for the increase of surface roughness, which results in higher cell proliferation values without destroying the mechanical properties. PMID:28773341

  18. Efficient Bayesian mixed model analysis increases association power in large cohorts

    PubMed Central

    Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L

    2014-01-01

    Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts, and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633

  19. Experimental investigation of amount of nano-Al2O3 on mechanical properties of Al-based nano-composites fabricated by powder metallurgy (PM)

    NASA Astrophysics Data System (ADS)

    Razzaqi, A.; Liaghat, Gh.; Razmkhah, O.

    2017-10-01

    In this paper, the mechanical properties of aluminum (Al) matrix nano-composites fabricated by the powder metallurgy (PM) method have been investigated. Alumina (Al2O3) nanoparticles were added in amounts of 0, 2.5, 5, 7.5 and 10 weight percent (wt%). For this purpose, Al powder (particle size: 20 µm) and nano-Al2O3 (particle size: 20 nm) in various weight percentages were mixed and milled in a blade mixer for 15 minutes at 1500 rpm. The obtained mixture was then compacted using a two-piece die under uniaxial cold pressing at about 600 MPa and cold isostatic pressing (CIP), as required for the different tests. After that, the samples were sintered at 600°C for 90 minutes. Compression and three-point bending tests were performed on the samples, and the results allowed us to determine the optimized nano-Al2O3 content for achieving the best mechanical properties.

  20. Optimizing use of the structural chemical analyser (variable pressure FESEM-EDX Raman spectroscopy) on micro-size complex historical paintings characterization.

    PubMed

    Guerra, I; Cardell, C

    2015-10-01

    The novel Structural Chemical Analyser (hyphenated Raman spectroscopy and scanning electron microscopy equipped with an X-ray detector) is gaining popularity since it allows 3-D morphological studies and elemental, molecular, structural and electronic analyses of a single complex micro-sized sample without transfer between instruments. However, its full potential remains unexploited in painting heritage, where simultaneous identification of inorganic and organic materials in paintings is critical yet unresolved. Despite the benefits and drawbacks shown in the literature, new challenges have to be faced in analysing multifaceted paint specimens. SEM-Structural Chemical Analyser systems differ since they are fabricated ad hoc by request. As configuration influences the procedure to optimize analyses, analytical protocols likewise have to be designed ad hoc. This paper deals with the optimization of the analytical procedure of a Variable Pressure Field Emission scanning electron microscope equipped with an X-ray detector and a Raman spectroscopy system to analyse historical paint samples. We address essential parameters, technical challenges and limitations arising from analysing paint stratigraphies, archaeological samples and loose pigments. We show that accurate data interpretation requires comprehensive knowledge of factors affecting Raman spectra. We tackled: (i) the in-FESEM-Raman spectroscopy analytical sequence, (ii) correlations between FESEM and Structural Chemical Analyser/laser analytical position, (iii) Raman signal intensity under different VP-FESEM vacuum modes, (iv) carbon deposition on samples under FESEM low-vacuum mode, (v) crystal nature and morphology, (vi) depth of focus and (vii) surface-enhanced Raman scattering effect. We recommend careful planning of analysis strategies prior to research which, although time consuming, guarantees reliable results. The ultimate goal of this paper is to help to guide future users of a FESEM-Structural Chemical Analyser system in order to increase applications. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  1. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    PubMed

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

    Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority of one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect of sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction of real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, although the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.
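
    A minimal sketch of this kind of comparison (synthetic data standing in for the 243 patients, and plain classification error only, since the B and Q indices are not defined in this record):

        # Compare logistic regression and LDA by cross-validated classification error
        # across several randomly drawn sample sizes (illustrative only).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=243, n_features=8, n_informative=5,
                                   random_state=1)   # stand-in for the patient data
        rng = np.random.default_rng(1)

        for n in (50, 100, 150, 200, 220):
            idx = rng.choice(len(y), size=n, replace=False)
            for name, model in (("LR", LogisticRegression(max_iter=1000)),
                                ("LDA", LinearDiscriminantAnalysis())):
                acc = cross_val_score(model, X[idx], y[idx], cv=5).mean()
                print(f"n={n:3d}  {name:3s}  classification error = {1 - acc:.3f}")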

  2. Size optimization for complex permeability measurement of magnetic thin films using a short-circuited microstrip line up to 30 GHz

    NASA Astrophysics Data System (ADS)

    Takeda, Shigeru; Naoe, Masayuki

    2018-03-01

    High-frequency permeability spectra of magnetic films were measured over a wideband frequency range of 0.1-30 GHz using a shielded and short-circuited microstrip line jig. In this measurement, spurious resonances had to be suppressed up to the highest frequency. To suppress these resonances, characteristic impedance of the microstrip line should approach 50 Ω at the junction between connector and microstrip line. The main factors dominating these resonances were structures of the jig and the sample. The dimensions were optimized in various experiments, and results demonstrated that the frequency could be raised to at least 20 GHz. For the transverse electromagnetic mode to transmit stably along the microstrip line, the preferred sample was rectangular, with the shorter side parallel to the line and the longer side perpendicular to it, and characteristic impedance strongly depended on the signal line width of the jig. However, too small a jig and sample led to a lower S/N ratio.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, Erik; Trolinger, James D.; Lacey, Ian

    This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, data acquisition and analysis procedure to return spatial frequency based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  4. Bayesian Phase II optimization for time-to-event data based on historical information.

    PubMed

    Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard

    2017-01-01

    After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
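
    For the exponential event-time model mentioned above, the group-specific conjugate update has the standard gamma form (stated generically here, not as the authors' exact prior choice): with hazard \lambda \sim \mathrm{Gamma}(a, b), observing d events over a total follow-up time T in a group gives

        \lambda \mid \text{data} \sim \mathrm{Gamma}(a + d,\; b + T),

    and the posterior of the hazard ratio between arms follows from the two independent group posteriors; for the control arm, a and b would encode the meta-analytic-predictive prior.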

  5. An innovative method for analysis of Pb (II) in rice, milk and water samples based on TiO2 reinforced caprylic acid hollow fiber solid/liquid phase microextraction.

    PubMed

    Bahar, Shahriyar; Es'haghi, Zarrin; Nezhadali, Azizollah; Banaei, Alireza; Bohlooli, Shahab

    2017-04-15

    In the present study, nano-sized titanium oxides were applied for preconcentration and determination of Pb(II) in aqueous samples using hollow fiber based solid-liquid phase microextraction (HF-SLPME) combined with flame atomic absorption spectrometry (FAAS). In this work, the nanoparticles dispersed in caprylic acid as the extraction solvent were placed into a porous polypropylene hollow fiber segment by capillary forces and sonication. This membrane was in direct contact with solutions containing Pb(II). The experimental conditions affecting the extraction, such as pH, stirring rate, sample volume, and extraction time, were optimized. Under the optimal conditions, the performance of the proposed method was investigated for the determination of Pb(II) in food and water samples. The method was linear in the range of 0.6-3000 μg mL-1. The relative standard deviation and relative recovery of Pb(II) were 4.9% and 99.3%, respectively (n=5). Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Frequency optimization in the eddy current test for high purity niobium

    NASA Astrophysics Data System (ADS)

    Joung, Mijoung; Jung, Yoochul; Kim, Hyungjin

    2017-01-01

    The eddy current test (ECT) is frequently used as a non-destructive method to check for defects in high purity niobium (RRR300, where RRR is the Residual Resistivity Ratio) used in superconducting radio frequency (SRF) cavities. Determining an optimal frequency corresponding to the specific material properties and probe specification is a very important step. ECT experiments on high purity Nb were performed to determine the optimal frequency using a standard sample of high purity Nb containing artificial defects. The target depth was chosen in view of the treatment steps that the niobium receives as the SRF cavity material. The results were analysed in terms of the selectivity of the response, which depends on the size of the defects. According to the results, the optimal frequency was determined to be 200 kHz, and a few features of the ECT for high purity Nb were observed.

  7. Optimizing concentration of shifter additive for plastic scintillators of different size

    NASA Astrophysics Data System (ADS)

    Adadurov, A. F.; Zhmurin, P. N.; Lebedev, V. N.; Titskaya, V. D.

    2009-02-01

    This paper concerns the influence of the wavelength-shifting (secondary) luminescent additive (LA 2) on the light yield of polystyrene-based plastic scintillator (PS), taking self-absorption into account. Calculations of the light yield dependence on the concentration of 1,4-bis(2-(5-phenyloxazolyl))-benzene (POPOP) as LA 2 were made for various path lengths of photons in PS. It is shown that there is an optimal POPOP concentration (Copt), which provides a maximum light yield for a given path length. This optimal concentration is determined by the competition between luminescence and self-absorption processes. Copt values were calculated for PS of different dimensions. For small PS, Copt≈0.02%, which agrees with the common (standard) value of POPOP concentration. For larger PS dimensions, the optimal POPOP concentration decreases (to Copt≈0.006% for a 320×30×2 cm sample), reducing the light yield from PS by almost 35%.

  8. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.

  9. Analytical dual-energy microtomography: A new method for obtaining three-dimensional mineral phase images and its application to Hayabusa samples

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, A.; Nakano, T.; Uesugi, K.; Uesugi, M.; Takeuchi, A.; Suzuki, Y.; Noguchi, R.; Matsumoto, T.; Matsuno, J.; Nagano, T.; Imai, Y.; Nakamura, T.; Ogami, T.; Noguchi, T.; Abe, M.; Yada, T.; Fujimura, A.

    2013-09-01

    We developed a novel technique called "analytical dual-energy microtomography" that uses the linear attenuation coefficients (LACs) of minerals at two different X-ray energies to nondestructively obtain three-dimensional (3D) images of mineral distribution in materials such as rock specimens. The two energies are above and below the absorption edge energy of an abundant element, which we call the "index element". The chemical compositions of minerals forming solid solution series can also be measured. The optimal size of a sample is of the order of the inverse of the LAC values at the X-ray energies used. We used synchrotron-based microtomography with an effective spatial resolution of >200 nm to apply this method to small particles (30-180 μm) collected from the surface of asteroid 25143 Itokawa by the Hayabusa mission of the Japan Aerospace Exploration Agency (JAXA). A 3D distribution of the minerals was successively obtained by imaging the samples at X-ray energies of 7 and 8 keV, using Fe as the index element (the K-absorption edge of Fe is 7.11 keV). The optimal sample size in this case is of the order of 50 μm. The chemical compositions of the minerals, including the Fe/Mg ratios of ferromagnesian minerals and the Na/Ca ratios of plagioclase, were measured. This new method is potentially applicable to other small samples such as cosmic dust, lunar regolith, cometary dust (recovered by the Stardust mission of the National Aeronautics and Space Administration [NASA]), and samples from extraterrestrial bodies (those from future sample return missions such as the JAXA Hayabusa2 mission and the NASA OSIRIS-REx mission), although limitations exist for unequilibrated samples. Further, this technique is generally suited for studying materials in multicomponent systems with multiple phases across several research fields.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, C; Kamal, H

    Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.

  11. Time and expected value of sample information wait for no patient.

    PubMed

    Eckermann, Simon; Willan, Andrew R

    2008-01-01

    The expected value of sample information (EVSI) from prospective trials has previously been modeled as the product of EVSI per patient, and the number of patients across the relevant time horizon less those "used up" in trials. However, this implicitly assumes the eligible patient population to which information from a trial can be applied across a time horizon are independent of time for trial accrual, follow-up and analysis. This article demonstrates that in calculating the EVSI of a trial, the number of patients who benefit from trial information should be reduced by those treated outside as well as within the trial over the time until trial evidence is updated, including time for accrual, follow-up and analysis. Accounting for time is shown to reduce the eligible patient population: 1) independent of the size of trial in allowing for time of follow-up and analysis, and 2) dependent on the size of trial for time of accrual, where the patient accrual rate is less than incidence. Consequently, the EVSI and expected net gain (ENG) at any given trial size are shown to be lower when accounting for time, with lower ENG reinforced in the case of trials undertaken while delaying decisions by additional opportunity costs of time. Appropriately accounting for time reduces the EVSI of trial design and increase opportunity costs of trials undertaken with delay, leading to lower likelihood of trialing being optimal and smaller trial designs where optimal.
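
    To make the time adjustment concrete, one illustrative formulation (our notation, not the authors'): with incidence k patients per year over a horizon of T years, a trial of n patients accrued at rate a ≤ k, and follow-up plus analysis time t_f, the population able to act on the trial information is roughly

        N_{\text{eff}} \approx k\left(T - \frac{n}{a} - t_f\right),

    rather than the kT - n implied when time is ignored; the accrual term n/a depends on trial size while t_f does not, matching points (1) and (2) above.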

  12. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
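
    For orientation, the standard single-copy and asymptotic results referred to above are (not restated from the paper itself): with prior probabilities p and 1-p, the minimum error probability is the Helstrom bound, and in the symmetric setting the n-copy error decays at the quantum Chernoff rate,

        P_e^{(1)} = \tfrac{1}{2}\left(1 - \left\lVert p\rho - (1-p)\sigma \right\rVert_1\right), \qquad
        P_e^{(n)} \sim e^{-n\,\xi_{\mathrm{QCB}}}, \quad
        \xi_{\mathrm{QCB}} = -\log \min_{0 \le s \le 1} \operatorname{Tr}\left(\rho^{s}\sigma^{1-s}\right),

    while the finite-size bounds derived in the record quantify how quickly these asymptotics become relevant.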

  13. Ortho Image and DTM Generation with Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Artificial intelligence algorithms are now widely considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimizing image processing tasks such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 imagery was tested with rational functions and 2D & 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimizing rational functions and 2D & 3D polynomials. Considering the quality of Ground Control Points, the accuracy (RMSE) with the genetic algorithm and 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the GA algorithm and rational function method for the Worldview-2 image was 0.930 pixels. As a further artificial intelligence optimization method, neural networks were used. Using a perceptron network on the Worldview-2 image, an accuracy of 0.84 pixels was obtained with 4 neurons in the hidden layer. The final conclusion was that artificial intelligence algorithms make it possible to optimize existing models and obtain better results than conventional ones. Finally, artificial intelligence methods such as genetic algorithms and neural networks were examined on sample data for optimizing interpolation and generating Digital Terrain Models. The results were compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks to interpolate and optimize inverse-distance weighting leads to highly accurate height estimation.

  14. PubChem3D: Conformer generation

    PubMed Central

    2011-01-01

    Background PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigate the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands, whose 3-D structures were experimentally determined. Different values for primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value to a "bioactive" conformation. Conclusion Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value to use that is based on the relative accuracy to reproduce bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340

  15. Performance of Identifiler Direct and PowerPlex 16 HS on the Applied Biosystems 3730 DNA Analyzer for processing biological samples archived on FTA cards.

    PubMed

    Laurin, Nancy; DeMoors, Anick; Frégeau, Chantal

    2012-09-01

    Direct amplification of STR loci from biological samples collected on FTA cards without prior DNA purification was evaluated using Identifiler Direct and PowerPlex 16 HS in conjunction with the use of a high throughput Applied Biosystems 3730 DNA Analyzer. In order to reduce the overall sample processing cost, reduced PCR volumes combined with various FTA disk sizes were tested. Optimized STR profiles were obtained using a 0.53 mm disk size in 10 μL PCR volume for both STR systems. These protocols proved effective in generating high quality profiles on the 3730 DNA Analyzer from both blood and buccal FTA samples. Reproducibility, concordance, robustness, sample stability and profile quality were assessed using a collection of blood and buccal samples on FTA cards from volunteer donors as well as from convicted offenders. The new developed protocols offer enhanced throughput capability and cost effectiveness without compromising the robustness and quality of the STR profiles obtained. These results support the use of these protocols for processing convicted offender samples submitted to the National DNA Data Bank of Canada. Similar protocols could be applied to the processing of casework reference samples or in paternity or family relationship testing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Structure and properties of clinical coralline implants measured via 3D imaging and analysis.

    PubMed

    Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis

    2006-05-01

    The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 microm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.

  17. Optimal planning and design of a renewable energy based supply system for microgrids

    DOE PAGES

    Hafez, Omar; Bhattacharya, Kankar

    2012-03-03

    This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, an evaluation is performed to assess the electrical performance of the distribution circuit if the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DER within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.

  18. An efficient one-step condensation and activation strategy to synthesize porous carbons with optimal micropore sizes for highly selective CO₂ adsorption.

    PubMed

    Wang, Jiacheng; Liu, Qian

    2014-04-21

    A series of microporous carbons (MPCs) were successfully prepared by an efficient one-step condensation and activation strategy using commercially available dialdehyde and diamine as carbon sources. The resulting MPCs have large surface areas (up to 1881 m(2) g(-1)), micropore volumes (up to 0.78 cm(3) g(-1)), and narrow micropore size distributions (0.7-1.1 nm). The CO₂ uptakes of the MPCs prepared at high temperatures (700-750 °C) are higher than those prepared under mild conditions (600-650 °C), because the former samples possess optimal micropore sizes (0.7-0.8 nm) that are highly suitable for CO₂ capture due to enhanced adsorbate-adsorbent interactions. At 1 bar, MPC-750 prepared at 750 °C demonstrates the best CO₂ capture performance and can efficiently adsorb CO₂ molecules at 2.86 mmol g(-1) and 4.92 mmol g(-1) at 25 and 0 °C, respectively. In particular, the MPCs with optimal micropore sizes (0.7-0.8 nm) have extremely high CO₂/N₂ adsorption ratios (47 and 52 at 25 and 0 °C, respectively) at 1 bar, and initial CO₂/N₂ adsorption selectivities of up to 81 and 119 at 25 °C and 0 °C, respectively, which are far superior to previously reported values for various porous solids. These excellent results, combined with good adsorption capacities and efficient regeneration/recyclability, make these carbons amongst the most promising sorbents reported so far for selective CO₂ adsorption in practical applications.

  19. Optimal Wavelength Selection on Hyperspectral Data with Fused Lasso for Biomass Estimation of Tropical Rain Forest

    NASA Astrophysics Data System (ADS)

    Takayama, T.; Iwasaki, A.

    2016-06-01

    Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly appears as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio because of narrow bandwidths, and exhibit local or global peak shifts caused by instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity-driven dimensionality reduction addresses the small-sample-size problem, and the grouping mitigates the noise and peak-shift problems. In cross-validation, the prediction model achieved higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha, than multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusing spectral information with spatial information derived from a texture index increased the prediction accuracy to an RMSE of 62.62 t/ha. This analysis demonstrates the effectiveness of fused lasso and image texture in biomass estimation of tropical forests.
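
    As a rough sketch of how fused-lasso band selection might look (not the authors' implementation), the following example solves the fused lasso with cvxpy on synthetic spectra. The penalty weights lam1/lam2, the simulated band structure, and the sample sizes are placeholders that would need cross-validation and real field data.

    ```python
    import numpy as np
    import cvxpy as cp

    # Hypothetical data: n field plots x p hyperspectral bands, y = biomass (t/ha, centred).
    rng = np.random.default_rng(0)
    n, p = 50, 200
    X = rng.normal(size=(n, p))
    beta_true = np.zeros(p)
    beta_true[60:70] = 0.5                              # one contiguous block of informative bands
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    beta = cp.Variable(p)
    lam1, lam2 = 0.1, 0.5                               # sparsity and fusion weights (tune by CV)
    obj = cp.Minimize(cp.sum_squares(y - X @ beta) / (2 * n)
                      + lam1 * cp.norm1(beta)           # lasso term: drops uninformative bands
                      + lam2 * cp.norm1(cp.diff(beta))) # fusion term: groups adjacent bands
    cp.Problem(obj).solve()
    selected = np.flatnonzero(np.abs(beta.value) > 1e-4)
    print("selected band indices:", selected)
    ```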

  20. An optimal sample data usage strategy to minimize overfitting and underfitting effects in regression tree models based on remotely-sensed data

    USGS Publications Warehouse

    Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Picotte, Joshua J.; Howard, Danny; Smith, Kelcy; Nelson, Kurtis

    2016-01-01

    Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
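
    A minimal sketch of the replication idea described above is given below, assuming a generic decision tree (scikit-learn's DecisionTreeRegressor with a cap on leaf nodes) as a stand-in for the rule-based regression tree used in the study, and synthetic NDVI-like data in place of the Landsat/MODIS inputs.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(1000, 6))                                          # stand-in predictors
    y = 100 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(scale=3, size=1000)   # scaled-NDVI-like target

    def mad(a, b):
        return np.mean(np.abs(a - b))

    for frac in (0.6, 0.7, 0.8, 0.9):                  # share of samples used for model development
        for n_rules in (2, 4, 6, 8, 12):               # proxy for the number of rules
            scores = []
            for rep in range(20):                      # randomized replications
                Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac, random_state=rep)
                model = DecisionTreeRegressor(max_leaf_nodes=n_rules, random_state=rep).fit(Xtr, ytr)
                scores.append((mad(ytr, model.predict(Xtr)), mad(yte, model.predict(Xte))))
            tr, te = np.mean(scores, axis=0)
            print(f"frac={frac} rules={n_rules} MADtrain={tr:.2f} MADtest={te:.2f}")
    ```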

  1. Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation

    NASA Astrophysics Data System (ADS)

    Luo, L.; Cheng, Z.

    2016-12-01

    In-situ measurement of PM2.5 physical and chemical properties is a substantial approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed based on numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with a flowrate of 20.0 L·min⁻¹, a bore size of 4 mm and a length of 1.0 m was 89.6%; the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with a flowrate of 1.0 L·min⁻¹, a bore size of 4 mm and a length of 10.0 m is 86.7%, which increases to 99.2% at a length of 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with a flowrate of 20.0 L·min⁻¹, a bore size of 4 mm and a curvature angle of 90°. Keeping the ratio of flowrate (L·min⁻¹) to bore size (mm) below 1.4 maintains laminar air flow in the tube and helps reduce PM2.5 transport loss. For a target PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with a length of less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min⁻¹, and a bore size larger than 12 mm for flowrates of 16.7 or 20.0 L·min⁻¹. For horizontal sampling tubes, the tube length is determined by the ratio of flowrate to bore size. It is also suggested to minimize the number of bends in tubes with turbulent flow.
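
    The design guidance quoted above can be encoded as a simple screening helper. The function below is a hypothetical convenience wrapper that paraphrases those rules of thumb; it is not part of the study and ignores horizontal tubes and bends.

    ```python
    def check_tube_design(flowrate_lpm, bore_mm, length_m, orientation="vertical"):
        """Screen a PM2.5 sampling-tube design against the rules of thumb quoted in the
        abstract above (target: > 97% transport efficiency). Hypothetical helper."""
        warnings = []
        if flowrate_lpm / bore_mm >= 1.4:
            warnings.append("flowrate/bore ratio >= 1.4: flow may be turbulent, expect extra loss")
        if orientation == "vertical":
            if flowrate_lpm <= 10.0 and length_m >= 6.0:
                warnings.append("vertical tube should be shorter than 6.0 m at 2.5-10 L/min")
            if flowrate_lpm >= 16.7 and bore_mm <= 12.0:
                warnings.append("bore size should exceed 12 mm at 16.7-20 L/min")
        return warnings

    print(check_tube_design(20.0, 4.0, 1.0))   # both the ratio and the bore-size warnings fire here
    ```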

  2. Development and Characterization of Chitosan Cross-Linked With Tripolyphosphate as a Sustained Release Agent in Tablets, Part I: Design of Experiments and Optimization.

    PubMed

    Pinto, Colin A; Saripella, Kalyan K; Loka, Nikhil C; Neau, Steven H

    2018-04-01

    Certain issues with the use of particles of chitosan (Ch) cross-linked with tripolyphosphate (TPP) in sustained release formulations include inefficient drug loading, burst drug release, and incomplete drug release. Acetaminophen was added to Ch:TPP particles to test for advantages of drug addition extragranularly over drug addition made during cross-linking. The influences of Ch concentration, Ch:TPP ratio, temperature, ionic strength, and pH were assessed. Design of experiments allowed identification of factors and 2-factor interactions that have significant effects on average particle size and size distribution, yield, zeta potential, and true density of the particles, as well as drug release from the directly compressed tablets. Statistical model equations directed production of a control batch that minimized span, maximized yield, and targeted a t50 of 90 min (sample A); sample B that differed by targeting a t50 of 240-300 min to provide sustained release; and sample C that differed from sample B by maximizing span. Sample B maximized yield and provided its targeted t50 and the smallest average particle size, with the higher zeta potential and the lower span of samples B and C. Extragranular addition of a drug to Ch:TPP particles achieved 100% drug loading, eliminated a burst drug release, and can accomplish complete drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  3. "Optimal" Size and Schooling: A Relative Concept.

    ERIC Educational Resources Information Center

    Swanson, Austin D.

    Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…

  4. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

  5. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  6. Optimization of applied voltages for on-chip concentration of DNA using nanoslit

    NASA Astrophysics Data System (ADS)

    Azuma, Naoki; Itoh, Shintaro; Fukuzawa, Kenji; Zhang, Hedong

    2017-12-01

    On-chip sample concentration is an effective pretreatment to improve the detection sensitivity of lab-on-a-chip devices for biochemical analysis. In a previous study, we successfully achieved DNA sample concentration using a nanoslit fabricated in the microchannel of a device designed for DNA size separation. The nanoslit was a channel with a depth smaller than the diameter of a random coil-shaped DNA molecule. The concentration was achieved using the entropy trap at the boundary between the microchannel and the nanoslit. DNA molecules migrating toward the nanoslit owing to electrophoresis were trapped in front of the nanoslit and the concentration was enhanced over time. In this study, we successfully maximize the molecular concentration by optimizing the applied voltage for electrophoresis and verifying the effect of temperature. In addition, we propose a model formula that predicts the molecular concentration, the validity of which is confirmed through comparison with experimental results.

  7. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    PubMed

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling design for the failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.

  8. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    PubMed

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design has been employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and optimization of measuring conditions. The effects of seven analytical variables, including particle size, concentration of glycerol and HNO₃ for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, and pyrolysis and atomization temperature, were investigated by a 2⁷⁻³ replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg⁻¹ and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 μg Rh and 50 μg citric acid was found satisfactory for the analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time of flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. The effect of Nb additions on the thermal stability of melt-spun Nd2Fe14B

    NASA Astrophysics Data System (ADS)

    Lewis, L. H.; Gallagher, K.; Panchanathan, V.

    1999-04-01

    Elevated-temperature superconducting quantum interference device (SQUID) magnetometry was performed on two samples of melt-spun and optimally annealed Nd2Fe14B; one sample contained 2.3 wt % Nb and one was Nb-free. Continuous full hysteresis loops were measured with a SQUID magnetometer at T=630 K, above the Curie temperature of the 2-14-1 phase, as a function of field (-1 T ⩽ H ⩽ 1 T) and time on powdered samples sealed in quartz tubes at a vacuum of 10⁻⁶ Torr. The measured hysteresis signals were deconstructed into a high-field linear paramagnetic portion and a low-field ferromagnetic signal of unclear origin. While the saturation magnetization of the ferromagnetic signal from both samples grows with time, the signal from the Nb-containing sample is always smaller. The coercivity data are consistent with a constant impurity particle size in the Nb-containing sample and an increasing impurity particle size in the Nb-free sample. The paramagnetic susceptibility signal from the Nd2Fe14B-type phase in the Nb-free sample increases with time, while that from the Nb-containing sample remains constant. It is suggested that the presence of Nb actively suppresses the thermally induced formation of poorly crystallized Fe-rich regions that apparently exist in samples of both compositions.

  10. Ionic liquid-based microwave-assisted extraction of flavonoids from Bauhinia championii (Benth.) Benth.

    PubMed

    Xu, Wei; Chu, Kedan; Li, Huang; Zhang, Yuqin; Zheng, Haiyin; Chen, Ruilan; Chen, Lidian

    2012-12-03

    An ionic liquid (IL)-based microwave-assisted approach for extraction and determination of flavonoids from Bauhinia championii (Benth.) Benth. was proposed for the first time. Several ILs with different cations and anions and the microwave-assisted extraction (MAE) conditions, including sample particle size, extraction time and liquid-solid ratio, were investigated. A 2 M 1-butyl-3-methylimidazolium bromide ([bmim]Br) solution containing 0.80 M HCl was selected as the optimal solvent. The other optimized conditions were a liquid-to-material ratio of 30:1 and extraction for 10 min at 70 °C. Compared with conventional heat-reflux extraction (CHRE) and regular MAE, IL-MAE exhibited a higher extraction yield and a shorter extraction time (from 1.5 h to 10 min). The optimized extraction samples were analysed by LC-MS/MS. IL extracts of Bauhinia championii (Benth.) Benth. consisted mainly of flavonoids, among which myricetin, quercetin and kaempferol, β-sitosterol, triacontane and hexacontane were identified. The study indicated that IL-MAE was an efficient and rapid method with simple sample preparation. LC-MS/MS was also used to determine the chemical composition of the ethyl acetate/MAE extract of Bauhinia championii (Benth.) Benth., and it may become a rapid method to determine the composition of new plant extracts.

  11. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
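
    The sketch below illustrates the kind of simulation described above, under simplified assumptions: equal-covariance Gaussian classes, no feature selection step, LDA versus an RBF-kernel SVM, and hold-out AUC averaged over randomized training sets. The dimensionality, mean shift, and sample sizes are illustrative, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    dim = 50                                            # feature-space dimensionality
    mu = np.zeros(dim)
    mu[:5] = 0.6                                        # class means differ in a few features

    def draw(n, shift):
        return rng.multivariate_normal(shift, np.eye(dim), size=n)

    n_test = 500                                        # large hold-out set per class
    X_test = np.vstack([draw(n_test, np.zeros(dim)), draw(n_test, mu)])
    y_test = np.r_[np.zeros(n_test), np.ones(n_test)]

    for n_train in (15, 25, 50, 100):                   # training samples per class
        aucs = {"LDA": [], "SVM-rbf": []}
        for rep in range(20):
            X = np.vstack([draw(n_train, np.zeros(dim)), draw(n_train, mu)])
            y = np.r_[np.zeros(n_train), np.ones(n_train)]
            for name, clf in (("LDA", LinearDiscriminantAnalysis()),
                              ("SVM-rbf", SVC(kernel="rbf", gamma="scale"))):
                clf.fit(X, y)
                aucs[name].append(roc_auc_score(y_test, clf.decision_function(X_test)))
        print(n_train, {k: round(float(np.mean(v)), 3) for k, v in aucs.items()})
    ```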

  12. Effects of Moisture and Particle Size on Quantitative Determination of Total Organic Carbon (TOC) in Soils Using Near-Infrared Spectroscopy.

    PubMed

    Tamburini, Elena; Vincenzi, Fabio; Costa, Stefania; Mantovi, Paolo; Pedrini, Paola; Castaldelli, Giuseppe

    2017-10-17

    Near-Infrared Spectroscopy is a cost-effective and environmentally friendly technique that could represent an alternative to conventional soil analysis methods, including total organic carbon (TOC). Soil fertility and quality are usually measured by traditional methods that involve the use of hazardous and strong chemicals. The effects of physical soil characteristics, such as moisture content and particle size, on spectral signals could be of great interest in order to understand and optimize prediction capability and set up a robust and reliable calibration model, with the future perspective of being applied in the field. Spectra of 46 soil samples were collected. Soil samples were divided into three data sets: unprocessed; only dried; and dried, ground and sieved, in order to evaluate the effects of moisture and particle size on spectral signals. Both separate and combined normalization methods including standard normal variate (SNV), multiplicative scatter correction (MSC) and normalization by closure (NCL), as well as smoothing using first and second derivatives (DV1 and DV2), were applied to a total of seven cases. Pretreatments for model optimization were designed and compared for each data set. The best combination of pretreatments was achieved by applying SNV and DV2 on partial least squares (PLS) modelling. There were no significant differences between the predictions using the three different data sets (p < 0.05). Finally, a unique database including all three data sets was built to include all the sources of sample variability that were tested and used for final prediction. External validation of TOC was carried out on 16 unknown soil samples to evaluate the predictive ability of the final combined calibration model. Hence, we demonstrate that sample preprocessing has minor influence on the quality of near infrared spectroscopy (NIR) predictions, laying the ground for a direct and fast in situ application of the method. Data can be acquired outside the laboratory since the method is simple and does not need more than a simple band ratio of the spectra.
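
    A minimal sketch of the SNV plus second-derivative (DV2) pretreatment followed by PLS, as described above, is shown below on synthetic spectra. The Savitzky-Golay window, the number of PLS components, and the fake spectra/TOC values are assumptions, not the study's settings.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

    # Hypothetical spectra matrix (46 soils x wavelengths) and TOC reference values.
    rng = np.random.default_rng(3)
    X_raw = rng.normal(size=(46, 700)).cumsum(axis=1)   # smooth-ish fake spectra
    toc = rng.uniform(0.5, 3.0, size=46)

    X = savgol_filter(snv(X_raw), window_length=15, polyorder=2, deriv=2, axis=1)  # SNV + DV2
    pls = PLSRegression(n_components=5)
    pred = cross_val_predict(pls, X, toc, cv=5).ravel()
    rmse = np.sqrt(np.mean((pred - toc) ** 2))
    print(f"cross-validated RMSE: {rmse:.3f}")
    ```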

  13. Influence of process conditions during impulsed electrostatic droplet formation on size distribution of hydrogel beads.

    PubMed

    Lewińska, Dorota; Rosiński, Stefan; Weryński, Andrzej

    2004-02-01

    In the medical applications of microencapsulation of living cells there are strict requirements concerning the high size uniformity and the optimal diameter, the latter dependent on the kind of therapeutic application, of manufactured gel beads. The possibility of manufacturing small size gel bead samples (diameter 300 µm and below) with a low size dispersion (less than 10%), using an impulsed voltage droplet generator, was examined in this work. The main topic was the investigation of the influence of values of electric parameters (voltage U, impulse time tau and impulse frequency f) on the quality of obtained droplets. It was concluded that, owing to the implementation of the impulse mode and regulation of tau and f values, it is possible to work in a controlled manner in the jet flow regime (U > critical voltage UC). It is also possible to obtain uniform bead samples with the average diameter, deff, significantly lower than the nozzle inner diameter dI (bead diameters 0.12-0.25 mm with dI equal to 0.3 mm, size dispersion 5-7%). Alterations of the physical parameters of the process (polymer solution physico-chemical properties, flow rate, distance between nozzle and gelling bath) enable one to manufacture uniform gel beads in the wide range of diameters using a single nozzle.

  14. A Cross-Validation of easyCBM Mathematics Cut Scores in Washington State: 2009-2010 Test. Technical Report #1105

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in the state of Washington. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Washington state…

  15. A Cross-Validation of easyCBM[R] Mathematics Cut Scores in Oregon: 2009-2010. Technical Report #1104

    ERIC Educational Resources Information Center

    Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    In this technical report, we document the results of a cross-validation study designed to identify optimal cut-scores for the use of the easyCBM[R] mathematics test in Oregon. A large sample, randomly split into two groups of roughly equal size, was used for this study. Students' performance classification on the Oregon state test was used as the…

  16. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggested that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, Efron's BCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
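
    The following sketch simulates the two boundary designs named above, Efron's BCD and the big stick design, and reports the two trade-off measures: mean maximum absolute imbalance and correct-guess probability under the convergence guessing strategy. The bias probability p = 2/3, the imbalance bound b = 3, and the trial size are assumed values, not the paper's full scenario grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate(design, n, reps=2000, p_efron=2/3, bound=3):
        """Return mean maximum |imbalance| and correct-guess probability for one design."""
        max_imb, correct = [], []
        for _ in range(reps):
            d, hits, imbs = 0, 0, []                # d = (treatment count) - (control count)
            for _ in range(n):
                if design == "BSD":                 # Soares & Wu big stick design
                    p = 0.5 if abs(d) < bound else (0.0 if d > 0 else 1.0)
                else:                               # Efron's biased coin design
                    p = 0.5 if d == 0 else (1 - p_efron if d > 0 else p_efron)
                guess = 1 if d < 0 else (0 if d > 0 else rng.integers(2))  # guess the lagging arm
                a = int(rng.random() < p)           # 1 = treatment, 0 = control
                hits += int(guess == a)
                d += 1 if a else -1
                imbs.append(abs(d))
            max_imb.append(max(imbs))
            correct.append(hits / n)
        return np.mean(max_imb), np.mean(correct)

    for design in ("BSD", "Efron"):
        print(design, simulate(design, n=50))
    ```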

  17. Cost-efficient designs for three-arm trials with treatment delivered by health professionals: Sample sizes for a combination of nested and crossed designs

    PubMed Central

    Moerbeek, Mirjam

    2018-01-01

    Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. The one type of health professional delivers one treatment, and the other type delivers two treatments, hence, this design is a combination of a nested and crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest costs. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
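
    For intuition, the sketch below computes power for a single pairwise comparison under a purely nested design (each professional delivers one treatment), using the standard design-effect inflation of the variance. This is a simplification of the combined nested/crossed expressions derived in the article, and the effect size, variance, and ICC values are hypothetical.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def power_nested(delta, sd, icc, n_prof, n_per_prof, alpha=0.05):
        """Approximate two-sided power for comparing two arms when each health professional
        treats n_per_prof patients in a single arm (nested design only; hypothetical inputs)."""
        var_mean = sd**2 * (1 + (n_per_prof - 1) * icc) / (n_prof * n_per_prof)  # per-arm variance of the mean
        z = abs(delta) / sqrt(2 * var_mean)
        return norm.cdf(z - norm.ppf(1 - alpha / 2))

    # Example: 10 professionals per arm, 8 patients each, standardized effect 0.4, ICC 0.05.
    print(round(power_nested(delta=0.4, sd=1.0, icc=0.05, n_prof=10, n_per_prof=8), 3))
    ```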

  18. Sample preparation for the determination of 241Am in sediments utilizing gamma-spectroscopy.

    PubMed

    Ristic, M; Degetto, S; Ast, T; Cantallupi, C

    2002-01-01

    This paper describes a procedure developed to separate americium-241 from the bulk of a sample by coprecipitation followed by high sensitivity gamma-counting of the concentrate in a well-type detector. It enables the measurement of 241Am at low concentrations, e.g. fallout levels in soils and sediments, or where large sample sizes are not available. The method is much faster and more reliable than those involving separation from other alpha-emitters, electroplating and alpha-spectrometry. A number of tracer experiments were performed in order to optimize the conditions for coprecipitation of 241Am from sediment leachates. The general outline of the determination of americium is also given.

  19. Microstructure as a function of the grain size distribution for packings of frictionless disks: Effects of the size span and the shape of the distribution.

    PubMed

    Estrada, Nicolas; Oquendo, W F

    2017-10-01

    This article presents a numerical study of the effects of grain size distribution (GSD) on the microstructure of two-dimensional packings of frictionless disks. The GSD is described by a power law with two parameters controlling the size span and the shape of the distribution. First, several samples are built for each combination of these parameters. Then, by means of contact dynamics simulations, the samples are densified in oedometric conditions and sheared in a simple shear configuration. The microstructure is analyzed in terms of packing fraction, local ordering, connectivity, and force transmission properties. It is shown that the microstructure is notoriously affected by both the size span and the shape of the GSD. These findings confirm recent observations regarding the size span of the GSD and extend previous works by describing the effects of the GSD shape. Specifically, we find that if the GSD shape is varied by increasing the proportion of small grains by a certain amount, it is possible to increase the packing fraction, increase coordination, and decrease the proportion of floating particles. Thus, by carefully controlling the GSD shape, it is possible to obtain systems that are denser and better connected, probably increasing the system's robustness and optimizing important strength properties such as stiffness, cohesion, and fragmentation susceptibility.

  20. Optimal design of porous structures for the fastest liquid absorption.

    PubMed

    Shou, Dahua; Ye, Lin; Fan, Jintu; Fu, Kunkun

    2014-01-14

    Porous materials engineered for rapid liquid absorption are useful in many applications, including oil recovery, spacecraft life-support systems, moisture management fabrics, medical wound dressings, and microfluidic devices. Dynamic absorption in capillary tubes and porous media is driven by the capillary pressure, which is inversely proportional to the pore size. On the other hand, the permeability of porous materials scales with the square of the pore size. The dynamic competition between these two superimposed mechanisms for liquid absorption through a heterogeneous porous structure may lead to an overall minimum absorption time. In this work, we explore liquid absorption in two different heterogeneous porous structures [three-dimensional (3D) circular tubes and porous layers], which are composed of two sections with variations in radius/porosity and height. The absorption time to fill the voids of porous constructs is expressed as a function of radius/porosity and height of local sections, and the absorption process does not follow the classic Washburn's law. Under given height and void volume, these two-section structures with a negative gradient of radius/porosity against the absorption direction are shown to have faster absorption rates than control samples with uniform radius/porosity. In particular, optimal structural parameters, including radius/porosity and height, are found that account for the minimum absorption time. The liquid absorption in the optimized porous structure is up to 38% faster than in a control sample. The results obtained can be used a priori for the design of porous structures with excellent liquid management property in various fields.
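
    The sketch below illustrates the idea of the minimum absorption time for a two-section circular tube under Lucas-Washburn assumptions (fully wetting, viscous-dominated flow), with the total height and void volume held fixed. It is an illustrative reconstruction, not the authors' exact formulation (which also covers porous layers), and the water-like fluid properties and geometry are assumed values.

    ```python
    import numpy as np

    mu, gamma, ctheta = 1.0e-3, 0.072, 1.0             # viscosity (Pa s), surface tension (N/m), cos(contact angle)
    H = 0.02                                           # total height: 2 cm
    V = np.pi * (100e-6) ** 2 * H                      # void volume of a 100 um uniform reference tube

    def fill_time(r1, h1):
        """Time to fill a two-section tube: section 1 (radius r1, height h1) is entered first;
        the radius of section 2 is fixed by the total-volume constraint."""
        h2 = H - h1
        r2sq = (V / np.pi - r1**2 * h1) / h2
        if r2sq <= 0:
            return np.inf
        r2 = np.sqrt(r2sq)
        t1 = 2 * mu * h1**2 / (r1 * gamma * ctheta)    # classic Washburn time for section 1
        # Section 2: capillary pressure 2*gamma*ctheta/r2 drives flow through both sections in series.
        t2 = 4 * mu * r2**3 / (gamma * ctheta) * (h1 * h2 / r1**4 + h2**2 / (2 * r2**4))
        return t1 + t2

    t_uniform = fill_time(100e-6, 0.5 * H)             # degenerate case r1 = r2 = 100 um
    best = min((fill_time(r1, h1), r1, h1)
               for r1 in np.linspace(60e-6, 180e-6, 60)
               for h1 in np.linspace(0.1 * H, 0.9 * H, 30))
    print(f"uniform tube: {t_uniform*1e3:.1f} ms, best two-section: {best[0]*1e3:.1f} ms "
          f"(r1 = {best[1]*1e6:.0f} um, h1 = {best[2]*1e3:.1f} mm)")
    ```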

  1. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    NASA Astrophysics Data System (ADS)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans aged 40 and over. Automatic screening for DR has been shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve give evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.

  2. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
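
    The sketch below shows a compact maximum-likelihood fit of a parametric distribution to interval-censored retention times, the approach the abstract recommends. The lognormal "truth", the 4 h sampling intervals, and the 500 recovered propagules echo the design advice above, but the code itself is illustrative and not the authors' implementation.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(5)
    true = stats.lognorm(s=0.6, scale=8.0)              # assumed "true" retention-time distribution (hours)
    samples = true.rvs(500, random_state=rng)           # 500 recovered propagules
    edges = np.arange(0.0, 52.0, 4.0)                   # 4 h sampling intervals up to 48 h
    counts, _ = np.histogram(samples, bins=edges)       # propagules retrieved in each interval

    def neg_loglik(params):
        """Interval-censored log likelihood: each propagule is only known to fall in its interval."""
        s, scale = params
        cdf = stats.lognorm.cdf(edges, s, scale=scale)
        probs = np.clip(np.diff(cdf), 1e-12, None)
        return -np.sum(counts * np.log(probs))

    fit = optimize.minimize(neg_loglik, x0=[1.0, 5.0], bounds=[(1e-3, 10), (1e-3, 100)])
    s_hat, scale_hat = fit.x
    grid = np.linspace(0, 48, 500)
    dev = np.max(np.abs(stats.lognorm.cdf(grid, s_hat, scale=scale_hat) - true.cdf(grid)))
    print(f"fitted s={s_hat:.2f}, scale={scale_hat:.2f} (true 0.60, 8.00); max CDF deviation={dev:.3f}")
    ```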

  3. The X-IFU end-to-end simulations performed for the TES array optimization exercise

    NASA Astrophysics Data System (ADS)

    Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.

    2015-09-01

    The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as the baseline an array of ~4000 single size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could however be considered, combining TES of different properties (e.g. size). In attempting to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated for the results to be scientifically assessed and compared. In this contribution, we will describe the simulation set-up for the various array configurations, and highlight some of the results of the test cases simulated.

  4. Optimal iodine staining of cardiac tissue for X-ray computed tomography.

    PubMed

    Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui

    2014-01-01

    X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high density contrast agent is often required to obtain optimal contrast. The contrast agent, iodine potassium iodide (I2KI), has been used in several biological studies to augment the use of XCT scanning. Recently I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which has been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.

  5. Volatility measurement with directional change in Chinese stock market: Statistical property and investment strategy

    NASA Astrophysics Data System (ADS)

    Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei

    2017-04-01

    Stock price fluctuation is studied in this paper from an intrinsic time perspective. The events, directional change (DC) and overshoot, are taken as the time scale of the price time series. The statistical properties of this directional change law and the corresponding parameter estimation are tested in the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and earn a profit owing to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
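
    A minimal sketch of directional-change event detection is given below; the 1.8% threshold echoes the parameter value quoted above, while the synthetic price path, the initial mode, and the event labelling are illustrative assumptions rather than the paper's procedure.

    ```python
    import numpy as np

    def directional_changes(prices, theta=0.018):
        """Label directional-change (DC) confirmation points in a price series.
        theta is the DC threshold; overshoots run between consecutive confirmations."""
        events = []
        looking_for = "upturn"                          # arbitrary initial mode
        ext = prices[0]                                 # running extreme: low while seeking an upturn, high otherwise
        for i, p in enumerate(prices):
            if looking_for == "upturn":
                ext = min(ext, p)
                if p >= ext * (1 + theta):              # price rose theta above the last low: upturn confirmed
                    events.append((i, "upturn"))
                    looking_for, ext = "downturn", p
            else:
                ext = max(ext, p)
                if p <= ext * (1 - theta):              # price fell theta below the last high: downturn confirmed
                    events.append((i, "downturn"))
                    looking_for, ext = "upturn", p
        return events

    rng = np.random.default_rng(6)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=2000)))  # synthetic price path
    print(directional_changes(prices)[:5])
    ```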

  6. Microwave Nondestructive Evaluation of Dielectric Materials with a Metamaterial Lens

    NASA Technical Reports Server (NTRS)

    Shreiber, Daniel; Gupta, Mool; Cravey, Robin L.

    2008-01-01

    A novel microwave Nondestructive Evaluation (NDE) sensor was developed in an attempt to increase the sensitivity of the microwave NDE method for detection of defects small relative to a wavelength. The sensor was designed on the basis of a negative index material (NIM) lens. Characterization of the lens was performed to determine its resonant frequency, index of refraction, focus spot size, and optimal focusing length (for proper sample location). A sub-wavelength spot size (3 dB) of 0.48 lambda was obtained. The proof of concept for the sensor was achieved when a fiberglass sample with a 3 mm diameter through hole (perpendicular to the propagation direction of the wave) was tested. The hole was successfully detected with an 8.2 cm wavelength electromagnetic wave. This method is able to detect a defect that is 0.037 lambda. This method has certain advantages over other far field and near field microwave NDE methods currently in use.

  7. Particle flow oriented electromagnetic calorimeter optimization for the circular electron positron collider

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Fu, C.; Yu, D.; Wang, Z.; Hu, T.; Ruan, M.

    2018-03-01

    The design and optimization of the Electromagnetic Calorimeter (ECAL) are crucial for the Circular Electron Positron Collider (CEPC) project, a proposed future Higgs/Z factory. Following the reference design of the International Large Detector (ILD), a set of silicon-tungsten sampling ECAL geometries are implemented into the Geant4 simulation, whose performance is then scanned using the Arbor algorithm. The photon energy response at different ECAL longitudinal structures is analyzed, and the separation performance between nearby photon showers with different ECAL transverse cell sizes is investigated and parametrized. The overall performance is characterized by a set of physics benchmarks, including ννH events in which the Higgs boson decays into a pair of photons (EM objects) or gluons (jets), and Z→τ+τ- events. Based on these results, we propose an optimized ECAL geometry for the CEPC project.

  8. On the influence of crystal size and wavelength on native SAD phasing.

    PubMed

    Liebschner, Dorothee; Yamada, Yusuke; Matsugaki, Naohiro; Senda, Miki; Senda, Toshiya

    2016-06-01

    Native SAD is an emerging phasing technique that uses the anomalous signal of native heavy atoms to obtain crystallographic phases. The method does not require specific sample preparation to add anomalous scatterers, as the light atoms contained in the native sample are used as marker atoms. The most abundant anomalous scatterer used for native SAD, which is present in almost all proteins, is sulfur. However, the absorption edge of sulfur is at low energy (2.472 keV = 5.016 Å), which makes it challenging to carry out native SAD phasing experiments as most synchrotron beamlines are optimized for shorter wavelength ranges where the anomalous signal of sulfur is weak; for longer wavelengths, which produce larger anomalous differences, the absorption of X-rays by the sample, solvent, loop and surrounding medium (e.g. air) increases tremendously. Therefore, a compromise has to be found between measuring strong anomalous signal and minimizing absorption. It was thus hypothesized that shorter wavelengths should be used for large crystals and longer wavelengths for small crystals, but no thorough experimental analyses have been reported to date. To study the influence of crystal size and wavelength, native SAD experiments were carried out at different wavelengths (1.9 and 2.7 Å with a helium cone; 3.0 and 3.3 Å with a helium chamber) using lysozyme and ferredoxin reductase crystals of various sizes. For the tested crystals, the results suggest that larger sample sizes do not have a detrimental effect on native SAD data and that long wavelengths give a clear advantage with small samples compared with short wavelengths. The resolution dependency of substructure determination was analyzed and showed that high-symmetry crystals with small unit cells require higher resolution for the successful placement of heavy atoms.

  9. Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios

    NASA Technical Reports Server (NTRS)

    Juarez, Alfredo; Harper, Susana A.

    2016-01-01

    The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.

  10. Engineering two-wire optical antennas for near field enhancement

    NASA Astrophysics Data System (ADS)

    Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun

    2017-07-01

    We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases which are antennas on glass substrate and mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency of increasing with the gap size. Our results could find applications in field-enhanced spectroscopies.

  11. Quantifying learning in biotracer studies.

    PubMed

    Brown, Christopher J; Brett, Michael T; Adame, Maria Fernanda; Stewart-Koster, Ben; Bunn, Stuart E

    2018-04-12

    Mixing models have become requisite tools for analyzing biotracer data, most commonly stable isotope ratios, to infer dietary contributions of multiple sources to a consumer. However, Bayesian mixing models will always return a result that defaults to their priors if the data poorly resolve the source contributions, and thus, their interpretation requires caution. We describe an application of information theory to quantify how much has been learned about a consumer's diet from new biotracer data. We apply the approach to two example data sets. We find that variation in the isotope ratios of sources limits the precision of estimates for the consumer's diet, even with a large number of consumer samples. Thus, the approach which we describe is a type of power analysis that uses a priori simulations to find an optimal sample size. Biotracer data are fundamentally limited in their ability to discriminate consumer diets. We suggest that other types of data, such as gut content analysis, must be used as prior information in model fitting, to improve model learning about the consumer's diet. Information theory may also be used to identify optimal sampling protocols in situations where sampling of consumers is limited due to expense or ethical concerns.
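
    One way to make the "learning" idea above concrete is to compute the Kullback-Leibler divergence from the prior on diet proportions to the posterior. The sketch below moment-matches posterior draws to a Dirichlet and uses the closed-form Dirichlet KL divergence; the posterior draws, the three-source diet, and the uniform Dirichlet prior are hypothetical stand-ins, and the paper's exact information-theoretic metric may differ.

    ```python
    import numpy as np
    from scipy.special import gammaln, digamma

    def kl_dirichlet(alpha, beta):
        """KL( Dir(alpha) || Dir(beta) ), closed form."""
        a0, b0 = alpha.sum(), beta.sum()
        return (gammaln(a0) - gammaln(alpha).sum()
                - gammaln(b0) + gammaln(beta).sum()
                + ((alpha - beta) * (digamma(alpha) - digamma(a0))).sum())

    def moment_match_dirichlet(samples):
        """Method-of-moments Dirichlet fit to posterior draws of diet proportions."""
        m = samples.mean(axis=0)
        v = samples.var(axis=0)
        a0 = np.median(m * (1 - m) / v - 1)             # one pooled concentration estimate
        return m * a0

    # Hypothetical posterior draws for a 3-source diet (rows = MCMC draws).
    rng = np.random.default_rng(7)
    posterior = rng.dirichlet([8, 3, 1], size=4000)
    prior = np.ones(3)                                  # uninformative Dirichlet(1, 1, 1) prior

    alpha_post = moment_match_dirichlet(posterior)
    print("information gained (nats):", kl_dirichlet(alpha_post, prior))
    ```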

  12. A uniplanar three-axis gradient set for in vivo magnetic resonance microscopy.

    PubMed

    Demyanenko, Andrey V; Zhao, Lin; Kee, Yun; Nie, Shuyi; Fraser, Scott E; Tyszka, J Michael

    2009-09-01

    We present an optimized uniplanar magnetic resonance gradient design specifically tailored for MR imaging applications in developmental biology and histology. Uniplanar gradient designs sacrifice gradient uniformity for high gradient efficiency and slew rate, and are attractive for surface imaging applications where open access from one side of the sample is required. However, decreasing the size of the uniplanar gradient set presents several unique engineering challenges, particularly for heat dissipation and thermal insulation of the sample from gradient heating. We demonstrate a new three-axis, target-field optimized uniplanar gradient coil design that combines efficient cooling and insulation to significantly reduce sample heating at sample-gradient distances of less than 5 mm. The instrument is designed for microscopy in horizontal bore magnets. Empirical gradient current efficiencies in the prototype coils lie between 3.75 G/cm/A and 4.5 G/cm/A with current and heating-limited maximum gradient strengths between 235 G/cm and 450 G/cm at a 2% duty cycle. The uniplanar gradient prototype is demonstrated with non-linearity corrections for both high-resolution structural imaging of tissue slices and for long time-course imaging of live, developing amphibian embryos in a horizontal bore 7 T magnet.

  13. The Bayesian group lasso for confounded spatial data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  14. Optimization of extraction parameters of pentacyclic triterpenoids from Swertia chirata stem using response surface methodology.

    PubMed

    Pandey, Devendra Kumar; Kaur, Prabhjot

    2018-03-01

    In the present investigation, pentacyclic triterpenoids were extracted from different parts of Swertia chirata by solid-liquid reflux extraction methods. The total pentacyclic triterpenoids (UA, OA, and BA) in the extracted samples were determined by an HPTLC method. Preliminary studies showed that the stem contains the most pentacyclic triterpenoid, so it was chosen for further studies. Response surface methodology (RSM) was employed successfully with the solid-liquid reflux extraction method to optimize the effect of different extraction variables, viz. temperature (X1: 35-70 °C), extraction time (X2: 30-60 min), solvent composition (X3: 20-80%), solvent-to-solid ratio (X4: 30-60 mL g⁻¹), and particle size (X5: 3-6 mm), on the recovery of triterpenoid from the stem of Swertia chirata. A Plackett-Burman design was used initially to screen the extraction factors; particle size, temperature, and solvent composition had significant effects on triterpenoid yield. A central composite design (CCD) was then implemented to optimize the significant extraction parameters for maximum triterpenoid yield. Three extraction parameters, viz. a mean particle size of 3 mm, a temperature of 65 °C, and a methanol-ethyl acetate solvent composition of 45%, can be considered optimal for a better yield of triterpenoid. A second-order polynomial model satisfactorily fitted the experimental data with an R² value of 0.98 for the triterpenoid yield (p < 0.001), implying good agreement between the experimental triterpenoid yield (3.71%) and the predicted value (3.79%).
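
    The sketch below shows the core of the RSM step: fitting a second-order polynomial in the three significant factors and locating its optimum within the experimental ranges. The simulated "true" surface is invented for illustration (chosen to peak near the reported optimum), and the design points, noise level, and starting guess are assumptions rather than the study's data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical CCD-style data: particle size (mm), temperature (deg C), solvent composition (%).
    rng = np.random.default_rng(8)
    X = rng.uniform([3, 35, 20], [6, 70, 80], size=(30, 3))

    def true_yield(x):                                  # invented surface for illustration only
        size, temp, solv = x.T if x.ndim > 1 else x
        return 3.8 - 0.2 * (size - 3) ** 2 / 9 - 0.5 * ((temp - 65) / 35) ** 2 - 0.6 * ((solv - 45) / 60) ** 2

    y = true_yield(X) + rng.normal(0, 0.05, size=len(X))

    def design_matrix(X):
        s, t, c = X.T
        return np.column_stack([np.ones(len(X)), s, t, c, s * t, s * c, t * c, s**2, t**2, c**2])

    coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)   # second-order polynomial model

    res = minimize(lambda x: -(design_matrix(x[None, :]) @ coef)[0],
                   x0=[4.5, 50, 50], bounds=[(3, 6), (35, 70), (20, 80)])
    print("predicted optimum (size mm, temp C, solvent %):", np.round(res.x, 1))
    ```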

  15. Decision-theoretic approach to data acquisition for transit operations planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, S.G.

    The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today, when the unit costs of data collection are increasing rapidly while budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route- or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issues of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach were illustrated with a hypothetical case study example.

  16. Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.

    PubMed

    Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei

    2017-09-01

    Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.

  17. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  18. Acoustic Purification of Extracellular Microvesicles

    PubMed Central

    Lee, Kyungheon; Shao, Huilin; Weissleder, Ralph; Lee, Hakho

    2015-01-01

    Microvesicles (MVs) are an increasingly important source for biomarker discovery and clinical diagnostics. The small size of MVs and their presence in complex biological environments, however, pose practical technical challenges, particularly when sample volumes are small. We herein present an acoustic nano-filter system that size-specifically separates MVs in a continuous and contact-free manner. The separation is based on ultrasound standing waves that exert a differential acoustic force on MVs according to their size and density. By optimizing the design of the ultrasound transducers and underlying electronics, we were able to achieve a high separation yield and resolution. The “filter size-cutoff” can be controlled electronically in situ and enables versatile MV-size selection. We applied the acoustic nano-filter to isolate nanoscale (<200 nm) vesicles from cell culture media as well as MVs in stored red blood cell products. With the capacity for rapid and contact-free MV isolation, the developed system could become a versatile preparatory tool for MV analyses. PMID:25672598

  19. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed on the basis of one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulations, we discuss the range of application of the SPSE metric under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method is applied to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, can also estimate the spot size itself. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system optimize its focusing performance.

  20. [Application of asymmetrical flow field-flow fractionation for size characterization of low density lipoprotein in egg yolk plasma].

    PubMed

    Zhang, Wenhui; Cai, Chunxue; Wang, Jing; Mao, Zhen; Li, Yueqiu; Ding, Liang; Shen, Shigang; Dou, Haiyang

    2017-08-08

    A home-made asymmetrical flow field-flow fractionation (AF4) system coupled online with an ultraviolet/visible (UV/Vis) detector was employed for the separation and size characterization of low-density lipoprotein (LDL) in egg yolk plasma. Under conditions close to the natural state of egg yolk, the effects of cross-flow rate, sample loading, and membrane type on the size distribution of LDL were investigated. Under the optimal operating conditions, AF4-UV/Vis provides the size distribution of LDL. Moreover, the precision of the AF4-UV/Vis method proposed in this work for the analysis of LDL in egg yolk plasma was evaluated. The intra-day precisions were 1.3% and 1.9% (n = 7) and the inter-day precisions were 2.4% and 2.3% (n = 7) for the elution peak height and elution peak area of LDL, respectively. The results reveal that AF4-UV/Vis is a useful tool for the separation and size characterization of LDL in egg yolk plasma.

  1. Optimal size for heating efficiency of superparamagnetic dextran-coated magnetite nanoparticles for application in magnetic fluid hyperthermia

    NASA Astrophysics Data System (ADS)

    Shaterabadi, Zhila; Nabiyouni, Gholamreza; Soleymani, Meysam

    2018-06-01

    Dextran-coated magnetite (Fe3O4) nanoparticles with average particle sizes of 4 and 19 nm were synthesized through in situ and semi-two-step co-precipitation methods, respectively. The experimental results confirm the formation of a pure magnetite phase as well as the presence of a dextran layer on the surface of the modified magnetite nanoparticles. The results also reveal that both samples show superparamagnetic behavior. Furthermore, calorimetric measurements show that the dextran-coated Fe3O4 nanoparticles with an average size of 4 nm cannot produce any appreciable heat under a biologically safe alternating magnetic field used in hyperthermia therapy, whereas the larger ones (average size of 19 nm) are able to increase the temperature of their surrounding medium to above the therapeutic range. In addition, the measured specific absorption rate (SAR) values confirm that magnetite nanoparticles with an average size of 19 nm are excellent candidates for application in magnetic hyperthermia therapy.
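
    Specific absorption rate figures such as those cited above are commonly estimated from the initial slope of the calorimetric heating curve, SAR ≈ (C·m_fluid/m_magnetic)·(dT/dt at t→0). The abstract does not state the authors' exact procedure, so the sketch below only illustrates this common initial-slope estimate with made-up numbers for the heat capacity, masses and temperature trace.

      import numpy as np

      def sar_initial_slope(time_s, temp_C, c_p_J_per_gK, m_fluid_g, m_magnetic_g, n_fit=10):
          """SAR in W per gram of magnetic material from the initial linear part
          of a heating curve recorded under the alternating magnetic field."""
          slope = np.polyfit(time_s[:n_fit], temp_C[:n_fit], 1)[0]   # dT/dt, K/s
          return c_p_J_per_gK * m_fluid_g * slope / m_magnetic_g

      # Illustrative heating curve: 0.5 g of ferrofluid containing 5 mg of Fe3O4
      t = np.linspace(0.0, 60.0, 61)                  # s
      T = 25.0 + 0.08 * t - 0.0004 * t ** 2           # degC, saturating rise
      print(round(sar_initial_slope(t, T, c_p_J_per_gK=4.0,
                                    m_fluid_g=0.5, m_magnetic_g=0.005), 1), "W/g")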

  2. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.

  3. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  4. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design

    PubMed Central

    Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.

    2015-01-01

    Purpose: Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods: Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). A D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion: Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281

  5. Optimization of the fabrication of novel stealth PLA-based nanoparticles by dispersion polymerization using D-optimal mixture design.

    PubMed

    Adesina, Simeon K; Wight, Scott A; Akala, Emmanuel O

    2014-11-01

    Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with increase in particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize cross-linked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Confirmation of nanoparticle formation was by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). A D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the cross-linking agent and stabilizer indicate the important factors for minimizing particle size.

  6. Low Cost High Performance Phased Array Antennas with Beam Steering Capabilities

    DTIC Science & Technology

    2009-12-01

    characteristics of BSTO, the RF vacuum sputtering technique has been used and we investigated effects of sputtering parameters such as substrate... sputtering parameters, various sets of BSTO films have been deposited on different substrates and various sizes of CPW phase shifters have been fabricated... measurement of phase shifter; 4. Optimization of the sputtering parameters for BSTO deposition; 4.1 The first BSTO film sample; 4.2 The second BSTO

  7. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling for the functioning of natural PSUs, we have shown that the aggregation of pigments of model light-harvesting antenna, being one of universal optimizing factors, furthermore allows controlling the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of light-harvesting antenna is biologically expedient.

  8. Relationships of maternal body size and morphology with egg and clutch size in the diamondback terrapin, Malaclemys terrapin (Testudines: Emydidae)

    USGS Publications Warehouse

    Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.

    2016-01-01

    Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.

  9. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
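
    The point estimate demonstration rests on simple binomial arithmetic: if all n flaws of the target size are detected, the claim "POD ≥ 0.90 at 95% confidence" holds once 0.90^n ≤ 0.05, which first occurs at n = 29. The sketch below is not the paper's optimization procedure, just that underlying calculation, generalized to allow a small number of misses via the binomial tail.

      from math import comb

      def confidence(pod, n, misses=0):
          """Confidence that the true POD >= pod, given at most `misses` missed
          flaws out of n trials: 1 - P(X >= n - misses | detection prob = pod)."""
          tail = sum(comb(n, k) * pod ** k * (1.0 - pod) ** (n - k)
                     for k in range(n - misses, n + 1))
          return 1.0 - tail

      # Smallest all-hit demonstration set for 90% POD at 95% confidence
      n = 1
      while confidence(0.90, n) < 0.95:
          n += 1
      print(n, round(confidence(0.90, n), 4))   # 29 0.9529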

  10. Calculating an optimal box size for ligand docking and virtual screening against experimental and predicted binding pockets.

    PubMed

    Feinstein, Wei P; Brylinski, Michal

    2015-01-01

    Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of a search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. Left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. Right panel shows the docking accuracy using an optimized box size.
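
    The operative rule in the optimized protocol is simple: compute the radius of gyration of the docking compound and set the search-space edge to 2.9 times that value. A minimal sketch of that calculation is shown below for a ligand supplied as an array of atomic coordinates; the uniform-mass assumption and the toy coordinates are illustrative only, and the authors' script at www.brylinski.org/content/docking-box-size should be preferred for real work.

      import numpy as np

      def radius_of_gyration(coords, masses=None):
          """Rg (angstroms) from an (N, 3) array of atomic coordinates.
          Unit masses are assumed unless per-atom masses are given."""
          coords = np.asarray(coords, dtype=float)
          masses = np.ones(len(coords)) if masses is None else np.asarray(masses, float)
          center = np.average(coords, axis=0, weights=masses)
          return np.sqrt(np.average(np.sum((coords - center) ** 2, axis=1), weights=masses))

      def docking_box_edge(coords, scale=2.9):
          """Cubic search-space edge length following the 2.9 x Rg rule."""
          return scale * radius_of_gyration(coords)

      # Toy ligand coordinates (angstroms), for illustration only
      ligand = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.2, 0.3],
                [3.6, 1.4, 0.1], [4.1, 2.7, 0.8]]
      print(round(docking_box_edge(ligand), 2), "angstrom box edge")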

  11. Thermal-Structural Optimization of Integrated Cryogenic Propellant Tank Concepts for a Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.

    2004-01-01

    A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.

  12. Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples

    NASA Astrophysics Data System (ADS)

    Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut

    1993-03-01

    The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems in the LIBS analysis. Some of them are related to the particular distribution of contaminants in the grained samples. Other problems are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up for the various parameters involved. The understanding of these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been constructed. The detection limits are shown to be generally below the concentrations restricted by recent regulations.

  13. Evaluation of sample holders designed for long-lasting X-ray micro-tomographic scans of ex-vivo soft tissue samples

    NASA Astrophysics Data System (ADS)

    Dudak, J.; Zemlicka, J.; Krejci, F.; Karch, J.; Patzelt, M.; Zach, P.; Sykora, V.; Mrzilkova, J.

    2016-03-01

    X-ray microradiography and microtomography are imaging techniques with increasing applicability in the field of biomedical and preclinical research. Application of the hybrid pixel detector Timepix makes it possible to obtain very high contrast for low-attenuating materials such as soft biological tissue. However, X-ray imaging of ex-vivo soft tissue samples is a difficult task because of their structural instability. Ex-vivo biological tissue is prone to fast drying-out, which leads to undesired changes in sample size and shape that later produce artefacts in the tomographic reconstruction. In this work we present the optimization of our Timepix-equipped micro-CT system aiming to keep soft tissue samples in a stable condition. Thanks to the suggested approach, higher contrast of tomographic reconstructions can be achieved, and large samples that require detector scanning can also be easily measured.

  14. The generalization ability of SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang

    2015-06-01

    Previous works studying the generalization ability of the support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability when the number of training samples is larger, but also yields sparser classifiers when the dataset size is large relative to the input dimension.

  15. Constituents of Quality of Life and Urban Size

    ERIC Educational Resources Information Center

    Royuela, Vicente; Surinach, Jordi

    2005-01-01

    Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…

  16. Generalized SMO algorithm for SVM-based multitask learning.

    PubMed

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n³) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  17. Optimization of design and operating parameters of a space-based optical-electronic system with a distributed aperture.

    PubMed

    Tcherniavski, Iouri; Kahrizi, Mojtaba

    2008-11-20

    Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems for the geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for sample optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF that takes residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems for diverse variable parameters. The information on the magnitudes of the SNR can be used to determine the number of subapertures and their sizes, while the information on the SNR decrease with the IA and OPD errors can be useful in the design of a beam-combination control system, to derive its accuracy requirements from the permissible deterioration in image quality.

  18. The evolution of island gigantism and body size variation in tortoises and turtles

    PubMed Central

    Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.

    2011-01-01

    Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022

  19. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly area of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, shoulder line extraction is an imperative prerequisite. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the natural breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Because of limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
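
    A stripped-down version of steps (i)-(iii) can be prototyped with numpy alone: keep the lowest return per grid cell as a crude ground filter and DEM, compute slope, split it into two classes, and take the class boundary as the shoulder-line candidate. In the sketch below a simple Otsu-style two-class threshold stands in for the natural breaks classifier, and the grid size is fixed rather than iteratively optimized, so it is only a rough approximation of the published workflow.

      import numpy as np

      def ground_grid(points, cell):
          """Grid filter: lowest z per cell approximates the ground surface (DEM).
          `points` is an (N, 3) array of x, y, z coordinates."""
          x, y, z = points.T
          ix = ((x - x.min()) / cell).astype(int)
          iy = ((y - y.min()) / cell).astype(int)
          dem = np.full((ix.max() + 1, iy.max() + 1), np.nan)
          for i, j, h in zip(ix, iy, z):
              if np.isnan(dem[i, j]) or h < dem[i, j]:
                  dem[i, j] = h
          return dem

      def slope_deg(dem, cell):
          gy, gx = np.gradient(dem, cell)
          return np.degrees(np.arctan(np.hypot(gx, gy)))

      def two_class_threshold(values, n_bins=64):
          """Simple Otsu threshold as a stand-in for natural-breaks classification."""
          v = values[np.isfinite(values)]
          hist, edges = np.histogram(v, bins=n_bins)
          p = hist / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          best_t, best_var = centers[0], -1.0
          for k in range(1, n_bins):
              w0, w1 = p[:k].sum(), p[k:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              m0 = (p[:k] * centers[:k]).sum() / w0
              m1 = (p[k:] * centers[k:]).sum() / w1
              var = w0 * w1 * (m0 - m1) ** 2
              if var > best_var:
                  best_var, best_t = var, centers[k]
          return best_t

      def shoulder_candidate(dem, cell):
          s = slope_deg(dem, cell)
          steep = s > two_class_threshold(s)          # steep class ~ negative terrain
          pad = np.pad(steep, 1, mode="edge")
          # boundary cells: steep cells with at least one gentle 4-neighbour
          neigh_gentle = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
                          ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
          return steep & neigh_gentle                  # mask of shoulder-line cells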

  20. Thermal conductivity enhancement and sedimentation reduction of magnetorheological fluids with nano-sized Cu and Al additives

    NASA Astrophysics Data System (ADS)

    Rahim, M. S. A.; Ismail, I.; Choi, S. B.; Azmi, W. H.; Aqida, S. N.

    2017-11-01

    This work presents enhanced material characteristics of smart magnetorheological (MR) fluids obtained by utilizing nano-sized metal particles. The focus is on enhancing the thermal conductivity and reducing the sedimentation rate of MR fluids, both crucial properties for MR fluid applications. To achieve this goal, a series of MR fluid samples was prepared using carbonyl iron particles (CIP) and hydraulic oil, with added nano-sized particles of copper (Cu), aluminium (Al), and fumed silica (SiO2). Subsequently, the thermal conductivity was measured with a thermal property analyser, and the sedimentation of the MR fluids was measured in glass tubes left unexcited for a long period. The measured thermal conductivity was then compared with theoretical models such as the Maxwell model at various CIP concentrations. In addition, to show the effectiveness of the MR fluids synthesized in this work, the thermal conductivity of the commercially available MRF-132DG was measured and compared with those of the prepared samples. It is observed that the thermal conductivity of the samples is much better than that of MRF-132DG, showing a 148% increase at 40 vol% of magnetic particles. It is also observed that the sedimentation rate of the prepared MR fluid samples is lower than that of MRF-132DG, showing a 9% reduction at 40 vol% of magnetic particles. A mixture-optimized sample with high conductivity and low sedimentation was also obtained; its magnetization showed an enhancement of 70.5% compared to MRF-132DG. Furthermore, the shear yield stress of this sample was also increased, both with and without an applied magnetic field.

  1. Purifying, Separating, and Concentrating Cells From a Sample Low in Biomass

    NASA Technical Reports Server (NTRS)

    Benardini, James N.; LaDuc, Myron T.; Diamond, Rochelle

    2012-01-01

    Frequently there is an inability to process and analyze samples of low biomass due to limited amounts of relevant biomaterial in the sample. Furthermore, molecular biological protocols geared towards increasing the density of recovered cells and biomolecules of interest, by their very nature, also concentrate unwanted inhibitory humic acids and other particulates that have an adverse effect on downstream analysis. A novel and robust fluorescence-activated cell-sorting (FACS)-based technology has been developed for purifying (removing cells from sampling matrices), separating (based on size, density, morphology), and concentrating cells (spores, prokaryotic, eukaryotic) from a sample low in biomass. The technology capitalizes on fluorescent cell-sorting technologies to purify and concentrate bacterial cells from a low-biomass, high-volume sample. Over the past decade, cell-sorting detection systems have undergone enhancements and increased sensitivity, making bacterial cell sorting a feasible concept. Although there are many unknown limitations with regard to the applicability of this technology to environmental samples (smaller cells, few cells, mixed populations), dogmatic principles support the theoretical effectiveness of this technique upon thorough testing and proper optimization. Furthermore, the pilot study on which this report is based proved effective and demonstrated this technology capable of sorting and concentrating bacterial endospore and bacterial cells of varying size and morphology. Two commercial off-the-shelf bacterial counting kits were used to optimize a bacterial stain/dye FACS protocol. A LIVE/DEAD BacLight Viability and Counting Kit was used to distinguish between the live and dead cells. A Bacterial Counting Kit comprising SYTO BC (mixture of SYTO dyes) was employed as a broad-spectrum bacterial counting agent. Optimization using epifluorescence microscopy was performed with these two dye/stains. This refined protocol was further validated using varying ratios and mixtures of cells to ensure homogeneous staining compared to that of individual cells, and was utilized for flow analyzer and FACS labeling. This technology focuses on the purification and concentration of cells from low-biomass spacecraft assembly facility samples. Currently, purification and concentration of low-biomass samples plague planetary protection downstream analyses. Having a capability to use flow cytometry to concentrate cells out of low-biomass, high-volume spacecraft/facility sample extracts will be of extreme benefit to the fields of planetary protection and astrobiology. Successful research and development of this novel methodology will significantly increase the knowledge base for designing more effective cleaning protocols, and ultimately lead to a more empirical and true account of the microbial diversity present on spacecraft surfaces. Refined cleaning and an enhanced ability to resolve microbial diversity may decrease the overall cost of spacecraft assembly and/or provide a means to begin to assess challenging planetary protection missions.

  2. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    PubMed

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF) and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC-curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism than classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
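
    The central quantity in this comparison, optimism, is simply the gap between apparent and validated AUC at a given number of events per variable. A toy re-creation of that measurement with scikit-learn might look like the sketch below; the simulated data, coefficients and event rate are invented and do not reproduce the authors' clinical cohorts or full simulation design.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n_vars, n_val = 10, 20000

      def simulate(n):
          """Dichotomous outcome from a known logistic model (toy data)."""
          X = rng.normal(size=(n, n_vars))
          logit = X @ np.linspace(0.1, 0.5, n_vars) - 1.0
          y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
          return X, y

      X_val, y_val = simulate(n_val)   # large independent validation set

      for epv in (10, 20, 50, 100):                 # approximate events per variable
          n_dev = int(epv * n_vars / 0.27)          # ~27% event rate in this toy model
          X_dev, y_dev = simulate(n_dev)
          for name, model in (("LR", LogisticRegression(max_iter=1000)),
                              ("RF", RandomForestClassifier(n_estimators=200, random_state=0))):
              model.fit(X_dev, y_dev)
              apparent = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
              validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
              print(f"EPV={epv:4d} {name}: apparent={apparent:.3f} "
                    f"validated={validated:.3f} optimism={apparent - validated:.3f}")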

  3. Tabu search and binary particle swarm optimization for feature selection using microarray data.

    PubMed

    Chuang, Li-Yeh; Yang, Cheng-Huei; Yang, Cheng-Hong

    2009-12-01

    Gene expression profiles have great potential as a medical diagnosis tool because they represent the state of a cell at the molecular level. In cancer type classification research, available training datasets generally have a fairly small sample size compared to the number of genes involved. This fact poses an unprecedented challenge to some classification methodologies due to training data limitations. Therefore, a good selection method for genes relevant for sample classification is needed to improve the predictive accuracy, and to avoid incomprehensibility due to the large number of genes investigated. In this article, we propose to combine tabu search (TS) and binary particle swarm optimization (BPSO) for feature selection. BPSO acts as a local optimizer each time the TS has been run for a single generation. The K-nearest neighbor method with leave-one-out cross-validation and a support vector machine with one-versus-rest serve as evaluators of the TS and BPSO. The proposed method is applied to and compared on 11 classification problems taken from the literature. Experimental results show that our method simplifies features effectively and either obtains higher classification accuracy or uses fewer features compared to other feature selection methods.
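
    As a rough, self-contained illustration of the BPSO component of the hybrid scheme above, the sketch below evolves binary feature masks and scores them with a 1-nearest-neighbour classifier under leave-one-out cross-validation. The swarm parameters are placeholders, and the tabu-search outer loop and SVM evaluator of the paper are not reproduced.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score, LeaveOneOut

      def fitness(mask, X, y):
          """LOOCV accuracy of 1-NN on the selected feature subset."""
          if mask.sum() == 0:
              return 0.0
          clf = KNeighborsClassifier(n_neighbors=1)
          return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=LeaveOneOut()).mean()

      def bpso_select(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          pos = rng.integers(0, 2, size=(n_particles, d))          # binary feature masks
          vel = rng.normal(0.0, 0.1, size=(n_particles, d))
          pbest = pos.copy()
          pbest_fit = np.array([fitness(p, X, y) for p in pos])
          g = pbest[pbest_fit.argmax()].copy()                     # global best mask
          for _ in range(n_iter):
              r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
              prob = 1.0 / (1.0 + np.exp(-vel))                    # sigmoid transfer function
              pos = (rng.random((n_particles, d)) < prob).astype(int)
              fit = np.array([fitness(p, X, y) for p in pos])
              improved = fit > pbest_fit
              pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
              g = pbest[pbest_fit.argmax()].copy()
          return g.astype(bool), pbest_fit.max()

    On a toy dataset (for example, scikit-learn's make_classification with a handful of informative features) the returned boolean mask tends to concentrate on the informative columns; on microarray-scale data the LOOCV fitness evaluation dominates the runtime.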

  4. Pulsed-voltage atom probe tomography of low conductivity and insulator materials by application of ultrathin metallic coating on nanoscale specimen geometry.

    PubMed

    Adineh, Vahid R; Marceau, Ross K W; Chen, Yu; Si, Kae J; Velkov, Tony; Cheng, Wenlong; Li, Jian; Fu, Jing

    2017-10-01

    We present a novel approach for analysis of low-conductivity and insulating materials with conventional pulsed-voltage atom probe tomography (APT), by incorporating an ultrathin metallic coating on focused ion beam prepared needle-shaped specimens. Finite element electrostatic simulations of coated atom probe specimens were performed, which suggest remarkable improvement in uniform voltage distribution and subsequent field evaporation of the insulated samples with a metallic coating of approximately 10nm thickness. Using design of experiment technique, an experimental investigation was performed to study physical vapor deposition coating of needle specimens with end tip radii less than 100nm. The final geometries of the coated APT specimens were characterized with high-resolution scanning electron microscopy and transmission electron microscopy, and an empirical model was proposed to determine the optimal coating thickness for a given specimen size. The optimal coating strategy was applied to APT specimens of resin embedded Au nanospheres. Results demonstrate that the optimal coating strategy allows unique pulsed-voltage atom probe analysis and 3D imaging of biological and insulated samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Optimization of the behavior of CTAB coated cobalt ferrite nanoparticles

    NASA Astrophysics Data System (ADS)

    Kumari, Mukesh; Bhatnagar, Mukesh Chander

    2018-05-01

    In this work, we have synthesized cetyltrimethyl ammonium bromide (CTAB) mixed cobalt ferrite (CoFe2O4) nanoparticles (NPs) using the sol-gel auto-combustion method, taking different weight percent ratios of CTAB, i.e., 0%, 1%, 2%, 3% and 4%, with respect to the metal nitrates. The morphological, structural and magnetic properties of these NPs were characterized by high-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), Raman spectroscopy and a physical property measurement system (PPMS). It was found that the saturation magnetization of cobalt ferrite increases with increasing crystallite size of the NPs. Saturation magnetization and crystallite size were both found to be lowest for the sample containing 2% CTAB.

  6. Investigation on Simultaneous Effects of Shot Peen and Austenitizing Time and Temperature on Grain Size and Microstructure of Austenitic Manganese Steel (Hadfield)

    NASA Astrophysics Data System (ADS)

    Beheshti, M.; Zabihiazadboni, M.; Ismail, M. C.; Kakooei, S.; Shahrestani, S.

    2018-03-01

    Optimal conditions for increasing the lifetime of cast parts have been investigated by applying various cycles of heat treatment and shot peening to the Hadfield steel surface. Metallographic and SEM microstructure examinations were used to determine the simultaneous effects of shot peening, austenitizing time and temperature. The results showed that, with increasing austenitizing time and temperature of the cast sample, carbides dissolved into the austenite phase, and with a further increase of austenitizing temperature and time, the austenite grain size became larger. Metallographic images illustrated that, upon shot peening of the Hadfield steel surface, the austenite-martensite transformation did not occur, but the matrix hardened through a twinning formation process.

  7. Numerical study and ex vivo assessment of HIFU treatment time reduction through optimization of focal point trajectory

    NASA Astrophysics Data System (ADS)

    Grisey, A.; Yon, S.; Pechoux, T.; Letort, V.; Lafitte, P.

    2017-03-01

    Treatment time reduction is a key issue to expand the use of high intensity focused ultrasound (HIFU) surgery, especially for benign pathologies. This study aims at quantitatively assessing the potential reduction of the treatment time arising from moving the focal point during long pulses. In this context, the optimization of the focal point trajectory is crucial to achieve a uniform thermal dose repartition and avoid boiling. At first, a numerical optimization algorithm was used to generate efficient trajectories. Thermal conduction was simulated in 3D with a finite difference code and damage to the tissue was modeled using the thermal dose formula. Given an initial trajectory, the thermal dose field was first computed; then, making use of Pontryagin's maximum principle, the trajectory was iteratively refined. Several initial trajectories were tested. Then, an ex vivo study was conducted in order to validate the efficiency of the resulting optimized strategies. Single pulses were performed at 3 MHz on fresh veal liver samples with an Echopulse, and the size of each unitary lesion was assessed by cutting each sample along three orthogonal planes and measuring the dimension of the whitened area on photographs. We propose a promising approach to significantly shorten HIFU treatment time: the numerical optimization algorithm was shown to provide reliable insight into trajectories that can improve treatment strategies. The model must now be improved in order to take in vivo conditions into account and be extensively validated.
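
    The "thermal dose formula" referred to above is conventionally the cumulative equivalent minutes at 43 °C (CEM43), accumulated as R^(43−T)·Δt with R = 0.5 above 43 °C and 0.25 below, with a lesioning threshold often taken around 240 CEM43. The sketch below computes that standard quantity for a made-up temperature history; it is the generic dose formula, not the authors' optimization code.

      import numpy as np

      def cem43(temps_C, dt_s):
          """Cumulative equivalent minutes at 43 degC for a temperature time series."""
          temps = np.asarray(temps_C, dtype=float)
          R = np.where(temps >= 43.0, 0.5, 0.25)
          return np.sum(R ** (43.0 - temps)) * dt_s / 60.0

      # Illustrative HIFU pulse: rise to ~60 degC, then cool-down (made-up numbers)
      t = np.arange(0.0, 20.0, 0.1)                       # s
      T = 37.0 + 23.0 * np.exp(-((t - 6.0) / 4.0) ** 2)   # degC
      print(f"{cem43(T, dt_s=0.1):.0f} CEM43 (lesion threshold often taken as 240)")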

  8. Regional HLA Differences in Poland and Their Effect on Stem Cell Donor Registry Planning

    PubMed Central

    Schmidt, Alexander H.; Solloch, Ute V.; Pingel, Julia; Sauter, Jürgen; Böhme, Irina; Cereb, Nezih; Dubicka, Kinga; Schumacher, Stephan; Wachowiak, Jacek; Ehninger, Gerhard

    2013-01-01

    Regional HLA frequency differences are of potential relevance for the optimization of stem cell donor recruitment. We analyzed a very large sample (n = 123,749) of registered Polish stem cell donors. Donor figures by 1-digit postal code regions ranged from n = 5,243 (region 9) to n = 19,661 (region 8). Simulations based on region-specific haplotype frequencies showed that donor recruitment in regions 0, 2, 3 and 4 (mainly located in the south-eastern part of Poland) resulted in an above-average increase of matching probabilities for Polish patients. Regions 1, 7, 8, 9 (mainly located in the northern part of Poland) showed an opposite behavior. However, HLA frequency differences between regions were generally small. A strong indication for regionally focused donor recruitment efforts can, therefore, not be derived from our analyses. Results of haplotype frequency estimations showed sample size effects even for sizes between n≈5,000 and n≈20,000. This observation deserves further attention as most published haplotype frequency estimations are based on much smaller samples. PMID:24069237

  9. Optimization of crystallization conditions for biological macromolecules.

    PubMed

    McPherson, Alexander; Cudney, Bob

    2014-11-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success.

  10. Optimization of crystallization conditions for biological macromolecules

    PubMed Central

    McPherson, Alexander; Cudney, Bob

    2014-01-01

    For the successful X-ray structure determination of macromolecules, it is first necessary to identify, usually by matrix screening, conditions that yield some sort of crystals. Initial crystals are frequently microcrystals or clusters, and often have unfavorable morphologies or yield poor diffraction intensities. It is therefore generally necessary to improve upon these initial conditions in order to obtain better crystals of sufficient quality for X-ray data collection. Even when the initial samples are suitable, often marginally, refinement of conditions is recommended in order to obtain the highest quality crystals that can be grown. The quality of an X-ray structure determination is directly correlated with the size and the perfection of the crystalline samples; thus, refinement of conditions should always be a primary component of crystal growth. The improvement process is referred to as optimization, and it entails sequential, incremental changes in the chemical parameters that influence crystallization, such as pH, ionic strength and precipitant concentration, as well as physical parameters such as temperature, sample volume and overall methodology. It also includes the application of some unique procedures and approaches, and the addition of novel components such as detergents, ligands or other small molecules that may enhance nucleation or crystal development. Here, an attempt is made to provide guidance on how optimization might best be applied to crystal-growth problems, and what parameters and factors might most profitably be explored to accelerate and achieve success. PMID:25372810

  11. A highly selective nanocomposite based on MIP for curcumin trace levels quantification in food samples and human plasma following optimization by central composite design.

    PubMed

    Bahrani, Sonia; Ghaedi, Mehrorang; Khoshnood Mansoorkhani, Mohammad Javad; Ostovan, Abbas

    2017-01-01

    A selective and rapid method was developed for the quantification of curcumin in human plasma and food samples using molecularly imprinted magnetic multiwalled carbon nanotubes (MMWCNTs), which were characterized by EDX and FESEM. The effects of sorbent mass, eluent volume and sonication time on the response of the solid-phase microextraction procedure were optimized by central composite design (CCD) combined with response surface methodology (RSM) using Statistica. Preliminary experiments revealed that, among different solvents, methanol:dimethyl sulfoxide (4:1, v/v) led to efficient and quantitative elution of the analyte. A reversed-phase high-performance liquid chromatographic technique with UV detection (HPLC-UV) was applied for the determination of the curcumin content. The assay procedure involves chromatographic separation on an analytical Nucleosil C18 column (250 × 4.6 mm i.d., 5 μm particle size) at ambient temperature with acetonitrile-water adjusted to pH 4.0 (20:80, v/v) as the mobile phase at a flow rate of 1.0 mL min⁻¹, while the UV detector was set at 420 nm. Under optimized conditions, the method demonstrated a linear calibration curve with a good detection limit (0.028 ng mL⁻¹) and R² = 0.9983. The proposed method was successfully applied to biological fluid and food samples including ginger powder, curry powder, and turmeric powder. Copyright © 2016. Published by Elsevier B.V.

  12. Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Vaughan, O. H.

    1978-01-01

    In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, such as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure through which traverses a parallel beam of light for measuring the optical depth as a function of time. An analytic function fit to the optical depth time decay curve can be directly inverted to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forwardscattering corrections to the measured extinction coefficient are also discussed with the aim of optimizing the experimental design so that the error due to forwardscattering is minimized.

  13. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    PubMed

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
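
    The aliquot-number optimization in a nested sampling design of this kind can be sketched with a simple two-stage variance model: the variance of the site mean is the spatial component divided by the number of field samples, plus the aliquoting and counting components divided by samples × aliquots, with cost growing in both. The variance components and labour costs below are illustrative assumptions, not the study's estimates.

    ```python
    # Variance of the site mean for a nested sampling design:
    #   Var(mean) = s2_spatial / n_samples + (s2_aliquot + s2_count) / (n_samples * n_aliquots)
    # Variance components and labour costs are assumed for illustration.
    s2_spatial, s2_aliquot, s2_count = 1.0, 0.15, 0.05   # assumed variance components
    cost_sample, cost_aliquot = 2.0, 0.5                 # assumed hours per sample / per aliquot
    budget = 40.0                                        # total labour-hours available

    best = None
    for n_aliquots in range(1, 11):
        # Largest number of field samples affordable when each gets this many aliquots.
        n_samples = int(budget // (cost_sample + n_aliquots * cost_aliquot))
        if n_samples < 2:
            continue
        var_mean = s2_spatial / n_samples + (s2_aliquot + s2_count) / (n_samples * n_aliquots)
        if best is None or var_mean < best[2]:
            best = (n_samples, n_aliquots, var_mean)
        print(f"aliquots={n_aliquots:2d}  samples={n_samples:2d}  Var(mean)={var_mean:.4f}")

    print("best allocation (samples, aliquots, variance):", best)
    ```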

  14. A two-stage stochastic optimization model for scheduling electric vehicle charging loads to relieve distribution-system constraints

    DOE PAGES

    Wu, Fei; Sioshansi, Ramteen

    2017-05-25

    Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
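
    Sample-average approximation replaces the expectation over uncertain quantities with an average over a finite set of sampled scenarios and optimizes the first-stage decision against that average. The toy sketch below applies the idea to a deliberately simplified day-ahead energy-procurement problem (cheap advance purchase, expensive real-time recourse for any shortfall); the prices and demand distribution are assumptions, and the model is far simpler than the paper's EV charging formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy two-stage problem: buy x MWh day-ahead at price c; any shortfall versus the
    # realized charging demand D must be bought in real time at the higher price q.
    c, q = 40.0, 90.0                                        # assumed $/MWh prices
    scenarios = rng.gamma(shape=8.0, scale=1.5, size=2000)   # sampled demand scenarios (MWh)

    def saa_cost(x, demands):
        """Sample-average of first-stage cost plus recourse (shortfall purchase)."""
        shortfall = np.maximum(demands - x, 0.0)
        return c * x + q * shortfall.mean()

    # The SAA objective is piecewise linear and convex in x, so a fine grid search suffices here.
    grid = np.linspace(0, scenarios.max(), 2000)
    costs = np.array([saa_cost(x, scenarios) for x in grid])
    x_star = grid[costs.argmin()]
    print(f"SAA first-stage purchase: {x_star:.2f} MWh, expected cost ≈ {costs.min():.1f} $")

    # Newsvendor check: the optimizer should sit near the (1 - c/q) quantile of demand.
    print("critical-quantile solution:", np.quantile(scenarios, 1 - c / q).round(2))
    ```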

  15. Optimal design of studies of influenza transmission in households. I: case-ascertained studies.

    PubMed

    Klick, B; Leung, G M; Cowling, B J

    2012-01-01

    Case-ascertained household transmission studies, in which households including an 'index case' are recruited and followed up, are invaluable to understanding the epidemiology of influenza. We used a simulation approach parameterized with data from household transmission studies to evaluate alternative study designs. We compared studies that relied on self-reported illness in household contacts vs. studies that used home visits to collect swab specimens for virological confirmation of secondary infections, allowing for the trade-off between sample size vs. intensity of follow-up given a fixed budget. For studies estimating the secondary attack proportion, 2-3 follow-up visits with specimens collected from all members regardless of illness were optimal. However, for studies comparing secondary attack proportions between two or more groups, such as controlled intervention studies, designs with reactive home visits following illness reports in contacts were most powerful, while a design with one home visit optimally timed also performed well.
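
    A stripped-down version of this kind of design comparison can be run as a Monte Carlo power calculation: simulate secondary infections in household contacts under two arms, thin them by the detection sensitivity implied by the design, and count how often a two-proportion test declares the arms different. The sketch below ignores within-household clustering and uses assumed attack proportions and sensitivities; it illustrates the simulation approach rather than reproducing the authors' parameterized model.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    def simulated_power(n_households, contacts_per_hh, sap_control, sap_interv,
                        detection_sens, n_sims=2000, alpha=0.05):
        """Monte Carlo power for comparing secondary attack proportions between two arms.
        detection_sens models imperfect ascertainment (e.g. self-report vs. swabbing)."""
        z_crit = norm.ppf(1 - alpha / 2)
        hits = 0
        n_contacts = n_households * contacts_per_hh
        for _ in range(n_sims):
            # True secondary infections, thinned by the design's detection sensitivity.
            x0 = rng.binomial(n_contacts, sap_control * detection_sens)
            x1 = rng.binomial(n_contacts, sap_interv * detection_sens)
            p0, p1 = x0 / n_contacts, x1 / n_contacts
            p_pool = (x0 + x1) / (2 * n_contacts)
            se = np.sqrt(2 * p_pool * (1 - p_pool) / n_contacts)
            if se > 0 and abs(p1 - p0) / se > z_crit:
                hits += 1
        return hits / n_sims

    # Assumed inputs: 100 index households per arm, 3 contacts each, attack proportions 0.15 vs 0.08.
    for label, sens in [("virological follow-up", 0.95), ("self-reported illness", 0.6)]:
        print(label, "power ≈", simulated_power(100, 3, 0.15, 0.08, sens))
    ```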

  16. Automated measurement of diatom size

    USGS Publications Warehouse

    Spaulding, Sarah A.; Jewson, David H.; Bixby, Rebecca J.; Nelson, Harry; McKnight, Diane M.

    2012-01-01

    Size analysis of diatom populations has not been widely considered, but it is a potentially powerful tool for understanding diatom life histories, population dynamics, and phylogenetic relationships. However, measuring cell dimensions on a light microscope is a time-consuming process. An alternative technique has been developed using digital flow cytometry on a FlowCAM® (Fluid Imaging Technologies) to capture hundreds, or even thousands, of images of a chosen taxon from a single sample in a matter of minutes. Up to 30 morphological measures may be quantified through post-processing of the high resolution images. We evaluated FlowCAM size measurements, comparing them against measurements from a light microscope. We found good agreement between measurement of apical cell length in species with elongated, straight valves, including small Achnanthidium minutissimum (11–21 µm) and large Didymosphenia geminata (87–137 µm) forms. However, a taxon with curved cells, Hannaea baicalensis (37–96 µm), showed differences of ~4 µm between the two methods. Discrepancies appear to be influenced by the choice of feret or geodesic measurement for asymmetric cells. We describe the operating conditions necessary for analysis of size distributions and present suggestions for optimal instrument conditions for size analysis of diatom samples using the FlowCAM. The increased speed of data acquisition through use of imaging flow cytometers like the FlowCAM is an essential step for advancing studies of diatom populations.

  17. Automating Structural Analysis of Spacecraft Vehicles

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2004-01-01

    A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.

  18. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ 1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
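
    One standard way to handle the non-smooth ℓ1 term in practice is to split each weight into positive and negative parts, which turns the penalty into a linear term that a general-purpose solver can handle. The sketch below does this with SciPy's SLSQP on simulated returns; it is a generic ℓ1-penalized mean-variance illustration under assumed data and penalty levels, not the LPO estimator of the article.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    p, T = 30, 120                                 # assumed: 30 assets, 120 return observations
    returns = rng.normal(0.001, 0.02, size=(T, p))
    mu, sigma = returns.mean(axis=0), np.cov(returns, rowvar=False)

    lam, risk_aversion = 0.05, 5.0                 # assumed penalty and risk-aversion levels

    def objective(z):
        # z = [u, v] with w = u - v and u, v >= 0, so sum(u + v) equals ||w||_1.
        u, v = z[:p], z[p:]
        w = u - v
        return risk_aversion * w @ sigma @ w - mu @ w + lam * np.sum(u + v)

    constraints = [{"type": "eq", "fun": lambda z: np.sum(z[:p] - z[p:]) - 1.0}]  # fully invested
    bounds = [(0, None)] * (2 * p)
    z0 = np.concatenate([np.full(p, 1.0 / p), np.zeros(p)])
    res = minimize(objective, z0, method="SLSQP", bounds=bounds, constraints=constraints)

    w = res.x[:p] - res.x[p:]
    print("selected assets (|w| > 1e-4):", int(np.sum(np.abs(w) > 1e-4)), "of", p)
    print("sum of weights:", round(w.sum(), 4))
    ```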

  19. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    PubMed

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.

  20. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
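
    The pre-sampling idea is easy to prototype: draw counts from a clumped (negative binomial) distribution fitted to pilot data and watch how quickly the sample mean stabilizes as trees are added. The sketch below uses an assumed mean density and clumping parameter k, mapped to numpy's (n, p) parameterization, and is an illustration of the simulation approach rather than the authors' software.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Assumed pilot estimates: mean galls per tree and negative binomial clumping parameter k.
    mean_density, k = 12.0, 1.5
    # numpy parameterization: n = k, p = k / (k + mean).
    p = k / (k + mean_density)

    def required_sample_size(target_cv=0.15, max_n=200, n_sims=1000):
        """Smallest n whose simulated mean has a coefficient of variation below target_cv."""
        for n in range(5, max_n + 1, 5):
            means = rng.negative_binomial(k, p, size=(n_sims, n)).mean(axis=1)
            cv = means.std() / means.mean()
            if cv <= target_cv:
                return n, cv
        return max_n, cv

    n_needed, cv = required_sample_size()
    print(f"≈{n_needed} trees give a CV of the mean of {cv:.3f} under the assumed distribution")
    ```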

  1. The effect of nanoparticle size on theranostic systems: the optimal particle size for imaging is not necessarily optimal for drug delivery

    NASA Astrophysics Data System (ADS)

    Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela

    2018-02-01

    Theranostics is an emerging field, defined as combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered as an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery, similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) in different sizes (diameter range: 20-120 nm) were injected to tumor bearing mice and their uptake by tumors was measured, as well as their tumor visualization capabilities as tumor-targeted CT contrast agent. Interestingly, the results showed that different particles led to highest tumor uptake or highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications on the design of theranostic nanoplatforms.

  2. Using a Blender to Assess the Microbial Density of Encapsulated Organisms

    NASA Technical Reports Server (NTRS)

    Benardini, James N.; Koukol, Robert C.; Kazarians, Gayane A.; Schubert, Wayne W.; Morales, Fabian

    2013-01-01

    There are specific NASA requirements for source-specific encapsulated microbial density for encapsulated organisms in non-metallic materials. Projects such as the Mars Science Laboratory (MSL) that use large volumes of non-metallic materials of planetary protection concern pose a challenge to their bioburden budget. An optimized and adapted destructive hardware technology employing a commercial blender was developed to assess the embedded bioburden of thermal paint for the MSL project. The main objective of this optimization was to blend the painted foil pieces in the smallest sizes possible without excessive heating. The small size increased the surface area of the paint and enabled the release of the maximum number of encapsulated microbes. During a trial run, a piece of foil was placed into a blender for 10 minutes. The outside of the blender was very hot to the touch. Thus, the grinding was reduced to five 2-minute periods with 2-minute cooling periods between cycles. However, almost 20% of the foil fraction was larger (>2 mm). Thus, the largest fractions were then put into the blender and reground, resulting in a 71% increase in particles less than 1 mm in size, and a 76% decrease in particles greater than 2 mm in size. Because a repeatable process had been developed, a painted sample was processed with over 80% of the particles being <2 mm. It was not perceived that the properties (i.e. weight and rubber-like nature) of the painted/foil pieces would allow for a finer size distribution. With these constraints, each section would be ground for a total of 10 minutes with five cycles of a 2-minute pulse followed by a 2-minute pause. It was observed on several occasions that a larger blade affected the recovery of seeded spores by approximately half an order of magnitude. In the standard approach, each piece of painted foil was aseptically removed from the bag and placed onto a sterile tray where they were sized, cut, and cleaned. Each section was then weighed and placed into a sterile Waring Laboratory Blender. Samples were processed on low speed. The ground-up samples were then transferred to a 500-mL bottle using a sterile 1-in. (~2.5-cm) trim brush. To each of the bottles sterile planetary protection rinse solution was added and a modified NASA Standard Assay (NASA HBK 6022) was performed. Both vegetative and spore plates were analyzed.

  3. Optimal deployment of thermal energy storage under diverse economic and climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael

    2014-04-01

    This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations, Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impacts of each optimized TES system are then compared to systems sized using a simple heuristic method, which bases system size on a fraction (50% and 100%) of total on-peak summer cooling loads. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems; between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value of heuristically sized systems by as much as 10 times in some instances.

  4. Analytical solution of a stochastic model of risk spreading with global coupling

    NASA Astrophysics Data System (ADS)

    Morita, Satoru; Yoshimura, Jin

    2013-11-01

    We study a stochastic matrix model to understand the mechanics of risk spreading (or bet hedging) by dispersion. Up to now, this model has been mostly dealt with numerically, except for the well-mixed case. Here, we present an analytical result that shows that optimal dispersion leads to Zipf's law. Moreover, we found that the arithmetic ensemble average of the total growth rate converges to the geometric one, because the sample size is finite.

  5. Development of a Low cost Ultra tiny Line Laser Range Sensor

    DTIC Science & Technology

    2016-12-01

    Development of a Low-cost Ultra-tiny Line Laser Range Sensor. Xiangyu Chen, Moju Zhao, Lingzhu Xiang, Fumihito Sugai, Hiroaki Yaguchi, Kei Okada ... and Masayuki Inaba. Abstract: To enable robotic sensing for tasks with requirements on weight, size, and cost, we develop an ultra-tiny line laser ... view customizable using different laser lenses. The optimal measurement range of the sensor is 0.05 m ∼ 2 m. Higher sampling rates can be achieved

  6. Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.

  7. Fabrication of mesoporous silica nanoparticles by sol gel method followed various hydrothermal temperature

    NASA Astrophysics Data System (ADS)

    Purwaningsih, Hariyati; Pratiwi, Vania Mitha; Purwana, Siti Annisa Bani; Nurdiansyah, Haniffudin; Rahmawati, Yenny; Susanti, Diah

    2018-04-01

    Rice husk is an agricultural waste that can potentially be used as a natural silica resource. Natural silica is claimed to be safe to handle, inexpensive, and obtainable from a cheap resource. In this study, mesoporous silica was synthesized using sodium silicate extracted from rice husk ash. The aims of this research are to study the optimization of silica extraction from rice husk, to characterize mesoporous silica prepared from rice husk by the sol-gel method with surfactant templating, and to assess the effect of hydrothermal temperature on mesoporous silica nanoparticle (MSNp) formation. Silica was extracted from rice husk by the sol-gel method, followed by hydrothermal treatment at temperatures of 85°C, 100°C, 115°C, 130°C and 145°C for 24 hours. X-ray diffraction analysis identified the α-SiO2 phase and NaCl impurities. Scherrer analysis of the crystallite size gave values of 6.27-40.3 nm. FTIR results for the extracted silica and the MSNp indicated Si-O-Si bonds in the samples. SEM showed that the samples have a spherical shape and smooth surface. TEM showed particle sizes ranging between 69.69-84.42 nm. BET showed that the pores are classified as mesoporous, with a pore diameter of 19.29 nm.

  8. The Consideration of Future Consequences and Health Behaviour: A Meta-Analysis.

    PubMed

    Murphy, Lisa; Dockray, Samantha

    2018-06-14

    The aim of this meta-analysis was to quantify the direction and strength of associations between the Consideration of Future Consequences (CFC) scale and intended and actual engagement in three categories of health-related behaviour: health risk, health promotive, and illness preventative/detective behaviour. A systematic literature search was conducted to identify studies that measured CFC and health behaviour. In total, sixty-four effect sizes were extracted from 53 independent samples. Effect sizes were synthesised using a random-effects model. Aggregate effect sizes for all behaviour categories were significant, albeit small in magnitude. There were no significant moderating effects of the length of CFC scale (long vs. short), population type (college students vs. non-college students), mean age, or sex proportion of study samples. CFC reliability and study quality score significantly moderated the overall association between CFC and health risk behaviour only. The magnitude of effect sizes is comparable to associations between health behaviour and other individual difference variables, such as the Big Five personality traits. The findings indicate that CFC is an important construct to consider in research on engagement in health risk behaviour in particular. Future research is needed to examine the optimal approach by which to apply the findings to behavioural interventions.

  9. Method development for speciation analysis of nanoparticle and ionic forms of gold in biological samples by high performance liquid chromatography hyphenated to inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Malejko, Julita; Świerżewska, Natalia; Bajguz, Andrzej; Godlewska-Żyłkiewicz, Beata

    2018-04-01

    A new method based on coupling high performance liquid chromatography (HPLC) to inductively coupled plasma mass spectrometry (ICP MS) has been developed for the speciation analysis of gold nanoparticles (AuNPs) and dissolved gold species (Au(III)) in biological samples. The column type, the composition and the flow rate of the mobile phase were carefully investigated in order to optimize the separation conditions. The usefulness of two polymeric reversed phase columns (PLRP-S with 100 nm and 400 nm pore size) to separate gold species were investigated for the first time. Under the optimal conditions (PLRP-S400 column, 10 mmol L-1 SDS and 5% methanol as the mobile phase, 0.5 mL min-1 flow rate), detection limits of 2.2 ng L-1 for Au(III), 2.8 ng L-1 for 10 nm AuNPs and 3.7 ng L-1 for 40 nm AuNPs were achieved. The accuracy of the method was proved by analysis of reference material RM 8011 (NIST) of gold nanoparticles of nominal diameter of 10 nm. The HPLC-ICP MS method has been successfully applied to the detection and size characterization of gold species in lysates of green algae Acutodesmus obliquus, typical representative of phytoplankton flora, incubated with 10 nm AuNPs or Au(III).

  10. Optimality, sample size, and power calculations for the sequential parallel comparison design.

    PubMed

    Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A

    2011-10-15

    The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high-placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1 and 2 degree of freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
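
    The design logic can be prototyped with a simple Monte Carlo: stage 1 randomizes drug versus placebo, stage 2 re-randomizes only the placebo non-responders, and the analysis combines the two stage-wise differences in response rates with a fixed weight. The sketch below estimates power by comparing that weighted statistic against its simulated null distribution; the response rates, weight and allocation ratio are assumptions, and the test is a simplified stand-in for the score tests developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def spcd_stat(n, alloc_placebo, p_drug, p_placebo, w=0.6):
        """One simulated trial: weighted combination of stage-1 and stage-2 rate differences."""
        n_pbo = int(n * alloc_placebo)
        n_drug = n - n_pbo
        # Stage 1: drug vs. placebo.
        resp_drug1 = rng.binomial(n_drug, p_drug)
        resp_pbo1 = rng.binomial(n_pbo, p_placebo)
        d1 = resp_drug1 / n_drug - resp_pbo1 / n_pbo
        # Stage 2: placebo non-responders re-randomized 1:1 to drug or placebo.
        nonresp = n_pbo - resp_pbo1
        n2_drug = nonresp // 2
        n2_pbo = nonresp - n2_drug
        if n2_drug == 0 or n2_pbo == 0:
            return w * d1
        d2 = rng.binomial(n2_drug, p_drug) / n2_drug - rng.binomial(n2_pbo, p_placebo) / n2_pbo
        return w * d1 + (1 - w) * d2

    def power(n=300, alloc_placebo=0.6, p_drug=0.45, p_placebo=0.30, n_sims=4000, alpha=0.05):
        null = np.array([spcd_stat(n, alloc_placebo, p_placebo, p_placebo) for _ in range(n_sims)])
        alt = np.array([spcd_stat(n, alloc_placebo, p_drug, p_placebo) for _ in range(n_sims)])
        crit = np.quantile(null, 1 - alpha)          # one-sided critical value from null draws
        return float((alt > crit).mean())

    print("simulated power ≈", power())
    ```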

  11. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
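
    The central quantity here is the variance of a time-averaged estimate. If the sampled flow field is effectively uncorrelated at the sampling interval, the variance of the mean falls as σ²/N with N = T/Δt; when samples are correlated, an effective sample size of roughly T/(2·T_int) is the usual correction. The numbers in the sketch below are assumed turbulence statistics used purely to illustrate the scaling, not the paper's model.

    ```python
    import numpy as np

    # Assumed statistics for illustration only.
    sigma = 0.10          # standard deviation of sampled velocity fluctuations (m/s)
    dt = 1.0              # ADCP sampling interval (s)
    t_int = 1.5           # integral time scale of the sampled flow field (s)

    for T in (60, 120, 300, 600, 1200):              # candidate exposure times (s)
        n = T / dt
        var_uncorr = sigma**2 / n                    # uncorrelated-samples assumption
        n_eff = T / (2 * t_int)                      # effective sample size for correlated samples
        var_corr = sigma**2 / n_eff
        print(f"T={T:5.0f}s  std(mean) uncorrelated={np.sqrt(var_uncorr):.4f}  "
              f"correlated={np.sqrt(var_corr):.4f} m/s")
    ```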

  12. Synthesis, crystal structure and magnetic properties of superconducting single crystals of HgBa2CuO4+δ

    NASA Astrophysics Data System (ADS)

    Bertinotti, A.; Viallet, V.; Colson, D.; Marucco, J.-F.; Hammann, J.; Forget, A.; Le Bras, G.

    1996-02-01

    Single crystals of HgBa2CuO4+δ of submillimetric sizes were grown with the same one step, low pressure, gold amalgamation technique used to obtain single crystals of HgBa2Ca2Cu3O8+δ. Remarkable superconducting properties are displayed by the samples which are optimally doped as grown. The sharpness of the transition profiles of the magnetic susceptibility, its anisotropy dependence and the volume fraction exhibiting the Meissner effect exceed the values obtained with the best crystal samples of Hg-1223. X-rays show that no substitutional defects have been found in the mercury plane, in particular no mixed occupancy of copper at the mercury site. The interstitial oxygen content at (1/2, 1/2, 0) δ = 0.066+/-0.008 is about one third that observed in optimally doped Hg-1223, resulting in an identical doping level per CuO2 plane in both compounds.

  13. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives accurate evaluation of the structural and thermodynamic properties of the conformational transition which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could highly increase the computational efficiency and thus expand the application of REMD simulation to larger-size protein systems.
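
    Underneath any REMD variant sits the Metropolis swap criterion between neighbouring temperatures: a configuration exchange is accepted with probability min(1, exp[(β_i − β_j)(E_i − E_j)]). The sketch below implements only that bookkeeping for a toy energy trace; it is a generic temperature-REMD swap step with an assumed replica ladder, not the velocity-scaling hybrid solvent scheme of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    kB = 0.0019872041                     # kcal/(mol·K), to convert temperatures to beta

    temperatures = np.array([300.0, 320.0, 342.0, 366.0])     # assumed replica ladder (K)
    betas = 1.0 / (kB * temperatures)

    def try_swap(energies, i, j):
        """Metropolis criterion for exchanging configurations between replicas i and j."""
        delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
        return delta >= 0 or rng.random() < np.exp(delta)

    # Toy energy trace: pretend each replica's instantaneous potential energy fluctuates
    # around a temperature-dependent mean (purely illustrative numbers, not a real trajectory).
    n_pairs = len(temperatures) - 1
    accept_count = np.zeros(n_pairs)
    n_attempts = 5000
    for step in range(n_attempts):
        energies = -1000.0 + 2.0 * temperatures + rng.normal(0, 15.0, size=len(temperatures))
        pair = step % n_pairs                      # alternate over neighbouring pairs
        if try_swap(energies, pair, pair + 1):
            accept_count[pair] += 1

    print("neighbour swap acceptance ratios:",
          np.round(accept_count / (n_attempts / n_pairs), 2))
    ```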

  14. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited to the step of spatial sampling to calculate the convolution of integrals of the Fresnel type. Explicitly the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of integral convolution calculations and is not connected with the space resolution of an optical scheme. In the developed approach the convolution in the normal space is replaced by computations of the shear strain of ambiguity function in the phase space. The spatial sampling is then determined by the space resolution of an optical scheme. The sampling step can differ in various directions because of the source anisotropy. The approach was used to simulate original images in the X-ray Talbot interferometry and showed that the simulation can be applied to optimize the methods of postprocessing.

  15. Isolating magnetic moments from individual grains within a magnetic assemblage

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Fabian, K.; Jansen, C.; Lascu, I.; Harrison, R.; Barnhoorn, A.; de Groot, L. V.

    2017-12-01

    Methods to derive paleodirections or paleointensities from rocks currently rely on measurements of bulk samples (typically 10 cc). The process of recording and storing magnetizations as function of temperature, however, differs for grains of various sizes and chemical compositions. Most rocks, by their mere nature, consist of assemblages of grains varying in size, shape, and chemistry. Unraveling the behavior of individual grains is a holy grail in fundamental rock magnetism. Recently, we showed that it is possible to obtain plausible magnetic moments for individual grains in a synthetic sample by a micromagnetic tomography (MMT) technique. We use a least-squares inversion to obtain these magnetic moments based on the physical locations and dimensions of the grains obtained from a MicroCT scanner and a magnetic flux density map of the surface of the sample. The sample used for this proof of concept, however, was optimized for success: it had a low dispersion of the grains, and the grains were large enough so they were easily detected by the MicroCT scanner. Natural lavas are much more complex than the synthetic sample analyzed so far: the dispersion of the magnetic markers is one order of magnitude higher, the grains differ more in composition and size, and many small (submicron) magnetic markers may be present that go undetected by the MicroCT scanner. Here we present the first results derived from a natural volcanic sample from the 1907-flow at Hawaii. To analyze the magnetic flux at the surface of the sample at room temperature, we used the Magnetic Tunneling Junction (MTJ) technique. We were able to successfully obtain MicroCT and MTJ scans from the sample and isolate plausible magnetic moments for individual grains in the top 70 µm of the sample. We discuss the potential of the MMT technique applied to natural samples and compare the MTJ and SSM methods in terms of work flow and quality of the results.
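
    At its core the inversion described above is a linear least-squares problem: each grain contributes a dipole field to every surface pixel, so the measured flux map equals a design matrix built from the known grain positions times the unknown moment vectors. The synthetic sketch below illustrates that linear-inversion idea under an assumed geometry and noise level; it is not the authors' processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    MU0_4PI = 1e-7                                   # mu_0 / (4*pi) in SI units

    # Assumed geometry: a few grains below the surface and a grid of surface sensors.
    grains = np.array([[10e-6, 12e-6, -30e-6],
                       [35e-6, 20e-6, -50e-6],
                       [22e-6, 40e-6, -20e-6]])       # grain positions (m)
    xs = np.linspace(0, 50e-6, 20)
    sensors = np.array([[x, y, 0.0] for x in xs for y in xs])   # sensor positions on z = 0

    def dipole_bz_kernel(sensor, grain):
        """Row block mapping one grain's moment (mx, my, mz) to Bz at one sensor."""
        r = sensor - grain
        rn = np.linalg.norm(r)
        # Bz = mu0/4pi * (3*z*(m·r)/r^5 - mz/r^3), written as a linear form in m.
        return MU0_4PI * (3 * r[2] * r / rn**5 - np.array([0.0, 0.0, 1.0]) / rn**3)

    # Build the design matrix A (n_sensors x 3*n_grains).
    A = np.zeros((len(sensors), 3 * len(grains)))
    for i, s in enumerate(sensors):
        for g, pos in enumerate(grains):
            A[i, 3 * g:3 * g + 3] = dipole_bz_kernel(s, pos)

    # Synthetic "true" moments, forward-modelled data with noise, then least-squares recovery.
    m_true = rng.normal(0, 1e-14, size=3 * len(grains))        # A·m^2, illustrative magnitude
    bz = A @ m_true + rng.normal(0, 1e-9, size=len(sensors))   # add sensor noise (T)
    m_est, *_ = np.linalg.lstsq(A, bz, rcond=None)

    print("relative recovery error per moment component:")
    print(np.round(np.abs(m_est - m_true) / (np.abs(m_true) + 1e-30), 3))
    ```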

  16. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  17. Low energy isomers of (H2O)25 from a hierarchical method based on Monte Carlo Temperature Basin Paving and Molecular Tailoring Approaches benchmarked by full MP2 calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahu, Nityananda; Gadre, Shridhar R.; Bandyopadhyay, Pradipta

    We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving (MCTBP) sampling of the cluster’s Potential Energy Surface (PES) with the Effective Fragment Potential (EFP), subsequent geometry optimization using the Molecular Tailoring fragmentation Approach (MTA) and final refinement at the second-order Møller-Plesset perturbation (MP2) level of theory. The MTA geometry optimizations used between 14 and 18 main fragments with maximum sizes between 11 and 14 water molecules and an average size of 10 water molecules, whose energies and gradients were computed at the MP2 level. The MTA-MP2 optimized geometries were found to be quite close (within < 0.5 kcal/mol) to the ones obtained from the MP2 optimization of the whole cluster. The grafting of the MTA-MP2 energies yields electronic energies that are within < 5×10⁻⁴ a.u. from the MP2 results for the whole cluster while preserving their energy order. The MTA-MP2 method was also found to reproduce the MP2 harmonic vibrational frequencies in both the HOH bending and the OH stretching regions.

  18. Nanometer-sized ceria-coated silica-iron oxide for the reagentless microextraction/preconcentration of heavy metals in environmental and biological samples followed by slurry introduction to ICP-OES.

    PubMed

    Dados, A; Paparizou, E; Eleftheriou, P; Papastephanou, C; Stalikas, C D

    2014-04-01

    A slurry suspension sampling technique is developed and optimized for the rapid microextraction of heavy metals and analysis using nanometer-sized ceria-coated silica-iron oxide particles and inductively coupled plasma optical emission spectrometry (ICP-OES). Magnetic-silica material is synthesized by a co-precipitation and sol-gel method followed by ceria coating through a precipitation. The large particles are removed using a sedimentation-fractionation procedure and a magnetic homogeneous colloidal suspension of ceria-modified iron oxide-silica is produced for microextraction. The nanometer-sized particles are separated from the sample solution magnetically and analyzed with ICP-OES using a slurry suspension sampling approach. The ceria-modified iron oxide-silica does not contain any organic matter and this probably justifies the absence of matrix effect on plasma atomization capacity, when increased concentrations of slurries are aspirated. The As, Be, Mo, Cr, Cu, Pb, Hg, Sb, Se and V can be preconcentrated by the proposed method at pH 6.0 while Mn, Cd, Co and Ni require a pH ≥ 8.0. Satisfactory values are obtained for the relative standard deviations (2-6%), recoveries (88-102%), enrichment factors (14-19) and regression correlation coefficients as well as detectability, at sub-μg L(-1) levels. The applicability of magnetic ceria for the microextraction of metal ions in combination with the slurry introduction technique using ICP is substantiated by the analysis of environmental water and urine samples. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Formulation and In-vitro Evaluation of Tretinoin Microemulsion as a Potential Carrier for Dermal Drug Delivery

    PubMed Central

    Mortazavi, Seyed Alireza; Pishrochi, Sanaz; Jafari azar, Zahra

    2013-01-01

    In this study, tretinoin microemulsion has been formulated based on phase diagram studies by changing the amounts and proportions of inactive ingredients, such as surfactants, co-surfactants and oils. The effects of these variables have been determined on microemulsion formation, particle size of the dispersed phase and release profile of tretinoin from the microemulsion through a dialysis membrane. In the release studies, static Franz diffusion cells mounted with a dialysis membrane were used. Sampling was conducted every 3 h at room temperature over a period of 24 h. The amount of released drug was measured with a UV spectrophotometer and the percentage of drug released was calculated. Based on the results obtained, the oil phase concentration had a proportional effect on particle size, which can consequently influence drug release. The particle size and the amount of released drug were affected by the applied surfactants. The components of the optimized microemulsion formulation were 15% olive oil, 12% propylene glycol (as co-surfactant), 33% Tween®80 (as surfactant) and 40% distilled water, which was tested for viscosity and rheological behavior. The prepared tretinoin microemulsion showed pseudoplastic-thixotropic behavior. The profile of drug release follows zero-order kinetics. The optimized tretinoin microemulsion showed enhanced in-vitro release profile compared to the commercial gels and creams. PMID:24523740

  20. Formulation and In-vitro Evaluation of Tretinoin Microemulsion as a Potential Carrier for Dermal Drug Delivery.

    PubMed

    Mortazavi, Seyed Alireza; Pishrochi, Sanaz; Jafari Azar, Zahra

    2013-01-01

    In this study, tretinoin microemulsion has been formulated based on phase diagram studies by changing the amounts and proportions of inactive ingredients, such as surfactants, co-surfactants and oils. The effects of these variables have been determined on microemulsion formation, particle size of the dispersed phase and release profile of tretinoin from the microemulsion through a dialysis membrane. In the release studies, static Franz diffusion cells mounted with a dialysis membrane were used. Sampling was conducted every 3 h at room temperature over a period of 24 h. The amount of released drug was measured with a UV spectrophotometer and the percentage of drug released was calculated. Based on the results obtained, the oil phase concentration had a proportional effect on particle size, which can consequently influence drug release. The particle size and the amount of released drug were affected by the applied surfactants. The components of the optimized microemulsion formulation were 15% olive oil, 12% propylene glycol (as co-surfactant), 33% Tween®80 (as surfactant) and 40% distilled water, which was tested for viscosity and rheological behavior. The prepared tretinoin microemulsion showed pseudoplastic-thixotropic behavior. The profile of drug release follows zero-order kinetics. The optimized tretinoin microemulsion showed enhanced in-vitro release profile compared to the commercial gels and creams.

  1. Optimization and validation of highly selective microfluidic integrated silicon nanowire chemical sensor

    NASA Astrophysics Data System (ADS)

    Ehfaed, Nuri. A. K. H.; Bathmanathan, Shillan A. L.; Dhahi, Th S.; Adam, Tijjani; Hashim, Uda; Noriman, N. Z.

    2017-09-01

    The study proposes the characterization and optimization of a silicon nanosensor for the specific detection of heavy metals. The sensor was fabricated in-house using conventional photolithography coupled with size reduction via a dry etching process in an oxidation furnace. Prior to heavy metal detection, the sensor's capability with aqueous samples was determined using serial DI water tests. The sensor surface was modified with the organofunctional alkoxysilane (3-aminopropyl)triethoxysilane (APTES) to create molecular binding chemistry. This allowed interaction between the heavy metals being measured and the sensing element, resulting in an increase in the measured current. Due to its excellent detection capabilities, the sensor was able to identify heavy metal species from different groups. The device was further integrated with sub-50 µm microfluidics for chemical delivery.

  2. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  3. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to large-size format light guide plate (LGP) design optimization for applications such as photo-bioreactors, in which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink-dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. This allows a significant reduction in the total time needed for dot pattern optimization.

  4. Massively parallel sequencing of 17 commonly used forensic autosomal STRs and amelogenin with small amplicons.

    PubMed

    Kim, Eun Hye; Lee, Hwan Young; Yang, In Seok; Jung, Sang-Eun; Yang, Woo Ick; Shin, Kyoung-Jin

    2016-05-01

    The next-generation sequencing (NGS) method has been utilized to analyze short tandem repeat (STR) markers, which are routinely used for human identification purposes in the forensic field. Some researchers have demonstrated the successful application of the NGS system to STR typing, suggesting that NGS technology may be an alternative or additional method to overcome limitations of capillary electrophoresis (CE)-based STR profiling. However, there has been no available multiplex PCR system that is optimized for NGS analysis of forensic STR markers. Thus, we constructed a multiplex PCR system for the NGS analysis of 18 markers (13 CODIS STRs, D2S1338, D19S433, Penta D, Penta E and amelogenin) by designing amplicons in the size range of 77-210 base pairs. Then, PCR products were generated from two single-source samples, mixed samples and artificially degraded DNA samples using the multiplex PCR system, and were prepared for sequencing on the MiSeq system through construction of a subsequent barcoded library. By performing NGS and analyzing the data, we confirmed that the resultant STR genotypes were consistent with those of CE-based typing. Moreover, sequence variations were detected in targeted STR regions. Through the use of small-sized amplicons, the developed multiplex PCR system enables researchers to obtain successful STR profiles even from artificially degraded DNA, as well as from STR loci which are analyzed with large-sized amplicons in the CE-based commercial kits. In addition, successful profiles can be obtained from mixtures up to a 1:19 ratio. Consequently, the developed multiplex PCR system, which produces small-size amplicons, can be successfully applied to STR NGS analysis of forensic casework samples such as mixtures and degraded DNA samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Optimal design of clinical trials with biologics using dose-time-response models.

    PubMed

    Lange, Markus R; Schmidli, Heinz

    2014-12-30

    Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
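
    Because the particle swarm optimizer is central to the design search, a bare-bones version of the algorithm is sketched below on a stand-in continuous objective; the inertia and acceleration constants are common textbook choices, and the Rosenbrock test function takes the place of the D-optimality criterion used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def pso(objective, lower, upper, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimization for a box-constrained continuous objective."""
        dim = len(lower)
        x = rng.uniform(lower, upper, size=(n_particles, dim))      # positions
        v = np.zeros_like(x)                                        # velocities
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(n_iters):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lower, upper)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Stand-in objective (Rosenbrock); in a design problem this would be the negative
    # log-determinant of the information matrix over candidate doses and allocations.
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    best_x, best_f = pso(rosen, np.array([-2.0, -2.0]), np.array([2.0, 2.0]))
    print("PSO minimum ≈", np.round(best_x, 3), "objective ≈", round(best_f, 5))
    ```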

  6. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack and fully simulates the predation behavior and prey-distribution strategy of wolves. It possesses three intelligent behaviors: migration, summons and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are also characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was utilized in parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that the CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  7. Cluster randomised crossover trials with binary data and unbalanced cluster sizes: application to studies of near-universal interventions in intensive care.

    PubMed

    Forbes, Andrew B; Akram, Muhammad; Pilcher, David; Cooper, Jamie; Bellomo, Rinaldo

    2015-02-01

    Cluster randomised crossover trials have been utilised in recent years in the health and social sciences. Methods for analysis have been proposed; however, for binary outcomes, these have received little assessment of their appropriateness. In addition, methods for determination of sample size are currently limited to balanced cluster sizes both between clusters and between periods within clusters. This article aims to extend this work to unbalanced situations and to evaluate the properties of a variety of methods for analysis of binary data, with a particular focus on the setting of potential trials of near-universal interventions in intensive care to reduce in-hospital mortality. We derive a formula for sample size estimation for unbalanced cluster sizes, and apply it to the intensive care setting to demonstrate the utility of the cluster crossover design. We conduct a numerical simulation of the design in the intensive care setting and for more general configurations, and we assess the performance of three cluster summary estimators and an individual-data estimator based on binomial-identity-link regression. For settings similar to the intensive care scenario involving large cluster sizes and small intra-cluster correlations, the sample size formulae developed and analysis methods investigated are found to be appropriate, with the unweighted cluster summary method performing well relative to the more optimal but more complex inverse-variance weighted method. More generally, we find that the unweighted and cluster-size-weighted summary methods perform well, with the relative efficiency of each largely determined systematically from the study design parameters. Performance of individual-data regression is adequate with small cluster sizes but becomes inefficient for large, unbalanced cluster sizes. When outcome prevalences are 6% or less and the within-cluster-within-period correlation is 0.05 or larger, all methods display sub-nominal confidence interval coverage, with the less prevalent the outcome the worse the coverage. As with all simulation studies, conclusions are limited to the configurations studied. We confined attention to detecting intervention effects on an absolute risk scale using marginal models and did not explore properties of binary random effects models. Cluster crossover designs with binary outcomes can be analysed using simple cluster summary methods, and sample size in unbalanced cluster size settings can be determined using relatively straightforward formulae. However, caution needs to be applied in situations with low prevalence outcomes and moderate to high intra-cluster correlations. © The Author(s) 2014.
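
    The unweighted cluster summary analysis referred to above is simple to prototype: compute, for each cluster, the difference in event proportions between its intervention and control periods and apply a one-sample t-test to those differences. The sketch below simulates a binary-outcome cluster crossover trial with unequal cluster sizes and a shared cluster effect; all prevalences, effect sizes and cluster counts are assumptions, and this is a simplified stand-in for the authors' simulation study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)

    def simulate_trial(n_clusters=20, mean_size=400, p_control=0.10, risk_diff=-0.01,
                       cluster_sd=0.02):
        """One cluster crossover trial analysed by the unweighted cluster summary method."""
        diffs = []
        for _ in range(n_clusters):
            # Unequal cluster-period sizes and a shared cluster effect on the risk scale.
            size = max(20, int(rng.normal(mean_size, mean_size / 3)))
            effect = rng.normal(0, cluster_sd)
            p_ctl = np.clip(p_control + effect, 0.001, 0.999)
            p_trt = np.clip(p_control + risk_diff + effect, 0.001, 0.999)
            y_ctl = rng.binomial(size, p_ctl) / size
            y_trt = rng.binomial(size, p_trt) / size
            diffs.append(y_trt - y_ctl)
        _, p_value = stats.ttest_1samp(diffs, 0.0)
        return p_value

    rejections = np.mean([simulate_trial() < 0.05 for _ in range(1000)])
    print("empirical power of the unweighted cluster summary t-test ≈", rejections)
    ```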

  8. Comparing kinetic curves in liquid chromatography

    NASA Astrophysics Data System (ADS)

    Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.

    2017-01-01

    Five equations for kinetic curves, which connect the number of theoretical plates N and the analysis time t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to the region of relatively low efficiency; in the region of high efficiency, the advantage shifts to a method based on the simultaneous optimization of the sorbent grain size and the pressure drop across the column.

  9. Simultaneous determination of 9 heterocyclic aromatic amines in pork products by liquid chromatography coupled with triple quadrupole tandem mass spectrometry

    NASA Astrophysics Data System (ADS)

    Shen, X. C.; Zhang, Y. L.; Cui, Y. Q.; Xu, L. Y.; Li, X.; Qi, J. H.

    2017-07-01

    Heterocyclic aromatic amines (HAAs) are potent mutagens that form at high temperatures in cooked, protein-rich food. Owing to their frequent intake, an accurate method for detecting these compounds in various heat-treated meat products is essential to assess the human health risk of HAA exposure. In this study, a liquid chromatography-electrospray tandem mass spectrometry (LC-ESI-MS/MS) method operating in multiple reaction monitoring (MRM) mode was developed for the determination of 9 mutagenic HAAs in meat samples. Ultrasound-assisted extraction with diatomaceous earth was employed to extract HAAs from food samples, and the analytes were purified and enriched using tandem solid-phase extraction with a propyl sulfonic acid cartridge coupled to a C18 cartridge. Two parameters, extraction time and eluent, were carefully optimized to improve the extraction and purification efficiency. The LC separation was carried out on a Zorbax SB-C18 column (3.5 μm particle size, 2.1 × 150 mm i.d.), and parameters such as pH, concentration and volume were optimized. Under the optimal experimental conditions, recoveries ranged from 52.97% to 97.11%, with good quality parameters: limits of detection between 0.02 and 0.24 ng mL-1, linearity (R2 > 0.998), and run-to-run and day-to-day precisions lower than 9.81%. To evaluate the performance of the method in high-throughput analysis of complex meat samples, the LC-MS/MS method was applied to the analysis of HAAs in three food samples, and the results demonstrated that the method can be used for the trace determination of HAAs in pork samples.

  10. ALCHEMY: a reliable method for automated SNP genotype calling for small batch sizes and highly homozygous populations

    PubMed Central

    Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.

    2010-01-01

    Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is that current methods for automated calling of genotypes are based on clustering approaches, which require a large number of samples to be analyzed simultaneously or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’, based on statistical modeling of the raw intensity data rather than model-free clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per-sample basis, allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes, with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420

  11. Value for money? A contingent valuation study of the optimal size of the Swedish health care budget.

    PubMed

    Eckerlund, I; Johannesson, M; Johansson, P O; Tambour, M; Zethraeus, N

    1995-11-01

    The contingent valuation method has been developed in the environmental field to measure the willingness to pay for environmental changes using survey methods. In this exploratory study the contingent valuation method was used to analyse how much individuals are willing to spend in total, in the form of taxes, on health care in Sweden, i.e. to analyse the optimal size of the 'health care budget' in Sweden. A binary contingent valuation question was included in a telephone survey of a random sample of 1260 households in Sweden. With a conservative interpretation of the data, the results show that 50% of the respondents would accept an increased tax payment to health care of about SEK 60 per month ($1 = SEK 8). It is concluded that the results indicate that the population overall thinks that current spending on health care in Sweden is at a reasonable level. There seems to be a willingness to increase tax payments somewhat, but major increases do not seem acceptable to a majority of the population.
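
    A binary (dichotomous-choice) contingent valuation question of this kind is usually analysed by modelling the probability of accepting a tax bid as a logistic function of the bid; the median willingness to pay is the bid at which acceptance is 50%, i.e. minus the intercept divided by the slope. The sketch below fits such a model by maximum likelihood on simulated responses (not the Swedish survey data); the bid levels and coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_logit_wtp(bids, accepted):
    """Fit P(accept) = logistic(a + b*bid) by maximum likelihood and return the
    median willingness to pay, -a/b (standard single-bounded CV analysis)."""
    X = np.column_stack([np.ones_like(bids), bids])

    def negloglik(beta):
        eta = X @ beta
        # Bernoulli-logit negative log-likelihood in a numerically stable form.
        return np.sum(np.logaddexp(0.0, eta)) - np.sum(accepted * eta)

    res = minimize(negloglik, x0=np.zeros(2), method="BFGS")
    a, b = res.x
    return -a / b, (a, b)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bids = rng.choice([20, 40, 60, 80, 120, 200], size=1260).astype(float)  # SEK/month, illustrative
    true_a, true_b = 1.8, -0.03                  # acceptance falls as the tax bid rises (assumed)
    p = 1.0 / (1.0 + np.exp(-(true_a + true_b * bids)))
    accepted = rng.binomial(1, p)
    median_wtp, _ = fit_logit_wtp(bids, accepted)
    print(f"estimated median WTP: about SEK {median_wtp:.0f} per month")
```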

  12. Size and shape effects on diffusion and absorption of colloidal particles near a partially absorbing sphere: implications for uptake of nanoparticles in animal cells.

    PubMed

    Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian

    2008-12-01

    A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have larger diffusion constant but weaker interaction with the sphere while larger particles have smaller diffusion constant but stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.

  13. Size and shape effects on diffusion and absorption of colloidal particles near a partially absorbing sphere: Implications for uptake of nanoparticles in animal cells

    NASA Astrophysics Data System (ADS)

    Shi, Wendong; Wang, Jizeng; Fan, Xiaojun; Gao, Huajian

    2008-12-01

    A mechanics model describing how a cell membrane with diffusive mobile receptors wraps around a ligand-coated cylindrical or spherical particle has been recently developed to model the role of particle size in receptor-mediated endocytosis. The results show that particles in the size range of tens to hundreds of nanometers can enter cells even in the absence of clathrin or caveolin coats. Here we report further progress on modeling the effects of size and shape in diffusion, interaction, and absorption of finite-sized colloidal particles near a partially absorbing sphere. Our analysis indicates that, from the diffusion and interaction point of view, there exists an optimal hydrodynamic size of particles, typically in the nanometer regime, for the maximum rate of particle absorption. Such optimal size arises as a result of balance between the diffusion constant of the particles and the interaction energy between the particles and the absorbing sphere relative to the thermal energy. Particles with a smaller hydrodynamic radius have larger diffusion constant but weaker interaction with the sphere while larger particles have smaller diffusion constant but stronger interaction with the sphere. Since the hydrodynamic radius is also determined by the particle shape, an optimal hydrodynamic radius implies an optimal size as well as an optimal aspect ratio for a nonspherical particle. These results show broad agreement with experimental observations and may have general implications on interaction between nanoparticles and animal cells.

  14. Investigation of optimal route to fabricate submicron-sized Sm{sub 2}Fe{sub 17} particles with reduction-diffusion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Shusuke, E-mail: shusuke-okada@aist.go.jp; Takagi, Kenta; Ozaki, Kimihiro

    Submicron-sized Sm{sub 2}Fe{sub 17} powder samples were fabricated by a non-pulverizing process through reduction-diffusion of precursors prepared by a wet-chemical technique. Three precursors with different morphologies (micron-sized porous Sm-Fe oxide-impregnated iron nitrate, acicular goethite impregnated with samarium nitrate, and a conventional Sm-Fe coprecipitate) were prepared and subjected to hydrogen reduction and reduction-diffusion treatment to clarify whether these precursors could be converted to Sm{sub 2}Fe{sub 17} without impurity phases and which precursor is the most attractive for producing submicron-sized Sm{sub 2}Fe{sub 17} powder. As a result, all three precursors were successfully converted to Sm{sub 2}Fe{sub 17} powders without impurity phases, and the synthesis route using iron-oxide particle-impregnated samarium oxide was revealed to have the greatest potential among the three routes.

  15. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
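
    The yield-maximization code itself is not reproduced in this record; as a rough illustration of the underlying idea, the sketch below greedily assigns slices of a fixed exposure-time budget to whichever target currently offers the largest completeness gain per unit time. The saturating completeness curves, their parameters and the time budget are invented stand-ins, not the paper's completeness calculations.

```python
import numpy as np

def greedy_yield_allocation(c_max, tau, total_time, dt=0.5):
    """
    Toy benefit/cost scheduler. Each target i has an assumed completeness curve
    C_i(t) = c_max[i] * (1 - exp(-t / tau[i])); at every step the next slice of
    exposure time dt goes to the target with the largest completeness gain.
    Returns per-target exposure times and the total expected yield (sum of C_i,
    with any occurrence-rate factor folded into c_max for simplicity).
    """
    n = len(c_max)
    t = np.zeros(n)

    def completeness(times):
        return c_max * (1.0 - np.exp(-times / tau))

    used = 0.0
    while used + dt <= total_time:
        gain = completeness(t + dt) - completeness(t)   # marginal benefit of dt per target
        i = int(np.argmax(gain))
        t[i] += dt
        used += dt
    return t, float(completeness(t).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    c_max = rng.uniform(0.2, 0.9, 50)     # asymptotic completeness per target (assumed)
    tau = rng.uniform(1.0, 20.0, 50)      # days of exposure needed to approach it (assumed)
    t, yield_ = greedy_yield_allocation(c_max, tau, total_time=365.0)
    print(f"targets observed: {(t > 0).sum()}, expected yield: {yield_:.1f}")
```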

  16. Scaling ice microstructures from the laboratory to nature: cryo-EBSD on large samples.

    NASA Astrophysics Data System (ADS)

    Prior, David; Craw, Lisa; Kim, Daeyeong; Peyroux, Damian; Qi, Chao; Seidemann, Meike; Tooley, Lauren; Vaughan, Matthew; Wongpan, Pat

    2017-04-01

    Electron backscatter diffraction (EBSD) has significantly extended our ability to conduct detailed quantitative microstructural investigations of rocks, metals and ceramics. EBSD on ice was first developed in 2004. Techniques have improved significantly in the last decade and EBSD is now becoming more common in the microstructural analysis of ice. This is particularly true for laboratory-deformed ice where, in some cases, the fine grain sizes exclude the possibility of using a thin section of the ice. Having the orientations of all axes (rather than just the c-axis, as in an optical method) yields important new information about ice microstructure. It is important to examine natural ice samples in the same way so that we can scale laboratory observations to nature. In the case of ice deformation, higher strain rates are used in the laboratory than those seen in nature. These are achieved by increasing stress and/or temperature, and it is important to verify that the microstructures produced in the laboratory are comparable with those observed in nature. Natural ice samples are coarse grained. Glacier and ice sheet ice has a grain size from a few mm up to several cm. Sea and lake ice has grain sizes of a few cm to many metres. Extending EBSD analysis to larger sample sizes is thus needed to capture representative microstructures. The chief impediments to working on large ice samples are sample exchange, limitations on stage motion and temperature control. Large ice samples cannot be transferred through a typical commercial cryo-transfer system, which limits sample sizes. We transfer through a nitrogen glove box that encloses the main scanning electron microscope (SEM) door. The nitrogen atmosphere prevents the cold stage and the sample from becoming covered in frost. A long optimal working distance for EBSD (around 30 mm for the Otago cryo-EBSD facility), obtained by moving the camera away from the pole piece, enables the stage to move without crashing into either the EBSD camera or the SEM pole piece (final lens). In theory a sample up to 100 mm perpendicular to the tilt axis by 150 mm parallel to the tilt axis can be analysed. In practice, the motion of our stage is restricted to maximum dimensions of 100 by 50 mm by a conductive copper braid on our cold stage. Temperature control becomes harder as the samples become larger. If the samples become too warm they will start to sublime and the quality of the EBSD data will degrade. Large samples need to be relatively thin (5 mm or less) so that conduction of heat to the cold stage is more effective at keeping the surface temperature low. In the Otago facility, samples of up to 40 mm by 40 mm present little problem and can be analysed for several hours without significant sublimation. Larger samples need more care, e.g. fast sample transfer to keep the sample very cold. The largest samples we work on routinely are 40 by 60 mm in size. We will show examples of EBSD data from glacial ice and sea ice from Antarctica and from large laboratory ice samples.

  17. Size-exclusion chromatography of perfluorosulfonated ionomers.

    PubMed

    Mourey, T H; Slater, L A; Galipo, R C; Koestner, R J

    2011-08-26

    A size-exclusion chromatography (SEC) method in N,N-dimethylformamide containing 0.1 M LiNO(3) is shown to be suitable for the determination of molar mass distributions of three classes of perfluorosulfonated ionomers, including Nafion(®). Autoclaving sample preparation is optimized to prepare molecular solutions free of aggregates, and a solvent exchange method concentrates the autoclaved samples to enable the use of molar-mass-sensitive detection. Calibration curves obtained from light scattering and viscometry detection suggest minor variation in the specific refractive index increment across the molecular size distributions, which introduces inaccuracies in the calculation of local absolute molar masses and intrinsic viscosities. Conformation plots that combine apparent molar masses from light scattering detection with apparent intrinsic viscosities from viscometry detection partially compensate for the variations in refractive index increment. The conformation plots are consistent with compact polymer conformations, and they provide Mark-Houwink-Sakurada constants that can be used to calculate molar mass distributions without molar-mass-sensitive detection. Unperturbed dimensions and characteristic ratios calculated from viscosity-molar mass relationships indicate unusually free rotation of the perfluoroalkane backbones and may suggest limitations to applying two-parameter excluded volume theories for these ionomers. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Magnetic solid-phase extraction of tetracyclines using ferrous oxide coated magnetic silica microspheres from water samples.

    PubMed

    Lian, Lili; Lv, Jinyi; Wang, Xiyue; Lou, Dawei

    2018-01-26

    A novel magnetic solid-phase extraction approach based on a Fe3O4@SiO2@FeO magnetic nanocomposite was proposed for the extraction of potential residues of tetracyclines (TCs) in tap and river water samples. Characterization results showed that the as-prepared Fe3O4@SiO2@FeO possessed pronounced magnetism and a core-shell structure. Modified FeO nanoparticles with an ∼5 nm size distribution were homogeneously dispersed on the surface of the silica shell. Owing to the strong surface affinity of Fe(II) toward TCs, the magnetic nanocomposite could be applied to efficiently extract three TC antibiotics, namely oxytetracycline, tetracycline and chlortetracycline, from water samples. Several factors influencing the extraction performance for TCs, such as sorbent amount, pH, adsorption and desorption times, desorption solvent, selectivity and sample volume, were investigated and optimized. Under optimized conditions, the developed method showed excellent linearity (R > 0.9992) in the range of 0.133-333 μg L-1. The limits of detection were between 0.027 and 0.107 μg L-1 for oxytetracycline, tetracycline and chlortetracycline, respectively. The feasibility of this method was evaluated by analysis of tap and river water samples. The recoveries at the spiked concentration levels ranged from 91.0% to 104.6% with favorable reproducibility (RSD < 4%). Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Optimizing and Validating a Brief Assessment for Identifying Children of Service Members at Risk for Psychological Health Problems Following Parent Deployment

    DTIC Science & Technology

    2017-09-01

    these groups. In the 2014/2015 year, efforts focused on securing a commitment from the United States Marine Corps to host the study. In Winter 2014... we can reach an adjusted sample size target in the 2017/2018 project year by expanding our recruitment to incorporate deploying infantry groups... Vocabulary Test Revised. Circle Pines, MN: American Guidance Service. George, C. & Solomon, J. (2008). The caregiving system: A behavioral systems approach

  20. Ultrastructurally-smooth thick partitioning and volume stitching for larger-scale connectomics

    PubMed Central

    Hayworth, Kenneth J.; Xu, C. Shan; Lu, Zhiyuan; Knott, Graham W.; Fetter, Richard D.; Tapia, Juan Carlos; Lichtman, Jeff W.; Hess, Harald F.

    2015-01-01

    FIB-SEM has become an essential tool for studying neural tissue at resolutions below 10×10×10 nm, producing datasets superior for automatic connectome tracing. We present a technical advance, ultrathick sectioning, which reliably subdivides embedded tissue samples into chunks (20 µm thick) optimally sized and mounted for efficient, parallel FIB-SEM imaging. These chunks are imaged separately and then ‘volume stitched’ back together, producing a final 3D dataset suitable for connectome tracing. PMID:25686390

  1. Study design in high-dimensional classification analysis.

    PubMed

    Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen

    2016-10-01

    Advances in high-throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
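
    The paper's analytic treatment of PCC and higher criticism thresholding is not shown in the abstract; a brute-force stand-in is to estimate PCC by simulation for candidate training sizes and report the smallest size reaching a target precision. The classifier below (t-statistic feature screening plus a midpoint linear rule) and its sparsity and effect-size settings are purely illustrative assumptions.

```python
import numpy as np

def estimate_pcc(n_per_class, p=1000, n_signal=20, delta=0.6, reps=100, seed=4):
    """Monte Carlo estimate of the probability of correct classification (PCC) for a
    naive screen-then-classify rule; a stand-in for the paper's higher-criticism-based
    procedure, with arbitrary sparsity and effect-size settings."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(p)
    mu[:n_signal] = delta                                  # sparse mean shift between classes
    correct, total = 0, 0
    for _ in range(reps):
        X0 = rng.normal(0.0, 1.0, (n_per_class, p))        # class 0 training sample
        X1 = rng.normal(mu, 1.0, (n_per_class, p))         # class 1 training sample
        tstat = (X1.mean(0) - X0.mean(0)) / np.sqrt(2.0 / n_per_class)
        keep = np.abs(tstat) > 2.0                         # crude feature screening
        if not keep.any():
            keep[np.argmax(np.abs(tstat))] = True
        w = np.sign(tstat) * keep                          # direction on selected features
        mid = 0.5 * (X0.mean(0) + X1.mean(0))
        for label, gen_mu in ((0, np.zeros(p)), (1, mu)):  # one test point per class
            x = rng.normal(gen_mu, 1.0, p)
            pred = int((x - mid) @ w > 0.0)
            correct += int(pred == label)
            total += 1
    return correct / total

def smallest_n_for_target_pcc(target=0.90, grid=(10, 20, 40, 80, 160)):
    """Smallest per-class training size on a grid whose estimated PCC meets the target."""
    for n in grid:
        if estimate_pcc(n) >= target:
            return n
    return None

if __name__ == "__main__":
    print("smallest per-class n reaching PCC 0.90:", smallest_n_for_target_pcc())
```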

  2. Optimization of nanoparticle structure for improved conversion efficiency of dye solar cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohamed, Norani Muti, E-mail: noranimuti-mohamed@petronas.com.my; Zaine, Siti Nur Azella, E-mail: ct.azella@gmail.com.my

    2014-10-24

    Heavy dye loading and the ability to contain light within a thin layer (typically ∼12 μm) are the requirements for the photoelectrode material in order to enhance the harvesting efficiency of a dye solar cell. This can be realized by optimizing the particle size with a desirable crystal structure. The paper reports an investigation of the dependence of dye loading and light scattering on the properties of nanostructured photoelectrode materials by comparing 4 different samples of TiO{sub 2} in the form of nanoparticles and micron-sized TiO{sub 2} aggregates composed of nanocrystallites. Their properties were evaluated using scanning electron microscopy, X-ray diffraction and UV-Vis spectroscopy, while the performance of the fabricated test cells was measured using a universal photovoltaic test system (UPTS) under 1000 W/cm{sup 2} intensity of radiation. Nano-sized particles provide a large surface area which allows for greater dye adsorption, but they have no ability to retain the incident light in the TiO{sub 2} film. In contrast, micron-sized particles in the form of aggregates can generate light scattering, allowing the travelling distance of the light to be extended and increasing the interaction between the photons and the dye molecules adsorbed on the TiO{sub 2} nanocrystallites. This resulted in an improvement in the conversion efficiency of the aggregates, demonstrating the close relation between the light scattering effect and the structure of the photoelectrode film.

  3. Assessing the Power of Exome Chips.

    PubMed

    Page, Christian Magnus; Baranzini, Sergio E; Mevik, Bjørn-Helge; Bos, Steffan Daniel; Harbo, Hanne F; Andreassen, Bettina Kulle

    2015-01-01

    Genotyping chips for rare and low-frequency variants have recently gained popularity with the introduction of exome chips, but the utility of these chips remains unclear. These chips were designed using exome sequencing data from mainly American-European individuals, enriched for a narrow set of common diseases. In addition, it is well known that the statistical power to detect associations with rare and low-frequency variants is much lower than in studies exclusively involving common variants. We developed a simulation program, adaptable to any exome chip design, to empirically evaluate the power of exome chips. We implemented the main properties of the Illumina HumanExome BeadChip array. The simulated data sets were used to assess the power of exome-chip-based studies for varying effect sizes and causal variant scenarios. We applied two widely used statistical approaches for rare and low-frequency variants, which collapse the variants into genetic regions or genes. Under optimal conditions, we found that a sample size between 20,000 and 30,000 individuals was needed to detect modest effect sizes (PAR between 0.5% and 1%) with 80% power. For small effect sizes (PAR < 0.5%), 60,000-100,000 individuals were needed in the presence of non-causal variants. In conclusion, we found that at least tens of thousands of individuals are necessary to detect modest effects under optimal conditions. In addition, when rare variant chips are used on cohorts or diseases they were not originally designed for, the identification of associated variants or genes will be even more challenging.
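
    The authors' simulation program models the exome chip design itself; as a much simpler stand-in, the power of a gene-level collapsing (carrier vs. non-carrier) test can be estimated by simulation for a given carrier frequency, odds ratio and exome-wide significance threshold. All of the settings in the sketch below are illustrative assumptions, not the paper's scenarios, though they echo the same qualitative conclusion that tens of thousands of samples are needed.

```python
import numpy as np
from scipy.stats import chi2_contingency

def burden_test_power(n_cases, n_controls, carrier_freq=0.01, odds_ratio=1.5,
                      alpha=2.5e-6, reps=200, seed=5):
    """Empirical power of a simple carrier/non-carrier collapsing test for one gene.
    Carrier frequency, OR and the exome-wide alpha are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Carrier frequency among cases implied by the odds ratio (controls at baseline).
    odds_ctrl = carrier_freq / (1 - carrier_freq)
    odds_case = odds_ratio * odds_ctrl
    freq_case = odds_case / (1 + odds_case)
    hits = 0
    for _ in range(reps):
        a = rng.binomial(n_cases, freq_case)        # carriers among cases
        b = rng.binomial(n_controls, carrier_freq)  # carriers among controls
        table = np.array([[a, n_cases - a], [b, n_controls - b]])
        _, p, _, _ = chi2_contingency(table)
        hits += int(p < alpha)
    return hits / reps

if __name__ == "__main__":
    for n in (5_000, 20_000, 50_000):
        print(n, "per group:", burden_test_power(n, n))
```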

  4. Topological Hall and Spin Hall Effects in Disordered Skyrmionic Textures

    NASA Astrophysics Data System (ADS)

    Ndiaye, Papa Birame; Akosa, Collins; Manchon, Aurelien; Spintronics Theory Group Team

    We carry out a thorough study of the topological Hall and topological spin Hall effects in disordered skyrmionic systems: the dimensionless (spin) Hall angles are evaluated across the energy band structure in the multiprobe Landauer-Büttiker formalism, and their link to the effective magnetic field emerging from the real-space topology of the spin texture is highlighted. We discuss these results for an optimal skyrmion size and for various sample sizes, and find that the adiabatic approximation still holds for large skyrmions as well as for nanoskyrmions only a few atoms in size. Finally, we test the robustness of the topological signals against disorder strength and show that the topological Hall effect is highly sensitive to momentum scattering. This work was supported by the King Abdullah University of Science and Technology (KAUST) through the Award No OSR-CRG URF/1/1693-01 from the Office of Sponsored Research (OSR).

  5. Importance of size and distribution of Ni nanoparticles for the hydrodeoxygenation of microalgae oil.

    PubMed

    Song, Wenji; Zhao, Chen; Lercher, Johannes A

    2013-07-22

    Improved synthetic approaches for preparing small Ni nanoparticles (d=3 nm) supported on HBEA zeolite have been explored and compared with the traditional impregnation method. The formation of surface nickel silicate/aluminate in the two precipitation processes is inferred to lead to a stronger interaction between the metal and the support. The lower Brønsted acid concentrations of these two Ni/HBEA catalysts compared with the parent zeolite, caused by the partial exchange of Brønsted acid sites by Ni(2+) cations, do not influence the hydrodeoxygenation rates but alter the product selectivity. Higher initial rates and higher stability have been achieved with these optimized catalysts for the hydrodeoxygenation of stearic acid and microalgae oil. Small metal particles facilitate high initial catalytic activity in the fresh sample, and size uniformity ensures high catalyst stability. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Development of poly-l-lysine-coated calcium-alginate microspheres encapsulating fluorescein-labeled dextrans

    NASA Astrophysics Data System (ADS)

    Charron, Luc; Harmer, Andrea; Lilge, Lothar

    2005-09-01

    A technique to produce fluorescent cell phantom standards based on calcium alginate microspheres with encapsulated fluorescein-labeled dextrans is presented. An electrostatic ionotropic gelation method is used to create the microspheres, which are then treated with poly-l-lysine to trap the dextrans inside. Both procedures were examined in detail to find the optimal parameters for producing cell phantoms meeting our requirements. Size distributions favoring 10-20 μm microspheres were obtained by varying the high voltage and needle size parameters. Typical size distributions of the samples were centered at 150 μm diameter. Neither the molecular weight nor the charge of the dextrans had a significant effect on their retention in the microspheres, though anionic dextrans were chosen to aid future capillary electrophoresis work. Increasing the exposure time of the microspheres to the poly-l-lysine solution decreased the leakage rates of fluorescein-labeled dextrans.

  7. Optimal background matching camouflage.

    PubMed

    Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C

    2017-07-12

    Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.

  8. Optimal cut-off levels to define obesity: body mass index and waist circumference, and their relationship to cardiovascular disease, dyslipidaemia, hypertension and diabetes in Malaysia.

    PubMed

    Zaher, Zaki Morad Mohd; Zambari, Robayaah; Pheng, Chan Siew; Muruga, Vadivale; Ng, Bernard; Appannah, Geeta; Onn, Lim Teck

    2009-01-01

    Many studies in Asia have demonstrated that Asian populations may require lower cut-off levels for body mass index (BMI) and waist circumference to define obesity and abdominal obesity, respectively, compared with western populations. Optimal cut-off levels for body mass index and waist circumference were determined to assess the relationship between these two anthropometric indices and cardiovascular risk. Receiver operating characteristic analysis was used to determine the optimal cut-off levels. The study sample included 1833 subjects (mean age 44 +/- 14 years) from 93 primary care clinics in Malaysia; 872 of the subjects were men and 960 were women. The optimal body mass index cut-off values predicting dyslipidaemia, hypertension, diabetes mellitus, or at least one cardiovascular risk factor varied from 23.5 to 25.5 kg/m2 in men and 24.9 to 27.4 kg/m2 in women. As for waist circumference, the optimal cut-off values varied from 83 to 92 cm in men and from 83 to 88 cm in women. The optimal cut-off values from our study suggest that a body mass index of 23.5 kg/m2 in men and 24.9 kg/m2 in women and a waist circumference of 83 cm in men and women may be more suitable for defining the criteria for overweight or obesity among adults in Malaysia. Waist circumference may be a better indicator than BMI for predicting obesity-related cardiovascular risk factors in men and women. Further investigation using a bigger sample size in Asia is needed to confirm our findings.
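
    Optimal cut-offs in ROC analyses of this kind are commonly chosen by maximizing Youden's index (sensitivity + specificity - 1). The sketch below applies that rule to simulated BMI values and a hypothetical binary risk-factor outcome; it is not the Malaysian dataset, and the simulated parameters are arbitrary.

```python
import numpy as np

def youden_optimal_cutoff(values, outcome):
    """Return the cut-off of a continuous marker (e.g. BMI) that maximizes
    Youden's J = sensitivity + specificity - 1 for a binary outcome."""
    values = np.asarray(values, float)
    outcome = np.asarray(outcome, int)
    best_j, best_cut = -1.0, None
    for c in np.unique(values):
        pred = values >= c                       # classify as "at risk" above the cut-off
        sens = np.mean(pred[outcome == 1])
        spec = np.mean(~pred[outcome == 0])
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, c
    return best_cut, best_j

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n = 1833                                     # same order as the study sample; data simulated
    bmi = rng.normal(25.0, 4.0, n)
    risk = 1.0 / (1.0 + np.exp(-(bmi - 26.0) * 0.5))   # hypothetical risk-factor model
    has_risk_factor = rng.binomial(1, risk)
    print(youden_optimal_cutoff(bmi, has_risk_factor))
```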

  9. Improving risk classification of critical illness with biomarkers: a simulation study

    PubMed Central

    Seymour, Christopher W.; Cooke, Colin R.; Wang, Zheyu; Kerr, Kathleen F.; Yealy, Donald M.; Angus, Derek C.; Rea, Thomas D.; Kahn, Jeremy M.; Pepe, Margaret S.

    2012-01-01

    Purpose: Optimal triage of patients at risk of critical illness requires accurate risk prediction, yet few data exist on the performance criteria required for a potential biomarker to be clinically useful. Materials and Methods: We studied an adult cohort of non-arrest, non-trauma emergency medical services encounters transported to a hospital from 2002-2006. We simulated hypothetical biomarkers increasingly associated with critical illness during hospitalization, and determined the biomarker strength and sample size necessary to improve risk classification beyond a best clinical model. Results: Of 57,647 encounters, 3,121 (5.4%) were hospitalized with critical illness and 54,526 (94.6%) without critical illness. The addition of a moderate-strength biomarker (odds ratio = 3.0 for critical illness) to a clinical model improved discrimination (c-statistic 0.85 vs. 0.8, p < 0.01) and reclassification (net reclassification improvement = 0.15, 95% CI: 0.13, 0.18), and increased the proportion of cases in the highest risk category by +8.6% (95% CI: 7.5, 10.8%). Introducing correlation between the biomarker and physiological variables in the clinical risk score did not modify the results. Statistically significant changes in net reclassification required a sample size of at least 1000 subjects. Conclusions: Clinical models for triage of critical illness could be significantly improved by incorporating biomarkers, yet substantial sample sizes and biomarker strength may be required. PMID:23566734
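
    The category-based net reclassification improvement quoted above can be computed with the standard formula: among events, the proportion moving up a risk category minus the proportion moving down, plus the reverse among non-events. The risk categories, simulated cohort and biomarker effect in the sketch below are illustrative assumptions, not the study data.

```python
import numpy as np

def categorical_nri(risk_old, risk_new, event, cuts=(0.05, 0.20)):
    """Category-based net reclassification improvement: upward moves count as correct
    for events and incorrect for non-events, and vice versa. Cut-points are arbitrary."""
    risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
    cat_old = np.digitize(risk_old, cuts)
    cat_new = np.digitize(risk_new, cuts)
    move = np.sign(cat_new - cat_old)            # +1 up, -1 down, 0 unchanged
    ev, nev = event == 1, event == 0
    nri_events = np.mean(move[ev] == 1) - np.mean(move[ev] == -1)
    nri_nonevents = np.mean(move[nev] == -1) - np.mean(move[nev] == 1)
    return nri_events + nri_nonevents

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n = 10_000
    event = rng.binomial(1, 0.054, n)                      # ~5.4% critical illness rate
    clin = np.clip(0.04 + 0.06 * event + rng.normal(0, 0.03, n), 0.001, 0.999)
    shift = 0.04 * event - 0.01 * (1 - event)              # hypothetical biomarker effect
    new = np.clip(clin + shift + rng.normal(0, 0.01, n), 0.001, 0.999)
    print("NRI:", round(categorical_nri(clin, new, event), 3))
```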

  10. Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.

    PubMed

    Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M

    2017-09-01

    Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
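
    The paper's calculations rest on a multivariate linear mixed-effects model; a much simpler back-of-the-envelope version is the standard two-arm formula for detecting a difference delta in annual rate of change when subject-level slopes are approximately normal with standard deviation sigma_slope: n per arm = 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma_slope^2 / delta^2. The sketch below implements that formula with purely illustrative numbers.

```python
from scipy.stats import norm

def n_per_arm_for_slope_difference(delta, sd_slope, alpha=0.05, power=0.80):
    """Two-arm sample size for a difference in annual rate of change, assuming
    approximately normal subject-level slope estimates with SD sd_slope."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 * sd_slope ** 2 / delta ** 2

if __name__ == "__main__":
    # E.g. slowing decline by 0.3 points/year when slopes vary with SD 2 points/year (illustrative).
    print(round(n_per_arm_for_slope_difference(delta=0.3, sd_slope=2.0)), "participants per arm")
```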

  11. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  12. Tracing Staphylococcus aureus in small and medium-sized food-processing factories on the basis of molecular sub-species typing.

    PubMed

    Koreňová, Janka; Rešková, Zuzana; Véghová, Adriana; Kuchta, Tomáš

    2015-01-01

    Contamination by Staphylococcus aureus of the production environment of three small or medium-sized food-processing factories in Slovakia was investigated on the basis of sub-species molecular identification by multiple locus variable number of tandem repeats analysis (MLVA). On the basis of MLVA profiling, bacterial isolates were assigned to 31 groups. Data from repeated samplings over a period of 3 years made it possible to draw spatial and temporal maps of the contamination routes for the individual factories, as well as to identify potential persistent strains. The information obtained by MLVA typing allowed sources and routes of contamination to be identified and, subsequently, will allow the technical and sanitation measures to be optimized to ensure hygiene.

  13. Interim analysis: A rational approach of decision making in clinical trial.

    PubMed

    Kumar, Amal; Chakraborty, Bhaswat S

    2016-01-01

    Interim analysis, especially of sizeable trials, keeps the decision process free of conflict of interest while considering the cost, resources, and meaningfulness of the project. Whenever necessary, such interim analysis can also call for potential termination or appropriate modification of the sample size or study design, or even an early declaration of success. Given the extraordinary size and complexity of trials today, this rational approach helps to analyze and predict the outcomes of a clinical trial while incorporating what is learned during the course of a study or a clinical development program. Such an approach can also help close the gap between unmet medical needs and the interventions currently being tested by directing resources toward relevant and optimized clinical trials, rather than fulfilling only business and profit goals.

  14. Seeding the initial population with feasible solutions in metaheuristic optimization of steel trusses

    NASA Astrophysics Data System (ADS)

    Kazemzadeh Azad, Saeid

    2018-01-01

    In spite of considerable research work on the development of efficient algorithms for discrete sizing optimization of steel truss structures, only a few studies have addressed non-algorithmic issues affecting the general performance of algorithms. For instance, an important question is whether starting the design optimization from a feasible solution is fruitful or not. This study is an attempt to investigate the effect of seeding the initial population with feasible solutions on the general performance of metaheuristic techniques. To this end, the sensitivity of recently proposed metaheuristic algorithms to the feasibility of initial candidate designs is evaluated through practical discrete sizing of real-size steel truss structures. The numerical experiments indicate that seeding the initial population with feasible solutions can improve the computational efficiency of metaheuristic structural optimization algorithms, especially in the early stages of the optimization. This paves the way for efficient metaheuristic optimization of large-scale structural systems.
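
    Seeding of the kind studied here amounts to filtering random candidate designs through a constraint check before the metaheuristic starts. The sketch below shows that idea for a generic discrete sizing problem; the constraint function, section counts and fallback rule are stand-ins, not the paper's truss benchmarks or algorithms.

```python
import numpy as np

def seed_feasible_population(pop_size, n_members, n_sections, is_feasible,
                             max_tries=100_000, seed=8):
    """Build an initial population of discrete section-index vectors, keeping only
    candidates that pass the supplied constraint check (feasible seeding)."""
    rng = np.random.default_rng(seed)
    population, tries = [], 0
    while len(population) < pop_size and tries < max_tries:
        candidate = rng.integers(0, n_sections, n_members)
        tries += 1
        if is_feasible(candidate):
            population.append(candidate)
    # If feasible designs are too rare, fall back to unfiltered random designs.
    while len(population) < pop_size:
        population.append(rng.integers(0, n_sections, n_members))
    return np.array(population)

if __name__ == "__main__":
    # Stand-in constraint: total "capacity" of the chosen sections must cover a demand.
    demand = 600
    is_feasible = lambda x: (x + 1).sum() * 5 >= demand
    pop = seed_feasible_population(50, 10, 30, is_feasible)
    print("feasible seeds:", sum(is_feasible(ind) for ind in pop), "of", len(pop))
```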

  15. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
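
    The HEOA itself is not reproduced here; to illustrate the inverse problem, the sketch below retrieves lognormal PSD parameters by matching a simplified Fraunhofer-type angular pattern with SciPy's differential evolution (a different stochastic optimizer). The scattering kernel, its size weighting and the angular grid are illustrative simplifications, not the paper's Mie forward model.

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import differential_evolution

ALPHA = np.linspace(46, 150, 200)               # size-parameter grid spanning the abstract's range
THETA = np.radians(np.linspace(0.2, 5.0, 60))   # scattering angles (illustrative)

def lognormal_psd(alpha, mode, sigma):
    mu = np.log(mode) + sigma ** 2              # lognormal mode is exp(mu - sigma^2)
    pdf = np.exp(-(np.log(alpha) - mu) ** 2 / (2 * sigma ** 2)) / (alpha * sigma)
    return pdf / np.sum(pdf * np.gradient(alpha))

def pattern(mode, sigma):
    """Angular intensity from a simplified Fraunhofer-type kernel, averaged over the PSD."""
    x = np.outer(np.sin(THETA), ALPHA)                    # (angle, size) arguments
    kernel = (2.0 * j1(x) / x) ** 2 * ALPHA ** 2          # illustrative size weighting
    return kernel @ (lognormal_psd(ALPHA, mode, sigma) * np.gradient(ALPHA))

if __name__ == "__main__":
    target = pattern(90.0, 0.15)                          # synthetic "measured" pattern
    loss = lambda p: np.sum((pattern(p[0], p[1]) - target) ** 2)
    res = differential_evolution(loss, bounds=[(50, 140), (0.05, 0.4)], seed=9, tol=1e-10)
    print("retrieved mode and sigma:", res.x)
```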

  16. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-07

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
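
    The GPU library is not shown here, but the core of the adaptive particle sampling idea (drawing more Monte Carlo histories from the currently high-intensity spots) can be illustrated in a few lines. The intensity floor that keeps every spot minimally sampled is my own simplification, not necessarily the paper's exact scheme.

```python
import numpy as np

def allocate_histories(spot_intensities, n_histories, floor_fraction=0.05, seed=10):
    """Split a Monte Carlo history budget across pencil-beam spots in proportion to
    their current intensities, while reserving a small floor for every spot."""
    w = np.clip(np.asarray(spot_intensities, float), 0.0, None)
    probs = (1.0 - floor_fraction) * w / w.sum() + floor_fraction / len(w)
    return np.random.default_rng(seed).multinomial(n_histories, probs)

if __name__ == "__main__":
    intensities = np.array([0.1, 0.5, 3.0, 8.0, 0.0, 1.2])   # current spot weights during optimization
    print(allocate_histories(intensities, n_histories=1_000_000))
```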

  17. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy

    NASA Astrophysics Data System (ADS)

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2017-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6  ±  15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.

  18. A New Approach to Integrate GPU-based Monte Carlo Simulation into Inverse Treatment Plan Optimization for Proton Therapy

    PubMed Central

    Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2016-01-01

    Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6±15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size. PMID:27991456

  19. Motor unit recruitment by size does not provide functional advantages for motor performance

    PubMed Central

    Dideriksen, Jakob L; Farina, Dario

    2013-01-01

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers. PMID:24144879

  20. Motor unit recruitment by size does not provide functional advantages for motor performance.

    PubMed

    Dideriksen, Jakob L; Farina, Dario

    2013-12-15

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers.

  1. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, index concentrations, and incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  2. Development of procedures for the identification of human papilloma virus DNA fragments in laser plume

    NASA Astrophysics Data System (ADS)

    Woellmer, Wolfgang; Meder, Tom; Jappe, Uta; Gross, Gerd; Riethdorf, Sabine; Riethdorf, Lutz; Kuhler-Obbarius, Christina; Loening, Thomas

    1996-01-01

    To investigate whether laser plume contains HPV DNA fragments, which may be released during laser treatment of virus-infected tissue, human papillomas and condylomas were treated in vitro with a CO2 laser. For sampling of the laser plume, a new method for trapping the material was developed using water-soluble gelatine filters. These samples were analyzed with the polymerase chain reaction (PCR) technique, which was optimized with regard to the gelatine filters and the specific primers. Positive PCR results for HPV DNA fragments up to the size of a complete oncogene were obtained and are discussed with regard to infectivity.

  3. Design of experiment approach for the process optimisation of microwave assisted extraction of lupeol from Ficus racemosa leaves using response surface methodology.

    PubMed

    Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C

    2013-01-01

    Triterpenoids are a group of important phytocomponents from Ficus racemosa (syn. Ficus glomerata Roxb.) that are known to possess diverse pharmacological activities, which has prompted the development of various extraction techniques and strategies for their better utilisation. The aim was to develop an effective, rapid and ecofriendly microwave-assisted extraction (MAE) strategy, optimised using response surface methodology (RSM) for industrial scale-up, for extracting a potent bioactive triterpenoid compound, lupeol, from young leaves of Ficus racemosa. Initially, a Plackett-Burman design matrix was applied to identify the most significant variables for lupeol extraction among microwave power, irradiation time, particle size, solvent-to-sample (loading) ratio, solvent strength and pre-leaching time. Among the six variables tested, microwave power, irradiation time and solvent-to-sample ratio were found to have a significant effect (P < 0.05) on lupeol extraction and were fitted to a Box-Behnken-design-generated quadratic polynomial equation to predict optimal extraction conditions as well as to locate operability regions with maximum yield. The optimal conditions were a microwave power of 65.67% of 700 W, an extraction time of 4.27 min and a solvent-to-sample ratio of 21.33 mL/g. Confirmation trials under the optimal conditions gave an experimental yield (18.52 µg/g of dry leaves) close to the RSM-predicted value of 18.71 µg/g, and the mathematical model was found to fit the experimental data well. MAE was found to be a more rapid, convenient and appropriate extraction method, with a higher yield and lower solvent consumption, compared with conventional extraction techniques. Copyright © 2012 John Wiley & Sons, Ltd.
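
    The Box-Behnken/RSM step described above amounts to fitting a three-factor quadratic polynomial by least squares and locating its maximum inside the coded design region. The sketch below shows that generic procedure on simulated design points and yields; the coefficients and bounds are invented, not the lupeol data.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quadratic_design_matrix(X):
    """Columns: intercept, linear, two-way interaction, and squared terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    return np.column_stack(cols)

def fit_and_optimize(X, y):
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    predict = lambda x: (quadratic_design_matrix(np.atleast_2d(x)) @ beta)[0]
    res = minimize(lambda x: -predict(x), x0=np.zeros(X.shape[1]),
                   bounds=[(-1, 1)] * X.shape[1])      # stay inside the coded design region
    return res.x, predict(res.x)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    X = rng.uniform(-1, 1, (15, 3))                    # coded power, time, solvent ratio (illustrative)
    true = 18 + 1.5 * X[:, 0] + 0.8 * X[:, 1] - 2.0 * (X[:, 0] - 0.3) ** 2 - 1.0 * X[:, 2] ** 2
    y = true + rng.normal(0, 0.1, len(X))              # simulated yields with noise
    best_x, best_y = fit_and_optimize(X, y)
    print("optimal coded settings:", best_x.round(2), "predicted yield:", round(best_y, 2))
```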

  4. Ultrasound assisted extraction of Maxilon Red GRL dye from water samples using cobalt ferrite nanoparticles loaded on activated carbon as sorbent: Optimization and modeling.

    PubMed

    Mehrabi, Fatemeh; Vafaei, Azam; Ghaedi, Mehrorang; Ghaedi, Abdol Mohammad; Alipanahpour Dil, Ebrahim; Asfaram, Arash

    2017-09-01

    In this research, a selective, simple and rapid ultrasound-assisted dispersive solid-phase microextraction (UA-DSPME) method was developed using cobalt ferrite nanoparticles loaded on activated carbon (CoFe2O4-NPs-AC) as an efficient sorbent for the preconcentration and determination of Maxilon Red GRL (MR-GRL) dye. The properties of the sorbent were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), Fourier transform infrared spectroscopy (FTIR), particle size distribution (PSD) and scanning electron microscopy (SEM) techniques. The factors affecting the determination of MR-GRL dye were investigated and optimized by central composite design (CCD) and artificial neural networks based on a genetic algorithm (ANN-GA). Using ANN-GA, optimum conditions were set at 6.70, 1.2 mg, 5.5 min and 174 μL for pH, sorbent amount, sonication time and volume of eluent, respectively. Under the optimized conditions obtained from ANN-GA, the method exhibited a linear dynamic range of 30-3000 ng mL−1 with a detection limit of 5.70 ng mL−1. The preconcentration factor and enrichment factor were 57.47 and 93.54, respectively, with relative standard deviations (RSDs) less than 4.0% (N=6). The interference effect of some ions and dyes was also investigated and the results showed good selectivity for this method. Finally, the method was successfully applied to the preconcentration and determination of Maxilon Red GRL in water and wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Quantification of soil water retention parameters using multi-section TDR-waveform analysis

    NASA Astrophysics Data System (ADS)

    Baviskar, S. M.; Heimovaara, T. J.

    2017-06-01

    Soil water retention parameters are important for describing flow in variably saturated soils. TDR is one of the standard methods used for determining water content in soil samples. In this study, we present an approach to estimate the water retention parameters of a sample which is initially saturated and then subjected to an incremental decrease in boundary head, causing it to drain in a multi-step fashion. TDR waveforms are measured along the height of the sample at different assumed hydrostatic conditions at daily intervals. The cumulative discharge outflow drained from the sample is also recorded. The saturated water content is obtained using volumetric analysis after the final step of the multi-step drainage. The equation obtained by coupling the unsaturated parametric function and the apparent dielectric permittivity is fitted to a TDR wave propagation forward model. The unsaturated parametric function is used to spatially interpolate the water contents along the TDR probe. The cumulative discharge outflow data are fitted with the cumulative discharge estimated using the unsaturated parametric function. The weight of water inside the sample estimated at the first and final boundary heads in the multi-step drainage is fitted with the corresponding weights calculated using the unsaturated parametric function. A Bayesian optimization scheme is used to obtain optimized water retention parameters for these different objective functions. This approach can be used for long samples and is especially suitable for characterizing sands with a uniform particle size distribution at low capillary heads.
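
    A minimal sketch of the retention-curve fitting idea, assuming a van Genuchten parametric function and a plain least-squares fit to (head, water content) pairs; the study itself couples the parametric function to a TDR waveform forward model and cumulative outflow data and uses a Bayesian optimization scheme, so this only illustrates the parameter-estimation step. The observation values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content as a function of capillary head h (cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# hypothetical multi-step drainage observations
h_obs = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])      # capillary head, cm
theta_obs = np.array([0.42, 0.41, 0.36, 0.25, 0.14, 0.09])  # water content, -

p0 = [0.05, 0.43, 0.03, 2.0]   # initial guesses: theta_r, theta_s, alpha, n
popt, _ = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0,
                    bounds=([0.0, 0.30, 1e-4, 1.1], [0.20, 0.50, 1.0, 6.0]))
print(dict(zip(["theta_r", "theta_s", "alpha (1/cm)", "n"], popt.round(4))))
```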

  6. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application.

    PubMed

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology experimental design was applied for the optimization of transfersomes, using Box-Behnken experimental design. Phospholipon(®) 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as a barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, with an entrapment efficiency of 91.00%±4.90%, and transdermal flux of 6.5±1.1 μg/cm(2)/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes, and an ethanolic phosphate buffer saline. Differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy proved an enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride-loaded transfersome formulation could be a superior alternative to oral delivery of the drug.

  7. Experimental design and optimization of raloxifene hydrochloride loaded nanotransfersomes for transdermal application

    PubMed Central

    Mahmood, Syed; Taher, Muhammad; Mandal, Uttam Kumar

    2014-01-01

    Raloxifene hydrochloride, a highly effective drug for the treatment of invasive breast cancer and osteoporosis in post-menopausal women, shows poor oral bioavailability of 2%. The aim of this study was to develop, statistically optimize, and characterize raloxifene hydrochloride-loaded transfersomes for transdermal delivery, in order to overcome the poor bioavailability issue with the drug. A response surface methodology experimental design was applied for the optimization of transfersomes, using Box-Behnken experimental design. Phospholipon® 90G, sodium deoxycholate, and sonication time, each at three levels, were selected as independent variables, while entrapment efficiency, vesicle size, and transdermal flux were identified as dependent variables. The formulation was characterized by surface morphology and shape, particle size, and zeta potential. Ex vivo transdermal flux was determined using a Hanson diffusion cell assembly, with rat skin as a barrier medium. Transfersomes from the optimized formulation were found to have spherical, unilamellar structures, with a homogeneous distribution and low polydispersity index (0.08). They had a particle size of 134±9 nm, with an entrapment efficiency of 91.00%±4.90%, and transdermal flux of 6.5±1.1 μg/cm2/hour. Raloxifene hydrochloride-loaded transfersomes proved significantly superior in terms of amount of drug permeated and deposited in the skin, with enhancement ratios of 6.25±1.50 and 9.25±2.40, respectively, when compared with drug-loaded conventional liposomes, and an ethanolic phosphate buffer saline. Differential scanning calorimetry study revealed a greater change in skin structure, compared with a control sample, during the ex vivo drug diffusion study. Further, confocal laser scanning microscopy proved an enhanced permeation of coumarin-6-loaded transfersomes, to a depth of approximately 160 μm, as compared with rigid liposomes. These ex vivo findings proved that a raloxifene hydrochloride-loaded transfersome formulation could be a superior alternative to oral delivery of the drug. PMID:25246789

  8. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    PubMed

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H, 15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N, 1H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  9. DNA analysis using an integrated microchip for multiplex PCR amplification and electrophoresis for reference samples.

    PubMed

    Le Roux, Delphine; Root, Brian E; Reedy, Carmen R; Hickey, Jeffrey A; Scott, Orion N; Bienvenue, Joan M; Landers, James P; Chassagne, Luc; de Mazancourt, Philippe

    2014-08-19

    A system that automatically performs the PCR amplification and microchip electrophoretic (ME) separation for rapid forensic short tandem repeat (STR) profiling in a single disposable plastic chip is demonstrated. The microchip subassays were optimized to deliver results comparable to conventional benchtop methods. The microchip process was accomplished in under 90 min compared with >2.5 h for the conventional approach. An infrared laser with a noncontact temperature sensing system was optimized for a 45 min PCR compared with the conventional 90 min amplification time. The separation conditions were optimized using LPA-co-dihexylacrylamide block copolymers specifically designed for microchip separations to achieve accurate DNA size calling in an effective length of 7 cm in a plastic microchip. This effective separation length is less than half of other reports for integrated STR analysis and allows a compact, inexpensive microchip design. This separation quality was maintained when integrated with microchip PCR. Thirty samples were analyzed conventionally and then compared with data generated by the microfluidic chip system. The microfluidic system allele calling was 100% concordant with the conventional process. This study also investigated allelic ladder consistency over time. The PCR-ME genetic profiles were analyzed using binning palettes generated from two sets of allelic ladders run three and six months apart. Using these binning palettes, no allele calling errors were detected in the 30 samples, demonstrating that a microfluidic platform can be highly consistent over long periods of time.

  10. Chemical disorder influence on magnetic state of optimally-doped La0.7Ca0.3MnO3

    NASA Astrophysics Data System (ADS)

    Rozenberg, E.; Auslender, M.; Shames, A. I.; Jung, G.; Felner, I.; Tsindlekht, M. I.; Mogilyansky, D.; Sominski, E.; Gedanken, A.; Mukovskii, Ya. M.; Gorodetsky, G.

    2011-10-01

    X-band electron magnetic resonance and dc/ac magnetic measurements have been employed to study the effects of chemical disorder on magnetic ordering in bulk and nanometer-sized single crystals and bulk ceramics of optimally-doped La0.7Ca0.3MnO3 manganite. The magnetic ground state of the bulk samples appeared to be ferromagnetic, with a lower Curie temperature and higher magnetic homogeneity in the vicinity of the ferromagnetic-paramagnetic phase transition in the crystal than in the ceramics. Technologically driven "macroscopic" fluctuations of the Ca-dopant level in the crystal and "mesoscopic" disorder within grain boundary regions in the ceramics were proposed to be responsible for these effects. Surface spin disorder, together with pronounced inter-particle interactions within the agglomerated nano-sample, results in a well-defined core/shell spin configuration in La0.7Ca0.3MnO3 nano-crystals. The analysis of the electron paramagnetic resonance data clarified the reasons for the observed difference in magnetic order. Lattice effects dominate the first-order nature of the magnetic phase transition in bulk samples. However, mesoscale chemical disorder seems to be responsible for the appearance of small ferromagnetic polarons in the paramagnetic state of bulk ceramics. The experimental results and their analysis indicate that chemical/magnetic disorder has a strong impact on the magnetic state even in the case of the most stable, optimally hole-doped manganites.

  11. Fast DRR generation for 2D to 3D registration on GPUs.

    PubMed

    Tornai, Gábor János; Cserey, György; Pappas, Ion

    2012-08-01

    The generation of digitally reconstructed radiographs (DRRs) is the most time-consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image-guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares the performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels when sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in a few seconds. Accordingly, a whole new field of applications is opened for image-guided interventions, where the registration is continuously performed to match the real-time x-ray.
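
    As a rough illustration of the line-integral computation behind a DRR, the sketch below renders a parallel-beam projection of a toy volume on the CPU with NumPy/SciPy. The paper's implementation is a perspective ray caster running on the GPU with ROI-restricted, subsampled pixels, so none of the sizes or geometry here are theirs.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def render_drr(volume, angle_deg, n_steps=256):
    """Sum attenuation along parallel rays rotated about the z axis."""
    nz, ny, nx = volume.shape
    theta = np.deg2rad(angle_deg)
    u = np.linspace(-nx / 2, nx / 2, nx)          # detector coordinate
    t = np.linspace(-ny / 2, ny / 2, n_steps)     # sampling positions along each ray
    zz, uu, tt = np.meshgrid(np.arange(nz), u, t, indexing="ij")
    x = nx / 2 + uu * np.cos(theta) - tt * np.sin(theta)
    y = ny / 2 + uu * np.sin(theta) + tt * np.cos(theta)
    samples = map_coordinates(volume, [zz.ravel(), y.ravel(), x.ravel()],
                              order=1, mode="constant")
    return samples.reshape(nz, nx, n_steps).sum(axis=2) * (t[1] - t[0])

volume = np.zeros((72, 128, 128), dtype=np.float32)
volume[20:50, 40:90, 40:90] = 1.0                 # toy CT volume
drr = render_drr(volume, angle_deg=30.0)
print(drr.shape, float(drr.max()))
```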

  12. Simultaneous determination of 11 antibiotics and their main metabolites from four different groups by reversed-phase high-performance liquid chromatography-diode array-fluorescence (HPLC-DAD-FLD) in human urine samples.

    PubMed

    Fernandez-Torres, R; Consentino, M Olías; Lopez, M A Bello; Mochon, M Callejon

    2010-05-15

    A new, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantitative determination of 11 antibiotics and the main metabolites of five of them in human urine has been developed, optimized and validated. The analytes belong to four different groups of antibiotics (sulfonamides, tetracyclines, penicillins and amphenicols). The analyzed compounds were sulfadiazine (SDI) and its N(4)-acetylsulfadiazine (NDI) metabolite, sulfamethazine (SMZ) and its N(4)-acetylsulfamethazine (NMZ), sulfamerazine (SMR) and its N(4)-acetylsulfamerazine (NMR), sulfamethoxazole (SMX), trimethoprim (TMP), amoxicillin (AMX) and its main metabolite amoxicilloic acid (AMA), ampicillin (AMP) and its main metabolite ampicilloic acid (APA), chloramphenicol (CLF), thiamphenicol (TIF), oxytetracycline (OXT) and chlortetracycline (CLT). For HPLC analysis, diode array (DAD) and fluorescence (FLD) detectors were used. The separation of the analyzed compounds was conducted by means of a Phenomenex Gemini C(18) (150 mm × 4.6 mm I.D., particle size 5 μm) analytical column with a LiChroCART LiChrospher C(18) (4 mm × 4 mm, particle size 5 μm) guard column. The analyzed drugs were determined within 34 min using formic acid 0.1% in water and acetonitrile in gradient elution mode as the mobile phase. A linear response was observed for all compounds in the concentration range studied. Two procedures were optimized for sample preparation: a direct treatment with methanol and acetonitrile, and a solid phase extraction procedure using Bond Elut Plexa columns. The method was applied to the determination of the analytes in human urine from volunteers under treatment with different pharmaceutical formulations. This method can be successfully applied to the routine determination of all these drugs in human urine samples.

  13. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
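
    For context, the classical optimal-design result for cluster randomized trials with homogeneous, known variances gives the cluster size as a function of the cluster-to-person cost ratio and the ICC; the paper generalizes this setting to treatment-dependent costs and unknown, treatment-dependent variances via a maximin criterion. The sketch below only shows the familiar special case, with made-up costs and ICC.

```python
import math

def optimal_cluster_size(cost_per_cluster, cost_per_person, icc):
    """Persons per cluster that minimize the variance of the treatment-effect
    estimator for a fixed budget (homogeneous variances and costs)."""
    return math.sqrt((cost_per_cluster / cost_per_person) * (1.0 - icc) / icc)

def clusters_per_arm(budget_per_arm, cost_per_cluster, cost_per_person, n_per_cluster):
    """Clusters affordable per arm once the cluster size is fixed."""
    return budget_per_arm / (cost_per_cluster + cost_per_person * n_per_cluster)

n = optimal_cluster_size(cost_per_cluster=500.0, cost_per_person=25.0, icc=0.05)
k = clusters_per_arm(50_000.0, 500.0, 25.0, round(n))
print(f"optimal cluster size ~ {n:.1f} persons, ~ {k:.1f} clusters per arm")
```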

  14. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 1

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Performance Analysis and Design Synthesis (PADS) computer program has a two-fold purpose. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module. For Volume 1 see N73-13199.

  15. Simultaneous Determination of Size and Quantification of Gold Nanoparticles by Direct Coupling Thin layer Chromatography with Catalyzed Luminol Chemiluminescence

    PubMed Central

    Yan, Neng; Zhu, Zhenli; He, Dong; Jin, Lanlan; Zheng, Hongtao; Hu, Shenghong

    2016-01-01

    The increasing use of metal-based nanoparticle products has raised concerns in particular for the aquatic environment and thus the quantification of such nanomaterials released from products should be determined to assess their environmental risks. In this study, a simple, rapid and sensitive method for the determination of size and mass concentration of gold nanoparticles (AuNPs) in aqueous suspension was established by direct coupling of thin layer chromatography (TLC) with catalyzed luminol-H2O2 chemiluminescence (CL) detection. For this purpose, a moving stage was constructed to scan the chemiluminescence signal from TLC separated AuNPs. The proposed TLC-CL method allows the quantification of differently sized AuNPs (13 nm, 41 nm and 100 nm) contained in a mixture. Various experimental parameters affecting the characterization of AuNPs, such as the concentration of H2O2, the concentration and pH of the luminol solution, and the size of the spectrometer aperture were investigated. Under optimal conditions, the detection limits for AuNP size fractions of 13 nm, 41 nm and 100 nm were 38.4 μg L−1, 35.9 μg L−1 and 39.6 μg L−1, with repeatabilities (RSD, n = 7) of 7.3%, 6.9% and 8.1% respectively for 10 mg L−1 samples. The proposed method was successfully applied to the characterization of AuNP size and concentration in aqueous test samples. PMID:27080702
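
    A short sketch of how detection limits of the kind reported above are commonly estimated from a calibration line (LOD ≈ 3.3·s_blank/slope). The concentrations, signals and blank standard deviation below are invented for illustration and are not the paper's calibration data.

```python
import numpy as np

# hypothetical calibration: AuNP concentration vs. chemiluminescence signal
conc = np.array([0.0, 50.0, 100.0, 500.0, 1000.0, 3000.0])   # e.g. ug L-1
signal = np.array([2.1, 13.5, 25.0, 118.0, 236.0, 702.0])     # arbitrary units

slope, intercept = np.polyfit(conc, signal, 1)
s_blank = 0.8                     # assumed std. dev. of replicate blank signals
lod = 3.3 * s_blank / slope
loq = 10.0 * s_blank / slope
print(f"slope = {slope:.4f}, LOD ~ {lod:.1f}, LOQ ~ {loq:.1f} (concentration units)")
```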

  16. Particle size effect on microwave absorbing of La{sub 0.67}Ba{sub 0.33}Mn{sub 0.94}Ti{sub 0.06}O{sub 3} powders prepared by mechanical alloying with the assistance of ultrasonic irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saptari, Sitti Ahmiatri, E-mail: siti-ahmiatri@yahoo.co.id; Manaf, Azwar; Kurniawan, Budhy

    Doped manganites have attracted substantial interest due to their unique chemical and physical properties, which make them candidates for microwave absorbing materials. In this paper we report the synthesis and characterization of La{sub 0.67}Ba{sub 0.33}Mn{sub 0.94}Ti{sub 0.06}O{sub 3} powders prepared by mechanical alloying with the assistance of a high-power ultrasonic treatment. After solid state reaction, the presence of a single phase was confirmed by X-ray diffraction (XRD). Refinement results showed that the samples are single phase with a monoclinic structure. It was found that powder materials derived from mechanical alloying show a large variation in particle size. A significant improvement was obtained upon subjecting the mechanically milled powder materials to an ultrasonication treatment for a relatively short period of time. As determined by a particle size analyzer (PSA), the mean particle size gradually decreased from the original size of 5.02 µm to 0.36 µm. Magnetic properties were characterized by VSM, and hysteresis loop results showed that the samples are soft magnetic. It was found that as the mean particle size decreases, the saturation magnetization increases and the coercivity decreases. Microwave absorption properties were investigated in the frequency range of 8-12 GHz using a vector network analyzer. An optimal reflection loss of 24.44 dB is reached at 11.4 GHz.
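
    For readers unfamiliar with how reflection-loss figures such as the one quoted above are obtained, the sketch below evaluates the standard single-layer transmission-line (metal-backed absorber) model over the X band. The complex permittivity, permeability and thickness are placeholders, not measured properties of the La-Ba-Mn-Ti-O powders.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def reflection_loss_db(freq_hz, eps_r, mu_r, thickness_m):
    """RL = 20*log10|(Z_in - 1)/(Z_in + 1)| with Z_in normalized to free space."""
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2.0 * np.pi * freq_hz * thickness_m * np.sqrt(mu_r * eps_r) / C)
    return 20.0 * np.log10(np.abs((z_in - 1.0) / (z_in + 1.0)))

f = np.linspace(8e9, 12e9, 401)        # X band
eps_r = 12.0 - 3.0j                    # assumed complex permittivity
mu_r = 1.1 - 0.4j                      # assumed complex permeability
rl = reflection_loss_db(f, eps_r, mu_r, thickness_m=2.0e-3)
print(f"minimum RL = {rl.min():.1f} dB at {f[rl.argmin()] / 1e9:.2f} GHz")
```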

  17. High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu

    In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by more than 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.

  18. High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force.

    PubMed

    Ren, Juan; Zou, Qingze

    2014-07-01

    In this paper, an adaptive contact-mode imaging approach is proposed to replace the traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted to the sample. The speed of the traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated to the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rate of 75 Hz and 130 Hz) and large sizes (e.g., scan size of 30 μm and 80 μm). The experimental results show that compared to the traditional constant-force contact-mode imaging, the imaging speed can be increased by more than 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
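
    A schematic sketch of the two ingredients described in this abstract, under simplified assumptions: a per-scanline adjustment of the deflection set-point toward the minimum level observed to keep stable contact, and an iterative (learning-type) update of the feedforward signal from the previous line's tracking error. The gains, margins and toy signals are placeholders, not the authors' controller.

```python
import numpy as np

def adjust_setpoint(setpoint, min_stable_deflection, margin=0.02, gain=0.5):
    """Move the deflection set-point toward (minimum stable deflection + margin)."""
    return setpoint + gain * ((min_stable_deflection + margin) - setpoint)

def feedforward_update(u_ff, tracking_error, learning_gain=0.3):
    """Iterative feedforward correction from the previous scanline's error."""
    return u_ff + learning_gain * tracking_error

n_pixels, setpoint = 256, 0.30          # arbitrary units
u_ff = np.zeros(n_pixels)
for line in range(5):
    # toy signals standing in for quantities measured on each scanline
    tracking_error = 0.05 * np.sin(np.linspace(0, 2 * np.pi, n_pixels)) / (line + 1)
    min_stable_deflection = 0.10 + 0.02 / (line + 1)
    setpoint = adjust_setpoint(setpoint, min_stable_deflection)
    u_ff = feedforward_update(u_ff, tracking_error)
    print(f"line {line}: set-point = {setpoint:.3f}, "
          f"feedforward RMS = {np.sqrt((u_ff ** 2).mean()):.4f}")
```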

  19. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of the enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in a photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184

  20. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of the enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in a photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for the BESS. Two optimization processes are conducted, where the first optimization aims to obtain the optimal battery output power on an hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem.
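
    A compact sketch of firefly-style moves with opposition-based initialization and an inertia weight, applied here to a simple benchmark (sphere) function rather than the BESS sizing objective. The parameter values and the exact update rule are assumptions for illustration, not the EOFA formulation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return np.sum(np.atleast_2d(x) ** 2, axis=1)

def firefly_sketch(obj, dim=5, n=20, iters=100, lb=-5.0, ub=5.0,
                   beta0=1.0, gamma=1.0, alpha=0.2, w_max=0.9, w_min=0.4):
    X = rng.uniform(lb, ub, (n, dim))
    X = np.vstack([X, lb + ub - X])              # opposition-based candidates
    X = X[np.argsort(obj(X))][:n]                # keep the better half
    f = obj(X)
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters     # decreasing inertia weight
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                    X[i] = np.clip(w * X[i] + beta * (X[j] - X[i])
                                   + alpha * (rng.random(dim) - 0.5), lb, ub)
                    f[i] = obj(X[i])[0]
    best = np.argmin(f)
    return X[best], f[best]

x_best, f_best = firefly_sketch(sphere)
print("best objective:", round(float(f_best), 6))
```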

  1. Electrochemical alloying of immiscible Ag and Co for their structural and magnetic analyses

    NASA Astrophysics Data System (ADS)

    Santhi, Kalavathy; Kumarsan, Dhanapal; Vengidusamy, Naryanan; Arumainathan, Stephen

    2017-07-01

    Electrochemical alloying of immiscible Ag and Co was carried out at different current densities from electrolytes of two different concentrations, after optimizing the electrolytic bath and operating conditions. The samples obtained were characterized using X-ray diffraction to confirm the simultaneous deposition of Ag and Co and to determine their crystallographic structure. The atomic percentages of Ag and Co in the granular alloy were determined by ICP-OES analysis. XPS spectra confirmed the presence of Ag and Co in metallic form in the granular alloy samples. Micrographs obtained using scanning and transmission electron microscopy shed light on the surface morphology and the size of the particles. The magnetic nature of the samples was analyzed at room temperature by a vibrating sample magnetometer. Their magnetic phase transition on heating was also studied to provide further evidence for the magnetic behaviour and the structure of the deposits.

  2. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    NASA Astrophysics Data System (ADS)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model draws a parcel clean-up plan. It tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, taking this as the optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, again with the aim of reducing the final number of parcel features. The last model draws a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies by 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation in their geometry (the features are long), and (2) the size of the pollution extent features approaches the size of the parcels (scale effect). The second model shows only a 1% difference in feature number, so it is less interesting for planning optimization. The last model simply fulfils the task it was designed for by drawing navigation lines.
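
    A simplified sketch of the first model's idea: rotate the pollution polygon to each candidate orientation, lay a regular parcel grid over it, and keep the orientation that needs the fewest parcels. The toy polygon, parcel size and counting rule are illustrative; the original Python scripts operate on the pollution-extent shapefile with GIS tooling.

```python
import math
from shapely.geometry import Polygon, box
from shapely.affinity import rotate

def count_parcels(polygon, parcel_w, parcel_h, angle_deg):
    # rotating the polygon by -angle is equivalent to rotating the grid by +angle
    rot = rotate(polygon, -angle_deg, origin="centroid")
    minx, miny, maxx, maxy = rot.bounds
    nx = math.ceil((maxx - minx) / parcel_w)
    ny = math.ceil((maxy - miny) / parcel_h)
    count = 0
    for i in range(nx):
        for j in range(ny):
            cell = box(minx + i * parcel_w, miny + j * parcel_h,
                       minx + (i + 1) * parcel_w, miny + (j + 1) * parcel_h)
            if cell.intersects(rot):
                count += 1
    return count

polluted = Polygon([(0, 0), (120, 15), (140, 60), (20, 45)])   # toy extent
counts = {a: count_parcels(polluted, 10, 10, a) for a in (0, 45, 90, 135)}
best = min(counts, key=counts.get)
print(counts, "-> best orientation:", best, "deg")
```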

  3. Extraction and analysis of intact glucosinolates--a validated pressurized liquid extraction/liquid chromatography-mass spectrometry protocol for Isatis tinctoria, and qualitative analysis of other cruciferous plants.

    PubMed

    Mohn, Tobias; Cutting, Brian; Ernst, Beat; Hamburger, Matthias

    2007-09-28

    Glucosinolates have attracted significant interest due to the chemopreventive properties of some of their transformation products. Numerous protocols for the extraction and analysis of glucosinolates have been published, but limited effort has been devoted to optimizing and validating crucial extraction parameters and sample preparation steps. We carried out a systematic optimization and validation of a quantitative assay for the direct analysis of intact glucosinolates in Isatis tinctoria leaves (woad, Brassicaceae). Various parameters such as solvent composition, particle size, temperature, and number of required extraction steps were optimized using pressurized liquid extraction (PLE). We observed thermal degradation of glucosinolates at temperatures above 50 °C, and a loss of >60% within 10 min at 100 °C, but no enzymatic degradation in the leaf samples at ambient temperature. Excellent peak shape and resolution were obtained by reversed-phase chromatography on a Phenomenex Aqua column using 10 mM ammonium formate as ion-pair reagent. Detection was carried out by electrospray ionisation mass spectrometry in the negative ion mode. Analysis of cruciferous vegetables and spices such as broccoli (Brassica oleracea L. var. italica), garden cress (Lepidium sativum L.) and black mustard (Sinapis nigra L.) demonstrated the general applicability of the method.

  4. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929
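
    To make the kind of dependency structure concrete, the sketch below trains a position-specific first-order Markov model on a few toy motif instances and uses it to score candidate sites. It is only a bare-bones illustration of Markov modeling of motifs, not the OMiMa algorithm or its automatic model selection.

```python
import numpy as np

ALPHABET = "ACGT"
IDX = {b: i for i, b in enumerate(ALPHABET)}

def train_first_order(sites, pseudocount=1.0):
    """Start probabilities and position-specific transitions P(x_i | x_{i-1})."""
    L = len(sites[0])
    start = np.full(4, pseudocount)
    trans = np.full((L - 1, 4, 4), pseudocount)
    for s in sites:
        start[IDX[s[0]]] += 1
        for i in range(1, L):
            trans[i - 1, IDX[s[i - 1]], IDX[s[i]]] += 1
    start /= start.sum()
    trans /= trans.sum(axis=2, keepdims=True)
    return start, trans

def log_score(seq, start, trans):
    lp = np.log(start[IDX[seq[0]]])
    for i in range(1, len(seq)):
        lp += np.log(trans[i - 1, IDX[seq[i - 1]], IDX[seq[i]]])
    return lp

sites = ["ACGTG", "ACGTA", "ACGCG", "TCGTG"]          # toy training instances
start, trans = train_first_order(sites)
print(log_score("ACGTG", start, trans), log_score("TTTTT", start, trans))
```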

  5. Feasibility study of TSPO quantification with [18F]FEPPA using population-based input function.

    PubMed

    Mabrouk, Rostom; Strafella, Antonio P; Knezevic, Dunja; Ghadery, Christine; Mizrahi, Romina; Gharehgazlou, Avideh; Koshimori, Yuko; Houle, Sylvain; Rusjan, Pablo

    2017-01-01

    The input function (IF) is a core element in the quantification of translocator protein 18 kDa (TSPO) with positron emission tomography (PET), as no suitable reference region with negligible binding has been identified. Arterial blood sampling is therefore needed to create the IF (ASIF). In the present manuscript we study the individualization of a population-based input function (PBIF) with a single manual arterial sample to estimate the total distribution volume (VT) for [18F]FEPPA, and to replicate previously published clinical studies in which the ASIF was used. The data from 3 previous [18F]FEPPA studies (39 healthy controls (HC), 16 patients with Parkinson's disease (PD) and 18 with Alzheimer's disease (AD)) were reanalyzed with the new approach. The PBIF was used with the Logan graphical analysis (GA), neglecting the vascular contribution, to estimate VT. The time of linearization of the GA was determined with the maximum error criterion. The optimal calibration of the PBIF was determined based on the area under the curve (AUC) of the IF and the agreement range of VT between methods. The shape of the IF between groups was studied while taking into account genotyping of the polymorphism (rs6971). PBIF scaled with a single value of activity due to unmetabolized radioligand in arterial plasma, calculated as the average of a sample taken at 60 min and a sample taken at 90 min post-injection, yielded a good interval of agreement between methods and optimized the area under the curve of the IF. In HC, gray matter VTs estimated by PBIF correlated highly with those using the standard method (r2 = 0.82, p = 0.0001). Bland-Altman plots revealed that PBIF slightly underestimates (~1 mL/cm3) the VT calculated by ASIF (including a vascular contribution). It was verified that the AUC of the ASIF was independent of genotype and disease (HC, PD, and AD). Previous clinical results were replicated using PBIF but with lower statistical power. A single arterial blood sample taken 75 minutes post-injection contains enough information to individualize the IF in the groups of subjects studied; however, the higher variability produced requires an increase in sample size to reach the same effect size.
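
    A schematic of the Logan graphical analysis used for VT estimation: after the linearization time t*, the plot of ∫CT dτ / CT(t) against ∫Cp dτ / CT(t) has slope VT (the vascular contribution is neglected, as in the PBIF analysis above). The tracer curves below are synthetic one-tissue-compartment data, not [18F]FEPPA measurements.

```python
import numpy as np

dt = 0.5
t = np.arange(0.0, 90.0, dt)                                # minutes
cp = 100.0 * np.exp(-0.15 * t) + 5.0 * np.exp(-0.01 * t)    # toy plasma input

VT_true, k2 = 10.0, 0.08                                    # K1 = VT * k2
ct = np.convolve(VT_true * k2 * cp, np.exp(-k2 * t))[:len(t)] * dt  # tissue curve

int_cp = np.cumsum(cp) * dt
int_ct = np.cumsum(ct) * dt
mask = t > 30.0                                             # linear portion (t* = 30 min)
slope, intercept = np.polyfit(int_cp[mask] / ct[mask], int_ct[mask] / ct[mask], 1)
print(f"Logan slope (estimated VT) = {slope:.2f}, simulated VT = {VT_true}")
```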

  6. Kinship-based politics and the optimal size of kin groups

    PubMed Central

    Hammel, E. A.

    2005-01-01

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures. PMID:16091466

  7. Kinship-based politics and the optimal size of kin groups.

    PubMed

    Hammel, E A

    2005-08-16

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures.

  8. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    PubMed

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.

  9. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method

    PubMed Central

    Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-01-01

    Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478
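
    A minimal tile (box) counting sketch: binarize the trabecular pattern, count the tiles of side s that contain structure, and take the fractal dimension from the slope of log N versus log(1/s). The random image and pixel tile sizes below are stand-ins; the studies above express their optimal tile sizes in millimetres on preprocessed radiographs.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((256, 256)) > 0.7          # stand-in for a binarized radiograph

def tile_count(binary, s):
    h, w = binary.shape
    hs, ws = h // s * s, w // s * s          # crop to a multiple of the tile size
    blocks = binary[:hs, :ws].reshape(hs // s, s, ws // s, s)
    return int(blocks.any(axis=(1, 3)).sum())

sizes = np.array([2, 4, 8, 16, 32])
counts = np.array([tile_count(img, s) for s in sizes])
slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
print("estimated fractal dimension:", round(slope, 3))
```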

  10. Sampling solution traces for the problem of sorting permutations by signed reversals

    PubMed Central

    2012-01-01

    Background Traditional algorithms to solve the problem of sorting by signed reversals output just one optimal solution while the space of all optimal solutions can be huge. A so-called trace represents a group of solutions which share the same set of reversals that must be applied to sort the original permutation following a partial ordering. By using traces, we therefore can represent the set of optimal solutions in a more compact way. Algorithms for enumerating the complete set of traces of solutions were developed. However, due to their exponential complexity, their practical use is limited to small permutations. A partial enumeration of traces is a sampling of the complete set of traces and can be an alternative for the study of distinct evolutionary scenarios of big permutations. Ideally, the sampling should be done uniformly from the space of all optimal solutions. This is however conjectured to be ♯P-complete. Results We propose and evaluate three algorithms for producing a sampling of the complete set of traces that instead can be shown in practice to preserve some of the characteristics of the space of all solutions. The first algorithm (RA) performs the construction of traces through a random selection of reversals on the list of optimal 1-sequences. The second algorithm (DFALT) consists in a slight modification of an algorithm that performs the complete enumeration of traces. Finally, the third algorithm (SWA) is based on a sliding window strategy to improve the enumeration of traces. All proposed algorithms were able to enumerate traces for permutations with up to 200 elements. Conclusions We analysed the distribution of the enumerated traces with respect to their height and average reversal length. Various works indicate that the reversal length can be an important aspect in genome rearrangements. The algorithms RA and SWA show a tendency to lose traces with high average reversal length. Such traces are however rare, and qualitatively our results show that, for testable-sized permutations, the algorithms DFALT and SWA produce distributions which approximate the reversal length distributions observed with a complete enumeration of the set of traces. PMID:22704580

  11. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators, which is plugged into a genetic algorithm to search for the optimal configuration parameters. A simply-supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position optimization; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  12. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
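
    The run-size nomogram mentioned above is entered with the Sigma-metric of the measurement procedure. A tiny sketch of that calculation is given below; the allowable total error, bias and CV values are examples only, not recommendations from C24-Ed4.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

for tea, bias, cv in [(10.0, 1.0, 1.5), (10.0, 1.0, 1.8), (10.0, 2.0, 2.0)]:
    print(f"TEa={tea}%, bias={bias}%, CV={cv}%  ->  Sigma = {sigma_metric(tea, bias, cv):.1f}")
```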

  13. Improving piezo actuators for nanopositioning tasks

    NASA Astrophysics Data System (ADS)

    Seeliger, Martin; Gramov, Vassil; Götz, Bernt

    2018-02-01

    In recent years, numerous applications have emerged on the market with seemingly contradictory demands. On one side, structure sizes have decreased while, on the other, the overall sample size and speed of operation have increased. Although the principal use of piezoelectric positioning solutions has become standard in the field of micro- and nanopositioning, surface inspection and manipulation, piezosystem jena has now enhanced performance beyond simple control loop tuning and actuator design. In automated manufacturing machines, a given signal has to be tracked quickly and precisely. However, control systems naturally decrease the ability to follow this signal in real time. piezosystem jena developed a new signal feed-forward system bypassing the PID control. This way, signal tracking errors could be reduced by a factor of three compared to a conventionally optimized PID control. Of course, PID values still have to be adjusted to specific conditions, e.g. changing additional mass, to optimize the performance. This can now be done with a new automatic tuning tool designed to analyze the current setup, find the best-fitting configuration, and also gather and display theoretical as well as experimental performance data. Thus, the control quality of a mechanical setup can be improved within a few minutes without the need for external calibration equipment. Furthermore, new mechanical optimization techniques that focus not only on the positioning device but also take the whole setup into account limit parasitic motion to a few nanometers.

  14. Particle-size distribution (PSD) of pulverized hair: A quantitative approach of milling efficiency and its correlation with drug extraction efficiency.

    PubMed

    Chagas, Aline Garcia da Rosa; Spinelli, Eliani; Fiaux, Sorele Batista; Barreto, Adriana da Silva; Rodrigues, Silvana Vianna

    2017-08-01

    Different types of hair were submitted to different milling procedures and the resulting powders were analyzed by scanning electron microscopy (SEM) and laser diffraction (LD). SEM results were qualitative, whereas LD results were quantitative and accurately characterized the hair powders through their particle size distribution (PSD). Different types of hair were submitted to optimized milling conditions and their PSDs were quite similar. A good correlation was obtained between PSD results and the ketamine concentration in a hair sample analyzed by LC-MS/MS. Hair samples were frozen in liquid nitrogen for 5 min and pulverized at 25 Hz for 10 min, resulting in 61% of particles <104 μm and 39% from 104 to 1000 μm. In doing so, a 359% increase in ketamine concentration was obtained for an authentic sample extracted after pulverization compared with the same sample cut into 1 mm fragments. When the milling time was extended to 25 min, >90% of particles were <60 μm and an additional increase of 52.4% in ketamine content was obtained. PSD is a key feature in the analysis of pulverized hair, as it can affect method recovery and reproducibility. In addition, PSD is an important issue in sample retesting and quality control procedures. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Determining quantity and quality of retained oil in mature marly chalk and marlstone of the Cretaceous Niobrara Formation by low-temperature hydrous pyrolysis

    USGS Publications Warehouse

    Lewan, Michael; Sonnenfeld, Mark D.

    2017-01-01

    Low-temperature hydrous pyrolysis (LTHP) at 300°C (572°F) for 24 h released retained oils from 12- to 20-mesh-size samples of mature Niobrara marly chalk and marlstone cores. The released oil accumulated on the water surface of the reactor, and is compositionally similar to oil produced from the same well. The quantities of oil released from the marly chalk and marlstone by LTHP are respectively 3.4 and 1.6 times greater than those determined by tight rock analyses (TRA) on aliquots of the same samples. Gas chromatograms indicated this difference is a result of TRA oils losing more volatiles and volatilizing fewer heavy hydrocarbons during collection than LTHP oils. Characterization of the rocks before and after LTHP by programmable open-system pyrolysis (HAWK) indicates that under LTHP conditions no significant oil is generated and only preexisting retained oil is released. Although LTHP appears to provide better predictions of the quantity and quality of retained oil in a mature source rock, it is not expected to replace the greater time and sample-size efficiency of TRA. However, LTHP can be applied to composited samples from key intervals or lithologies originally recognized by TRA. Additional studies on the duration, temperature, and sample size used in LTHP may further optimize its utility.

  16. Design Methods and Optimization for Morphing Aircraft

    NASA Technical Reports Server (NTRS)

    Crossley, William A.

    2005-01-01

    This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area has been titled morphing as an independent variable and formulates the sizing of a morphing aircraft as an optimization problem in which the amount of geometric morphing for various aircraft parameters is included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
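
    As a small illustration of the multiobjective step, the helper below filters a set of candidate fixed-geometry designs down to the Pareto-optimal ones for two competing objectives (both minimized). The candidate numbers are fabricated; the report's sizing code and objectives are not reproduced here.

```python
def pareto_front(points):
    """Return the points not dominated by any other (minimization in both objectives)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# hypothetical (objective_1, objective_2) pairs for candidate designs
designs = [(1.0, 9.0), (1.2, 7.5), (1.5, 7.6), (2.0, 6.0), (2.1, 6.5), (3.0, 5.9)]
print(pareto_front(designs))
```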

  17. Biomechanical behavior of bone scaffolds made of additive manufactured tricalciumphosphate and titanium alloy under different loading conditions.

    PubMed

    Wieding, Jan; Fritsche, Andreas; Heinl, Peter; Körner, Carolin; Cornelsen, Matthias; Seitz, Hermann; Mittelmeier, Wolfram; Bader, Rainer

    2013-12-16

    The repair of large segmental bone defects caused by fracture, tumor or infection remains challenging in orthopedic surgery. The capabilities of two different bone scaffold materials, sintered tricalciumphosphate and a titanium alloy (Ti6Al4V), were determined by mechanical and biomechanical testing. All scaffolds were fabricated by means of additive manufacturing techniques with identical design and controlled pore geometry. Small-sized sintered TCP scaffolds (10 mm diameter, 21 mm length) were fabricated as dense and open-porous samples and tested in an axial loading procedure. Material properties for the titanium alloy were determined by using both tensile (dense) and compressive test samples (open-porous). Furthermore, large-sized open-porous TCP and titanium alloy scaffolds (30 mm in height and diameter, 700 µm pore size) were tested in a biomechanical setup simulating a large segmental bone defect using a composite femur stabilized with an osteosynthesis plate. Static physiologic loads (1.9 kN) were applied in these tests. The ultimate compressive strength of the TCP samples was 11.2 ± 0.7 MPa and 2.2 ± 0.3 MPa, respectively, for the dense and the open-porous samples. The tensile strength and ultimate compressive strength were 909.8 ± 4.9 MPa and 183.3 ± 3.7 MPa, respectively, for the dense and the open-porous titanium alloy samples. Furthermore, the biomechanical results showed good mechanical stability for the titanium alloy scaffolds. TCP scaffolds failed at 30% of the maximum load. Based on recent data, the 3D printed TCP scaffolds tested cannot currently be recommended for high load-bearing situations. Scaffolds made of titanium could be optimized by adapting them to the biomechanical requirements.

  18. 4D x-ray phase contrast tomography for repeatable motion of biological samples

    NASA Astrophysics Data System (ADS)

    Hoshino, Masato; Uesugi, Kentaro; Yagi, Naoto

    2016-09-01

    X-ray phase contrast tomography based on a grating interferometer was applied to fast and dynamic measurements of biological samples. To achieve this, the scanning procedure in the tomographic scan was improved. A triangle-shaped voltage signal from a waveform generator to a Piezo stage was used for the fast phase stepping in the grating interferometer. In addition, an optical fiber coupled x-ray scientific CMOS camera was used to achieve fast and highly efficient image acquisitions. These optimizations made it possible to perform an x-ray phase contrast tomographic measurement within an 8 min scan with a density resolution of 2.4 mg/cm3. A maximum volume size of 13 × 13 × 6 mm3 was obtained with a single tomographic measurement with a voxel size of 6.5 μm. The scanning procedure using the triangle wave was applied to four-dimensional measurements in which highly sensitive three-dimensional x-ray imaging and a time-resolved dynamic measurement of biological samples were combined. A fresh tendon in the tail of a rat was measured under a uniaxial stretching and releasing condition. To maintain the freshness of the sample during four-dimensional phase contrast tomography, the temperature of the bathing liquid of the sample was kept below 10 °C using a simple cooling system. The time-resolved deformation of the tendon and each fascicle was measured with a temporal resolution of 5.7 Hz. Evaluations of cross-sectional area size, length of the axis, and mass density in the fascicle during a stretching process provided a basis for quantitative analysis of the deformation of tendon fascicle.

  19. 4D x-ray phase contrast tomography for repeatable motion of biological samples.

    PubMed

    Hoshino, Masato; Uesugi, Kentaro; Yagi, Naoto

    2016-09-01

    X-ray phase contrast tomography based on a grating interferometer was applied to fast and dynamic measurements of biological samples. To achieve this, the scanning procedure in the tomographic scan was improved. A triangle-shaped voltage signal from a waveform generator to a Piezo stage was used for the fast phase stepping in the grating interferometer. In addition, an optical fiber coupled x-ray scientific CMOS camera was used to achieve fast and highly efficient image acquisitions. These optimizations made it possible to perform an x-ray phase contrast tomographic measurement within an 8 min scan with a density resolution of 2.4 mg/cm3. A maximum volume size of 13 × 13 × 6 mm3 was obtained with a single tomographic measurement with a voxel size of 6.5 μm. The scanning procedure using the triangle wave was applied to four-dimensional measurements in which highly sensitive three-dimensional x-ray imaging and a time-resolved dynamic measurement of biological samples were combined. A fresh tendon in the tail of a rat was measured under a uniaxial stretching and releasing condition. To maintain the freshness of the sample during four-dimensional phase contrast tomography, the temperature of the bathing liquid of the sample was kept below 10 °C using a simple cooling system. The time-resolved deformation of the tendon and each fascicle was measured with a temporal resolution of 5.7 Hz. Evaluations of cross-sectional area size, length of the axis, and mass density in the fascicle during a stretching process provided a basis for quantitative analysis of the deformation of tendon fascicle.

  20. Domain-wall excitations in the two-dimensional Ising spin glass

    NASA Astrophysics Data System (ADS)

    Khoshbakht, Hamid; Weigel, Martin

    2018-02-01

    The Ising spin glass in two dimensions exhibits rich behavior with subtle differences in the scaling for different coupling distributions. We use recently developed mappings to graph-theoretic problems together with highly efficient implementations of combinatorial optimization algorithms to determine exact ground states for systems on square lattices with up to 10 000 × 10 000 spins. While these mappings only work for planar graphs, for example for systems with periodic boundary conditions in at most one direction, we suggest here an iterative windowing technique that allows one to determine ground states for fully periodic samples up to sizes similar to those for the open-periodic case. Based on these techniques, a large number of disorder samples are used together with a careful finite-size scaling analysis to determine the stiffness exponents and domain-wall fractal dimensions with unprecedented accuracy, our best estimates being θ = -0.2793(3) and df = 1.27319(9) for Gaussian couplings. For bimodal disorder, a new uniform sampling algorithm allows us to study the domain-wall fractal dimension, finding df = 1.279(2). Additionally, we also investigate the distributions of ground-state energies, of domain-wall energies, and domain-wall lengths.

  1. Elasto-inertial microfluidics for bacteria separation from whole blood for sepsis diagnostics.

    PubMed

    Faridi, Muhammad Asim; Ramachandraiah, Harisha; Banerjee, Indradumna; Ardabili, Sahar; Zelenin, Sergey; Russom, Aman

    2017-01-04

    Bloodstream infections (BSI) remain a major challenge with a high mortality rate, and their incidence is increasing worldwide. Early treatment with appropriate therapy can reduce BSI-related morbidity and mortality. However, despite recent progress in molecular based assays, complex sample preparation steps have become a critical roadblock for a greater expansion of molecular assays. Here, we report size-based, label-free bacteria separation from whole blood using elasto-inertial microfluidics. In elasto-inertial microfluidics, the viscoelastic flow enables size-based migration of blood cells into a non-Newtonian solution, while smaller bacteria remain in the streamline of the blood sample entrance and can be separated. We first optimized the flow conditions using particles, and show continuous separation of 5 µm particles from 2 µm particles at a yield of 95% for the 5 µm particles and 93% for the 2 µm particles at their respective outlets. Next, bacteria were continuously separated at an efficiency of 76% from undiluted whole blood samples. We thus demonstrate separation of bacteria from undiluted whole blood using elasto-inertial microfluidics. This label-free, passive bacteria preparation method has great potential for downstream phenotypic and molecular analysis of bacteria.

  2. Magnetic microscopic imaging with an optically pumped magnetometer and flux guides

    DOE PAGES

    Kim, Young Jin; Savukov, Igor Mykhaylovich; Huang, Jen -Huang; ...

    2017-01-23

    Here, by combining an optically pumped magnetometer (OPM) with flux guides (FGs) and by installing a sample platform on automated translation stages, we have implemented an ultra-sensitive FG-OPM scanning magnetic imaging system that is capable of detecting magnetic fields of ~20 pT with spatial resolution better than 300 μm (expected to reach ~10 pT sensitivity and ~100 μm spatial resolution with optimized FGs). As a demonstration of one possible application of the FG-OPM device, we conducted magnetic imaging of micron-size magnetic particles. Magnetic imaging of such particles, including nano-particles and clusters, is very important for many fields, especially for medical cancer diagnostics and biophysics applications. For rapid, precise magnetic imaging, we constructed an automatic scanning system, which holds and moves a target sample containing magnetic particles at a given stand-off distance from the FG tips. We show that the device was able to produce clear microscopic magnetic images of 10 μm-size magnetic particles. In addition, we also numerically investigated how the magnetic flux from a target sample at a given stand-off distance is transmitted to the OPM vapor cell.

  3. Effect of Layer Thickness and Printing Orientation on Mechanical Properties and Dimensional Accuracy of 3D Printed Porous Samples for Bone Tissue Engineering

    PubMed Central

    Farzadi, Arghavan; Solati-Hashjin, Mehran; Asadi-Eydivand, Mitra; Abu Osman, Noor Azuan

    2014-01-01

    The powder-based inkjet 3D printing method is one of the most attractive solid freeform techniques. It involves a sequential layering process through which 3D porous scaffolds can be directly produced from computer-generated models. The quality of 3D printed products is controlled by the build parameters. In this study, calcium sulfate based powders were used for porous scaffold fabrication. The printed scaffolds of 0.8 mm pore size, with different layer thicknesses and printing orientations, were subjected to the depowdering step. The effects of four layer thicknesses and three printing orientations (parallel to X, Y, and Z) on the physical and mechanical properties of printed scaffolds were investigated. It was observed that the compressive strength, toughness and Young's modulus of samples with 0.1125 and 0.125 mm layer thickness were higher than those of the others. Furthermore, the results of SEM and μCT analyses showed that samples with 0.1125 mm layer thickness printed in the X direction have greater dimensional accuracy and are significantly closer to the CAD software based designs with predefined pore size, porosity and pore interconnectivity. PMID:25233468

  4. On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis

    NASA Astrophysics Data System (ADS)

    Yen, Tony Minghung

    This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high throughput nucleic acid quantification on a chip, outside of hospital and centralized laboratory settings, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification free, low instrument cost, and high throughput, but are generally less sensitive compared to sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress to develop a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while compatible with microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nano-liter droplet self-assembly workflow to produce a 50 aM concentration sensitivity, 6 orders of dynamic range, and rapid hybridization time at under 5 minutes. The second device concept focuses on improving target copy number sensitivity, instead of concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is considered to guide device optimization. Device pattern size and target copy number are optimized based on model prediction to achieve maximal hybridization efficiency. At a 100-μm pattern size, a quantum leap in detection limit of 570 copies is achieved using the black silicon array device with a self-assembled pico-liter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at the 100-μm pattern size, and is thus not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at the 100-μm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While the work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement this method on the microarray platform toward clinical sample testing for disease applications and future commercialization.

  5. Pharmacist-led management of chronic pain in primary care: costs and benefits in a pilot randomised controlled trial.

    PubMed

    Neilson, Aileen R; Bruhn, Hanne; Bond, Christine M; Elliott, Alison M; Smith, Blair H; Hannaford, Philip C; Holland, Richard; Lee, Amanda J; Watson, Margaret; Wright, David; McNamee, Paul

    2015-04-01

    To explore differences in mean costs (from a UK National Health Service perspective) and effects of pharmacist-led management of chronic pain in primary care evaluated in a pilot randomised controlled trial (RCT), and to estimate optimal sample size for a definitive RCT. Regression analysis of costs and effects, using intention-to-treat and expected value of sample information analysis (EVSI). Six general practices: Grampian (3); East Anglia (3). 125 patients with complete resource use and short form-six-dimension questionnaire (SF-6D) data at baseline, 3 months and 6 months. Patients were randomised to either pharmacist medication review with face-to-face pharmacist prescribing or pharmacist medication review with feedback to general practitioner or treatment as usual (TAU). Differences in mean total costs and effects measured as quality-adjusted life years (QALYs) at 6 months and EVSI for sample size calculation. Unadjusted total mean costs per patient were £452 for prescribing (SD: £466), £570 for review (SD: £527) and £668 for TAU (SD: £1333). After controlling for baseline costs, the adjusted mean cost differences per patient relative to TAU were £77 for prescribing (95% CI -82 to 237) and £54 for review (95% CI -103 to 212). Unadjusted mean QALYs were 0.3213 for prescribing (SD: 0.0659), 0.3161 for review (SD: 0.0684) and 0.3079 for TAU (SD: 0.0606). Relative to TAU, the adjusted mean differences were 0.0069 for prescribing (95% CI -0.0091 to 0.0229) and 0.0097 for review (95% CI -0.0054 to 0.0248). The EVSI suggested the optimal future trial size was between 460 and 690, and between 540 and 780 patients per arm using a threshold of £30,000 and £20,000 per QALY gained, respectively. Compared with TAU, pharmacist-led interventions for chronic pain appear more costly and provide similar QALYs. However, these estimates are imprecise due to the small size of the pilot trial. The EVSI indicates that a larger trial is necessary to obtain more precise estimates of differences in mean effects and costs between treatment groups. ISRCTN06131530. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Evaluation of agile designs in first-in-human (FIH) trials--a simulation study.

    PubMed

    Perlstein, Itay; Bolognese, James A; Krishna, Rajesh; Wagner, John A

    2009-12-01

    The aim of the investigation was to evaluate alternatives to standard first-in-human (FIH) designs in order to optimize the information gained from such studies by employing novel agile trial designs. Agile designs combine adaptive and flexible elements to enable optimized use of prior information either before and/or during the conduct of the study to seamlessly update the study design. A comparison of the traditional 6 + 2 (active + placebo) subjects per cohort design with alternative, reduced-sample-size, agile designs was performed by using discrete event simulation. Agile designs were evaluated for specific adverse event models and rates as well as dose-proportional, saturated, and steep-accumulation pharmacokinetic profiles. Alternative, reduced sample size (hereafter referred to as agile) designs are proposed for cases where prior knowledge about pharmacokinetics and/or adverse event relationships is available or appropriately assumed. Additionally, preferred alternatives are proposed for the general case when prior knowledge is limited or unavailable. Within the tested conditions and stated assumptions, some agile designs were found to be as efficient as traditional designs. Thus, simulations demonstrated that the agile design is a robust and feasible approach to FIH clinical trials, with no meaningful loss of relevant information as it relates to pharmacokinetic (PK) and adverse event (AE) assumptions. In some circumstances, applying agile designs may decrease the duration and resources required for Phase I studies, increasing the efficiency of early clinical development. We highlight the value and importance of useful prior information when specifying key assumptions related to safety, tolerability, and PK.
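
    As a toy illustration of why cohort size matters in such simulations, the sketch below computes the probability of observing at least one adverse event in a cohort of n active subjects under assumed AE rates; the cohort sizes and rates are illustrative, not the study's models.

    ```python
    # Minimal sketch: probability of observing at least one adverse event (AE)
    # in a cohort of n subjects on active drug, for assumed true AE rates.
    # Cohort sizes and AE probabilities here are illustrative, not the paper's settings.

    def prob_detect_ae(n_active, ae_rate):
        """P(at least one AE) = 1 - (1 - p)^n, assuming independent subjects."""
        return 1.0 - (1.0 - ae_rate) ** n_active

    for n in (6, 4, 3):                  # active subjects per cohort
        for p in (0.1, 0.3, 0.5):        # assumed true AE probabilities
            print(f"n={n}, p={p:.1f}: P(>=1 AE) = {prob_detect_ae(n, p):.2f}")
    ```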

  7. Optimizing Clinical Trial Enrollment Methods Through "Goal Programming"

    PubMed Central

    Davis, J.M.; Sandgren, A.J.; Manley, A.R.; Daleo, M.A.; Smith, S.S.

    2014-01-01

    Introduction Clinical trials often fail to reach desired goals due to poor recruitment outcomes, including low participant turnout, high recruitment cost, or poor representation of minorities. At present, there is limited literature available to guide recruitment methodology. This study, conducted by researchers at the University of Wisconsin Center for Tobacco Research and Intervention (UW-CTRI), provides an example of how iterative analysis of recruitment data may be used to optimize recruitment outcomes during ongoing recruitment. Study methodology UW-CTRI’s research team provided a description of methods used to recruit smokers in two randomized trials (n = 196 and n = 175). The trials targeted low socioeconomic status (SES) smokers and involved time-intensive smoking cessation interventions. Primary recruitment goals were to meet the required sample size and provide representative diversity while working with limited funds and limited time. Recruitment data were analyzed repeatedly throughout each study to optimize recruitment outcomes. Results Estimates of recruitment outcomes based on prior studies on smoking cessation suggested that researchers would be able to recruit 240 low SES smokers within 30 months at a cost of $72,000. With the methods described herein, researchers were able to recruit 374 low SES smokers over 30 months at a cost of $36,260. Discussion Each human subjects study presents unique recruitment challenges, with the time and cost of recruitment dependent on the sample population and study methodology. Nonetheless, researchers may be able to improve recruitment outcomes through iterative analysis of recruitment data and optimization of recruitment methods throughout the recruitment period. PMID:25642125

  8. An Evaluation of Sharp Cut Cyclones for Sampling Diesel Particulate Matter Aerosol in the Presence of Respirable Dust

    PubMed Central

    Cauda, Emanuele; Sheehan, Maura; Gussman, Robert; Kenny, Lee; Volkwein, Jon

    2015-01-01

    Two prototype cyclones were the subjects of a comparative research campaign with a diesel particulate matter sampler (DPMS) that consists of a respirable cyclone combined with a downstream impactor. The DPMS is currently used in mining environments to separate dust from diesel particulate matter and to avoid interferences in the analysis of integrated samples and direct-reading monitoring in occupational environments. The sampling characteristics of all three devices were compared using ammonium fluorescein, diesel, and coal dust aerosols. With solid spherical test aerosols at low particle loadings, the aerodynamic size-selection characteristics of all three devices were found to be similar, with 50% penetration efficiencies (d50) close to the design value of 0.8 µm, as required by the US Mine Safety and Health Administration for monitoring occupational exposure to diesel particulate matter in US mining operations. The prototype cyclones were shown to have ‘sharp cut’ size-selection characteristics that equaled or exceeded the sharpness of the DPMS. The penetration of diesel aerosols was optimal for all three samplers, while the results of the tests with coal dust led to the exclusion of one of the prototypes from subsequent testing. The sampling characteristics of the remaining prototype sharp cut cyclone (SCC) and the DPMS were then tested with different loadings of coal dust. While the characteristics of the SCC remained constant, the deposited respirable coal dust particles altered the size-selection performance of the currently used sampler. This study demonstrates that the SCC performed better overall than the DPMS. PMID:25060240

  9. Effect of Data Assimilation Parameters on The Optimized Surface CO2 Flux in Asia

    NASA Astrophysics Data System (ADS)

    Kim, Hyunjung; Kim, Hyun Mee; Kim, Jinwoong; Cho, Chun-Ho

    2018-02-01

    In this study, CarbonTracker, an inverse modeling system based on the ensemble Kalman filter, was used to evaluate the effects of data assimilation parameters (assimilation window length and ensemble size) on the estimation of surface CO2 fluxes in Asia. Several experiments with different parameters were conducted, and the results were verified using CO2 concentration observations. The assimilation window lengths tested were 3, 5, 7, and 10 weeks, and the ensemble sizes were 100, 150, and 300; a total of 12 experiments using combinations of these parameters were therefore conducted. The experimental period was from January 2006 to December 2009. Differences between the optimized surface CO2 fluxes of the experiments were largest in the Eurasian Boreal (EB) area, followed by Eurasian Temperate (ET) and Tropical Asia (TA), and were larger in boreal summer than in boreal winter. The effect of ensemble size on the optimized biosphere flux is larger than the effect of the assimilation window length in Asia as a whole, but their relative importance varies among specific regions within Asia. The optimized biosphere flux was more sensitive to the assimilation window length in EB, whereas it was sensitive to the ensemble size as well as the assimilation window length in ET. The larger the ensemble size and the shorter the assimilation window length, the larger the uncertainty (i.e., ensemble spread) of the optimized surface CO2 fluxes. The 10-week assimilation window and the 300-member ensemble were the optimal configuration for CarbonTracker in the Asian region, based on several verifications using CO2 concentration measurements.
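
    A minimal sketch of the kind of ensemble Kalman filter update that underlies such a system, for a single scalar flux scaling factor, is shown below. The ensemble size is the parameter varied in the study, while the observation operator, error values, and prior are illustrative assumptions, not CarbonTracker settings.

    ```python
    import numpy as np

    # Minimal sketch of a stochastic ensemble Kalman filter update for one scalar
    # state (e.g., a surface-flux scaling factor), illustrating the role of the
    # ensemble size. All numbers are illustrative, not CarbonTracker settings.
    rng = np.random.default_rng(0)

    n_ens = 150                                   # ensemble size (100, 150, 300 in the study)
    x_prior = rng.normal(1.0, 0.3, n_ens)         # prior ensemble of the flux factor
    obs, obs_err = 1.4, 0.2                       # pseudo CO2 observation and its error

    # Linear "observation operator": modelled CO2 proportional to the flux factor
    H = 2.0
    y_prior = H * x_prior

    # Kalman gain estimated from ensemble covariances
    cov_xy = np.cov(x_prior, y_prior)[0, 1]
    var_y = np.var(y_prior, ddof=1) + obs_err**2
    K = cov_xy / var_y

    # Perturbed-observation update of each ensemble member
    obs_pert = obs + rng.normal(0.0, obs_err, n_ens)
    x_post = x_prior + K * (obs_pert - y_prior)

    print(f"prior mean {x_prior.mean():.3f}, posterior mean {x_post.mean():.3f}")
    print(f"prior spread {x_prior.std(ddof=1):.3f}, posterior spread {x_post.std(ddof=1):.3f}")
    ```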

  10. Tapping insertional torque allows prediction for better pedicle screw fixation and optimal screw size selection.

    PubMed

    Helgeson, Melvin D; Kang, Daniel G; Lehman, Ronald A; Dmitriev, Anton E; Luhmann, Scott J

    2013-08-01

    There is currently no reliable technique for intraoperative assessment of pedicle screw fixation strength and optimal screw size. Several studies have evaluated pedicle screw insertional torque (IT) and its direct correlation with pullout strength. However, there is limited clinical application with pedicle screw IT as it must be measured during screw placement and rarely causes the spine surgeon to change screw size. To date, no study has evaluated tapping IT, which precedes screw insertion, and its ability to predict pedicle screw pullout strength. The objective of this study was to investigate tapping IT and its ability to predict pedicle screw pullout strength and optimal screw size. In vitro human cadaveric biomechanical analysis. Twenty fresh-frozen human cadaveric thoracic vertebral levels were prepared and dual-energy radiographic absorptiometry scanned for bone mineral density (BMD). All specimens were osteoporotic with a mean BMD of 0.60 ± 0.07 g/cm(2). Five specimens (n=10) were used to perform a pilot study, as there were no previously established values for optimal tapping IT. Each pedicle during the pilot study was measured using a digital caliper as well as computed tomography measurements, and the optimal screw size was determined to be equal to or the first size smaller than the pedicle diameter. The optimal tap size was then selected as the tap diameter 1 mm smaller than the optimal screw size. During optimal tap size insertion, all peak tapping IT values were found to be between 2 in-lbs and 3 in-lbs. Therefore, the threshold tapping IT value for optimal pedicle screw and tap size was determined to be 2.5 in-lbs, and a comparison tapping IT value of 1.5 in-lbs was selected. Next, 15 test specimens (n=30) were measured with digital calipers, probed, tapped, and instrumented using a paired comparison between the two threshold tapping IT values (Group 1: 1.5 in-lbs; Group 2: 2.5 in-lbs), randomly assigned to the left or right pedicle on each specimen. Each pedicle was incrementally tapped to increasing size (3.75, 4.00, 4.50, and 5.50 mm) until the threshold value was reached based on the assigned group. Pedicle screw size was determined by adding 1 mm to the tap size that crossed the threshold torque value. Torque measurements were recorded with each revolution during tap and pedicle screw insertion. Each specimen was then individually potted and pedicle screws pulled out "in-line" with the screw axis at a rate of 0.25 mm/sec. Peak pullout strength (POS) was measured in Newtons (N). The peak tapping IT was significantly increased (50%) in Group 2 (3.23 ± 0.65 in-lbs) compared with Group 1 (2.15 ± 0.56 in-lbs) (p=.0005). The peak screw IT was also significantly increased (19%) in Group 2 (8.99 ± 2.27 in-lbs) compared with Group 1 (7.52 ± 2.96 in-lbs) (p=.02). The pedicle screw pullout strength was also significantly increased (23%) in Group 2 (877.9 ± 235.2 N) compared with Group 1 (712.3 ± 223.1 N) (p=.017). The mean pedicle screw diameter was significantly increased in Group 2 (5.70 ± 1.05 mm) compared with Group 1 (5.00 ± 0.80 mm) (p=.0002). There was also an increased rate of optimal pedicle screw size selection in Group 2 with 9 of 15 (60%) pedicle screws compared with Group 1 with 4 of 15 (26.7%) pedicle screws within 1 mm of the measured pedicle width. There was a moderate correlation for tapping IT with both screw IT (r=0.54; p=.002) and pedicle screw POS (r=0.55; p=.002). 
Our findings suggest that tapping IT directly correlates with pedicle screw IT, pedicle screw pullout strength, and optimal pedicle screw size. Therefore, tapping IT may be used during thoracic pedicle screw instrumentation as an adjunct to preoperative imaging and clinical experience to maximize fixation strength and optimize pedicle "fit and fill" with the largest screw possible. However, further prospective, in vivo studies are necessary to evaluate the intraoperative use of tapping IT to predict screw loosening/complications. Published by Elsevier Inc.

  11. In vitro quantification of the size distribution of intrasaccular voids left after endovascular coiling of cerebral aneurysms.

    PubMed

    Sadasivan, Chander; Brownstein, Jeremy; Patel, Bhumika; Dholakia, Ronak; Santore, Joseph; Al-Mufti, Fawaz; Puig, Enrique; Rakian, Audrey; Fernandez-Prada, Kenneth D; Elhammady, Mohamed S; Farhat, Hamad; Fiorella, David J; Woo, Henry H; Aziz-Sultan, Mohammad A; Lieber, Baruch B

    2013-03-01

    Endovascular coiling of cerebral aneurysms remains limited by coil compaction and associated recanalization. Recent coil designs which effect higher packing densities may be far from optimal because hemodynamic forces causing compaction are not well understood, since detailed data regarding the location and distribution of coil masses are unavailable. We present an in vitro methodology to characterize coil masses deployed within aneurysms by quantifying intra-aneurysmal void spaces. Eight identical aneurysms were packed with coils by both balloon- and stent-assist techniques. The samples were embedded, sequentially sectioned and imaged. Empty spaces between the coils were numerically filled with circles (2D) in the planar images and with spheres (3D) in the three-dimensional composite images. The 2D and 3D void size histograms were analyzed for local variations and by fitting theoretical probability distribution functions. Balloon-assist packing densities (31±2%) were lower (p = 0.04) than those of the stent-assist group (40±7%). The maximum and average 2D and 3D void sizes were higher (p = 0.03 to 0.05) in the balloon-assist group as compared to the stent-assist group. None of the void size histograms were normally distributed; theoretical probability distribution fits suggest that the histograms are most probably exponentially distributed with decay constants of 6-10 mm. Significant (p ≤ 0.001 to p = 0.03) spatial trends were noted with the void sizes but correlation coefficients were generally low (absolute r ≤ 0.35). The methodology we present can provide valuable input data for numerical calculations of hemodynamic forces impinging on intra-aneurysmal coil masses and be used to compare and optimize coil configurations as well as coiling techniques.
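
    A minimal sketch of the exponential fit used to summarize such void-size histograms is shown below; the synthetic void sizes are illustrative, and the maximum-likelihood scale is simply the sample mean.

    ```python
    import numpy as np

    # Minimal sketch: maximum-likelihood fit of an exponential distribution to a
    # set of void sizes, analogous to fitting the intrasaccular void histograms.
    # The synthetic data below are illustrative, not measurements from the paper.
    rng = np.random.default_rng(1)
    void_sizes_mm = rng.exponential(scale=0.12, size=500)   # synthetic 3D void diameters

    # For f(x) = (1/s) * exp(-x/s), the MLE of the scale s is the sample mean;
    # the corresponding decay rate is 1/s.
    scale_mle = void_sizes_mm.mean()
    decay_rate = 1.0 / scale_mle
    print(f"fitted scale = {scale_mle:.3f} mm, decay rate = {decay_rate:.1f} per mm")
    ```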

  12. Evaluation of whole genome amplified DNA to decrease material expenditure and increase quality.

    PubMed

    Bækvad-Hansen, Marie; Bybjerg-Grauholm, Jonas; Poulsen, Jesper B; Hansen, Christine S; Hougaard, David M; Hollegaard, Mads V

    2017-06-01

    The overall aim of this study is to evaluate whole genome amplification of DNA extracted from dried blood spot samples. We wish to explore ways of optimizing the amplification process, while decreasing the amount of input material and thereby the cost. Our primary focus of optimization is on the amount of input material, the amplification reaction volume, the number of replicates and amplification time and temperature. Increasing the quality of the amplified DNA and the subsequent results of array genotyping is a secondary aim of this project. This study is based on DNA extracted from dried blood spot samples. The extracted DNA was subsequently whole genome amplified using the REPLIg kit and genotyped on the PsychArray BeadChip (assessing > 570,000 SNPs genome wide). We used Genome Studio to evaluate the quality of the genotype data by call rates and log R ratios. The whole genome amplification process is robust and does not vary between replicates. Altering amplification time, temperature or number of replicates did not affect our results. We found that spot size, i.e., the amount of input material, could be reduced without compromising the quality of the array genotyping data. We also showed that whole genome amplification reaction volumes can be reduced by a factor of 4, without compromising the DNA quality. Whole genome amplified DNA samples from dried blood spots are well suited for array genotyping and produce robust and reliable genotype data. However, the amplification process introduces additional noise to the data, making detection of structural variants such as copy number variants difficult. With this study, we explore ways of optimizing the amplification protocol in order to reduce noise and increase data quality. We found that the amplification process was very robust, and that changes in amplification time or temperature did not alter the genotyping calls or quality of the array data. Adding additional replicates of each sample also led to insignificant changes in the array data. Thus, the amount of noise introduced by the amplification process was consistent regardless of changes made to the amplification protocol. We also explored ways of decreasing material expenditure by reducing the spot size or the amplification reaction volume. The reduction did not affect the quality of the genotyping data.

  13. Enzymatic-microwave assisted extraction and high-performance liquid chromatography-mass spectrometry for the determination of selected veterinary antibiotics in fish and mussel samples.

    PubMed

    Fernandez-Torres, R; Lopez, M A Bello; Consentino, M Olias; Mochon, M Callejon; Payan, M Ramos

    2011-04-05

    A new method based on enzymatic-microwave assisted extraction prior to high performance liquid chromatography (HPLC) has been developed for the determination of 11 antibiotics (drugs) and the main metabolites of five of them in fish tissue and mussel samples. The analysed compounds were sulfadiazine (SDI), N(4)-acetylsulfadiazine (NDI), sulfamethazine (SMZ), N(4)-acetylsulfamethazine (NMZ), sulfamerazine (SMR), N(4)-acetylsulfamerazine (NMR), sulfamethoxazole (SMX), trimethoprim (TMP), amoxicillin (AMX), amoxicilloic acid (AMA), ampicillin (AMP), ampicilloic acid (APA), chloramphenicol (CLF), thiamphenicol (TIF), oxytetracycline (OXT) and chlortetracycline (CLT). The main factors affecting the extraction efficiency were optimized in tissue of hake (Merluccius merluccius), anchovy (Engraulis encrasicolus), mussel (Mytilus sp.) and wedge sole (Solea solea). The microwave extraction was carried out using an extraction time of 5 min with 5 mL of water at 50 W and subsequent clean-up with dichloromethane. HPLC-mass spectrometry was used for the determination of the antibiotics. The separation of the analysed compounds was conducted by means of a Phenomenex® Gemini C(18) (150 mm × 4.6 mm I.D., particle size 5 μm) analytical column with a LiChroCART® LiChrospher® C(18) (4 mm × 4 mm, particle size 5 μm) guard column. The analysed drugs were determined using formic acid 0.1% in water and acetonitrile in gradient elution mode as the mobile phase. Under the optimal conditions, the average recoveries of all the analysed drugs were in the range 70-100%. The proposed method was applied to samples obtained from the Mediterranean Sea and was also evaluated by a laboratory assay consisting of the determination of the targeted analytes in samples of Cyprinus carpio that had previously been administered the antibiotics. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Sampling Mars: Analytical requirements and work to do in advance

    NASA Technical Reports Server (NTRS)

    Koeberl, Christian

    1988-01-01

    Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Among the most exciting is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know whether the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least of that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should use minimal sample sizes while optimizing the scientific output. The ability to work with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures must be quite elaborate to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.

  15. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    NASA Astrophysics Data System (ADS)

    Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine

    2017-01-01

    Hot plate methods under steady-state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. On one hand, the latter method can be used only if the thickness/width ratio of the sample is sufficiently low; on the other hand, the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²). That is why neither method can be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It is shown that these optimal values do not depend on the size or on the thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the thickness/width ratio up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m².
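
    The underlying 1D steady-state estimate is lambda = phi·e/(T0 - T1), with phi the heat flux density crossing the sample and e its thickness. A minimal sketch with assumed (not measured) values follows; the paper's actual contribution, the choice of T0 and T1 relative to Ta, is not reproduced here.

    ```python
    # Minimal sketch of the steady-state 1D hot-plate estimate of thermal
    # conductivity: lambda = phi * e / (T0 - T1). All values below are
    # illustrative assumptions, not measurements from the paper.

    def thermal_conductivity(heat_flux_w_m2, thickness_m, t_hot_c, t_cold_c):
        """1D steady-state estimate, valid when lateral heat losses are negligible."""
        return heat_flux_w_m2 * thickness_m / (t_hot_c - t_cold_c)

    phi = 60.0              # W/m^2, measured heat flux density (assumed)
    e = 0.01                # m, sample thickness (assumed)
    T0, T1 = 30.0, 20.0     # degC, hot and cold face temperatures (assumed)

    lam = thermal_conductivity(phi, e, T0, T1)
    print(f"estimated thermal conductivity = {lam:.3f} W m^-1 K^-1")   # 0.060
    ```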

  16. High-throughput growth temperature optimization of ferroelectric SrxBa1-xNb2O6 epitaxial thin films using a temperature gradient method

    NASA Astrophysics Data System (ADS)

    Ohkubo, I.; Christen, H. M.; Kalinin, Sergei V.; Jellison, G. E.; Rouleau, C. M.; Lowndes, D. H.

    2004-02-01

    We have developed a multisample film growth method on a temperature-gradient substrate holder to quickly optimize the film growth temperature in pulsed-laser deposition. A smooth temperature gradient is achieved, covering a range of temperatures from 200 to 830 °C. In a single growth run, the optimal growth temperature for SrxBa1-xNb2O6 thin films on MgO(001) substrates was determined to be 750 °C, based on results from ellipsometry and piezoresponse force microscopy. Variations in optical properties and ferroelectric domain structures were clearly observed as a function of growth temperature, and these physical properties can be related to differences in crystalline quality. Piezoresponse force microscopy indicated the formation of uniform ferroelectric films for deposition temperatures above 750 °C. At 660 °C, isolated micron-sized ferroelectric islands were observed, while samples deposited below 550 °C did not exhibit clear piezoelectric contrast.

  17. A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
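
    A minimal sketch of the Wiener-filter baseline used in such comparisons, fit by ridge-regularized least squares on time-embedded firing rates, is given below; the synthetic data, embedding depth, and regularization strength are assumptions, not the experimental values.

    ```python
    import numpy as np

    # Minimal sketch of a Wiener-filter-style decoder: a linear map from
    # time-embedded neural firing rates to hand kinematics, fit by ridge-
    # regularized least squares. Data, embedding depth, and regularization
    # strength below are illustrative assumptions.
    rng = np.random.default_rng(2)

    n_samples, n_neurons, n_lags = 2000, 50, 10
    rates = rng.poisson(3.0, size=(n_samples, n_neurons)).astype(float)

    def embed(x, lags):
        """Stack the current bin and the previous lags-1 bins into one row."""
        rows = [x[lags - 1 - k : len(x) - k] for k in range(lags)]
        return np.hstack(rows)

    X = embed(rates, n_lags)                          # (n_samples - n_lags + 1, n_neurons * n_lags)
    true_w = rng.normal(0, 0.05, X.shape[1])
    y = X @ true_w + rng.normal(0, 0.5, X.shape[0])   # synthetic hand velocity

    # Ridge-regularized normal equations: w = (X'X + a I)^-1 X'y
    alpha = 10.0
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

    pred = X @ w
    r = np.corrcoef(pred, y)[0, 1]
    print(f"correlation between predicted and 'measured' kinematics: {r:.2f}")
    ```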

  18. Mathematical programming models for the economic design and assessment of wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Reinert, K. A.

    The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule are used to size and operate a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in the model's ability to predict reliability and to account for serial correlation are noted; nevertheless, the model is concluded to be useful for narrowing WECS design options.
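
    As a minimal illustration of the chance-constraint mechanism described above, the sketch below converts a reliability requirement into its deterministic equivalent via the inverse CDF, assuming normally distributed demand; the numbers are illustrative, not the study's data.

    ```python
    from statistics import NormalDist

    # Minimal sketch: converting a chance constraint into its deterministic
    # equivalent with the inverse CDF. Requiring P(demand <= firm_supply) >= alpha
    # with normally distributed demand reduces to firm_supply >= mu + z_alpha * sigma.
    # The demand parameters and reliability levels are illustrative assumptions.

    mu, sigma = 300.0, 60.0            # kW, assumed demand mean and standard deviation
    for alpha in (0.90, 0.95, 0.99):   # candidate reliability levels
        z = NormalDist().inv_cdf(alpha)
        firm_supply = mu + z * sigma
        print(f"alpha = {alpha:.2f}: required firm supply >= {firm_supply:.0f} kW")
    ```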

  19. A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces.

    PubMed

    Kim, S-P; Sanchez, J C; Rao, Y N; Erdogmus, D; Carmena, J M; Lebedev, M A; Nicolelis, M A L; Principe, J C

    2006-06-01

    The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.

  20. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates this measure under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on the spread model: in some parameter ranges, investments in improving link prediction can pay substantial premiums in cascade size; for other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
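
    A minimal sketch of the optimize-against-a-sample idea is given below: seeds chosen on a noisy copy of a network are evaluated by the cascade they trigger on the true network under a simple threshold spread. The graph model, noise level, threshold, and degree-based seed heuristic are illustrative assumptions, not the paper's experimental setup.

    ```python
    import random

    # Minimal sketch of "optimize against a sample": pick seeds on a noisy copy of
    # the network, then measure the cascade they trigger on the true network under
    # a simple linear-threshold spread. All parameters here are illustrative.
    random.seed(3)

    def random_graph(n, p):
        return {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p}

    def neighbors(edges, n):
        nbrs = {i: set() for i in range(n)}
        for i, j in edges:
            nbrs[i].add(j); nbrs[j].add(i)
        return nbrs

    def noisy_observation(edges, n, flip=0.05):
        """Randomly drop existing edges and add spurious ones."""
        obs = {e for e in edges if random.random() > flip}
        obs |= {(i, j) for i in range(n) for j in range(i + 1, n)
                if (i, j) not in edges and random.random() < flip}
        return obs

    def threshold_cascade(nbrs, seeds, theta=0.3):
        """A node activates once at least theta of its neighbors are active."""
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v, nb in nbrs.items():
                if v not in active and nb and len(nb & active) / len(nb) >= theta:
                    active.add(v); changed = True
        return len(active)

    n, k = 200, 5
    true_nbrs = neighbors(random_graph(n, 0.04), n)
    obs_nbrs = neighbors(noisy_observation({(i, j) for i in true_nbrs for j in true_nbrs[i] if i < j}, n), n)

    def top_by_degree(nbrs):
        return sorted(nbrs, key=lambda v: len(nbrs[v]), reverse=True)[:k]

    spread_obs = threshold_cascade(true_nbrs, top_by_degree(obs_nbrs))
    spread_true = threshold_cascade(true_nbrs, top_by_degree(true_nbrs))
    print(f"cascade from noisy-sample seeds: {spread_obs}, from true-network seeds: {spread_true}")
    ```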

  1. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling-rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
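
    For context, a conventional fast Poisson solver of the kind the Fast Invariant Imbedding algorithm is compared against can be sketched as below, using the discrete sine transform to diagonalize the 2D discrete Laplacian with zero boundary values; the grid size and right-hand side are illustrative, not the SELENE system.

    ```python
    import numpy as np
    from scipy.fft import dstn, idstn

    # Minimal sketch of a fast Poisson solver: the 2D discrete Poisson equation on
    # a regular grid with zero boundary values, diagonalized by the discrete sine
    # transform (DST-I). Grid size and right-hand side are illustrative.

    n = 64                               # interior grid points per side
    h = 1.0 / (n + 1)
    x = np.arange(1, n + 1) * h
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)      # sample right-hand side

    # Eigenvalues of the 1D second-difference operator with Dirichlet boundaries
    k = np.arange(1, n + 1)
    lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2

    # Transform, divide by eigenvalue sums, transform back: solves -Laplace(u) = f
    f_hat = dstn(f, type=1)
    u_hat = f_hat / (lam[:, None] + lam[None, :])
    u = idstn(u_hat, type=1)

    # For this f the continuum solution is f / (5 * pi^2); check the discretization error
    u_exact = f / (5.0 * np.pi**2)
    print(f"max abs error vs continuum solution: {np.abs(u - u_exact).max():.2e}")
    ```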

  2. Deeper and sparser nets are optimal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.; Makaruk, H.E.

    1998-03-01

    The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another one for implementing certain sub-classes of Boolean functions (Red`kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller 1956).

  3. Morphing Wing Weight Predictors and Their Application in a Template-Based Morphing Aircraft Sizing Environment II. Part 2; Morphing Aircraft Sizing via Multi-level Optimization

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2008-01-01

    This report presents an approach for sizing a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30° or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables - along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during its mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.

  4. A Study on Optimal Sizing of Pipeline Transporting Equi-sized Particulate Solid-Liquid Mixture

    NASA Astrophysics Data System (ADS)

    Asim, Taimoor; Mishra, Rakesh; Pradhan, Suman; Ubbi, Kuldip

    2012-05-01

    Pipelines transporting solid-liquid mixtures are of practical interest to the oil and pipe industry throughout the world. Such pipelines are known as slurry pipelines, and the solid-liquid mixture they carry is commonly known as slurry. The optimal design of such pipelines is of commercial interest for their widespread acceptance. A methodology has been developed for the optimal sizing of a pipeline transporting a solid-liquid mixture. The least-cost principle has been used in sizing such pipelines, which involves determining the pipe diameter corresponding to the minimum cost for a given solid throughput. A detailed analysis of the transportation of slurry containing solids of uniformly graded particle size is included. The proposed methodology can be used for designing a pipeline transporting any solid material at different solid throughputs.
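
    A minimal sketch of the least-cost principle is given below: an annualized cost (capital plus pumping energy) is evaluated over candidate diameters and the cheapest is kept. The cost coefficients and the simplified Darcy friction model are assumptions for illustration, not the paper's correlations.

    ```python
    import math

    # Minimal sketch of the least-cost principle for sizing a slurry pipeline:
    # evaluate an annualized total cost (capital plus pumping energy) over
    # candidate diameters and keep the cheapest. The cost coefficients and the
    # simplified head-loss model are assumptions, not the paper's correlations.

    CAPITAL_COEF = 60_000.0   # $/yr per (m of diameter x km of length), assumed
    ENERGY_PRICE = 0.10       # $/kWh, assumed
    PUMP_EFF = 0.75           # pump/motor efficiency, assumed
    LENGTH_KM = 10.0          # pipeline length
    FLOW_M3_S = 0.15          # slurry volumetric throughput, m^3/s
    RHO_MIX = 1300.0          # kg/m^3, slurry mixture density, assumed
    FRICTION_F = 0.02         # Darcy friction factor, assumed constant

    def annual_cost(d_m):
        """Annualized capital + pumping-energy cost for a pipe of diameter d_m."""
        area = math.pi * d_m ** 2 / 4.0
        velocity = FLOW_M3_S / area                                          # m/s
        head_loss = FRICTION_F * (LENGTH_KM * 1000 / d_m) * velocity**2 / (2 * 9.81)
        pump_kw = RHO_MIX * 9.81 * FLOW_M3_S * head_loss / (PUMP_EFF * 1000)
        energy_cost = pump_kw * 8760 * ENERGY_PRICE                          # continuous operation
        capital_cost = CAPITAL_COEF * d_m * LENGTH_KM
        return capital_cost + energy_cost

    diameters = [0.15 + 0.01 * i for i in range(36)]                         # 0.15 m to 0.50 m
    best = min(diameters, key=annual_cost)
    print(f"least-cost diameter ~ {best:.2f} m, total cost ~ ${annual_cost(best):,.0f}/yr")
    ```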

  5. The requirements for low-temperature plasma ionization support miniaturization of the ion source.

    PubMed

    Kiontke, Andreas; Holzer, Frank; Belder, Detlev; Birkemeyer, Claudia

    2018-06-01

    Ambient ionization mass spectrometry (AI-MS), the ionization of samples under ambient conditions, enables fast and simple analysis of samples with little or no sample preparation. Due to their simple construction and low resource consumption, plasma-based ionization methods in particular are considered ideal for use in mobile analytical devices. However, systematic investigations that attempt to identify the optimal configuration of a plasma source for sensitive detection of target molecules are still rare. We therefore used a low-temperature plasma ionization (LTPI) source based on dielectric barrier discharge, with helium employed as the process gas, to identify the factors that most strongly influence the signal intensity in the mass spectrometry of species formed by plasma ionization. In this study, we investigated several construction-related parameters of the plasma source and found that a low wall thickness of the dielectric, a small outlet spacing, and a short distance between the plasma source and the MS inlet are needed to achieve optimal signal intensity with a process-gas flow rate of as little as 10 mL/min. In conclusion, this type of ion source is especially well suited for downscaling, which is usually required in mobile devices. Our results provide valuable insights into the LTPI mechanism; they reveal the potential to further improve its implementation and standardization for mobile mass spectrometry as well as our understanding of the requirements and selectivity of this technique. Graphical abstract: Optimized parameters of a dielectric barrier discharge plasma for ionization in mass spectrometry. The electrode size, shape, and arrangement, the thickness of the dielectric, and distances between the plasma source, sample, and MS inlet are marked in red. The process gas (helium) flow is shown in black.

  6. Hierarchical porous photoanode based on boric acid catalyzed sol for dye sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Maleki, Khatereh; Abdizadeh, Hossein; Golobostanfard, Mohammad Reza; Adelfar, Razieh

    2017-02-01

    The hierarchical porous photoanode of the dye sensitized solar cell (DSSC) is synthesized through a non-aqueous sol-gel method based on H3BO3 as an acid catalyst, and the efficiencies of DSSCs fabricated from these photoanodes are compared. A sol concentration of 0.17 M, a water mole ratio of 4.5, an acid mole ratio of 0.45, and ethanol as the solvent are identified as the optimum parameters for photoanode formation without any detectable cracks. The optimized hierarchical photoanode mainly contains the anatase phase with a slight shift toward higher angles, confirming the doping of boron into the titania structure. Moreover, the porous structure involves two ranges of average pore sizes, of 20 and 635 nm. Diffuse reflectance spectroscopy (DRS) shows proper scattering and a blueshift in the band gap. The paste ratios of solid:liquid, TiO2:ethyl cellulose, and terpineol:ethanol equal to 11:89, 3.5:7.5, and 25:64, respectively, are assigned as the optimized parameters for this novel paste. A short-circuit current density of 5.89 mA/cm2, an open-circuit voltage of 703 mV, a fill factor of 0.7, and an efficiency of 2.91% are obtained for the optimized sample. The relatively higher short-circuit current of the optimized sample compared to the other samples is mainly due to higher dye adsorption, corresponding to its higher surface area, and presumably faster charge transfer, confirmed by low Rs and Rct in the electrochemical impedance spectroscopy data. Boric acid as a catalyst in the titania sol not only forms a hierarchical porous structure, but also dopes the titania lattice, which results in improved performance of this device.

  7. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words

    PubMed Central

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used the rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants’ performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words, and the word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words. PMID:26909052

  8. Effects of Word Width and Word Length on Optimal Character Size for Reading of Horizontally Scrolling Japanese Words.

    PubMed

    Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji

    2016-01-01

    The present study investigated whether word width and length affect the optimal character size for reading of horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used the rapid serial visual presentation paradigm, where the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that the reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable for any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words, and the word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Considering that character size varied depending on word width and word length in the present study, this means that the optimal character size can be changed by word width and word length in scrolling Japanese words.

  9. Deeper sparsely nets are size-optimal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiu, V.; Makaruk, H.E.

    1997-12-01

    The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions (Horne, 1994); and (ii) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality for the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e. minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller 1956).

  10. Determination of a temperature sensor location for monitoring weld pool size in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boo, K.S.; Cho, H.S.

    1994-11-01

    This paper describes a method of determining the optimal sensor location to measure weldment surface temperature, which has a close correlation with weld pool size in the gas metal arc (GMA) welding process. Due to the inherent complexity and nonlinearity in the GMA welding process, the relationship between the weldment surface temperature and the weld pool size varies with the point of measurement. This necessitates an optimal selection of the measurement point to minimize the process nonlinearity effect in estimating the weld pool size from the measured temperature. To determine the optimal sensor location on the top surface of the weldment, the correlation between the measured temperature and the weld pool size is analyzed. The analysis is done by calculating the correlation function, which is based upon an analytical temperature distribution model. To validate the optimal sensor location, a series of GMA bead-on-plate welds are performed on a medium-carbon steel under various welding conditions. A comparison study is given in detail based upon the simulation and experimental results.
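
    The correlation-based selection step described above can be illustrated with a small Python sketch. The data below are synthetic stand-ins (the authors use an analytical temperature-distribution model and welding experiments, not reproduced here); the sketch only shows the decision rule of computing the temperature-to-pool-size correlation at each candidate point and keeping the point with the strongest correlation.

      # Sketch: pick the temperature-sensor location whose reading correlates
      # most strongly with weld pool size. Synthetic data stand in for the
      # authors' analytical temperature-distribution model.
      import numpy as np

      rng = np.random.default_rng(0)
      n_welds = 200                                  # hypothetical weld runs
      pool_size = rng.uniform(3.0, 8.0, n_welds)     # weld pool size (mm), synthetic

      # Candidate measurement points: nearer points track the pool more
      # faithfully (higher gain, lower noise) than farther ones.
      candidates = {}
      for i, (gain, noise) in enumerate([(40, 5), (30, 10), (20, 20), (12, 30), (8, 40)]):
          candidates[f"P{i}"] = 300 + gain * pool_size + rng.normal(0, noise, n_welds)

      corr = {name: np.corrcoef(temp, pool_size)[0, 1] for name, temp in candidates.items()}
      for name, r in sorted(corr.items()):
          print(f"{name}: r = {r:+.3f}")
      print("best sensor location:", max(corr, key=corr.get))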

  11. Optimal Budget Allocation for Sample Average Approximation

    DTIC Science & Technology

    2011-06-01

    ... an optimization algorithm applied to the sample average problem. We examine the convergence rate of the estimator as the computing budget tends to ... regime for the optimization algorithm. ... Sample average approximation (SAA) is a frequently used approach to solving stochastic programs ... appealing due to its simplicity and the fact that a large number of standard optimization algorithms are often available to optimize the resulting sample ...
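
    The record above is only a search excerpt, but the SAA idea it refers to is compact enough to sketch. The Python example below, a newsvendor-style stochastic program with made-up cost parameters, replaces the expectation with an average over N sampled demands and minimizes the resulting deterministic objective by grid search; as N grows, the SAA minimizer settles near the true optimum.

      # Sketch of sample average approximation (SAA): replace the expectation in
      # min_x E[c*x - p*min(x, D)] with an average over N sampled demands, then
      # minimize the deterministic sample-average objective (here by grid search).
      # The newsvendor-style problem and all parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(1)
      c, p = 1.0, 3.0                        # unit cost and unit sale price (assumed)
      x_grid = np.linspace(0.0, 100.0, 501)  # candidate order quantities

      def saa_minimizer(n_samples):
          demand = rng.lognormal(mean=3.0, sigma=0.5, size=n_samples)
          # Sample-average objective evaluated on the whole grid at once.
          obj = c * x_grid[:, None] - p * np.minimum(x_grid[:, None], demand[None, :])
          return x_grid[np.argmin(obj.mean(axis=1))]

      for n in (10, 100, 1000, 10000):
          print(f"N = {n:5d}  ->  SAA solution x* ≈ {saa_minimizer(n):6.2f}")
      # True optimum is the (1 - c/p) = 2/3 quantile of the lognormal demand, ≈ 24.9 here.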

  12. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design, such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected value of additional information and the expected cost of the trial. The aim was to apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and a sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit, corresponding to a 130% return on investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit, with a return on investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial, which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.
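
    As a rough illustration of the decision rule described above (not the study's cost-effectiveness model), the Python sketch below compares candidate trial sizes by Monte Carlo: for each size it estimates the expected value of sample information under a simple normal-normal model, scales it to an assumed beneficiary population, subtracts an assumed trial cost, and reports the expected net benefit and return on investment. All priors, variances, costs, and the population size are illustrative assumptions.

      # Sketch: choose a future trial size by maximizing expected net benefit
      # (EVSI x beneficiary population - trial cost). All numbers are assumptions.
      import numpy as np

      rng = np.random.default_rng(2)
      mu0, sd0 = 50.0, 100.0      # prior mean / sd of incremental net benefit ($ per patient)
      sigma = 800.0               # per-patient sd of the net-benefit outcome ($)
      population = 100_000        # patients affected by the adoption decision
      fixed_cost, cost_per_patient = 250_000.0, 1_500.0
      M = 200_000                 # Monte Carlo draws

      def expected_net_benefit(n_per_arm):
          se = sigma * np.sqrt(2.0 / n_per_arm)       # se of the estimated effect
          theta = rng.normal(mu0, sd0, M)             # prior draws of the true effect
          xbar = rng.normal(theta, se)                # predicted trial estimates
          shrink = sd0**2 / (sd0**2 + se**2)          # normal-normal updating
          post_mean = mu0 + shrink * (xbar - mu0)
          evsi = np.mean(np.maximum(post_mean, 0.0)) - max(mu0, 0.0)  # per patient
          cost = fixed_cost + 2 * n_per_arm * cost_per_patient
          return evsi * population - cost, cost

      for n in (50, 100, 220, 388):
          enb, cost = expected_net_benefit(n)
          print(f"n/arm = {n:3d}:  ENB = {enb:12,.0f}  ROI = {enb / cost:6.1%}")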

  13. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon-landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth-based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each with a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping the samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference-fitted, heat-activated memory alloy caps [5] if the heating does not affect the sample, or by crimping caps similar to bottle capping. We prefer that cap sealing surfaces be external to the cup rim to prevent sample dust inside the cups interfering with sealing, or contamination of the sample by Teflon seal elements (if adopted). Finally, the sample collection rover, or a Fetch rover, selects cups with the best-choice samples and loads them into a sample tray, before delivering it to the Earth Return Vehicle (ERV) in the MSR Dragon capsule as described in [1] (Fig 1). This ensures best use of the MSR payload mass allowance. A 3-meter-long jointed robot arm is extended from the Dragon capsule's crew hatch, retrieves the sample tray and inserts it into the sample canister payload located on the ERV stage. The robot arm has the capacity to obtain grab samples in the event of a rover failure. The sample canister has a robot arm capture casting to enable capture by crewed or robot spacecraft when it returns to Earth orbit.

  14. Evaluation of three inverse problem models to quantify skin microcirculation using diffusion-weighted MRI

    NASA Astrophysics Data System (ADS)

    Cordier, G.; Choi, J.; Raguin, L. G.

    2008-11-01

    Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
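
    A minimal sketch of the D-optimality step is given below. Because the three inverse models are not spelled out in the record, a generic mono-exponential decay S(b) = S0·exp(-b·D) stands in for them; the sketch simply searches a candidate grid of b-values for the subset that maximizes det(J^T J), the determinant of the (unit-variance) Fisher information at a nominal parameter guess, which is what shrinks the confidence region of the nonlinear least-squares estimates.

      # Sketch of D-optimal sampling for a nonlinear model: pick the b-values that
      # maximize det(J^T J), where J is the Jacobian of the model with respect to
      # the parameters at a nominal guess. A mono-exponential decay stands in for
      # the (unspecified) microcirculation models.
      import itertools
      import numpy as np

      S0, D = 1.0, 1.5e-3                      # nominal parameter values (assumed)
      candidates = np.arange(0, 1050, 50.0)    # candidate b-values (s/mm^2)

      def jacobian(b):
          # Columns: dS/dS0 and dS/dD, evaluated at the nominal parameters.
          return np.column_stack([np.exp(-b * D), -S0 * b * np.exp(-b * D)])

      def d_criterion(b_values):
          J = jacobian(np.asarray(b_values))
          return np.linalg.det(J.T @ J)

      best = max(itertools.combinations(candidates, 4), key=d_criterion)
      print("D-optimal 4-point design (b-values):", [float(b) for b in best])
      print("criterion value:", d_criterion(best))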

  15. An Intensified Vibratory Milling Process for Enhancing the Breakage Kinetics during the Preparation of Drug Nanosuspensions.

    PubMed

    Li, Meng; Zhang, Lu; Davé, Rajesh N; Bilgili, Ecevit

    2016-04-01

    As a drug-sparing approach in early development, vibratory milling has been used for the preparation of nanosuspensions of poorly water-soluble drugs. The aim of this study was to intensify this process through a systematic increase in vibration intensity and bead loading with the optimal bead size for faster production. Griseofulvin, a poorly water-soluble drug, was wet-milled using yttrium-stabilized zirconia beads with sizes ranging from 50 to 1500 μm at low power density (0.87 W/g). Then, this process was intensified with the optimal bead size by sequentially increasing vibration intensity and bead loading. Additional experiments with several bead sizes were performed at high power density (16 W/g), and the results were compared to those from wet stirred media milling. Laser diffraction, scanning electron microscopy, X-ray diffraction, differential scanning calorimetry, and dissolution tests were used for characterization. Results for the low power density indicated 800 μm as the optimal bead size, which led to a median size of 545 nm with more than 10% of the drug particles greater than 1.8 μm, albeit with the fastest breakage. An increase in either vibration intensity or bead loading resulted in faster breakage. The most intensified process led to 90% of the particles being smaller than 300 nm. At the high power density, 400 μm beads were optimal, which enhanced griseofulvin dissolution significantly and signified the importance of bead size in view of the power density. Only the optimally intensified vibratory milling led to a nanosuspension comparable to that prepared by stirred media milling.

  16. A highly efficient protocol for micropropagation of Begonia tuberous.

    PubMed

    Duong, Tan Nhut; Nguyen, Thanh Hai; Mai, Xuan Phan

    2010-01-01

    A protocol for micropropagation of begonia was established utilizing a thin cell layer (TCL) system. This system has been employed to produce several thousand shoots per sample. Explant size and position, and plant growth regulators (PGRs), contribute to tissue morphogenesis. By optimizing the size of the tissue and applying an improved selection procedure, shoots were elongated within 8 weeks of culture, with an average of 210 ± 9.7 shoots per segment. This system has facilitated a number of studies using TCL as a model for micropropagation and will enable the large-scale production of begonia. On average, the best treatment would allow production of about 10,000 plantlets by micropropagation of the axillary buds of one plant with five petioles, within a period of 8 months.

  17. A study of TRIGLYCINE SULFATE (TGS) crystals from the International Microgravity Laboratory Mission (IML-1)

    NASA Technical Reports Server (NTRS)

    Lal, R. B.

    1992-01-01

    Preliminary evaluation of the data was made during the hologram processing procedure. A few representative holograms were selected and reconstructed in the HGS; photographs of sample particle images were made to illustrate the resolution of all three particle sizes. Based on these evaluations, slight modifications were requested in the hologram processing procedure to optimize the hologram exposure in the vicinity of the crystal. Preliminary looks at the data showed that all three sizes of particles could be seen and tracked throughout the chamber. Because of the vast amount of data available in the holograms, it was recommended that a detailed data reduction plan be produced, with prioritization of the different types of data that can be extracted from the holograms.

  18. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput materials research is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies; mostly, traditional designs, homogeneous coverings, or simple random sampling are exploited. Typical catalytic output distributions are unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the structure of the search space, working independently of any learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modeled. Evaluating new algorithms on benchmarks is essential, given the lack of prior evidence about their efficiency. The method is detailed and thoroughly tested on mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of the sampling on subsequent machine learning performance is also quantified. The minimum sample size required for the algorithm to be statistically discriminated from simple random sampling is investigated.

  19. Visible and near-infrared bulk optical properties of raw milk.

    PubMed

    Aernouts, B; Van Beers, R; Watté, R; Huybrechts, T; Lammertyn, J; Saeys, W

    2015-10-01

    The implementation of optical sensor technology to monitor the milk quality on dairy farms and milk processing plants would support the early detection of altering production processes. Basic visible and near-infrared spectroscopy is already widely used to measure the composition of agricultural and food products. However, to obtain maximal performance, the design of such optical sensors should be optimized with regard to the optical properties of the samples to be measured. Therefore, the aim of this study was to determine the visible and near-infrared bulk absorption coefficient, bulk scattering coefficient, and scattering anisotropy spectra for a diverse set of raw milk samples originating from individual cow milkings, representing the milk variability present on dairy farms. Accordingly, this database of bulk optical properties can be used in future simulation studies to efficiently optimize and validate the design of an optical milk quality sensor. In a next step of the current study, the relation between the obtained bulk optical properties and milk quality properties was analyzed in detail. The bulk absorption coefficient spectra were found to mainly contain information on the water, fat, and casein content, whereas the bulk scattering coefficient spectra were found to be primarily influenced by the quantity and the size of the fat globules. Moreover, a strong positive correlation (r ≥ 0.975) was found between the fat content in raw milk and the measured bulk scattering coefficients in the 1,300 to 1,400 nm wavelength range. Relative to the bulk scattering coefficient, the variability on the scattering anisotropy factor was found to be limited. This is because the milk scattering anisotropy is nearly independent of the fat globule and casein micelle quantity, while it is mainly determined by the size of the fat globules. As this study shows high correlations between the sample's bulk optical properties and the milk composition and fat globule size, a sensor that allows for robust separation between the absorption and scattering properties would enable accurate prediction of the raw milk quality parameters. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. Enhanced Solubility and Dissolution Rate of Lacidipine Nanosuspension: Formulation Via Antisolvent Sonoprecipitation Technique and Optimization Using Box-Behnken Design.

    PubMed

    Kassem, Mohamed A A; ElMeshad, Aliaa N; Fares, Ahmed R

    2017-05-01

    Lacidipine (LCDP) is a highly lipophilic calcium channel blocker with poor aqueous solubility, leading to poor oral absorption. This study aims to prepare and optimize LCDP nanosuspensions using the antisolvent sonoprecipitation technique to enhance the solubility and dissolution of LCDP. A three-factor, three-level Box-Behnken design was employed to optimize the formulation variables to obtain an LCDP nanosuspension of small and uniform particle size. The formulation variables were as follows: stabilizer to drug ratio (A), sodium deoxycholate percentage (B), and sonication time (C). LCDP nanosuspensions were assessed for particle size, zeta potential, and polydispersity index. The formula with the highest desirability (0.969) was chosen as the optimized formula. The values of the formulation variables (A, B, and C) in the optimized nanosuspension were 1.5, 100%, and 8 min, respectively. The optimal LCDP nanosuspension had a particle size (PS) of 273.21 nm, a zeta potential (ZP) of -32.68 mV, and a polydispersity index (PDI) of 0.098. The LCDP nanosuspension was characterized using x-ray powder diffraction, differential scanning calorimetry, and transmission electron microscopy. The LCDP nanosuspension showed a saturation solubility 70 times that of raw LCDP, in addition to a significantly enhanced dissolution rate due to particle size reduction and decreased crystallinity. These results suggest that the optimized LCDP nanosuspension could be promising to improve the oral absorption of LCDP.
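
    For readers unfamiliar with the design, the Python sketch below builds the coded three-factor Box-Behnken design (12 edge midpoints plus center replicates) and ranks the runs with a toy desirability function; the response models, factor ranges, and desirability weights are made up for illustration and are not those fitted in the study.

      # Sketch: coded three-factor Box-Behnken design plus a toy desirability
      # ranking. The response surfaces and desirability targets are illustrative.
      import itertools
      import numpy as np

      def box_behnken_3(center_reps=3):
          runs = []
          for i, j in itertools.combinations(range(3), 2):       # each factor pair
              for a, b in itertools.product((-1, 1), repeat=2):  # +/-1 combinations
                  point = [0, 0, 0]
                  point[i], point[j] = a, b
                  runs.append(point)
          runs.extend([[0, 0, 0]] * center_reps)                 # center replicates
          return np.array(runs, dtype=float)

      design = box_behnken_3()
      print(design.shape)              # (15, 3): 12 edge points + 3 center points

      # Toy responses: want small particle size and small PDI.
      A, B, C = design.T
      size = 400 - 60 * A - 20 * B - 30 * C + 25 * A**2           # nm (made up)
      pdi = 0.30 - 0.05 * B - 0.04 * C + 0.06 * C**2              # (made up)
      d_size = np.clip((500 - size) / (500 - 250), 0, 1)
      d_pdi = np.clip((0.4 - pdi) / (0.4 - 0.1), 0, 1)
      desirability = np.sqrt(d_size * d_pdi)                      # geometric mean
      print("best coded run:", design[np.argmax(desirability)],
            "D =", round(float(desirability.max()), 3))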

  1. Integrated topology and shape optimization in structural design

    NASA Technical Reports Server (NTRS)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.

  2. MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton C.

    2011-01-01

    The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by mass of each sample should be preserved to support future scientific investigations. Samples of 15-16 grams are considered optimal. The total mass of returned rocks, soils, blanks and standards should be approximately 500 grams. Atmospheric gas samples should be the equivalent of 50 cubic cm at 20 times Mars ambient atmospheric pressure.
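
    A quick arithmetic check of the recommended numbers (a sketch, using only the figures quoted above):

      # Quick check of the sample-mass numbers quoted above.
      n_samples = (30, 40)           # recommended number of rock samples
      mass_each_g = (15.0, 16.0)     # grams per sample (optimal size)
      for n in n_samples:
          for m in mass_each_g:
              total = n * m
              preserved = 0.40 * total      # >= 40% preserved for future work
              print(f"{n} samples x {m:.0f} g = {total:.0f} g, "
                    f"of which >= {preserved:.0f} g preserved")
      # Result: 450-640 g of rock, bracketing the ~500 g quoted for the total
      # return mass of rocks, soils, blanks and standards.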

  3. Multiobjective optimization design of an rf gun based electron diffraction beam line

    NASA Astrophysics Data System (ADS)

    Gulliford, Colwyn; Bartnik, Adam; Bazarov, Ivan; Maxson, Jared

    2017-03-01

    Multiobjective genetic algorithm optimizations of a single-shot ultrafast electron diffraction beam line comprised of a 100 MV/m 1.6-cell normal conducting rf (NCRF) gun, as well as a nine-cell 2π/3 bunching cavity placed between two solenoids, have been performed. These include optimization of the normalized transverse emittance as a function of bunch charge, as well as optimization of the transverse coherence length as a function of the rms bunch length of the beam at the sample location for a fixed charge of 10⁶ electrons. Analysis of the resulting solutions is discussed in terms of the relevant scaling laws, and a detailed description of one of the resulting solutions from the coherence length optimizations is given. For a charge of 10⁶ electrons and final beam sizes of σx ≥ 25 μm and σt ≈ 5 fs, we found a relative coherence length of Lc,x/σx ≈ 0.07 using direct optimization of the coherence length. Additionally, based on optimizations of the emittance as a function of final bunch length, we estimate the relative coherence length for bunch lengths of 30 and 100 fs to be roughly 0.1 and 0.2 nm/μm, respectively. Finally, using the scaling of the optimal emittance with bunch charge, for a charge of 10⁵ electrons, we estimate relative coherence lengths of 0.3, 0.5, and 0.92 nm/μm for final bunch lengths of 5, 30 and 100 fs, respectively.

  4. Pre-breeding food restriction promotes the optimization of parental investment in house mice, Mus musculus.

    PubMed

    Dušek, Adam; Bartoš, Luděk; Sedláček, František

    2017-01-01

    Litter size is one of the most reliable state-dependent life-history traits that indicate parental investment in polytocous (litter-bearing) mammals. The tendency to optimize litter size typically increases with decreasing availability of resources during the period of parental investment. To determine whether this tactic is also influenced by resource limitations prior to reproduction, we examined the effect of experimental, pre-breeding food restriction on the optimization of parental investment in lactating mice. First, we investigated the optimization of litter size in 65 experimental and 72 control families (mothers and their dependent offspring). Further, we evaluated pre-weaning offspring mortality, and the relationships between maternal and offspring condition (body weight), as well as offspring mortality, in 24 experimental and 19 control families with litter reduction (the death of one or more offspring). Assuming that pre-breeding food restriction would signal unpredictable food availability, we hypothesized that the optimization of parental investment would be more effective in the experimental than in the control mice. In comparison to the controls, the experimental mice produced larger litters and had a more selective (size-dependent) offspring mortality and thus lower litter reduction (the proportion of offspring deaths). Selective litter reduction helped the experimental mothers to maintain their own optimum condition, thereby improving the condition and, indirectly, the survival of their remaining offspring. Hence, pre-breeding resource limitations may have helped the mice to optimize their inclusive fitness. On the other hand, in the control females, the absence of environmental cues indicating a risky environment led to "maternal optimism" (overemphasizing good conditions at the time of breeding), which resulted in the production of litters of super-optimal size and consequently higher reproductive costs during lactation, including higher offspring mortality. Our study therefore provides the first evidence that pre-breeding food restriction promotes the optimization of parental investment, including offspring number and developmental success.

  5. Composite Magnetic Nanoparticles (CuFe₂O₄) as a New Microsorbent for Extraction of Rhodamine B from Water Samples.

    PubMed

    Roostaie, Ali; Allahnoori, Farzad; Ehteshami, Shokooh

    2017-09-01

    In this work, novel composite magnetic nanoparticles (CuFe2O4) were synthesized based on sol-gel combustion in the laboratory. Next, a simple production method was optimized for the preparation of the copper nanoferrites (CuFe2O4), which are stable in water, magnetically active, and have a high specific surface area, and which were used as a sorbent material for organic dye extraction from aqueous solution. CuFe2O4 nanopowders were characterized by field-emission scanning electron microscopy (SEM), FTIR spectroscopy, and energy dispersive X-ray spectroscopy. The size range of the nanoparticles obtained under such conditions was estimated from SEM images to be 35-45 nm. The parameters influencing the extraction with CuFe2O4 nanoparticles, such as the desorption solvent, amount of sorbent, desorption time, sample pH, ionic strength, and extraction time, were investigated and optimized. Under the optimum conditions, a linear calibration curve in the range of 0.75-5.00 μg/L with R2 = 0.9996 was obtained. The LOQ (10Sb) and LOD (3Sb) of the method were 0.75 and 0.25 μg/L (n = 3), respectively. The RSD for a water sample spiked with 1 μg/L rhodamine B was 3% (n = 5). The method was applied for the determination of rhodamine B in tap water, dishwashing foam, dishwashing liquid, and shampoo samples. The relative recovery percentages for these samples were in the range of 95-99%.
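
    The 3Sb/10Sb limits quoted above follow directly from the calibration slope and the standard deviation of blank replicates. The Python sketch below shows the calculation with made-up calibration and blank data; it is not the paper's data set.

      # Sketch: LOD = 3*Sb/slope and LOQ = 10*Sb/slope from a linear calibration
      # curve and replicate blank measurements (illustrative data only).
      import numpy as np

      conc = np.array([0.75, 1.5, 2.5, 3.5, 5.0])           # ug/L standards
      signal = np.array([152, 298, 507, 702, 1011])          # instrument response (made up)
      blank = np.array([9.0, 11.5, 10.2, 9.8, 10.9, 10.4])   # replicate blanks (made up)

      slope, intercept = np.polyfit(conc, signal, 1)
      r2 = np.corrcoef(conc, signal)[0, 1] ** 2
      Sb = blank.std(ddof=1)

      print(f"slope = {slope:.1f}, R^2 = {r2:.4f}")
      print(f"LOD (3Sb/slope)  = {3 * Sb / slope:.3f} ug/L")
      print(f"LOQ (10Sb/slope) = {10 * Sb / slope:.3f} ug/L")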

  6. Offspring fitness and individual optimization of clutch size

    PubMed Central

    Both, C.; Tinbergen, J. M.; Noordwijk, A. J. van

    1998-01-01

    Within-year variation in clutch size has been claimed to be an adaptation to variation in the individual capacity to raise offspring. We tested this hypothesis by manipulating brood size to one common size, and predicted that if clutch size is individually optimized, then birds with originally large clutches have a higher fitness than birds with originally small clutches. No evidence was found that fitness was related to the original clutch size, and in this population clutch size is thus not related to the parental capacity to raise offspring. However, offspring from larger original clutches recruited better than their nest mates that came from smaller original clutches. This suggests that early maternal or genetic variation in viability is related to clutch size.

  7. Dylan Cutler | NREL

    Science.gov Websites

    Research focuses on the integration and optimization of distributed energy resources, specifically cost-optimal sizing and dispatch of distributed energy resources, and on the integration of building and utility control systems, including work with the Campus team, which is focusing on NREL's own control system integration and energy informatics.

  8. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

    Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because relatively much is known about the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
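
    The stochastic dynamic programming formulation lends itself to a compact numerical sketch. The Python example below runs backward induction over discretized population and environment states with a serially correlated environment transition matrix; the growth, harvest-mortality, and transition numbers are toy stand-ins chosen for illustration, not the mallard model estimated in the study.

      # Sketch: backward dynamic programming for a state-dependent harvest policy
      # over discretized breeding-population (N) and environment (E) states, with
      # a serially correlated environment. All model numbers are toy stand-ins.
      import numpy as np

      N_states = np.linspace(2.0, 14.0, 25)        # breeding population (millions)
      E_states = np.array([0.7, 1.0, 1.3])         # poor / average / good ponds
      E_trans = np.array([[0.60, 0.30, 0.10],      # serially correlated environment
                          [0.25, 0.50, 0.25],
                          [0.10, 0.30, 0.60]])
      harvest_rates = np.linspace(0.0, 0.5, 11)
      T = 25                                       # planning horizon (years)

      def next_pop(n, e, h):
          survivors = n * (1.0 - h) * 0.6          # additive harvest mortality (toy)
          recruits = survivors * 0.9 * e           # production scaled by environment
          return np.clip(survivors + recruits, N_states[0], N_states[-1])

      V = np.zeros((len(N_states), len(E_states)))         # terminal value
      policy = np.zeros_like(V)
      for _ in range(T):                                   # backward induction
          V_new = np.zeros_like(V)
          for i, n in enumerate(N_states):
              for j, e in enumerate(E_states):
                  values = []
                  for h in harvest_rates:
                      k = int(np.abs(N_states - next_pop(n, e, h)).argmin())  # nearest grid point
                      values.append(n * h + E_trans[j] @ V[k, :])             # yield + E[continuation]
                  values = np.array(values)
                  V_new[i, j] = values.max()
                  policy[i, j] = harvest_rates[values.argmax()]
          V = V_new

      print("Optimal first-year harvest rate vs. population (average environment):")
      for i in (0, 8, 16, 24):
          print(f"  N = {N_states[i]:4.1f} M  ->  h* = {policy[i, 1]:.2f}")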

  9. Quantum Support Vector Machine for Big Data Classification

    NASA Astrophysics Data System (ADS)

    Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth

    2014-09-01

    Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.

  10. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    The aim was to statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
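
    The parametric and non-parametric estimators being compared are easy to state concretely. The Python sketch below computes both for a simulated skewed analyte (parametric after a log transform, following the transform-then-parametric advice above); the data and sample size are illustrative.

      # Sketch: parametric (mean +/- 1.96 SD after a log transform) versus
      # non-parametric (2.5th / 97.5th percentile) reference intervals.
      import numpy as np

      rng = np.random.default_rng(3)
      values = rng.lognormal(mean=1.0, sigma=0.4, size=240)   # skewed analyte values

      # Non-parametric: central 95% of the observed distribution.
      np_lower, np_upper = np.percentile(values, [2.5, 97.5])

      # Parametric on the log scale, back-transformed to the original units.
      logs = np.log(values)
      p_lower, p_upper = np.exp(logs.mean() + np.array([-1.96, 1.96]) * logs.std(ddof=1))

      print(f"non-parametric RI: {np_lower:.2f} - {np_upper:.2f}")
      print(f"parametric (log) RI: {p_lower:.2f} - {p_upper:.2f}")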

  11. Fabrication of Titanium-Niobium-Zirconium-Tantalium Alloy (TNZT) Bioimplant Components with Controllable Porosity by Spark Plasma Sintering

    PubMed Central

    Rechtin, Jack; Torresani, Elisa; Ivanov, Eugene; Olevsky, Eugene

    2018-01-01

    Spark Plasma Sintering (SPS) is used to fabricate Titanium-Niobium-Zirconium-Tantalum alloy (TNZT) powder-based bioimplant components with controllable porosity. The developed densification maps show the effects of final SPS temperature, pressure, holding time, and initial particle size on final sample relative density. Correlations between the final sample density and mechanical properties of the fabricated TNZT components are also investigated, and microstructural analysis of the processed material is conducted. A densification model is proposed and used to calculate the TNZT alloy creep activation energy. The obtained experimental data can be utilized for the optimized fabrication of TNZT components with specific microstructural and mechanical properties suitable for biomedical applications. PMID:29364165

  12. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    PubMed

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
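
    The kind of power analysis described, one that respects the hierarchy of samples and images per sample, can be sketched by simulation. In the Python example below, the sample (not the image) is the unit of replication and the image-level noise is averaged within each sample before a t-test; the effect size, variance components, and alpha are illustrative assumptions rather than values from the paper.

      # Sketch: simulation-based power analysis for hierarchical imaging data
      # (several images averaged within each sample, samples as replicates).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def power(n_per_group, images_per_sample, effect=1.5,
                sd_sample=1.0, sd_image=1.5, n_sim=2000, alpha=0.05):
          hits = 0
          for _ in range(n_sim):
              # Sample-level means plus image-level noise, averaged over images.
              g1 = rng.normal(0.0, sd_sample, n_per_group)
              g2 = rng.normal(effect, sd_sample, n_per_group)
              g1 += rng.normal(0, sd_image / np.sqrt(images_per_sample), n_per_group)
              g2 += rng.normal(0, sd_image / np.sqrt(images_per_sample), n_per_group)
              if stats.ttest_ind(g1, g2).pvalue < alpha:
                  hits += 1
          return hits / n_sim

      for n, k in [(6, 3), (6, 10), (10, 3), (10, 10)]:
          print(f"n = {n} samples/group, {k} images/sample -> power ≈ {power(n, k):.2f}")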

  13. A challenge for theranostics: is the optimal particle for therapy also optimal for diagnostics?

    NASA Astrophysics Data System (ADS)

    Dreifuss, Tamar; Betzer, Oshra; Shilo, Malka; Popovtzer, Aron; Motiei, Menachem; Popovtzer, Rachela

    2015-09-01

    Theranostics is defined as the combination of therapeutic and diagnostic capabilities in the same agent. Nanotechnology is emerging as an efficient platform for theranostics, since nanoparticle-based contrast agents are powerful tools for enhancing in vivo imaging, while therapeutic nanoparticles may overcome several limitations of conventional drug delivery systems. Theranostic nanoparticles have drawn particular interest in cancer treatment, as they offer significant advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of platforms for theranostic applications raises critical questions; is the optimal particle for therapy also the optimal particle for diagnostics? Are the specific characteristics needed to optimize diagnostic imaging parallel to those required for treatment applications? This issue is examined in the present study, by investigating the effect of the gold nanoparticle (GNP) size on tumor uptake and tumor imaging. A series of anti-epidermal growth factor receptor conjugated GNPs of different sizes (diameter range: 20-120 nm) was synthesized, and then their uptake by human squamous cell carcinoma head and neck cancer cells, in vitro and in vivo, as well as their tumor visualization capabilities were evaluated using CT. The results showed that the size of the nanoparticle plays an instrumental role in determining its potential activity in vivo. Interestingly, we found that although the highest tumor uptake was obtained with 20 nm C225-GNPs, the highest contrast enhancement in the tumor was obtained with 50 nm C225-GNPs, thus leading to the conclusion that the optimal particle size for drug delivery is not necessarily optimal for imaging. These findings stress the importance of the investigation and design of optimal nanoparticles for theranostic applications.

  14. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    PubMed

    Filin, I

    2009-06-01

    Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.
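
    A Monte Carlo sketch of the ingredients named above (size-dependent mean growth, growth variance, exogenous hazard, starvation as absorption at zero size) is given below, using an Euler-Maruyama discretization to estimate the probability of reaching a final size. The functional forms and parameters are illustrative; the paper works with the diffusion analytically rather than by simulation.

      # Sketch: Euler-Maruyama simulation of stochastic individual growth with
      # size-dependent drift mu(x), diffusion sigma(x), exogenous hazard m(x),
      # starvation as absorption at x = 0, and success = reaching x_final.
      import numpy as np

      rng = np.random.default_rng(5)
      mu = lambda x: 0.5 * x ** 0.75 - 0.1 * x       # net mean growth (toy)
      sigma = lambda x: 0.4 * np.sqrt(x)             # growth "diffusion" scale (toy)
      hazard = lambda x: 0.08 / (1.0 + x)            # mortality falls with size (toy)

      def p_reach_final(x0=1.0, x_final=10.0, dt=0.05, n_paths=10000, t_max=100.0):
          x = np.full(n_paths, x0)
          alive = np.ones(n_paths, dtype=bool)
          done = np.zeros(n_paths, dtype=bool)
          for _ in range(int(t_max / dt)):
              idx = alive & ~done
              if not idx.any():
                  break
              xi = x[idx]
              x[idx] = xi + mu(xi) * dt + sigma(xi) * np.sqrt(dt) * rng.normal(size=xi.size)
              alive[idx] &= rng.random(xi.size) > hazard(xi) * dt   # exogenous mortality
              alive[idx] &= x[idx] > 0.0                            # starvation absorption
              done |= alive & (x >= x_final)
          return done.mean()

      print("P(reach final size) ≈", p_reach_final())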

  15. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    The aim was to investigate the microstructure of dental zirconia ceramics prepared by two-step sintering. Nanostructured zirconia powder was dry-compacted, cold isostatically pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. Afterward, the T1 and T2 ranges for two-step sintering were determined. The effects of the different routes (two-step sintering versus conventional sintering) on the microstructure were discussed, and the influence of T1 and T2 on density and grain size was analyzed as well. The range of T1 was between 1450 degrees C and 1550 degrees C, and the range of T2 was between 1250 degrees C and 1350 degrees C. Compared with conventional sintering, a finer microstructure with higher density and smaller grains could be obtained by two-step sintering. Grain growth was dependent on T1, whereas density was not strongly related to T1. However, density was dependent on T2, whereas grain size was only minimally influenced. Two-step sintering can yield a sintered body with high density and small grains, which is good for optimizing the microstructure of dental zirconia ceramics.

  16. TiO2 Nanotube Arrays: Fabricated by Soft-Hard Template and the Grain Size Dependence of Field Emission Performance

    NASA Astrophysics Data System (ADS)

    Yang, Xuxin; Ma, Pei; Qi, Hui; Zhao, Jingxin; Wu, Qiang; You, Jichun; Li, Yongjin

    2017-11-01

    Highly ordered TiO2 nanotube (TNT) arrays were successfully synthesized by the combination of soft and hard templates. In their fabrication, anodic aluminum oxide membranes act as the hard template, while the self-assembly of polystyrene-block-poly(ethylene oxide) (PS-b-PEO) complexed with titanium-tetraisopropoxide (TTIP, the precursor of TiO2) provides the soft template to control the grain size of the TiO2 nanotubes. Our results indicate that the field emission (FE) performance depends crucially on the grain size of the calcined TiO2, which is dominated by the PS-b-PEO and its blending ratio with TTIP. The optimized sample (with a TTIP/PEO ratio of 3.87) exhibits excellent FE performance, involving both a low turn-on field of 3.3 V/μm and a high current density of 7.6 mA/cm² at 12.7 V/μm. The enhanced FE properties can be attributed to the low effective work function (1.2 eV) resulting from the smaller grain size of TiO2.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorenz, Matthias; Ovchinnikova, Olga S; Van Berkel, Gary J

    RATIONALE: Laser ablation provides for the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA)-ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions about 10% of laser-ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot-controlled pipette. The sampling spot size area with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 μm x 160 μm) was approximately 50 times smaller than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The set-up was successfully applied for the analysis of ink on glass and paper as well as the endogenous components in Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent-resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface. CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA-ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent-resistant materials.

  18. High-Definition Infrared Spectroscopic Imaging

    PubMed Central

    Reddy, Rohith K.; Walsh, Michael J.; Schulmerich, Matthew V.; Carney, P. Scott; Bhargava, Rohit

    2013-01-01

    The quality of images from an infrared (IR) microscope has traditionally been limited by considerations of throughput and signal-to-noise ratio (SNR). An understanding of the achievable quality as a function of instrument parameters, from first principles, is needed for improved instrument design. Here, we first present a model for light propagation through an IR spectroscopic imaging system based on scalar wave theory. The model analytically describes the propagation of light along the entire beam path from the source to the detector. The effect of various optical elements and of the sample in the microscope is understood in terms of the accessible spatial frequencies by using a Fourier optics approach, and simulations are conducted to gain insights into spectroscopic image formation. The optimal pixel size at the sample plane is calculated and shown to be much smaller than that in current mid-IR microscopy systems. A commercial imaging system is modified, and experimental data are presented to demonstrate the validity of the developed model. Building on this validated theoretical foundation, an optimal sampling configuration is set up. Acquired data were of high spatial quality but, as expected, of poorer SNR. Signal processing approaches were implemented to improve the spectral SNR. The resulting data demonstrated the ability to perform high-definition IR imaging in the laboratory by using minimally modified commercial instruments. PMID:23317676

  19. High-definition infrared spectroscopic imaging.

    PubMed

    Reddy, Rohith K; Walsh, Michael J; Schulmerich, Matthew V; Carney, P Scott; Bhargava, Rohit

    2013-01-01

    The quality of images from an infrared (IR) microscope has traditionally been limited by considerations of throughput and signal-to-noise ratio (SNR). An understanding of the achievable quality as a function of instrument parameters, from first principles, is needed for improved instrument design. Here, we first present a model for light propagation through an IR spectroscopic imaging system based on scalar wave theory. The model analytically describes the propagation of light along the entire beam path from the source to the detector. The effect of various optical elements and of the sample in the microscope is understood in terms of the accessible spatial frequencies by using a Fourier optics approach, and simulations are conducted to gain insights into spectroscopic image formation. The optimal pixel size at the sample plane is calculated and shown to be much smaller than that in current mid-IR microscopy systems. A commercial imaging system is modified, and experimental data are presented to demonstrate the validity of the developed model. Building on this validated theoretical foundation, an optimal sampling configuration is set up. Acquired data were of high spatial quality but, as expected, of poorer SNR. Signal processing approaches were implemented to improve the spectral SNR. The resulting data demonstrated the ability to perform high-definition IR imaging in the laboratory by using minimally modified commercial instruments.
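
    The paper's full scalar-wave treatment is not reproduced here, but the back-of-envelope version of the sampling argument is simple: the diffraction-limited resolution is of order 1.22·λ/NA, and Nyquist sampling requires pixels no larger than half of that at the sample plane. The sketch below tabulates this across the mid-IR for an assumed 0.65 NA objective; the exact optimal pixel size in the paper comes from the full model, not this estimate.

      # Back-of-envelope (not the paper's full scalar-wave model): diffraction-
      # limited resolution d ~ 1.22*lambda/NA and the Nyquist pixel size d/2
      # across the mid-IR, for an assumed 0.65 NA objective.
      wavelengths_um = [2.5, 4.0, 6.0, 8.0, 10.0]   # mid-IR wavelengths (um)
      NA = 0.65                                      # assumed objective NA

      for lam in wavelengths_um:
          d = 1.22 * lam / NA                        # Rayleigh-type resolution (um)
          print(f"lambda = {lam:4.1f} um  ->  resolution ≈ {d:4.1f} um, "
                f"Nyquist pixel ≤ {d / 2:4.1f} um")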

  20. Optimized Geometry for Superconducting Sensing Coils

    NASA Technical Reports Server (NTRS)

    Eom, Byeong Ho; Pananen, Konstantin; Hahn, Inseob

    2008-01-01

    An optimized geometry has been proposed for superconducting sensing coils that are used in conjunction with superconducting quantum interference devices (SQUIDs) in magnetic resonance imaging (MRI), magnetoencephalography (MEG), and related applications in which magnetic fields of small dipoles are detected. In designing a coil of this type, as in designing other sensing coils, one seeks to maximize the sensitivity of the detector of which the coil is a part, subject to geometric constraints arising from the proximity of other required equipment. In MRI or MEG, the main benefit of maximizing the sensitivity would be to enable minimization of measurement time. In general, to maximize the sensitivity of a detector based on a sensing coil coupled with a SQUID sensor, it is necessary to maximize the magnetic flux enclosed by the sensing coil while minimizing the self-inductance of this coil. Simply making the coil larger may increase its self-inductance and does not necessarily increase sensitivity because it also effectively increases the distance from the sample that contains the source of the signal that one seeks to detect. Additional constraints on the size and shape of the coil and on the distance from the sample arise from the fact that the sample is at room temperature but the coil and the SQUID sensor must be enclosed within a cryogenic shield to maintain superconductivity.
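
    The trade-off described above (more enclosed flux versus more self-inductance) can be illustrated with textbook expressions for a single circular pickup loop sensing an on-axis magnetic dipole: the coupled flux scales as a²/(a²+z²)^(3/2) while the loop inductance grows roughly as a·ln(8a/r_wire), so the energy-style figure of merit flux/√L peaks at a loop radius comparable to the standoff distance z. The numbers below are assumptions for illustration, not the proposed optimized geometry.

      # Toy figure-of-merit sweep for a circular pickup loop sensing an on-axis
      # magnetic dipole at distance z: flux rises with loop radius a, but so does
      # the inductance, so flux/sqrt(L) peaks near a ~ z. Textbook expressions;
      # the geometry numbers are assumptions, not the proposed design.
      import numpy as np

      mu0 = 4e-7 * np.pi
      z = 0.03            # sample-to-coil distance, m (assumed room-temperature gap)
      r_wire = 50e-6      # superconducting wire radius, m (assumed)
      m_dip = 1e-12       # source dipole moment, A*m^2 (arbitrary scale)

      a = np.linspace(0.005, 0.12, 200)                         # loop radii, m
      flux = mu0 * m_dip * a**2 / (2 * (a**2 + z**2) ** 1.5)     # dipole flux in loop
      L = mu0 * a * (np.log(8 * a / r_wire) - 2)                 # loop self-inductance
      fom = flux / np.sqrt(L)                                    # energy-style figure of merit

      best = a[np.argmax(fom)]
      print(f"figure of merit peaks at loop radius ≈ {best * 1000:.1f} mm "
            f"for a {z * 1000:.0f} mm standoff")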
