Sample records for consideration set size

  1. Combining the role of convenience and consideration set size in explaining fish consumption in Norway.

    PubMed

    Rortveit, Asbjorn Warvik; Olsen, Svein Ottar

    2009-04-01

    The purpose of this study is to explore how convenience orientation, perceived product inconvenience and consideration set size are related to attitudes towards fish and fish consumption. The authors present a structural equation model (SEM) based on the integration of two previous studies. The results of an SEM analysis using Lisrel 8.72 on data from a Norwegian consumer survey (n=1630) suggest that convenience orientation and perceived product inconvenience have a negative effect on both consideration set size and consumption frequency. Attitude towards fish has the greatest impact on consumption frequency. The results also indicate that perceived product inconvenience is a key variable since it has a significant impact on attitude, and on consideration set size and consumption frequency. Further, the analyses confirm earlier findings suggesting that the effect of convenience orientation on consumption is partially mediated through perceived product inconvenience. The study also confirms earlier findings suggesting that consideration set size affects consumption frequency. Practical implications drawn from this research are that the seafood industry would benefit from developing and positioning products that change beliefs about fish as an inconvenient product. Future research on other food categories is needed to enhance external validity.

  2. Determinants of Awareness, Consideration, and Choice Set Size in University Choice.

    ERIC Educational Resources Information Center

    Dawes, Philip L.; Brown, Jennifer

    2002-01-01

    Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)

  3. Setting monitoring objectives for landscape-size areas

    Treesearch

    Craig M. Olson; Dean Angelides

    2000-01-01

    The setting of objectives for monitoring schemes for landscape-size areas can be a complex task in today's regulatory and sociopolitical atmosphere. The technology available today, the regulatory environment, and the sociopolitical considerations require multiresource inventory and monitoring schemes, whether the ownership is industrial or for preservation....

  4. Calculating Interaction Energies Using First Principle Theories: Consideration of Basis Set Superposition Error and Fragment Relaxation

    ERIC Educational Resources Information Center

    Bowen, J. Philip; Sorensen, Jennifer B.; Kirschner, Karl N.

    2007-01-01

    The analysis explains the basis set superposition error (BSSE) and fragment relaxation involved in calculating the interaction energies using various first principle theories. Correlating the interacting fragments and increasing the size of the basis set can help in decreasing the BSSE to a great extent.

  5. A qualitative study of parents' perceptions and use of portion size strategies for preschool children's snacks.

    PubMed

    Blake, Christine E; Fisher, Jennifer Orlet; Ganter, Claudia; Younginer, Nicholas; Orloski, Alexandria; Blaine, Rachel E; Bruton, Yasmeen; Davison, Kirsten K

    2015-05-01

    Increases in childhood obesity correspond with shifts in children's snacking behaviors and food portion sizes. This study examined parents' conceptualizations of portion size and the strategies they use to portion snacks in the context of preschool-aged children's snacking. Semi-structured qualitative interviews were conducted with non-Hispanic white (W), African American (AA), and Hispanic (H) low-income parents (n = 60) of preschool-aged children living in Philadelphia and Boston. The interview examined parents' child snacking definitions, purposes, contexts, and frequency. Verbatim transcripts were analyzed using a grounded theory approach. Coding matrices compared responses by race/ethnicity, parent education, and household food security status. Parents commonly referenced portion sizes when describing children's snacks with phrases like "something small." Snack portion sizes were guided by considerations including healthfulness, location, hunger, and timing. Six strategies for portioning snacks were presented including use of small containers, subdividing large portions, buying prepackaged snacks, use of hand measurement, measuring cups, scales, and letting children determine portion size. Differences in considerations and strategies were seen between race/ethnic groups and by household food security status. Low-income parents of preschool-aged children described a diverse set of considerations and strategies related to portion sizes of snack foods offered to their children. Future studies should examine how these considerations and strategies influence child dietary quality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. A qualitative study of parents’ perceptions and use of portion size strategies for preschool children’s snacks

    PubMed Central

    Blake, Christine E.; Fisher, Jennifer Orlet; Ganter, Claudia; Younginer, Nicholas; Orloski, Alexandria; Blaine, Rachel E.; Bruton, Yasmeen; Davison, Kirsten K.

    2014-01-01

    Objective Increases in childhood obesity correspond with shifts in children’s snacking behaviors and food portion sizes. This study examined parents’ conceptualizations of portion size and the strategies they use to portion snacks in the context of preschool-aged children’s snacking. Methods Semi-structured qualitative interviews were conducted with non-Hispanic white (W), African American (AA), and Hispanic (H) low-income parents (n=60) of preschool-aged children living in Philadelphia and Boston. The interview examined parents’ child snacking definitions, purposes, contexts, and frequency. Verbatim transcripts were analyzed using a grounded theory approach. Coding matrices compared responses by race/ethnicity, parent education, and household food security status. Results Parents commonly referenced portion sizes when describing children’s snacks with phrases like “something small.” Snack portion sizes were guided by considerations including healthfulness, location, hunger, and timing. Six strategies for portioning snacks were presented including use of small containers, subdividing large portions, buying prepackaged snacks, use of hand measurement, measuring cups, scales, and letting children determine portion size. Differences in considerations and strategies were seen between race/ethnic groups and by household food security status. Conclusions Low-income parents of preschool-aged children described a diverse set of considerations and strategies related to portion sizes of snack foods offered to their children. Future studies should examine how these considerations and strategies influence child dietary quality. PMID:25447008

  7. Height-related trends in leaf xylem anatomy and shoot hydraulic characteristics in a tall conifer: safety versus efficiency in water transport

    Treesearch

    D.R. Woodruff; F.C. Meinzer; B. Lachenbruch

    2008-01-01

    Growth and aboveground biomass accumulation follow a common pattern as tree size increases, with productivity peaking when leaf area reaches its maximum and then declining as tree age and size increase. Age- and size-related declines in forest productivity are major considerations in setting the rotational age of commercial forests, and relate to issues of carbon...

  8. Rethinking the Health Center: Assessing Your Health Center and Setting Goals.

    ERIC Educational Resources Information Center

    McMillan, Nancy S.

    2001-01-01

    Camp health center management begins with assessing the population served, camp areas impacted, and the contract of care with parents. That information is used to plan the size of the center; its location in the camp; the type of equipment; and considerations such as medication management, infectious disease control, size of in- and out-patient…

  9. How large a training set is needed to develop a classifier for microarray data?

    PubMed

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
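
    As a rough illustration of the kind of question the paper addresses (not the authors' closed-form method), the following sketch simulates two-class expression data with a given standardized fold change, class prevalence, and number of genes, trains a simple nearest-centroid classifier, and estimates accuracy as the training-set size grows. All parameter values below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulated_accuracy(n_train, n_genes=5000, n_informative=20,
                           fold_change=1.0, prevalence=0.5,
                           n_test=500, n_reps=20):
        """Test accuracy of a nearest-centroid classifier trained on n_train
        simulated samples (hypothetical parameters, for illustration only)."""
        delta = np.zeros(n_genes)
        delta[:n_informative] = fold_change          # standardized mean difference
        accs = []
        for _ in range(n_reps):
            y_tr = rng.random(n_train) < prevalence
            if y_tr.all() or not y_tr.any():         # need both classes in the training set
                continue
            X_tr = rng.normal(size=(n_train, n_genes)) + np.outer(y_tr, delta)
            y_te = rng.random(n_test) < prevalence
            X_te = rng.normal(size=(n_test, n_genes)) + np.outer(y_te, delta)
            mu1, mu0 = X_tr[y_tr].mean(axis=0), X_tr[~y_tr].mean(axis=0)
            closer_to_1 = ((X_te - mu1) ** 2).sum(1) < ((X_te - mu0) ** 2).sum(1)
            accs.append(np.mean(closer_to_1 == y_te))
        return float(np.mean(accs))

    for n in (10, 20, 40, 80):
        print(n, round(simulated_accuracy(n), 3))
    ```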

  10. Analysis of SET pulses propagation probabilities in sequential circuits

    NASA Astrophysics Data System (ADS)

    Cai, Shuo; Yu, Fei; Yang, Yiqun

    2018-05-01

    As the feature size of CMOS transistors scales down, single event transient (SET) has been an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs. However, it is difficult to take numerous factors into account. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.

  11. Swimming Pools.

    ERIC Educational Resources Information Center

    Ministry of Housing and Local Government, London (England).

    Technical and engineering data are set forth on the design and construction of swimming pools. Consideration is given to site selection, pool construction, the comparative merits of combining open air and enclosed pools, and alternative uses of the pool. Guidelines are presented regarding--(1) pool size and use, (2) locker and changing rooms, (3)…

  12. Family size, the physical environment, and socioeconomic effects across the stature distribution.

    PubMed

    Carson, Scott Alan

    2012-04-01

    A neglected area in historical stature studies is the relationship between stature and family size. Using robust statistics and a large 19th century data set, this study documents a positive relationship between stature and family size across the stature distribution. The relationship between material inequality and health is the subject of considerable debate, and there was a positive relationship between stature and wealth and an inverse relationship between stature and material inequality. After controlling for family size and wealth variables, the paper reports a positive relationship between the physical environment and stature. Copyright © 2012 Elsevier GmbH. All rights reserved.

  13. A photovoltaic generator on coconut island

    NASA Astrophysics Data System (ADS)

    Sheridan, N. R.

    A description is given of the design principles of a photovoltaic-diesel power generator that has been constructed on Coconut Island, Torres Strait, to supply a village of 130 people with 240 V, 50 Hz electricity. Even though the solar fraction is only 0.4, the system sets a precedent for Australia with an array size of 23 kW. The uniqueness arises, however, from the fact that it is a stand-alone, inverter-driven system of considerable size with a sine-wave output.

  14. Engaging Students in Physical Education: Key Challenges and Opportunities for Physical Educators in Urban Settings

    ERIC Educational Resources Information Center

    Sliwa, Sarah; Nihiser, Allison; Lee, Sarah; McCaughtry, Nathan; Culp, Brian; Michael, Shannon

    2017-01-01

    In October 2009, "JOPERD" published a special issue about "Engaging Urban Youths in Physical Education and Physical Activity." Seven years later, many of the considerations mentioned remain relevant, such as large class sizes, limited access to equipment, and the lack of a dedicated gymnasium or outdoor space. These structural…

  15. Randomness, Sample Size, Imagination and Metacognition: Making Judgments about Differences in Data Sets

    ERIC Educational Resources Information Center

    Stack, Sue; Watson, Jane

    2013-01-01

    There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…

  16. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments, designed to simulate asteroid collisions, attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to using targets considerably smaller than the modelled objects. It is therefore necessary to develop some methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained with a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and the fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  17. Detection of linkage between a quantitative trait and a marker locus by the lod score method: sample size and sampling considerations.

    PubMed

    Demenais, F; Lathrop, G M; Lalouel, J M

    1988-07-01

    A simulation study is here conducted to measure the power of the lod score method to detect linkage between a quantitative trait and a marker locus in various situations. The number of families necessary to detect such linkage with 80% power is assessed for different sets of parameters at the trait locus and different values of the recombination fraction. The effects of varying the mode of sampling families and the sibship size are also evaluated.

  18. Using RFID to Enhance Security in Off-Site Data Storage

    PubMed Central

    Lopez-Carmona, Miguel A.; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R.

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system’s benefits in terms of efficiency and failure prevention. PMID:22163638

  19. Using RFID to enhance security in off-site data storage.

    PubMed

    Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2010-01-01

    Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.

  20. Comparative assessment of nanomaterial definitions and safety evaluation considerations.

    PubMed

    Boverhof, Darrell R; Bramante, Christina M; Butala, John H; Clancy, Shaun F; Lafranconi, Mark; West, Jay; Gordon, Steve C

    2015-10-01

    Nanomaterials continue to bring promising advances to science and technology. In concert have come calls for increased regulatory oversight to ensure their appropriate identification and evaluation, which has led to extensive discussions about nanomaterial definitions. Numerous nanomaterial definitions have been proposed by government, industry, and standards organizations. We conducted a comprehensive comparative assessment of existing nanomaterial definitions put forward by governments to highlight their similarities and differences. We found that the size limits used in different definitions were inconsistent, as were considerations of other elements, including agglomerates and aggregates, distributional thresholds, novel properties, and solubility. Other important differences included consideration of number size distributions versus weight distributions and natural versus intentionally-manufactured materials. Overall, the definitions we compared were not in alignment, which may lead to inconsistent identification and evaluation of nanomaterials and could have adverse impacts on commerce and public perceptions of nanotechnology. We recommend a set of considerations that future discussions of nanomaterial definitions should consider for describing materials and assessing their potential for health and environmental impacts using risk-based approaches within existing assessment frameworks. Our intent is to initiate a dialogue aimed at achieving greater clarity in identifying those nanomaterials that may require additional evaluation, not to propose a formal definition. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  1. The Pastoral Potential of Audio Feedback: A Review of the Literature

    ERIC Educational Resources Information Center

    Dixon, Stephen

    2015-01-01

    This paper surveys the literature on the use of audio feedback in higher education, where assignment feedback is sent as a recorded mp3 to students. Findings from the literature are set in the context of considerable changes to the HE sector over the last 20 years, including increased class sizes and less face-to-face contact between staff and…

  2. Anomalous or regular capacitance? The influence of pore size dispersity on double-layer formation

    NASA Astrophysics Data System (ADS)

    Jäckel, N.; Rodner, M.; Schreiber, A.; Jeongwook, J.; Zeiger, M.; Aslan, M.; Weingarth, D.; Presser, V.

    2016-09-01

    The energy storage mechanism of electric double-layer capacitors is governed by ion electrosorption at the electrode surface. This process requires high surface area electrodes, typically highly porous carbons. In common organic electrolytes, bare ion sizes are below one nanometer but they are larger when we consider their solvation shell. In contrast, ionic liquid electrolytes are free of solvent molecules, but cation-anion coordination requires special consideration. By matching pore size and ion size, two seemingly conflicting views have emerged: either an increase in specific capacitance with smaller pore size or a constant capacitance contribution of all micro- and mesopores. In our work, we revisit this issue by using a comprehensive set of electrochemical data and a pore size incremental analysis to identify the influence of certain ranges in the pore size distribution on the ion electrosorption capacity. We see a difference in solvation of ions in organic electrolytes depending on the applied voltage and a cation-anion interaction of ionic liquids in nanometer-sized pores.

  3. Maximizing ecological and evolutionary insight in bisulfite sequencing data sets

    PubMed Central

    Lea, Amanda J.; Vilgalys, Tauras P.; Durst, Paul A.P.; Tung, Jenny

    2017-01-01

    Preface Genome-scale bisulfite sequencing approaches have opened the door to ecological and evolutionary studies of DNA methylation in many organisms. These approaches can be powerful. However, they introduce new methodological and statistical considerations, some of which are particularly relevant to non-model systems. Here, we highlight how these considerations influence a study’s power to link methylation variation with a predictor variable of interest. Relative to current practice, we argue that sample sizes will need to increase to provide robust insights. We also provide recommendations for overcoming common challenges and an R Shiny app to aid in study design. PMID:29046582

  4. Sample size considerations using mathematical models: an example with Chlamydia trachomatis infection and its sequelae pelvic inflammatory disease.

    PubMed

    Herzog, Sereina A; Low, Nicola; Berghold, Andrea

    2015-06-19

    The success of an intervention to prevent the complications of an infection is influenced by the natural history of the infection. Assumptions about the temporal relationship between infection and the development of sequelae can affect the predicted effect size of an intervention and the sample size calculation. This study investigates how a mathematical model can be used to inform sample size calculations for a randomised controlled trial (RCT) using the example of Chlamydia trachomatis infection and pelvic inflammatory disease (PID). We used a compartmental model to imitate the structure of a published RCT. We considered three different processes for the timing of PID development, in relation to the initial C. trachomatis infection: immediate, constant throughout, or at the end of the infectious period. For each process we assumed that, of all women infected, the same fraction would develop PID in the absence of an intervention. We examined two sets of assumptions used to calculate the sample size in a published RCT that investigated the effect of chlamydia screening on PID incidence. We also investigated the influence of the natural history parameters of chlamydia on the required sample size. The assumed event rates and effect sizes used for the sample size calculation implicitly determined the temporal relationship between chlamydia infection and PID in the model. Even small changes in the assumed PID incidence and relative risk (RR) led to considerable differences in the hypothesised mechanism of PID development. The RR and the sample size needed per group also depend on the natural history parameters of chlamydia. Mathematical modelling helps to understand the temporal relationship between an infection and its sequelae and can show how uncertainties about natural history parameters affect sample size calculations when planning a RCT.
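
    The starting point for such calculations is the standard two-proportion sample size formula. The sketch below computes the number of women per trial arm under a hypothetical control-group PID incidence and a hypothetical relative risk; the values are placeholders, not the assumptions of the trial discussed in the paper.

    ```python
    from math import sqrt, ceil
    from statistics import NormalDist

    def n_per_group(p_control, relative_risk, alpha=0.05, power=0.80):
        """Sample size per arm for comparing two proportions (normal approximation)."""
        p_treat = p_control * relative_risk
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p_control + p_treat) / 2
        numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                     + z_b * sqrt(p_control * (1 - p_control)
                                  + p_treat * (1 - p_treat))) ** 2
        return ceil(numerator / (p_control - p_treat) ** 2)

    # hypothetical example: 3% PID incidence without screening, relative risk 0.5
    print(n_per_group(0.03, 0.5))   # roughly 1500 women per arm
    ```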

  5. Some considerations about Gaussian basis sets for electric property calculations

    NASA Astrophysics Data System (ADS)

    Arruda, Priscilla M.; Canal Neto, A.; Jorge, F. E.

    Recently, segmented contracted basis sets of double, triple, and quadruple zeta valence quality plus polarization functions (XZP, X = D, T, and Q, respectively) for the atoms from H to Ar were reported. In this work, with the objective of having a better description of polarizabilities, the QZP set was augmented with diffuse (s and p symmetries) and polarization (p, d, f, and g symmetries) functions that were chosen to maximize the mean dipole polarizability at the UHF and UMP2 levels, respectively. At the HF and B3LYP levels of theory, electric dipole moment and static polarizability for a sample of molecules were evaluated. Comparison with experimental data and results obtained with a similar size basis set, whose diffuse functions were optimized for the ground state energy of the anion, was done.

  6. Object Classification With Joint Projection and Low-Rank Dictionary Learning.

    PubMed

    Foroughi, Homa; Ray, Nilanjan; Hong Zhang

    2018-02-01

    For an object classification system, the most critical obstacles toward real-world applications are often caused by large intra-class variability, arising from different lightings, occlusion, and corruption, in limited sample sets. Most methods in the literature would fail when the training samples are heavily occluded, corrupted or have significant illumination or viewpoint variations. Besides, most of the existing methods, and especially deep learning-based methods, need large training sets to achieve a satisfactory recognition performance. Although using a pre-trained network on a generic large-scale data set and fine-tuning it to the small-sized target data set is a widely used technique, this would not help when the contents of the base and target data sets are very different. To address these issues simultaneously, we propose a joint projection and low-rank dictionary learning method using dual graph constraints. Specifically, a structured class-specific dictionary is learned in the low-dimensional space, and the discrimination is further improved by imposing a graph constraint on the coding coefficients that maximizes the intra-class compactness and inter-class separability. We enforce structural incoherence and low-rank constraints on sub-dictionaries to reduce the redundancy among them, and also make them robust to variations and outliers. To preserve the intrinsic structure of data, we introduce a supervised neighborhood graph into the framework to make the proposed method robust to small-sized and high-dimensional data sets. Experimental results on several benchmark data sets verify the superior performance of our method for object classification of small-sized data sets, which include a considerable amount of different kinds of variation, and may have high-dimensional feature vectors.

  7. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, the perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest to develop appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model including both quantitative and qualitative objectives to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of manufacturer and to improve service aspects of retailing simultaneously to set different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is then solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  8. Factors Associated with the Performance and Cost-Effectiveness of Using Lymphatic Filariasis Transmission Assessment Surveys for Monitoring Soil-Transmitted Helminths: A Case Study in Kenya

    PubMed Central

    Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.

    2015-01-01

    Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730

  9. How reliably can a material be classified as a nanomaterial? Available particle-sizing techniques at work

    NASA Astrophysics Data System (ADS)

    Babick, Frank; Mielke, Johannes; Wohlleben, Wendel; Weigel, Stefan; Hodoroaba, Vasile-Dan

    2016-06-01

    Currently established and projected regulatory frameworks require the classification of materials (whether nano or non-nano) as specified by respective definitions, most of which are based on the size of the constituent particles. This brings up the question of whether currently available techniques for particle size determination are capable of reliably classifying materials that potentially fall under these definitions. In this study, a wide variety of characterisation techniques, including counting, fractionating, and spectroscopic techniques, has been applied to the same set of materials under harmonised conditions. The selected materials comprised well-defined quality control materials (spherical, monodisperse) as well as industrial materials of complex shapes and considerable polydispersity. As a result, each technique could be evaluated with respect to the determination of the number-weighted median size. Recommendations on the most appropriate and efficient use of techniques for different types of material are given.

  10. Classification of brain MRI with big data and deep 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim

    2018-02-01

    Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic Resonance Imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: First, volumetric medical image data is considerably scarcer than natural images. Second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI) or Alzheimer's disease (AD). We use more than 20,000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, only with standard preprocessing, but without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets, and the independent CADDementia benchmark.
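
    A minimal 3D convolutional classifier of the general kind described can be sketched in PyTorch; the layer widths, input resolution, and the three output classes below are illustrative assumptions, not the architecture used in the paper.

    ```python
    import torch
    import torch.nn as nn

    class Small3DCNN(nn.Module):
        """Illustrative 3-class (HC / MCI / AD) volumetric classifier; sizes are assumptions."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),              # global pooling -> (N, 32, 1, 1, 1)
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                         # x: (N, 1, D, H, W) MRI volumes
            return self.classifier(self.features(x).flatten(1))

    model = Small3DCNN()
    dummy = torch.randn(2, 1, 64, 64, 64)             # two fake volumes, standard preprocessing assumed
    print(model(dummy).shape)                          # torch.Size([2, 3])
    ```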

  11. Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems

    PubMed Central

    Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.

    2014-01-01

    Products such as cars, trucks, and heavy machinery are assembled on two-sided assembly lines. Assembly line balancing has significant impacts on the performance and productivity of flow line manufacturing systems and has been an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which the tasks are to be assigned to the L side, the R side, or either side (addressed as E). Two objectives, minimum number of workstations and minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solving a multiobjective optimization problem: the first combines all the objectives into a single composite function or moves all but one objective to the constraint set; the second determines the Pareto optimal solution set. This paper proposes two heuristics to evolve the optimal Pareto front for the TALBP under consideration: an Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size and a Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems and their performances are compared with a set of test problems. PMID:24790568

  12. Investigation on the ability of an ultrasound bubble detector to deliver size measurements of gaseous bubbles in fluid lines by using a glass bead model.

    PubMed

    Eitschberger, S; Henseler, A; Krasenbrink, B; Oedekoven, B; Mottaghy, K

    2001-01-01

    Detectors based on ultrasonic principles are today's state-of-the-art devices to detect gaseous bubbles that may be present in extracorporeal circuits (ECC) for various reasons. Referring to theoretical considerations and other studies, it also seems possible to use this technology to measure the size of detected bubbles, thus offering the chance to evaluate their potentially hazardous effect if introduced into a patient's circulation. Based on these considerations, a commercially available ultrasound bubble detector has been developed by Hatteland Instrumentering, Norway, to deliver bubble size measurements by means of supplementary software. This device consists of an ultrasound sensor that can be clamped onto the ECC tubing, and the necessary electronic equipment to amplify and rectify the received signals. It is supplemented by software that processes these signals and presents them as specific data. On the basis of our knowledge and experience with bubble detection by ultrasound technology, we believe it is particularly difficult to meet all the requirements for size measurements, especially if these are to be achieved by using a mathematical procedure rather than exact devices. Therefore, we tried to evaluate the quality of the offered bubble detector in measuring bubble sizes. After establishing a standardized test stand, including a roller pump and a temperature sensor, we performed several sets of experiments using the manufacturer's software and a program specifically designed at our department for this purpose. The first set revealed that the manufacturer's recommended calibration material did not meet essential requirements as established by other authors. Having solved that problem, we could actually demonstrate that the ultrasonic field, as generated by the bubble detector, has been correctly calculated by the manufacturer. Simply, it is a field having the strongest reflecting region in the center, subsequently losing strength toward the ECC tubing's edge. The following set of experiments revealed that the supplementary software not only does not compensate for the ultrasonic field's inhomogeneity, but, furthermore, delivers results that are inappropriate to the applied calibration material. In the last set of experiments, we were able to demonstrate that the signals as recorded by the bubble detector heavily depend upon the circulating fluid's temperature, a fact that the manufacturer does not address. Therefore, it seems impossible to resolve all these sensor-related problems by ever-increasing mathematical intervention. We believe it is more appropriate to develop a new kind of ultrasound device, free of these shortcomings. This seems to be particularly useful, because the problem of determining the size of gaseous bubbles in ECC is not yet solved.

  13. SymDex: increasing the efficiency of chemical fingerprint similarity searches for comparing large chemical libraries by using query set indexing.

    PubMed

    Tai, David; Fang, Jianwen

    2012-08-27

    The large sizes of today's chemical databases require efficient algorithms to perform similarity searches. It can be very time-consuming to compare two large chemical databases. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms by creating heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals so that redundant calculations that arise in the case of sequential searches are eliminated. We implement this novel algorithm by developing a similarity search program called Symmetric inDexing or SymDex. SymDex shows over a 232% maximum speedup compared to the state-of-the-art single query search algorithm over real data for various fingerprint lengths. Considerable speedup is even seen for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
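
    The per-pair computation underlying such searches is typically the Tanimoto coefficient on bit-vector fingerprints. A minimal brute-force sketch of what batch indexing schemes like SymDex aim to accelerate (not the SymDex algorithm itself) looks like this:

    ```python
    def tanimoto(fp_a: frozenset, fp_b: frozenset) -> float:
        """Tanimoto (Jaccard) similarity of two fingerprints given as sets of on-bit indices."""
        inter = len(fp_a & fp_b)
        union = len(fp_a) + len(fp_b) - inter
        return inter / union if union else 1.0

    def brute_force_screen(queries, library, threshold=0.7):
        """Compare every query against every library compound; indexing methods aim to
        prune pairs that cannot reach the threshold instead of computing them all."""
        hits = []
        for qi, q in enumerate(queries):
            for li, l in enumerate(library):
                s = tanimoto(q, l)
                if s >= threshold:
                    hits.append((qi, li, round(s, 3)))
        return hits

    queries = [frozenset({1, 5, 9, 42}), frozenset({2, 5, 9})]
    library = [frozenset({1, 5, 9, 42, 77}), frozenset({3, 4})]
    print(brute_force_screen(queries, library))
    ```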

  14. Unintended consequences of increasing block tariffs pricing policy in urban water

    NASA Astrophysics Data System (ADS)

    Dahan, Momi; Nisan, Udi

    2007-03-01

    We exploit a unique data set to estimate the degree of economies of scale in water consumption, controlling for the standard demand factors. We found a linear Engel curve in water consumption: each additional household member consumes the same water quantity regardless of household size, except for a single-person household. Our evidence suggests that the increasing block tariffs (IBT) structure, which is indifferent to household size, has unintended consequences. Large households, which are also likely to be poor given the negative correlation between income and household size, are charged a higher price for water. The degree of economies of scale found here erodes the effectiveness of IBT price structure as a way to introduce an equity consideration. This implication is important in view of the global trend toward the use of IBT.

  15. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
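
    For reference, the textbook integer linear program for the farthest string problem (a larger formulation than the reduced-size models the paper proposes) uses one binary variable per position/character pair. The sketch below solves a tiny hypothetical instance with PuLP.

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpInteger, value

    strings = ["ACGT", "AGGT", "ACCT"]               # hypothetical tiny instance
    alphabet = sorted(set("".join(strings)))
    m = len(strings[0])

    prob = LpProblem("farthest_string", LpMaximize)
    x = LpVariable.dicts("x", (range(m), alphabet), cat=LpBinary)   # x[j][c]: position j takes character c
    d = LpVariable("d", lowBound=0, cat=LpInteger)                   # minimum Hamming distance to the inputs

    prob += d                                                        # maximise the worst-case distance
    for j in range(m):
        prob += lpSum(x[j][c] for c in alphabet) == 1                # exactly one character per position
    for s in strings:
        prob += m - lpSum(x[j][s[j]] for j in range(m)) >= d         # distance to s is m minus matches

    prob.solve()
    farthest = "".join(c for j in range(m) for c in alphabet if value(x[j][c]) > 0.5)
    print(farthest, int(value(d)))
    ```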

  16. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations.

    PubMed

    Dziak, John J; Nahum-Shani, Inbal; Collins, Linda M

    2012-06-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements, such as the number of clusters, the number of lower-level units, and the intraclass correlation, affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. (c) 2012 APA, all rights reserved
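
    A common shortcut for reasoning about clustered power is the design effect 1 + (m - 1)ρ, where m is the cluster size and ρ the intraclass correlation. The sketch below is a generic cluster-adjusted calculation with hypothetical numbers, not the article's Monte Carlo simulations, and shows how the required number of clusters per condition grows with ρ.

    ```python
    from math import ceil
    from statistics import NormalDist

    def clusters_per_condition(effect_size, cluster_size, icc, alpha=0.05, power=0.80):
        """Clusters needed per condition to detect a standardized main effect,
        inflating the independent-data sample size by the design effect."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        n_independent = 2 * ((z_a + z_b) / effect_size) ** 2    # individuals per condition, ignoring clustering
        design_effect = 1 + (cluster_size - 1) * icc             # variance inflation from clustering
        return ceil(n_independent * design_effect / cluster_size)

    for icc in (0.01, 0.05, 0.10):                               # hypothetical ICC values
        print(icc, clusters_per_condition(effect_size=0.3, cluster_size=30, icc=icc))
    ```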

  17. Plasmon‐Mediated Solar Energy Conversion via Photocatalysis in Noble Metal/Semiconductor Composites

    PubMed Central

    Wang, Mengye; Ye, Meidan; Iocozzia, James

    2016-01-01

    Plasmonics has remained a prominent and growing field over the past several decades. The coupling of various chemical and photo phenomena has sparked considerable interest in plasmon‐mediated photocatalysis. Given that plasmonic photocatalysis has only been developed for a relatively short period, considerable progress has been made in improving the absorption across the full solar spectrum and the efficiency of photo‐generated charge carrier separation. With recent advances in fundamental (i.e., mechanisms) and experimental studies (i.e., the influence of size, geometry, surrounding dielectric field, etc.) on plasmon‐mediated photocatalysis, the rational design and synthesis of metal/semiconductor hybrid nanostructure photocatalysts has been realized. This review seeks to highlight the recent impressive developments in plasmon‐mediated photocatalytic mechanisms (i.e., Schottky junction, direct electron transfer, enhanced local electric field, plasmon resonant energy transfer, and scattering and heating effects), summarize a set of factors (i.e., size, geometry, dielectric environment, loading amount and composition of plasmonic metal, and nanostructure and properties of semiconductors) that largely affect plasmonic photocatalysis, and finally conclude with a perspective on future directions within this rich field of research. PMID:27818901

  18. Plasmon-Mediated Solar Energy Conversion via Photocatalysis in Noble Metal/Semiconductor Composites.

    PubMed

    Wang, Mengye; Ye, Meidan; Iocozzia, James; Lin, Changjian; Lin, Zhiqun

    2016-06-01

    Plasmonics has remained a prominent and growing field over the past several decades. The coupling of various chemical and photo phenomena has sparked considerable interest in plasmon-mediated photocatalysis. Given that plasmonic photocatalysis has only been developed for a relatively short period, considerable progress has been made in improving the absorption across the full solar spectrum and the efficiency of photo-generated charge carrier separation. With recent advances in fundamental (i.e., mechanisms) and experimental studies (i.e., the influence of size, geometry, surrounding dielectric field, etc.) on plasmon-mediated photocatalysis, the rational design and synthesis of metal/semiconductor hybrid nanostructure photocatalysts has been realized. This review seeks to highlight the recent impressive developments in plasmon-mediated photocatalytic mechanisms (i.e., Schottky junction, direct electron transfer, enhanced local electric field, plasmon resonant energy transfer, and scattering and heating effects), summarize a set of factors (i.e., size, geometry, dielectric environment, loading amount and composition of plasmonic metal, and nanostructure and properties of semiconductors) that largely affect plasmonic photocatalysis, and finally conclude with a perspective on future directions within this rich field of research.

  19. The dust environment of comet 67P/Churyumov-Gerasimenko: results from Monte Carlo dust tail modelling applied to a large ground-based observation data set

    NASA Astrophysics Data System (ADS)

    Moreno, Fernando; Muñoz, Olga; Gutiérrez, Pedro J.; Lara, Luisa M.; Snodgrass, Colin; Lin, Zhong Y.; Della Corte, Vincenzo; Rotundi, Alessandra; Yagi, Masafumi

    2017-07-01

    We present an extensive data set of ground-based observations and models of the dust environment of comet 67P/Churyumov-Gerasimenko covering a large portion of the orbital arc from about 4.5 au pre-perihelion through 3.0 au post-perihelion, acquired during the current orbit. In addition, we have also applied the model to a dust trail image acquired during this orbit, as well as to dust trail observations obtained during previous orbits, in both the visible and the infrared. The results of the Monte Carlo modelling of the dust tail and trail data are generally consistent with the in situ results reported so far by the Rosetta instruments Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) and Grain Impact Analyser and Dust Accumulator (GIADA). We found the comet nucleus already active at 4.5 au pre-perihelion, with a dust production rate increasing up to ~3000 kg s-1 some 20 d after perihelion passage. The dust size distribution at sizes smaller than r = 1 mm is linked to the nucleus seasons, being described by a power law of index -3.0 during the comet nucleus southern hemisphere winter but becoming considerably steeper, with values between -3.6 and -4.3, during the nucleus southern hemisphere summer, which includes perihelion passage (from about 1.7 au inbound to 2.4 au outbound). This agrees with the increase of the steepness of the dust size distribution found from GIADA measurements at perihelion, which show a power-law index of -3.7. The size distribution at sizes larger than 1 mm for the current orbit is set to a power law of index -3.6, which is near the average value of in situ measurements by OSIRIS on large particles. However, in order to fit the trail data acquired during past orbits previous to the 2009 perihelion passage, a steeper power-law index of -4.1 has been set at those dates, in agreement with previous trail modelling. The minimum particle size is set at r = 10 μm, and the maximum size, which increases with decreasing heliocentric distance, lies in the 1-40 cm radius domain. The particle terminal velocities are found to be consistent with the in situ measurements as derived from the instrument GIADA on board Rosetta.
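
    Drawing particle radii from a power-law size distribution n(r) ∝ r^α between fixed limits is a single inverse-transform step. The sketch below reuses the abstract's α = -3.6 and the 10 μm to 40 cm range; the sampling routine is generic and not part of the tail-fitting model itself.

    ```python
    import numpy as np

    def sample_power_law(alpha, r_min, r_max, n, seed=0):
        """Draw n radii from n(r) ~ r**alpha on [r_min, r_max] by inverse transform
        (valid for alpha != -1)."""
        u = np.random.default_rng(seed).random(n)
        a1 = alpha + 1.0
        return (r_min**a1 + u * (r_max**a1 - r_min**a1)) ** (1.0 / a1)

    # alpha = -3.6, radii from 10 micrometres to 40 cm (values quoted in the abstract), in metres
    radii = sample_power_law(-3.6, 1e-5, 0.40, 100_000)
    print(radii.min(), np.median(radii), radii.max())
    ```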

  20. Density-Dependent Quantized Least Squares Support Vector Machine for Large Data Sets.

    PubMed

    Nan, Shengyu; Sun, Lei; Chen, Badong; Lin, Zhiping; Toh, Kar-Ann

    2017-01-01

    Based on the knowledge that input data distribution is important for learning, a data density-dependent quantization scheme (DQS) is proposed for sparse input data representation. The usefulness of the representation scheme is demonstrated by using it as a data preprocessing unit attached to the well-known least squares support vector machine (LS-SVM) for application on big data sets. Essentially, the proposed DQS adopts a single shrinkage threshold to obtain a simple quantization scheme, which adapts its outputs to input data density. With this quantization scheme, a large data set is quantized to a small subset where considerable sample size reduction is generally obtained. In particular, the sample size reduction can save significant computational cost when using the quantized subset for feature approximation via the Nyström method. Based on the quantized subset, the approximated features are incorporated into LS-SVM to develop a data density-dependent quantized LS-SVM (DQLS-SVM), where an analytic solution is obtained in the primal solution space. The developed DQLS-SVM is evaluated on synthetic and benchmark data with particular emphasis on large data sets. Extensive experimental results show that the learning machine incorporating DQS attains not only high computational efficiency but also good generalization performance.
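
    The Nyström step referred to, building approximate kernel features from a small retained subset, can be sketched in a few lines of NumPy. The density-dependent quantization itself is the paper's contribution and is only stood in for here by random subsampling, with an RBF kernel and hypothetical parameters.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def nystrom_features(X, landmarks, gamma=0.5, eps=1e-10):
        """Feature map Phi such that Phi @ Phi.T approximates the full kernel matrix K(X, X)."""
        K_mm = rbf_kernel(landmarks, landmarks, gamma)
        K_nm = rbf_kernel(X, landmarks, gamma)
        eigvals, eigvecs = np.linalg.eigh(K_mm)
        return K_nm @ (eigvecs / np.sqrt(np.maximum(eigvals, eps)))   # shape (n, m)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    landmarks = X[rng.choice(len(X), 50, replace=False)]   # stand-in for the quantized DQS subset
    Phi = nystrom_features(X, landmarks)
    print(Phi.shape)                                       # (2000, 50): features for a linear/LS-SVM solver
    ```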

  1. Intraspecific competition and high food availability are associated with insular gigantism in a lizard.

    PubMed

    Pafilis, Panayiotis; Meiri, Shai; Foufopoulos, Johannes; Valakos, Efstratios

    2009-09-01

    Resource availability, competition, and predation commonly drive body size evolution. We assess the impact of high food availability and the consequent increased intraspecific competition, as expressed by tail injuries and cannibalism, on body size in Skyros wall lizards (Podarcis gaigeae). Lizard populations on islets surrounding Skyros (Aegean Sea) all have fewer predators and competitors than on Skyros but differ in the numbers of nesting seabirds. We predicted the following: (1) the presence of breeding seabirds (providing nutrients) will increase lizard population densities; (2) dense lizard populations will experience stronger intraspecific competition; and (3) such aggression will be associated with larger average body size. We found a positive correlation between seabird and lizard densities. Cannibalism and tail injuries were considerably higher in dense populations. Increases in cannibalism and tail loss were associated with large body sizes. Adult cannibalism on juveniles may select for rapid growth, fuelled by high food abundance, thus setting the stage for the evolution of gigantism.

  2. hERG blocking potential of acids and zwitterions characterized by three thresholds for acidity, size and reactivity.

    PubMed

    Nikolov, Nikolai G; Dybdahl, Marianne; Jónsdóttir, Svava Ó; Wedebye, Eva B

    2014-11-01

    Ionization is a key factor in hERG K(+) channel blocking, and acids and zwitterions are known to be less probable hERG blockers than bases and neutral compounds. However, a considerable number of acidic compounds block hERG, and the physico-chemical attributes which discriminate acidic blockers from acidic non-blockers have not been fully elucidated. We propose a rule for prediction of hERG blocking by acids and zwitterionic ampholytes based on thresholds for only three descriptors related to acidity, size and reactivity. The training set of 153 acids and zwitterionic ampholytes was predicted with a concordance of 91% by a decision tree based on the rule. Two external validations were performed with sets of 35 and 48 observations, respectively, both showing concordances of 91%. In addition, a global QSAR model of hERG blocking was constructed based on a large diverse training set of 1374 chemicals covering all ionization classes, externally validated showing high predictivity and compared to the decision tree. The decision tree was found to be superior for the acids and zwitterionic ampholytes classes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Quantification of the evolution of firm size distributions due to mergers and acquisitions.

    PubMed

    Lera, Sandro Claudio; Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). In doing so, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potentially large impact. In this article, we take a first step towards accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.

  4. Solvent extraction employing a static micromixer: a simple, robust and versatile technology for the microencapsulation of proteins.

    PubMed

    Freitas, S; Walz, A; Merkle, H P; Gander, B

    2003-01-01

    The potential of a static micromixer for the production of protein-loaded biodegradable polymeric microspheres by a modified solvent extraction process was examined. The mixer consists of an array of microchannels and features a simple set-up, occupies very little space, lacks moving parts and offers simple control of the microsphere size. Scale-up from lab bench to industrial production is easily feasible through parallel installation of a sufficient number of micromixers ('number-up'). Poly(lactic-co-glycolic acid) microspheres loaded with a model protein, bovine serum albumin (BSA), were prepared. The influence of various process and formulation parameters on the characteristics of the microspheres was examined with special focus on particle size distribution. Microspheres with monomodal size distributions having mean diameters of 5-30 µm were produced with excellent reproducibility. Particle size distributions were largely unaffected by polymer solution concentration, polymer type and nominal BSA load, but depended on the polymer solvent. Moreover, particle mean diameters could be varied in a considerable range by modulating the flow rates of the mixed fluids. BSA encapsulation efficiencies were mostly in the region of 75-85% and product yields ranged from 90 to 100%. Because of its simple set-up and its suitability for continuous production, static micromixing is suggested for the automated and aseptic production of protein-loaded microspheres.

  5. Tree-based flood damage modeling of companies: Damage processes and model performance

    NASA Astrophysics Data System (ADS)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.
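
    The sector-specific modelling strategy described above can be sketched as follows; the data file and column names ("sector", "damage") are hypothetical, and the hyperparameters are illustrative rather than those of the study.

```python
# Sector-specific random forests versus a pooled view: one model per sector,
# evaluated on a held-out split. The CSV file and the "sector"/"damage"
# columns are hypothetical; hyperparameters are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("company_flood_survey.csv")   # hypothetical survey data
feature_cols = [c for c in df.columns if c not in ("sector", "damage")]

errors = {}
for sector, group in df.groupby("sector"):
    X_train, X_test, y_train, y_test = train_test_split(
        group[feature_cols], group["damage"], test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    errors[sector] = mean_absolute_error(y_test, model.predict(X_test))

print(errors)   # compare prediction errors across company sectors
```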

  6. System and technology considerations for space-based air traffic surveillance

    NASA Technical Reports Server (NTRS)

    Vaisnys, A.

    1986-01-01

    This paper describes the system trade-offs examined in a recent study of space-based air traffic surveillance. Three system options, each satisfying a set of different constraints, were considered. The main difference in the technology needed to implement the three systems was determined to be the size of the spacecraft antenna aperture. It was found that essentially equivalent position location accuracy could be achieved with apertures from 50 meters down to less than a meter in diameter, depending on the choice of signal structure and on the desired user update rate.

  7. Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.

    PubMed

    Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi

    2014-12-01

    To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis is that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation using two wire sizes (Damon 0.014 and 0.018 inch) were measured with a sample size of 144. The distribution and variation in force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is less than would occur based on the linear theory that applies to stainless steel wires. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.

  8. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
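
    A minimal Monte Carlo sketch of the kind of simulation discussed above is given below, assuming standard maximum-likelihood fitting via statsmodels; it only illustrates low-EPV bias and a crude separation flag, and does not reproduce the authors' simulation design or Firth's correction.

```python
# Simulate low-EPV logistic regression: 3 covariates, few events, many runs.
# Ordinary maximum likelihood only; separation is flagged crudely via
# exploding coefficient estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
true_beta = np.array([-1.5, 0.7, 0.7, 0.7])   # intercept + 3 covariates
n_obs, n_sims = 60, 500                       # few events -> low EPV
estimates, flagged = [], 0

for _ in range(n_sims):
    X = sm.add_constant(rng.normal(size=(n_obs, 3)))
    p = 1.0 / (1.0 + np.exp(-X @ true_beta))
    y = rng.binomial(1, p)
    try:
        fit = sm.Logit(y, X).fit(disp=0)
        if np.abs(fit.params).max() > 10:     # crude (near-)separation flag
            flagged += 1
        else:
            estimates.append(fit.params[1])
    except Exception:
        flagged += 1                          # perfect separation, no MLE

print("mean estimate of beta_1:", np.mean(estimates), "(true value 0.7)")
print("runs flagged as (near-)separated:", flagged)
```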

  9. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information by applying it to large single and paired data sets, taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new areas of interest in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.

  10. Cobble cam: Grain-size measurements of sand to boulder from digital photographs and autocorrelation analyses

    USGS Publications Warehouse

    Warrick, J.A.; Rubin, D.M.; Ruggiero, P.; Harney, J.N.; Draut, A.E.; Buscombe, D.

    2009-01-01

    A new application of the autocorrelation grain size analysis technique for mixed to coarse sediment settings has been investigated. Photographs of sand- to boulder-sized sediment along the Elwha River delta beach were taken from approximately 1.2 m above the ground surface, and detailed grain size measurements were made from 32 of these sites for calibration and validation. Digital photographs were found to provide accurate estimates of the long and intermediate axes of the surface sediment (r2 > 0.98), but poor estimates of the short axes (r2 = 0.68), suggesting that these short axes were naturally oriented in the vertical dimension. The autocorrelation method was successfully applied resulting in total irreducible error of 14% over a range of mean grain sizes of 1 to 200 mm. Compared with reported edge and object-detection results, it is noted that the autocorrelation method presented here has lower error and can be applied to a much broader range of mean grain sizes without altering the physical set-up of the camera (~200-fold versus ~6-fold). The approach is considerably less sensitive to lighting conditions than object-detection methods, although autocorrelation estimates do improve when measures are taken to shade sediments from direct sunlight. The effects of wet and dry conditions are also evaluated and discussed. The technique provides an estimate of grain size sorting from the easily calculated autocorrelation standard error, which is correlated with the graphical standard deviation at an r2 of 0.69. The technique is transferable to other sites when calibrated with linear corrections based on photo-based measurements, as shown by excellent grain-size analysis results (r2 = 0.97, irreducible error = 16%) from samples from the mixed grain size beaches of Kachemak Bay, Alaska. Thus, a method has been developed to measure mean grain size and sorting properties of coarse sediments. © 2009 John Wiley & Sons, Ltd.
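
    The core idea, that coarser sediment yields an image autocorrelation that decays more slowly with lag, can be illustrated with the toy sketch below; it is not the calibrated procedure of the paper.

```python
# Toy correlation-length estimate from a greyscale image: coarse textures
# decorrelate over more pixels than fine ones.
import numpy as np

def correlation_length(image, max_lag=50, cutoff=0.5):
    """First horizontal lag at which the autocorrelation drops below `cutoff`."""
    img = image.astype(float) - image.mean()
    var = (img * img).mean()
    for lag in range(1, max_lag):
        corr = (img[:, :-lag] * img[:, lag:]).mean() / var
        if corr < cutoff:
            return lag
    return max_lag

rng = np.random.default_rng(1)
fine = rng.normal(size=(200, 200))                                # pixel-scale noise
coarse = np.kron(rng.normal(size=(20, 20)), np.ones((10, 10)))    # 10-pixel blocks
print(correlation_length(fine), correlation_length(coarse))       # small vs. larger
```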

  11. Using Stochastic Approximation Techniques to Efficiently Construct Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran

    2018-06-22

    Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP)-heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling, and, therefore, avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude, compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example, requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
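
    A generic parametric-bootstrap percentile interval, the idea FIESTA accelerates, can be sketched as follows; the linear-mixed-model machinery and the stochastic-approximation speed-ups of FIESTA are not reproduced here.

```python
# Generic parametric bootstrap percentile interval: refit the estimator on
# data simulated from the fitted model and take quantiles of the replicates.
import numpy as np

def parametric_bootstrap_ci(theta_hat, simulate, estimate, n_boot=1000,
                            alpha=0.05, seed=0):
    """simulate(theta, rng) draws one data set; estimate(data) returns theta-hat."""
    rng = np.random.default_rng(seed)
    replicates = [estimate(simulate(theta_hat, rng)) for _ in range(n_boot)]
    return np.quantile(replicates, [alpha / 2.0, 1.0 - alpha / 2.0])

# Toy usage: CI for a normal mean with unit variance and n = 50 observations.
n = 50
simulate = lambda mu, rng: rng.normal(mu, 1.0, size=n)
estimate = lambda data: float(np.mean(data))
print(parametric_bootstrap_ci(0.3, simulate, estimate))
```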

  12. Characterizing the phytoplankton soup: pump and plumbing effects on the particle assemblage in underway optical seawater systems.

    PubMed

    Cetinić, Ivona; Poulton, Nicole; Slade, Wayne H

    2016-09-05

    Many optical and biogeochemical data sets, crucial for algorithm development and satellite data validation, are collected using underway seawater systems over the course of research cruises. Phytoplankton and particle size distribution (PSD) in the ocean are key measurements required in oceanographic research and ocean optics. Using a data set collected in the North Atlantic, spanning different oceanic water types, we outline the differences observed in concurrent samples collected from two different flow-through systems: a permanently plumbed science seawater supply with an impeller pump, and an independent system with shorter, clean tubing runs and a diaphragm pump. We observed an average 40% decrease in phytoplankton counts, and significant changes to the PSD in the 10-45 µm range, when comparing impeller and diaphragm pump systems. The change in PSD seems to depend more on the type of phytoplankton than on its size, with photosynthetic ciliates displaying the largest decreases in cell counts (78%). Comparison of chlorophyll concentrations across the two systems demonstrated lower sensitivity to sampling system type. Observed changes in several measured biogeochemical parameters (associated with phytoplankton size distribution) using the two sampling systems should be used as a guide towards building best practices for the deployment of flow-through systems in the field for examining optics and biogeochemistry. Using optical models, we evaluated the potential impact of the observed change in measured phytoplankton size spectra on scattering measurements, which resulted in significant differences between modeled optical properties across systems (~40%). Researchers should be aware of the methods used with previously collected data sets, and take into consideration the potentially significant and highly variable ecosystem-dependent biases in designing field studies in the future.

  13. Defining populations and injecting parameters among people who inject drugs: Implications for the assessment of hepatitis C treatment programs.

    PubMed

    Larney, Sarah; Grebely, Jason; Hickman, Matthew; De Angelis, Daniela; Dore, Gregory J; Degenhardt, Louisa

    2015-10-01

    There is considerable interest in determining the impact that increased uptake of treatment for hepatitis C virus (HCV) infection will have on the burden of HCV among people who inject drugs (PWID). An understanding of the size of the population of PWID, rates of injecting cessation and HCV prevalence and incidence within the PWID population is essential for such exercises. However, these parameters are often uncertain. In this paper we review methods for estimating the size of the population of PWID and related parameters, taking into account the uncertainty that exists around data on the natural history of injecting drug use; consider issues in the estimation of HCV prevalence among PWID; and consider the importance of opioid substitution therapy and prisons as settings for the prevention and treatment of HCV infection among PWID. These latter two points are illustrated through examples of ongoing work in England, Scotland and Australia. We conclude that an improved understanding of the size of PWID populations, including current and former PWID and parameters related to injecting drug use and settings where PWID may be reached, is necessary to inform HCV prevention and treatment strategies. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. GPU-based relative fuzzy connectedness image segmentation.

    PubMed

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  15. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  16. Emergent polyethism as a consequence of increased colony size in insect societies.

    PubMed

    Gautrais, Jacques; Theraulaz, Guy; Deneubourg, Jean-Louis; Anderson, Carl

    2002-04-07

    A threshold reinforcement model in insect societies is explored over a range of colony sizes and levels of task demand to examine their effects upon worker polyethism. We find that increasing colony size while keeping the demand proportional to the colony size causes an increase in the differentiation among individuals in their activity levels, thus explaining the occurrence of elitism (individuals that do a disproportionately large proportion of work) in insect societies. Similar results were obtained when the overall work demand is increased while keeping the colony size constant. Our model can reproduce a whole suite of distributions of the activity levels among colony members that have been found in empirical studies. When there are two tasks, we demonstrate that increasing demand and colony size generates highly specialized individuals, but without invoking any strict assumptions about spatial organization of work or any inherent abilities of individuals to tackle different tasks. Importantly, such specialization only occurs above a critical colony size such that smaller colonies contain a set of undifferentiated equally inactive individuals while larger colonies contain both active specialists and inactive generalists, as has been found in empirical studies and is predicted from other theoretical considerations. Copyright 2002 Elsevier Science Ltd. All rights reserved.
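
    A minimal response-threshold reinforcement simulation in the spirit of the model described above is sketched below; all parameter values are illustrative and not those of the cited study.

```python
# Fixed-threshold model with reinforcement: a shared task stimulus rises with
# demand and falls with the amount of work done; individuals engage with a
# probability that increases once the stimulus exceeds their threshold, and
# thresholds of workers that engage are lowered (reinforced).
import numpy as np

rng = np.random.default_rng(0)
colony_size, steps = 100, 2000
demand_per_capita, work_rate = 0.3, 1.0
learning, forgetting = 0.05, 0.02

thresholds = np.full(colony_size, 20.0)
stimulus = 0.0
activity = np.zeros(colony_size)

for _ in range(steps):
    stimulus += demand_per_capita * colony_size
    engage_prob = stimulus**2 / (stimulus**2 + thresholds**2)
    working = rng.random(colony_size) < engage_prob
    stimulus = max(stimulus - work_rate * working.sum(), 0.0)
    thresholds = np.clip(
        np.where(working, thresholds - learning, thresholds + forgetting), 0.1, 50.0)
    activity += working

print("spread of individual activity (std/mean):", activity.std() / activity.mean())
```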

  17. Quantification of the evolution of firm size distributions due to mergers and acquisitions

    PubMed Central

    Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company’s own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683

  18. Data splitting for artificial neural networks using SOM-based stratified sampling.

    PubMed

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
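
    The Neyman-allocation step at the heart of the approach can be sketched as follows, using k-means clusters as a simple stand-in for the self-organizing map strata (the published method fits and sizes a SOM, which this sketch does not).

```python
# Stratified sampling with Neyman allocation, n_h proportional to N_h * S_h,
# using k-means clusters of the inputs as stand-in strata.
import numpy as np
from sklearn.cluster import KMeans

def neyman_stratified_indices(X, y, n_sample, n_strata=8, seed=0):
    rng = np.random.default_rng(seed)
    strata = KMeans(n_clusters=n_strata, n_init=10, random_state=seed).fit_predict(X)
    sizes = np.array([(strata == h).sum() for h in range(n_strata)])
    spreads = np.array([y[strata == h].std() for h in range(n_strata)])
    weights = sizes * spreads
    alloc = np.maximum(1, np.round(n_sample * weights / weights.sum())).astype(int)
    chosen = []
    for h in range(n_strata):
        members = np.flatnonzero(strata == h)
        take = min(alloc[h], members.size)
        chosen.extend(rng.choice(members, size=take, replace=False))
    return np.array(chosen)

# Toy usage on synthetic data:
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=1000)
train_idx = neyman_stratified_indices(X, y, n_sample=200)
print(train_idx.size, "training points drawn across strata")
```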

  19. Modelling the effects of trade-offs between long and short-term objectives in fisheries management.

    PubMed

    Mardle, Simon; Pascoe, Sean

    2002-05-01

    Fisheries management is typically a complex problem, from both an environmental and political perspective. The main source of conflict occurs between the need for stock conservation and the need for fishing community well-being, which is typically measured by employment and income levels. For most fisheries, overexploitation of the stock requires a reduction in the level of fishing activity. While this may lead to long-term benefits (both conservation and economic), it also leads to a short-term reduction in employment and regional incomes. In regions which are heavily dependent on fisheries, short-term consequences of conservation efforts may be considerable. The relatively high degree of scientific uncertainty with respect to the status of the stocks and the relatively short lengths of political terms of office, generally give rise to the short-run view taking the highest priority when defining policy objectives. In this paper, a multi-objective model of the North Sea is developed that incorporates both long-term and short-term objectives. Optimal fleet sizes are estimated taking into consideration different preferences between the defined short-term and long-term objectives. The subsequent results from the model give the short-term and long-term equilibrium status of the fishery incorporating the effects of the short-term objectives. As would be expected, an optimal fleet from a short-term perspective is considerably larger than an optimal fleet from a long-run perspective. Conversely, stock sizes and sustainable yields are considerably lower in the long-term if a short-term perspective is used in setting management policies. The model results highlight what is essentially a principal-agent problem, with the objectives of the policy makers not necessarily reflecting the objectives of society as a whole.

  20. Spatial competition and price formation

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Shubik, Martin; Paczuski, Maya; Bak, Per

    2000-12-01

    We look at price formation in a retail setting, that is, companies set prices, and consumers either accept prices or go someplace else. In contrast to most other models in this context, we use a two-dimensional spatial structure for information transmission, that is, consumers can only learn from nearest neighbors. Many aspects of this can be understood in terms of generalized evolutionary dynamics. In consequence, we first look at spatial competition and cluster formation without price. This leads to establishment size distributions, which we compare to reality. After some theoretical considerations, which at least heuristically explain our simulation results, we finally return to price formation, where we demonstrate that our simple model with nearly no organized planning or rationality on the part of any of the agents indeed leads to an economically plausible price.

  1. Data-driven outbreak forecasting with a simple nonlinear growth model

    PubMed Central

    Lega, Joceline; Brown, Heidi E.

    2016-01-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. PMID:27770752
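
    A minimal sketch of growth-curve forecasting from cumulative case counts is shown below; it fits a logistic curve with scipy and is only an illustration of the general idea, not the EpiGro algorithm itself.

```python
# Fit a logistic growth curve to cumulative case counts and read off the
# projected final size and mid-point (peak incidence) of the outbreak.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, final_size, rate, t_mid):
    return final_size / (1.0 + np.exp(-rate * (t - t_mid)))

weeks = np.arange(10)                                    # hypothetical outbreak
cases = np.array([3, 7, 16, 33, 70, 140, 260, 430, 640, 850], dtype=float)

params, _ = curve_fit(logistic, weeks, cases, p0=(1000.0, 1.0, 8.0), maxfev=10000)
print(f"projected final size ~{params[0]:.0f} cases, peak around week {params[2]:.1f}")
```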

  2. Determination of the optimal sample size for a clinical trial accounting for the population size.

    PubMed

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
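
    The square-root scaling can be checked numerically with the toy sketch below, which uses a simplified two-arm utility with a normal prior on the treatment effect; it is a stand-in for, not a reproduction of, the paper's exponential-family formulation.

```python
# Two-arm trial with a normal prior on the treatment effect: each of the
# remaining N - 2n patients gains the true effect if the apparently better
# arm is chosen. The optimal per-arm size n grows roughly like sqrt(N).
import numpy as np
from scipy.stats import norm

sigma, tau, cost = 1.0, 0.3, 0.01              # outcome SD, prior SD, cost per patient
rng = np.random.default_rng(0)
effects = rng.normal(0.0, tau, size=10_000)    # prior draws of the treatment effect

def expected_utility(n, N):
    # P(observed mean favours treatment | effect), normal approximation
    z = effects[None, :] * np.sqrt(n[:, None]) / (sigma * np.sqrt(2.0))
    gain_per_future_patient = np.mean(effects[None, :] * norm.cdf(z), axis=1)
    return (N - 2.0 * n) * gain_per_future_patient - 2.0 * cost * n

for N in (10_000, 100_000, 1_000_000):
    ns = np.unique(np.round(np.geomspace(10, N // 4, 200)))
    best = ns[np.argmax(expected_utility(ns, N))]
    print(f"N={N:>9,d}  optimal n per arm={int(best):>6d}  n/sqrt(N)={best / np.sqrt(N):.2f}")

# The ratio in the last column stays roughly constant across N.
```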

  3. Sizing-tube-fin space radiators

    NASA Technical Reports Server (NTRS)

    Peoples, J. A.

    1978-01-01

    Temperature and size considerations of the tube-fin space radiator were characterized by charts and equations. An approach for accurately assessing rejection capability commensurate with a phase A/B level of output is reviewed. A computer program, based on Mackey's equations, is also presented which sizes the rejection area for a given thermal load. The program also handles the flow and thermal considerations of the film coefficient.

  4. The cognitive loci of the display and task-relevant set size effects on distractor interference: Evidence from a dual-task paradigm.

    PubMed

    Park, Bo Youn; Kim, Sujin; Cho, Yang Seok

    2018-02-01

    The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.

  5. Influence of the confinement potential on the size-dependent optical response of metallic nanometric particles

    NASA Astrophysics Data System (ADS)

    Zapata-Herrera, Mario; Camacho, Ángela S.; Ramírez, Hanz Y.

    2018-06-01

    In this paper, different confinement potential approaches are considered in the simulation of size effects on the optical response of silver spheres with radii at the few-nanometer scale. By numerically obtaining dielectric functions from different sets of eigenenergies and eigenstates, we simulate the absorption spectrum and the field enhancement factor for nanoparticles of various sizes, within a quantum framework for both infinite and finite potentials. The simulations show a significant dependence of the dipolar surface plasmon resonance on the sphere radius, as a direct consequence of the energy discretization associated with the strong confinement experienced by conduction electrons in small nanospheres. Considerable reliance of the calculated optical features on the chosen wave functions and transition energies is evidenced, with discrepancies in the plasmon resonance frequencies obtained with the three studied models reaching above 30%. Our results are in agreement with reported measurements and shed light on the puzzling shift of the plasmon resonance in metallic nanospheres.

  6. Design considerations for eye-safe single-aperture laser radars

    NASA Astrophysics Data System (ADS)

    Starodubov, D.; McCormick, K.; Volfson, L.

    2015-05-01

    The design considerations for low cost, shock resistant, compact and efficient laser radars and ranging systems are discussed. The reviewed single-optical-aperture approach reduces the size, weight and power of the system. Additional design benefits include improved stability, reliability and rigidity of the overall system. The proposed modular architecture provides a simplified way of varying the performance parameters of the range finder product family by selecting sets of specific illumination and detection modules. Operational performance challenges are presented. The implementation of non-reciprocal optical elements is considered. The cross talk between illumination and detection channels in the single-aperture design is reviewed. 3D imaging capability for ranging applications is considered. A simplified assembly and testing process for single-aperture range finders, which allows the design to be mass-produced, is discussed. The eye safety of the range finder operation is summarized.

  7. Economic lot sizing in a production system with random demand

    NASA Astrophysics Data System (ADS)

    Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan

    2016-04-01

    An extended economic production quantity model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in the existing literature. The considered costs include the set-up cost for batch production, the inventory carrying cost during the production and depletion stages in one replenishment cycle, and the shortage cost when demand cannot be satisfied from the shop floor immediately. Based on the renewal reward process, a per-unit-time expected cost model is developed and analysed. Under a mild condition, it can be shown that the approximate cost function is convex. Computational experiments have demonstrated that the average reduction in total cost is significant when the proposed lot sizing policy is compared with those assuming deterministic demand.

  8. A model based on Rock-Eval thermal analysis to quantify the size of the centennially persistent organic carbon pool in temperate soils

    NASA Astrophysics Data System (ADS)

    Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre

    2018-05-01

    Changes in global soil carbon stocks have considerable potential to influence the course of future climate change. However, a portion of soil organic carbon (SOC) has a very long residence time ( > 100 years) and may not contribute significantly to terrestrial greenhouse gas emissions during the next century. The size of this persistent SOC reservoir is presumed to be large. Consequently, it is a key parameter required for the initialization of SOC dynamics in ecosystem and Earth system models, but there is considerable uncertainty in the methods used to quantify it. Thermal analysis methods provide cost-effective information on SOC thermal stability that has been shown to be qualitatively related to SOC biogeochemical stability. The objective of this work was to build the first quantitative model of the size of the centennially persistent SOC pool based on thermal analysis. We used a unique set of 118 archived soil samples from four agronomic experiments in northwestern Europe with long-term bare fallow and non-bare fallow treatments (e.g., manure amendment, cropland and grassland) as a sample set for which estimating the size of the centennially persistent SOC pool is relatively straightforward. At each experimental site, we estimated the average concentration of centennially persistent SOC and its uncertainty by applying a Bayesian curve-fitting method to the observed declining SOC concentration over the duration of the long-term bare fallow treatment. Overall, the estimated concentrations of centennially persistent SOC ranged from 5 to 11 g C kg-1 of soil (lowest and highest boundaries of four 95 % confidence intervals). Then, by dividing the site-specific concentrations of persistent SOC by the total SOC concentration, we could estimate the proportion of centennially persistent SOC in the 118 archived soil samples and the associated uncertainty. The proportion of centennially persistent SOC ranged from 0.14 (standard deviation of 0.01) to 1 (standard deviation of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). 
This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
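
    The bare-fallow extrapolation step can be illustrated with the least-squares sketch below, which fits a decay-to-asymptote curve to made-up SOC data and reads off the asymptote; the paper itself uses a Bayesian fit with full uncertainty propagation.

```python
# Fit C(t) = C_p + (C_0 - C_p) * exp(-k t) to a declining SOC series and take
# the asymptote C_p as the centennially persistent pool (made-up data).
import numpy as np
from scipy.optimize import curve_fit

def soc_decline(t, c_persistent, c_initial, k):
    return c_persistent + (c_initial - c_persistent) * np.exp(-k * t)

years = np.array([0, 5, 10, 20, 30, 40, 50, 60], dtype=float)        # bare fallow
soc = np.array([20.0, 17.1, 15.0, 12.2, 10.6, 9.7, 9.1, 8.8])        # g C per kg soil

params, cov = curve_fit(soc_decline, years, soc, p0=(8.0, 20.0, 0.05))
print(f"persistent SOC ~{params[0]:.1f} +/- {np.sqrt(cov[0, 0]):.1f} g C/kg")
print(f"persistent fraction of initial SOC ~{params[0] / params[1]:.2f}")
```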

  9. Cooperative capture of large prey solves scaling challenge faced by spider societies

    PubMed Central

    Yip, Eric C.; Powers, Kimberly S.; Avilés, Leticia

    2008-01-01

    A decrease in the surface area per unit volume is a well known constraint setting limits to the size of organisms at both the cellular and whole-organismal levels. Similar constraints may apply to social groups as they grow in size. The communal three-dimensional webs that social spiders build function ecologically as single units that intercept prey through their surface and should thus be subject to this constraint. Accordingly, we show that web prey capture area per spider, and thus number of insects captured per capita, decreases with colony size in a neotropical social spider. Prey biomass intake per capita, however, peaks at intermediate colony sizes because the spiders forage cooperatively and larger colonies capture increasingly large insects. A peaked prey biomass intake function would explain not only why these spiders live in groups and cooperate but also why they disperse only at large colony sizes, thus addressing both sociality and colony size range in this social spider. These findings may also explain the conspicuous absence of social spiders from higher latitudes and higher elevations, areas that we have previously shown to harbor considerably fewer insects of the largest size classes than the lowland tropical rainforests where social spiders thrive. Our findings thus illustrate the relevance of scaling laws to the size and functioning of levels of organization above the individual. PMID:18689677

  10. Arbitrary Steady-State Solutions with the K-epsilon Model

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Pettersson Reif, B. A.; Gatski, Thomas B.

    2006-01-01

    Widely-used forms of the K-epsilon turbulence model are shown to yield arbitrary steady-state converged solutions that are highly dependent on numerical considerations such as initial conditions and solution procedure. These solutions contain pseudo-laminar regions of varying size. By applying a nullcline analysis to the equation set, it is possible to clearly demonstrate the reasons for the anomalous behavior. In summary, the degenerate solution acts as a stable fixed point under certain conditions, causing the numerical method to converge there. The analysis also suggests a methodology for preventing the anomalous behavior in steady-state computations.

  11. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  12. The ISEE-3 ULEWAT: Flux tape description and heavy ion fluxes 1978-1984. [plasma diagnostics]

    NASA Technical Reports Server (NTRS)

    Mason, G. M.; Klecker, B.

    1985-01-01

    The ISEE ULEWAT FLUX tapes contain ULEWAT and ISEE pool tape data summarized over relatively long time intervals (1 hr) in order to compact the data set into an easily usable size. (Roughly 3 years of data fit onto one 1600 BPI 9-track magnetic tape). In making the tapes, corrections were made to the ULEWAT basic data tapes in order to remove rate spikes and account for changes in instrument response, so that instrument fluxes can, to a large extent, be calculated easily from the FLUX tapes without further consideration of instrument performance.

  13. Development of a Biodegradable Bone Cement for Craniofacial Applications

    PubMed Central

    Henslee, Allan M.; Gwak, Dong-Ho; Mikos, Antonios G.; Kasper, F. Kurtis

    2015-01-01

    This study investigated the formulation of a two-component biodegradable bone cement comprising the unsaturated linear polyester macromer poly(propylene fumarate) (PPF) and crosslinked PPF microparticles for use in craniofacial bone repair applications. A full factorial design was employed to evaluate the effects of formulation parameters such as particle weight percentage, particle size, and accelerator concentration on the setting and mechanical properties of crosslinked composites. It was found that the addition of crosslinked microparticles to PPF macromer significantly reduced the temperature rise upon crosslinking from 100.3 ± 21.6 to 102.7 ± 49.3 °C for formulations without microparticles to 28.0 ± 2.0 to 65.3 ± 17.5 °C for formulations with microparticles. The main effects of increasing the particle weight percentage from 25 to 50% were to significantly increase the compressive modulus by 37.7 ± 16.3 MPa, increase the compressive strength by 2.2 ± 0.5 MPa, decrease the maximum temperature by 9.5 ± 3.7 °C, and increase the setting time by 0.7 ± 0.3 min. Additionally, the main effects of increasing the particle size range from 0–150 μm to 150–300 μm were to significantly increase the compressive modulus by 31.2 ± 16.3 MPa and the compressive strength by 1.3 ± 0.5 MPa. However, the particle size range did not have a significant effect on the maximum temperature and setting time. Overall, the composites tested in this study were found to have properties suitable for further consideration in craniofacial bone repair applications. PMID:22499285

  14. Economic epidemiology of avian influenza on smallholder poultry farms☆

    PubMed Central

    Boni, Maciej F.; Galvani, Alison P.; Wickelgren, Abraham L.; Malani, Anup

    2013-01-01

    Highly pathogenic avian influenza (HPAI) is often controlled through culling of poultry. Compensating farmers for culled chickens or ducks facilitates effective culling and control of HPAI. However, ensuing price shifts can create incentives that alter the disease dynamics of HPAI. Farmers control certain aspects of the dynamics by setting a farm size, implementing infection control measures, and determining the age at which poultry are sent to market. Their decisions can be influenced by the market price of poultry which can, in turn, be set by policy makers during an HPAI outbreak. Here, we integrate these economic considerations into an epidemiological model in which epidemiological parameters are determined by an outside agent (the farmer) to maximize profit from poultry sales. Our model exhibits a diversity of behaviors which are sensitive to (i) the ability to identify infected poultry, (ii) the average price of infected poultry, (iii) the basic reproductive number of avian influenza, (iv) the effect of culling on the market price of poultry, (v) the effect of market price on farm size, and (vi) the effect of poultry density on disease transmission. We find that under certain market and epidemiological conditions, culling can increase farm size and the total number of HPAI infections. Our model helps to inform the optimization of public health outcomes that best weigh the balance between public health risk and beneficial economic outcomes for farmers. PMID:24161559

  15. Fast and accurate 3D tensor calculation of the Fock operator in a general basis

    NASA Astrophysics Data System (ADS)

    Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.

    2012-11-01

    The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and the nuclear potential operators by tensor operations, using a generic set of basis functions with low separation rank discretized on a fine N×N×N Cartesian grid. We prove a Ch² error estimate in terms of the mesh parameter, h = O(1/N), which guarantees the accuracy of the core Hamiltonian part of the Fock operator as h → 0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N of several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid size. Our tensor-based approach allows significant improvement of the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N³ ∼ 10¹⁵.

  16. Engineering ultrasmall water-soluble gold and silver nanoclusters for biomedical applications.

    PubMed

    Luo, Zhentao; Zheng, Kaiyuan; Xie, Jianping

    2014-05-25

    Gold and silver nanoclusters or Au/Ag NCs with core sizes smaller than 2 nm have been an attractive frontier of nanoparticle research because of their unique physicochemical properties such as well-defined molecular structure, discrete electronic transitions, quantized charging, and strong luminescence. As a result of these unique properties, ultrasmall size, and good biocompatibility, Au/Ag NCs have great potential for a variety of biomedical applications, such as bioimaging, biosensing, antimicrobial agents, and cancer therapy. In this feature article, we will first discuss some critical biological considerations, such as biocompatibility and renal clearance, of Au/Ag NCs that are applied for biomedical applications, leading to some design criteria for functional Au/Ag NCs in the biological settings. According to these biological considerations, we will then survey some efficient synthetic strategies for the preparation of protein- and peptide-protected Au/Ag NCs with an emphasis on our recent contributions in this fast-growing field. In the last part, we will highlight some potential biomedical applications of these protein- and peptide-protected Au/Ag NCs. It is believed that with continued efforts to understand the interactions of biomolecule-protected Au/Ag NCs with the biological systems, scientists can largely realize the great potential of Au/Ag NCs for biomedical applications, which could finally pave their way towards clinical use.

  17. Exploring how to increase response rates to surveys of older people.

    PubMed

    Palonen, Mira; Kaunonen, Marja; Åstedt-Kurki, Päivi

    2016-05-01

    To address the special considerations that need to be taken into account when collecting data from older people in healthcare research. An objective of all research studies is to ensure there is an adequate sample size. The final sample size will be influenced by methods of recruitment and data collection, among other factors. There are some special considerations that need to be addressed when collecting data among older people. Quantitative surveys of people aged 60 or over in 2009-2014 were analysed using statistical methods. A quantitative study of patients aged 75 or over in an emergency department was used as an example. A methodological approach to analysing quantitative studies concerned with older people. The best way to ensure high response rates in surveys involving people aged 60 or over is to collect data in the presence of the researcher; response rates are lowest in posted surveys and settings where the researcher is not present when data are collected. Response rates do not seem to vary according to the database from which information about the study participants is obtained or according to who is responsible for recruitment to the survey. Implications for research/practice: To conduct coherent studies with older people, the data collection process should be carefully considered.

  18. Understanding how lake populations of arctic char are structured and function with special consideration of the potential effects of climate change: a multi-faceted approach.

    PubMed

    Budy, Phaedra; Luecke, Chris

    2014-09-01

    Size dimorphism in fish populations, both its causes and consequences, has been an area of considerable focus; however, uncertainty remains whether size dimorphism is dynamic or stabilizing and about the role of exogenous factors. Here, we explored patterns among empirical vital rates, population structure, abundance and trend, and predicted the effects of climate change on populations of arctic char (Salvelinus alpinus) in two lakes. Both populations cycle dramatically between dominance by small (≤300 mm) and large (>300 mm) char. Apparent survival (Φ) and specific growth rates (SGR) were relatively high (40-96%; SGR range 0.03-1.5%) and comparable to those of conspecifics at lower latitudes. Climate change scenarios mimicked observed patterns of warming and resulted in temperatures closer to optimal for char growth (15.15 °C) and a longer growing season. An increase in consumption rates (28-34%) under climate change scenarios led to much greater growth rates (23-34%). Higher growth rates predicted under climate change resulted in an even greater predicted amplitude of cycles in population structure as well as an increase in reproductive output (Ro) and decrease in generation time (Go). Collectively, these results indicate arctic char populations (not just individuals) are extremely sensitive to small changes in the number of ice-free days. We hypothesize years with a longer growing season, predicted to occur more often under climate change, produce elevated growth rates of small char and act in a manner similar to a "resource pulse," allowing a sub-set of small char to "break through," thus setting the cycle in population structure.

  19. Understanding how lake populations of arctic char are structured and function with special consideration of the potential effects of climate change: A multi-faceted approach.

    USGS Publications Warehouse

    Budy, Phaedra; Luecke, Chris

    2014-01-01

    Size dimorphism in fish populations, both its causes and consequences, has been an area of considerable focus; however, uncertainty remains whether size dimorphism is dynamic or stabilizing and about the role of exogenous factors. Here, we explored patterns among empirical vital rates, population structure, abundance and trend, and predicted the effects of climate change on populations of arctic char (Salvelinus alpinus) in two lakes. Both populations cycle dramatically between dominance by small (≤300 mm) and large (>300 mm) char. Apparent survival (Φ) and specific growth rates (SGR) were relatively high (40–96 %; SGR range 0.03–1.5 %) and comparable to those of conspecifics at lower latitudes. Climate change scenarios mimicked observed patterns of warming and resulted in temperatures closer to optimal for char growth (15.15 °C) and a longer growing season. An increase in consumption rates (28–34 %) under climate change scenarios led to much greater growth rates (23–34 %). Higher growth rates predicted under climate change resulted in an even greater predicted amplitude of cycles in population structure as well as an increase in reproductive output (Ro) and decrease in generation time (Go). Collectively, these results indicate arctic char populations (not just individuals) are extremely sensitive to small changes in the number of ice-free days. We hypothesize years with a longer growing season, predicted to occur more often under climate change, produce elevated growth rates of small char and act in a manner similar to a “resource pulse,” allowing a sub-set of small char to “break through,” thus setting the cycle in population structure.

  20. Visual analysis of mass cytometry data by hierarchical stochastic neighbour embedding reveals rare cell types.

    PubMed

    van Unen, Vincent; Höllt, Thomas; Pezzotti, Nicola; Li, Na; Reinders, Marcel J T; Eisemann, Elmar; Koning, Frits; Vilanova, Anna; Lelieveldt, Boudewijn P F

    2017-11-23

    Mass cytometry allows high-resolution dissection of the cellular composition of the immune system. However, the high-dimensionality, large size, and non-linear structure of the data poses considerable challenges for the data analysis. In particular, dimensionality reduction-based techniques like t-SNE offer single-cell resolution but are limited in the number of cells that can be analyzed. Here we introduce Hierarchical Stochastic Neighbor Embedding (HSNE) for the analysis of mass cytometry data sets. HSNE constructs a hierarchy of non-linear similarities that can be interactively explored with a stepwise increase in detail up to the single-cell level. We apply HSNE to a study on gastrointestinal disorders and three other available mass cytometry data sets. We find that HSNE efficiently replicates previous observations and identifies rare cell populations that were previously missed due to downsampling. Thus, HSNE removes the scalability limit of conventional t-SNE analysis, a feature that makes it highly suitable for the analysis of massive high-dimensional data sets.
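
    HSNE itself is not part of standard Python toolkits, but the scalability limit it addresses is easy to illustrate. The hedged sketch below uses synthetic "cytometry-like" data and scikit-learn's conventional t-SNE to show how downsampling to a size t-SNE can handle discards most cells of a rare population, which is the failure mode described in the abstract.

```python
# Hedged sketch (synthetic data, scikit-learn's conventional t-SNE; HSNE itself is
# not reproduced here): downsampling a large cytometry-like matrix to a size
# t-SNE can handle discards most cells of a rare population.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

n_cells, n_markers = 100_000, 30
data = rng.normal(size=(n_cells, n_markers))          # stand-in for marker intensities
rare_idx = rng.choice(n_cells, size=500, replace=False)
data[rare_idx, :5] += 4.0                             # a rare cell type (0.5% of cells)

# Conventional t-SNE is limited in the number of cells, so analyses downsample.
keep = rng.choice(n_cells, size=5_000, replace=False)
embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(data[keep])

# Only ~25 of the 500 rare cells survive a 5% downsample, which is how rare
# populations get missed - the scalability problem HSNE is designed to remove.
print("rare cells retained after downsampling:", int(np.isin(keep, rare_idx).sum()))
```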

  1. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
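
    The paper's estimator suite is extensive; the hedged sketch below only illustrates the study design it rests on, an estimation/validation split with out-of-sample bias and error comparison, using synthetic heavy-tailed "costs" and two simple linear models rather than the authors' full set of methods.

```python
# Hedged sketch of the estimation/validation design on synthetic heavy-tailed
# "costs": fit two simple mean models on one half and compare out-of-sample bias
# and RMSE on the other. Not the paper's full set of parametric and
# semiparametric estimators.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.uniform(0.0, 1.0, n)                                  # e.g., a morbidity score
X = np.column_stack([np.ones(n), x])
costs = rng.gamma(shape=2.0, scale=np.exp(1.0 + 1.5 * x))     # skewed cost outcome

est = np.arange(n) < n // 2        # estimation half
val = ~est                         # validation half

beta_raw = np.linalg.lstsq(X[est], costs[est], rcond=None)[0]
pred_raw = X[val] @ beta_raw                                   # OLS on raw costs

beta_sqrt = np.linalg.lstsq(X[est], np.sqrt(costs[est]), rcond=None)[0]
pred_sqrt = (X[val] @ beta_sqrt) ** 2        # sqrt-OLS, naively back-transformed

for name, pred in (("raw-scale OLS", pred_raw), ("sqrt-transformed OLS", pred_sqrt)):
    bias = np.mean(pred - costs[val])
    rmse = np.sqrt(np.mean((pred - costs[val]) ** 2))
    print(f"{name}: mean bias {bias:+.3f}, RMSE {rmse:.3f}")
```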

  2. A practical and theoretical definition of very small field size for radiotherapy output factor measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, P. H., E-mail: p.charles@qut.edu.au; Crowe, S. B.; Langton, C. M.

    Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes ≤15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided to the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effects. This was found to occur at field sizes ≤12 mm. Source occlusion also caused a large change in OPF for field sizes ≤8 mm. Based on the results of this study, field sizes ≤12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment is required at field sizes at least ≤12 mm and more conservatively ≤15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.

  3. A practical and theoretical definition of very small field size for radiotherapy output factor measurements.

    PubMed

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Crowe, S B; Kairn, T; Knight, R T; Kenny, J; Langton, C M; Trapp, J V

    2014-04-01

    This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. According to the practical definition established in this project, field sizes ≤ 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤ 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided to the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effects. This was found to occur at field sizes ≤ 12 mm. Source occlusion also caused a large change in OPF for field sizes ≤ 8 mm. Based on the results of this study, field sizes ≤ 12 mm were considered to be theoretically very small for 6 MV beams. Extremely careful experimental methodology including the measurement of dosimetric field size at the same time as output factor measurement for each field size setting and also very precise detector alignment is required at field sizes at least ≤ 12 mm and more conservatively ≤ 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection. © 2014 American Association of Physicists in Medicine.

  4. The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    PubMed Central

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael; Chu, Rong; Gluud, Christian; Wetterslev, Jørn; Guyatt, Gordon; Devereaux, Philip J.; Thabane, Lehana

    2011-01-01

    Background Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated. Methods We simulated a comprehensive array of meta-analysis scenarios where no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or where a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR>20% and RRR>30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of intervention effect to 10%, 5%, and 1%. We calculated the optimal information size for each of the simulated scenarios and explored whether meta-analyses that surpassed their optimal information size had sufficient protection against overestimation of intervention effects due to random error. Results The risk of overestimation of intervention effects was usually high when the number of patients and events was small and this risk decreased exponentially over time as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation. Conclusions Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation. PMID:22028777
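
    A hedged sketch of the simulation idea follows: under a true null effect (RRR = 0%), it tracks how often a cumulative meta-analysis happens to show an apparent RRR above 20% as patients accrue. The pooling is a crude totals-based risk ratio on synthetic trials, not the meta-analytic models used in the study.

```python
# Hedged simulation sketch: with a true null effect (RRR = 0%), how often does a
# cumulative meta-analysis show an apparent RRR > 20% as patients accumulate?
# Pooling is a crude totals-based risk ratio, not the paper's meta-analytic models.
import numpy as np

rng = np.random.default_rng(2)
n_sims, n_trials, n_per_arm, control_risk = 2_000, 20, 100, 0.10

over_20 = np.zeros(n_trials)
for _ in range(n_sims):
    ev_c = rng.binomial(n_per_arm, control_risk, size=n_trials)   # control events
    ev_t = rng.binomial(n_per_arm, control_risk, size=n_trials)   # treatment events (null)
    cum_n = n_per_arm * np.arange(1, n_trials + 1)
    rr = (np.cumsum(ev_t) / cum_n) / np.maximum(np.cumsum(ev_c) / cum_n, 1e-9)
    over_20 += (1.0 - rr) > 0.20                                  # apparent RRR > 20%

for k in (1, 5, 10, 20):
    print(f"after {2 * n_per_arm * k:5d} patients: "
          f"P(apparent RRR > 20%) = {over_20[k - 1] / n_sims:.3f}")
```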

  5. Coil geometry effects on scanning single-coil magnetic induction tomography

    NASA Astrophysics Data System (ADS)

    Feldkamp, Joe R.; Quirk, Stephen

    2017-09-01

    Alternative coil designs for single coil magnetic induction tomography are considered in this work, with the intention of improving upon the standard design used previously. In particular, we note that the blind spot associated with this coil type, a portion of space along its axis where eddy current generation can be very weak, has an important effect on performance. The seven designs tested here vary considerably in the size of their blind spot. To provide the most discerning test possible, we use laboratory phantoms containing feature dimensions similar to blind spot size. Furthermore, conductivity contrasts are set higher than what would occur naturally in biological systems, which has the effect of weakening eddy current generation at coil locations that straddle the border between high and low conductivity features. Image reconstruction results for the various coils show that coils with smaller blind spots give markedly better performance, though improvements in signal-to-noise ratio could alter that conclusion.

  6. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
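
    The closed-form reasoning can be made concrete with the textbook standard-error formula for a single coefficient (a generic formula, not reproduced from the article), which inverts directly into a required sample size for a target precision, as in the sketch below.

```python
# Hedged sketch of the textbook closed-form standard error behind such sample
# size reasoning (not the article's own derivation):
#   SE(b_j) ~ sigma / (sd(x_j) * sqrt(n) * sqrt(1 - R2_j)),
# where R2_j is the R-squared of x_j regressed on the other covariates.
import math

def n_for_precision(sigma, sd_x, r2_other, target_se):
    """Smallest n giving SE(b_j) <= target_se under the approximation above."""
    return math.ceil((sigma / (sd_x * target_se)) ** 2 / (1.0 - r2_other))

# Example: residual SD 10, exposure SD 2, exposure moderately correlated with the
# other covariates (R2_j = 0.3), coefficient wanted to within SE <= 0.5.
print(n_for_precision(sigma=10, sd_x=2, r2_other=0.3, target_se=0.5))   # -> 143
```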

  7. Negative Thrust and Torque Characteristics of an Adjustable-Pitch Metal Propeller

    NASA Technical Reports Server (NTRS)

    Hartman, Edwin P

    1934-01-01

    This report presents the results of a series of negative thrust and torque measurements made with a 4 foot diameter model of a conventional aluminum-alloy propeller. The tests were made in the 20-foot propeller-research tunnel of the National Advisory Committee for Aeronautics. The results show that the negative thrust is considerably affected by the shape and size of the body behind the propeller, that the maximum negative thrust increases with decrease in blade-angle setting, and that the drag of a locked propeller may be greatly reduced by feathering it into the wind. Several examples of possible applications of the data are given.

  8. Emerging applications of conjugated polymers in molecular imaging.

    PubMed

    Li, Junwei; Liu, Jie; Wei, Chen-Wei; Liu, Bin; O'Donnell, Matthew; Gao, Xiaohu

    2013-10-28

    In recent years, conjugated polymers have attracted considerable attention from the imaging community as a new class of contrast agent due to their intriguing structural, chemical, and optical properties. Their size and emission wavelength tunability, brightness, photostability, and low toxicity have been demonstrated in a wide range of in vitro sensing and cellular imaging applications, and have just begun to show impact in in vivo settings. In this Perspective, we summarize recent advances in engineering conjugated polymers as imaging contrast agents, their emerging applications in molecular imaging (referred to as in vivo uses in this paper), as well as our perspectives on future research.

  9. Declustering of clustered preferential sampling for histogram and semivariogram inference

    USGS Publications Warehouse

    Olea, R.A.

    2007-01-01

    Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. © International Association for Mathematical Geology 2007.
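
    As a rough illustration of the nearest-neighbour idea (a generic declustering device, not necessarily the paper's exact resampling procedure), the sketch below weights clustered synthetic observations by their nearest-neighbour distance so that the densely sampled "pay" zone no longer dominates the estimated mean.

```python
# Hedged sketch of nearest-neighbour declustering (a generic device, not
# necessarily the paper's exact resampling procedure): weight each sample by its
# distance to the nearest other sample so the densely drilled zone is down-weighted.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Synthetic preferential sampling: 100 points clustered along a high-value "pay"
# zone, 40 scattered background points with lower values.
pay = rng.normal(loc=[0.5, 0.5], scale=0.03, size=(100, 2))
background = rng.uniform(0.0, 1.0, size=(40, 2))
coords = np.vstack([pay, background])
values = np.concatenate([rng.normal(10.0, 1.0, 100), rng.normal(2.0, 1.0, 40)])

dist, _ = cKDTree(coords).query(coords, k=2)   # k=2: self plus nearest neighbour
weights = dist[:, 1] / dist[:, 1].sum()

print("naive mean:      ", round(values.mean(), 2))             # dominated by the cluster
print("declustered mean:", round(np.sum(weights * values), 2))  # closer to the areal mean
```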

  10. Tailoring magnetic properties of Co nanocluster assembled films using hydrogen

    NASA Astrophysics Data System (ADS)

    Romero, C. P.; Volodin, A.; Paddubrouskaya, H.; Van Bael, M. J.; Van Haesendonck, C.; Lievens, P.

    2018-07-01

    Tailoring magnetic properties in nanocluster assembled cobalt (Co) thin films was achieved by admitting a small percentage of H2 gas (∼2%) into the Co gas phase cluster formation chamber prior to deposition. The oxygen content in the films is considerably reduced by the presence of hydrogen during the cluster formation, leading to enhanced magnetic interactions between clusters. Two sets of Co samples were fabricated, one without hydrogen gas and one with hydrogen gas. Magnetic properties of the non-hydrogenated and the hydrogen-treated Co nanocluster assembled films are comparatively studied using magnetic force microscopy and vibrating sample magnetometry. When comparing the two sets of samples the considerably larger coercive field of the H2-treated Co nanocluster film and the extended micrometer-sized magnetic domain structure confirm the enhancement of magnetic interactions between clusters. The thickness of the antiferromagnetic CoO layer is controlled with this procedure and modifies the exchange bias effect in these films. The exchange bias shift is lower for the H2-treated Co nanocluster film, which indicates that a thinner antiferromagnetic CoO reduces the coupling with the ferromagnetic Co. The hydrogen-treatment method can be used to tailor the oxidation levels thus controlling the magnetic properties of ferromagnetic cluster-assembled films.

  11. Effects of prefrontal tDCS on executive function: Methodological considerations revealed by meta-analysis.

    PubMed

    Imburgio, Michael J; Orr, Joseph M

    2018-05-01

    A meta-analysis of studies using single-session transcranial direct current stimulation (tDCS) to target the dorsolateral prefrontal cortex (DLPFC) was undertaken to examine the effect of stimulation on executive function (EF) in healthy samples. 27 studies were included in analyses, yielding 71 effect sizes. The most relevant measure for each task was determined a priori and used to calculate Hedge's g. Methodological characteristics of each study were examined individually as potential moderators of effect size. Stimulation effects on three domains of EF (inhibition of prepotent responses, mental set shifting, and information updating and monitoring) were analyzed separately. In line with previous work, the current study found no significant effect of anodal unilateral tDCS, cathodal unilateral tDCS, or bilateral tDCS on EF. Further moderator and subgroup analyses were only carried out for anodal unilateral montages due to the small number of studies using other montages. Subgroup analyses revealed a significant effect of anodal unilateral tDCS on updating tasks, but not on inhibition or set-shifting tasks. Cathode location significantly moderated the effect of anodal unilateral tDCS. Extracranial cathodes yielded a significant effect on EF while cranial cathodes yielded no effect. Anode size also significantly moderated effect of anodal unilateral tDCS, with smaller anodes being more effective than larger anodes. In summary, anodal DLPFC stimulation is more effective at improving updating ability than inhibition and set-shifting ability, but anodal stimulation can significantly improve general executive function when extracranial cathodes or small anodes are used. Future meta-analyses may examine how stimulation's effects on specific behavioral tasks, rather than broader domains, might be affected by methodological moderators. Copyright © 2018 Elsevier Ltd. All rights reserved.
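
    The effect sizes referred to here are standardized mean differences; a minimal sketch of the standard bias-corrected Hedges' g computation (generic formula with hypothetical summary statistics, not data from this meta-analysis) is given below.

```python
# Minimal sketch of the bias-corrected standardized mean difference (Hedges' g)
# from group summary statistics. Generic textbook formula; the numbers below are
# hypothetical, not taken from the studies in this meta-analysis.
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Hedges' g for two independent groups (group 1 minus group 2)."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled                 # Cohen's d
    j = 1.0 - 3.0 / (4.0 * df - 1.0)         # small-sample correction
    return j * d

# Hypothetical updating-task accuracy: anodal tDCS vs sham, 24 subjects per group.
print(round(hedges_g(0.78, 0.72, 0.10, 0.11, 24, 24), 3))
```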

  12. The evolution of genetic and conditional alternative reproductive tactics

    PubMed Central

    2016-01-01

    Frequency-dependent selection may drive adaptive diversification within species. It is yet unclear why the occurrence of alternative reproductive tactics (ARTs) is highly divergent between major animal taxa. Here we aim to clarify the environmental and social conditions favouring the evolution of intra-population variance of male reproductive phenotypes. Our results suggest that genetically determined ARTs that are fixed for life evolve when there is strong selection on body size due to size-dependent competitiveness, in combination with environmental factors reducing size benefits. The latter may result from growth costs or, more generally, from age-dependent but size-independent mortality causes. This generates disruptive selection on growth trajectories underlying tactic choice. In many parameter settings, the model also predicts ARTs to evolve that are flexible and responsive to current conditions. Interestingly, the conditions favouring the evolution of flexible tactics diverge considerably from those favouring genetic variability. Nevertheless, in a restricted but relevant parameter space, our model predicts the simultaneous emergence and maintenance of a mixture of multiple tactics, both genetically and conditionally determined. Important conditions for the emergence of ARTs include size variation of competitors, which is inherently greater in species with indeterminate growth than in taxa reproducing only after reaching their terminal body size. This is probably the reason why ARTs are more common in fishes than in other major taxa. PMID:26911960

  13. Rain, prey and predators: climatically driven shifts in frog abundance modify reproductive allometry in a tropical snake.

    PubMed

    Brown, Gregory P; Shine, Richard

    2007-11-01

    To predict the impacts of climate change on animal populations, we need long-term data sets on the effects of annual climatic variation on the demographic traits (growth, survival, reproductive output) that determine population viability. One frequent complication is that fecundity also depends upon maternal body size, a trait that often spans a wide range within a single population. During an eight-year field study, we measured annual variation in weather conditions, frog abundance and snake reproduction on a floodplain in the Australian wet-dry tropics. Frog numbers varied considerably from year to year, and were highest in years with hotter wetter conditions during the monsoonal season ("wet season"). Mean maternal body sizes, egg sizes and post-partum maternal body conditions of frog-eating snakes (keelback, Tropidonophis mairii, Colubridae) showed no significant annual variation over this period, but mean clutch sizes were higher in years with higher prey abundance. Larger females were more sensitive to frog abundance in this respect than were smaller conspecifics, so that the rate at which fecundity increased with body size varied among years, and was highest when prey availability was greatest. Thus, the link between female body size and reproductive output varied among years, with climatic factors modifying the relative reproductive rates of larger (older) versus smaller (younger) animals within the keelback population.

  14. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  15. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  16. 13 CFR 121.1009 - What are the procedures for making the size determination?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... the size determination? 121.1009 Section 121.1009 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS SIZE REGULATIONS Size Eligibility Provisions and Standards Procedures for Size.... The concern whose size is under consideration has the burden of establishing its small business size...

  17. The Influence of Framing on Risky Decisions: A Meta-analysis.

    PubMed

    Kühberger

    1998-07-01

    In framing studies, logically equivalent choice situations are differently described and the resulting preferences are studied. A meta-analysis of framing effects is presented for risky choice problems which are framed either as gains or as losses. This evaluates the finding that highlighting the positive aspects of formally identical problems does lead to risk aversion and that highlighting their equivalent negative aspects does lead to risk seeking. Based on a data pool of 136 empirical papers that reported framing experiments with nearly 30,000 participants, we calculated 230 effect sizes. Results show that the overall framing effect between conditions is of small to moderate size and that profound differences exist between research designs. Potentially relevant characteristics were coded for each study. The most important characteristics were whether framing is manipulated by changing reference points or by manipulating outcome salience, and response mode (choice vs. rating/judgment). Further important characteristics were whether options differ qualitatively or quantitatively in risk, whether there is one or multiple risky events, whether framing is manipulated by gain/loss or by task-responsive wording, whether dependent variables are measured between- or within-subjects, and problem domains. Sample (students vs. target populations) and unit of analysis (individual vs. group) were not influential. It is concluded that framing is a reliable phenomenon, but that outcome salience manipulations, which constitute a considerable amount of work, have to be distinguished from reference point manipulations and that procedural features of experimental settings have a considerable effect on effect sizes in framing experiments. Copyright 1998 Academic Press.

  18. Phasor Domain Steady-State Modeling and Design of the DC–DC Modular Multilevel Converter

    DOE PAGES

    Yang, Heng; Qin, Jiangchao; Debnath, Suman; ...

    2016-01-06

    The DC-DC Modular Multilevel Converter (MMC), which originated from the AC-DC MMC, is an attractive converter topology for interconnection of medium-/high-voltage DC grids. This paper presents design considerations for the DC-DC MMC to achieve high efficiency and reduced component sizes. A steady-state mathematical model of the DC-DC MMC in the phasor-domain is developed. Based on the developed model, a design approach is proposed to size the components and to select the operating frequency of the converter to satisfy a set of design constraints while achieving high efficiency. The design approach includes sizing of the arm inductor, Sub-Module (SM) capacitor, and phase filtering inductor along with the selection of AC operating frequency of the converter. The accuracy of the developed model and the effectiveness of the design approach are validated based on the simulation studies in the PSCAD/EMTDC software environment. The analysis and developments of this paper can be used as a guideline for design of the DC-DC MMC.

  19. Short-Term Cognitive-Behavioural Group Treatment for Hoarding Disorder: A Naturalistic Treatment Outcome Study.

    PubMed

    Moulding, Richard; Nedeljkovic, Maja; Kyrios, Michael; Osborne, Debra; Mogan, Christopher

    2017-01-01

    The study aim was to test whether a 12-week publically rebated group programme, based upon Steketee and Frost's Cognitive Behavioural Therapy-based hoarding treatment, would be efficacious in a community-based setting. Over a 3-year period, 77 participants with clinically significant hoarding were recruited into 12 group programmes. All completed treatment; however, as this was a community-based naturalistic study, only 41 completed the post-treatment assessment. Treatment included psychoeducation about hoarding, skills training for organization and decision making, direct in-session exposure to sorting and discarding, and cognitive and behavioural techniques to support out-of-session sorting and discarding, and nonacquiring. Self-report measures used to assess treatment effect were the Savings Inventory-Revised (SI-R), Savings Cognition Inventory, and the Depression, Anxiety and Stress Scales. Pre-post analyses indicated that after 12 weeks of treatment, hoarding symptoms as measured on the SI-R had reduced significantly, with large effect sizes reported in total and across all subscales. Moderate effect sizes were also reported for hoarding-related beliefs (emotional attachment and responsibility) and depressive symptoms. Of the 41 participants who completed post-treatment questionnaires, 14 (34%) were conservatively calculated to have clinically significant change, which is considerable given the brevity of the programme judged against the typical length of the disorder. The main limitation of the study was the moderate assessment completion rate, given its naturalistic setting. This study demonstrated that a 12-week group treatment for hoarding disorders was effective in reducing hoarding and depressive symptoms in an Australian clinical cohort and provides evidence for use of this treatment approach in a community setting. Copyright © 2016 John Wiley & Sons, Ltd. A 12-week group programme delivered in a community setting was effective for helping with hoarding symptoms with a large effect size. Hoarding beliefs (emotional attachment and responsibility) and depression were reduced, with moderate effect sizes. A third of all participants who completed post-treatment questionnaires experienced clinically significant change. Suggests that hoarding CBT treatment can be effectively translated into real-world settings and into a brief 12-session format, albeit the study had a moderate assessment completion rate. Copyright © 2016 John Wiley & Sons, Ltd.

  20. GPU-based relative fuzzy connectedness image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  1. Data-driven outbreak forecasting with a simple nonlinear growth model.

    PubMed

    Lega, Joceline; Brown, Heidi E

    2016-12-01

    Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
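
    The authors' EpiGro code is not reproduced here; the sketch below fits a plain logistic curve to synthetic cumulative case counts, which captures the same "simple nonlinear growth model" idea of projecting an outbreak's ultimate size and peak timing from partial data.

```python
# Hedged sketch (not the authors' EpiGro code): fit a logistic growth curve to
# synthetic cumulative case reports and read off the projected final size and
# the time of peak incidence (the inflection point).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases: final size K, growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(4)
t_obs = np.arange(60)                                   # first 60 days of an outbreak
true_curve = logistic(t_obs, 10_000, 0.15, 50)          # hypothetical ground truth
cases = np.maximum.accumulate(true_curve + rng.normal(0, 100, size=t_obs.size))

p0 = [cases[-1] * 2, 0.1, 40]                           # rough starting guesses
(K_hat, r_hat, t0_hat), _ = curve_fit(logistic, t_obs, cases, p0=p0, maxfev=10_000)
print(f"projected final size ~{K_hat:,.0f} cases, peak incidence near day {t0_hat:.0f}")
```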

  2. Ensemble representations: effects of set size and item heterogeneity on average size perception.

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
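
    The limited-capacity sampling account lends itself to a small simulation (illustrative parameters only, not the authors' experiment): an observer who averages just a few attended items makes larger mean-size errors as set size grows, but only when item sizes are heterogeneous.

```python
# Hedged simulation of a limited-capacity sampling observer: average only
# k randomly attended items and compare with the true display mean. Error grows
# with set size only when item sizes are heterogeneous. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(5)

def judgment_error(set_size, heterogeneity, k_sampled=4, n_trials=5_000):
    """Mean absolute error of the sampled-mean estimate of the display mean."""
    sizes = rng.normal(1.0, heterogeneity, size=(n_trials, set_size))
    idx = rng.integers(0, set_size, size=(n_trials, k_sampled))
    sampled_mean = np.take_along_axis(sizes, idx, axis=1).mean(axis=1)
    return np.abs(sampled_mean - sizes.mean(axis=1)).mean()

for het in (0.05, 0.30):   # homogeneous vs heterogeneous item sizes
    errors = [round(judgment_error(s, het), 3) for s in (4, 8, 16)]
    print(f"heterogeneity {het}: error at set sizes 4/8/16 = {errors}")
```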

  3. Productivity, or quality of work as the decisive factor in marketing ergonomics? Design considerations for a new ergonomic welding-table.

    PubMed

    van der Veen, F; Regensburg, R E

    1990-04-01

    Quality tools should be designed from the starting point of adjusting tasks and equipment to human possibilities and limitations. Companies should consider an investment in ergonomic equipment as a profitable addition to indispensable productive machinery. As an example to support this statement, the authors describe the health risks of welders and the possible solutions. As a result of the investigations, a list of requirements was drafted for a product that would have fewer of the disadvantages of the products mentioned. The designed product, the 'ergonomic welding-table', aims to be a quality tool for welders working at small and medium-sized tasks. The product consists of a cabin (2.35 m wide) with a built-in ventilator for very efficient welding-fume extraction (90%-95%). Welders can set their preferred working height at any time. Another advantage is the option of performing the welding task while standing or sitting. The results of user-evaluation among welders and purchasers indicate considerable satisfaction.

  4. Review and Assessment of JPL's Thermal Margins

    NASA Technical Reports Server (NTRS)

    Siebes, G.; Kingery, C.; Farguson, C.; White, M.; Blakely, M.; Nunes, J.; Avila, A.; Man, K.; Hoffman, A.; Forgrave, J.

    2012-01-01

    JPL has captured its experience from over four decades of robotic space exploration into a set of design rules. These rules have gradually changed into explicit requirements and are now formally implemented and verified. Over an extended period of time, the initial understanding of intent and rationale for these rules has faded and rules are now frequently applied without further consideration. In the meantime, mission classes and their associated risk postures have evolved, coupled with resource constraints and growing design diversity, bringing into question the current "one size fits all" thermal margin approach. This paper offers a systematic review of the heat flow path from an electronic junction to the eventual heat rejection to space. This includes the identification of different regimes along this path and the associated requirements. The work resulted in a renewed understanding of the intent behind JPL requirements for hot thermal margins and a framework for relevant considerations, which in turn enables better decision making when a deviation to these requirements is considered.

  5. Developing Market Opportunities for Flexible Rooftop Applications of PV Using Flexible CIGS Technology: Market Considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabnani, L.; Skumanich, A.; Ryabova, E.

    There has been a recent upsurge in developments for building-integrated photovoltaics (BiPV) roof top materials based on CIGS. Several new companies have increased their presence and are looking to bring products to market for this application in 2011. For roof-top application, there are significant key requirements beyond just having good conversion efficiency. Other required attributes include light weight, moisture resistance, and full functional reliability. The companies bringing these new BIPV/BAPV products need to ensure functionality with a rigorous series of tests, and have an extensive set of 'torture' tests to validate the capability. There is a convergence of form, aesthetics, and physics to ensure that the CIGS BiPV deliver on their promises. This article will cover the developments in this segment of the BiPV market and delve into the specific tests and measurements needed to characterize the products. The potential market sizes are evaluated and the technical considerations developed.

  6. The quantitative modelling of human spatial habitability

    NASA Technical Reports Server (NTRS)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the U. of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  7. Single versus multiple sets of resistance exercise: a meta-regression.

    PubMed

    Krieger, James W

    2009-09-01

    There has been considerable debate over the optimal number of sets per exercise to improve musculoskeletal strength during a resistance exercise program. The purpose of this study was to use hierarchical, random-effects meta-regression to compare the effects of single and multiple sets per exercise on dynamic strength. English-language studies comparing single with multiple sets per exercise, while controlling for other variables, were considered eligible for inclusion. The analysis comprised 92 effect sizes (ESs) nested within 30 treatment groups and 14 studies. Multiple sets were associated with a larger ES than a single set (difference = 0.26 +/- 0.05; confidence interval [CI]: 0.15, 0.37; p < 0.0001). In a dose-response model, 2 to 3 sets per exercise were associated with a significantly greater ES than 1 set (difference = 0.25 +/- 0.06; CI: 0.14, 0.37; p = 0.0001). There was no significant difference between 1 set per exercise and 4 to 6 sets per exercise (difference = 0.35 +/- 0.25; CI: -0.05, 0.74; p = 0.17) or between 2 to 3 sets per exercise and 4 to 6 sets per exercise (difference = 0.09 +/- 0.20; CI: -0.31, 0.50; p = 0.64). There were no interactions between set volume and training program duration, subject training status, or whether the upper or lower body was trained. Sensitivity analysis revealed no highly influential studies, and no evidence of publication bias was observed. In conclusion, 2 to 3 sets per exercise are associated with 46% greater strength gains than 1 set, in both trained and untrained subjects.
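
    The study used hierarchical random-effects meta-regression; the hedged sketch below shows only the basic DerSimonian-Laird random-effects pooling step on hypothetical single-versus-multiple-set effect-size differences.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling on hypothetical
# per-study differences in strength effect size (multiple minus single sets).
# The study itself used hierarchical meta-regression; this shows only pooling.
import numpy as np

d = np.array([0.31, 0.18, 0.40, 0.22, 0.15, 0.35])         # hypothetical ES differences
v = np.array([0.020, 0.015, 0.030, 0.018, 0.012, 0.025])   # their sampling variances

w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fixed) ** 2)                     # Cochran's Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(d) - 1)) / c)                # between-study variance

w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled difference {d_re:.2f} "
      f"(95% CI {d_re - 1.96 * se_re:.2f} to {d_re + 1.96 * se_re:.2f})")
```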

  8. Value of information methods to design a clinical trial in a small population to optimise a health economic utility function.

    PubMed

    Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel

    2018-02-08

    Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test with other information to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
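
    A hedged Monte Carlo sketch of the decision-theoretic idea follows: choose the per-arm sample size that maximises expected societal utility, averaging over a normal prior for the effect, with trial costs subtracted. All parameters are illustrative and not taken from the haemophilia A application; shrinking the future population size shifts the optimum towards smaller trials.

```python
# Hedged Monte Carlo sketch of a value-of-information calculation. Effects are
# drawn from a normal prior; the trial "approves" the treatment if the test is
# significant; utility = benefit to the future population minus trial costs.
# All numbers are illustrative, not the paper's haemophilia A example.
import numpy as np

rng = np.random.default_rng(6)

prior_mean, prior_sd = 0.2, 0.3     # prior for the treatment effect (outcome units)
outcome_sd = 1.0                    # SD of the primary outcome
pop_size = 2_000                    # future patients who would receive the treatment
cost_per_patient = 0.05             # per-patient trial cost in the same utility units
n_sims = 20_000

def expected_utility(n_per_arm, z_crit=1.96):
    mu = rng.normal(prior_mean, prior_sd, size=n_sims)   # plausible "true" effects
    se = outcome_sd * np.sqrt(2.0 / n_per_arm)           # SE of the trial estimate
    z = rng.normal(mu, se) / se                          # trial test statistic
    adopt = z > z_crit                                   # approve only if significant
    benefit = pop_size * mu * adopt                      # future gain (or harm) if adopted
    return benefit.mean() - cost_per_patient * 2 * n_per_arm

candidates = [25, 50, 100, 200, 400]
utilities = {n: round(expected_utility(n), 1) for n in candidates}
print(utilities, "-> optimal n per arm:", max(utilities, key=utilities.get))
```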

  9. Initial steps toward the realization of large area arrays of single photon counting pixels based on polycrystalline silicon TFTs

    NASA Astrophysics Data System (ADS)

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao; Street, Robert A.; Lu, Jeng Ping

    2014-03-01

    The thin-film semiconductor processing methods that enabled creation of inexpensive liquid crystal displays based on amorphous silicon transistors for cell phones and televisions, as well as desktop, laptop and mobile computers, also facilitated the development of devices that have become ubiquitous in medical x-ray imaging environments. These devices, called active matrix flat-panel imagers (AMFPIs), measure the integrated signal generated by incident X rays and offer detection areas as large as ~43×43 cm2. In recent years, there has been growing interest in medical x-ray imagers that record information from X ray photons on an individual basis. However, such photon counting devices have generally been based on crystalline silicon, a material not inherently suited to the cost-effective manufacture of monolithic devices of a size comparable to that of AMFPIs. Motivated by these considerations, we have developed an initial set of small area prototype arrays using thin-film processing methods and polycrystalline silicon transistors. These prototypes were developed in the spirit of exploring the possibility of creating large area arrays offering single photon counting capabilities and, to our knowledge, are the first photon counting arrays fabricated using thin film techniques. In this paper, the architecture of the prototype pixels is presented and considerations that influenced the design of the pixel circuits, including amplifier noise, TFT performance variations, and minimum feature size, are discussed.

  10. Individual selection of X-ray tube settings in computed tomography coronary angiography: Reliability of an automated software algorithm to maintain constant image quality.

    PubMed

    Durmus, Tahir; Luhur, Reny; Daqqaq, Tareef; Schwenke, Carsten; Knobloch, Gesine; Huppertz, Alexander; Hamm, Bernd; Lembcke, Alexander

    2016-05-01

    To evaluate a software tool that claims to maintain a constant contrast-to-noise ratio (CNR) in high-pitch dual-source computed tomography coronary angiography (CTCA) by automatically selecting both X-ray tube voltage and current. A total of 302 patients (171 males; age 61±12 years; body weight 82±17 kg, body mass index 27.3±4.6 kg/m(2)) underwent CTCA with a topogram-based, automatic selection of both tube voltage and current using dedicated software with quality reference values of 100 kV and 250 mAs/rotation (i.e., standard values for an average adult weighing 75 kg) and an injected iodine load of 222 mg/kg. The average radiation dose was estimated to be 1.02±0.64 mSv. All data sets had adequate contrast enhancement. Average CNR in the aortic root, left ventricle, and left and right coronary artery was 15.7±4.5, 8.3±2.9, 16.1±4.3 and 15.3±3.9 respectively. Individual CNR values were independent of patients' body size and radiation dose. However, individual CNR values may vary considerably between subjects as reflected by interquartile ranges of 12.6-18.6, 6.2-9.9, 12.8-18.9 and 12.5-17.9 respectively. Moreover, average CNR values were significantly lower in males than females (15.1±4.1 vs. 16.6±11.7 and 7.9±2.7 vs. 8.9±3.0, 15.5±3.9 vs. 16.9±4.6 and 14.7±3.6 vs. 16.0±4.1 respectively). A topogram-based automatic selection of X-ray tube settings in CTCA provides diagnostic image quality independent of patients' body size. Nevertheless, considerable variation of individual CNR values between patients and significant differences of CNR values between males and females occur, which questions the reliability of this approach. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Oscillatory Critical Amplitudes in Hierarchical Models and the Harris Function of Branching Processes

    NASA Astrophysics Data System (ADS)

    Costin, Ovidiu; Giacomin, Giambattista

    2013-02-01

    Oscillatory critical amplitudes have been repeatedly observed in hierarchical models and, in the cases that have been taken into consideration, these oscillations are so small as to be hardly detectable. Hierarchical models are tightly related to iteration of maps and, in fact, very similar phenomena have been repeatedly reported in many fields of mathematics, like combinatorial evaluations and discrete branching processes. It is precisely in the context of branching processes with bounded offspring that T. Harris, in 1948, first set forth the possibility that the logarithm of the moment generating function of the rescaled population size, in the super-critical regime, does not grow near infinity as a power, but has an oscillatory prefactor (the Harris function). These oscillations have been observed numerically only much later and, while the origin is clearly tied to the discrete character of the iteration, the amplitude size is not so well understood. The purpose of this note is to reconsider the issue for hierarchical models and in what is arguably the most elementary setting—the pinning model—that actually just boils down to iteration of polynomial maps (and, notably, quadratic maps). In this note we show that the oscillatory critical amplitude for pinning models and the Harris function coincide. Moreover we make explicit the link between these oscillatory functions and the geometry of the Julia set of the map, thus making rigorous and quantitative some ideas set forth in Derrida et al. (Commun. Math. Phys. 94:115-132, 1984).

  12. Immediate Judgments of Learning are Insensitive to Implicit Interference Effects at Retrieval

    PubMed Central

    Eakin, Deborah K.; Hertzog, Christopher

    2013-01-01

    We conducted three experiments to determine whether metamemory predictions at encoding, immediate judgments of learning (IJOLs) are sensitive to implicit interference effects that will occur at retrieval. Implicit interference was manipulated by varying the association set size of the cue (Exps. 1 & 2) or the target (Exp. 3). The typical finding is that memory is worse for large-set-size cues and targets, but only when the target is studied alone and later prompted with a related cue (extralist). When the pairs are studied together (intralist), recall is the same regardless of set size; set-size effects are eliminated. Metamemory predictions at retrieval, such as delayed JOLs (DJOLs) and feeling of knowing (FOK) judgments accurately reflect implicit interference effects (e.g., Eakin & Hertzog, 2006). In Experiment 1, we contrasted cue-set-size effects on IJOLs, DJOLs, and FOKs. After wrangling with an interesting methodological conundrum related to set size effects (Exp. 2), we found that whereas DJOLs and FOKs accurately predicted set size effects on retrieval, a comparison between IJOLs and no-cue IJOLs demonstrated that immediate judgments did not vary with set size. In Experiment 3, we confirmed this finding by manipulating target set size. Again, IJOLs did not vary with set size whereas DJOLs and FOKs did. The findings provide further evidence for the inferential view regarding the source of metamemory predictions, as well as indicate that inferences are based on different sources depending on when in the memory process predictions are made. PMID:21915761

  13. Evidence from Meteorites for Multiple Possible Amino Acid Alphabets for the Origins of Life

    NASA Technical Reports Server (NTRS)

    Burton, A. S.; Elsila, J. E.; Callahan, M. P.; Glavin, D. P.; Dworkin, J. P.

    2015-01-01

    A key question for the origins of life is understanding which amino acids made up the first proteins synthesized during the origins of life. The canonical set of 20-22 amino acids used in proteins are all alpha-amino, alpha-hydrogen isomers that, nevertheless, show considerable variability in properties including size, hydrophobicity, and ionizability. Abiotic amino acid synthesis experiments such as Miller-Urey spark discharge reactions produce a set of up to 23 amino acids, depending on starting materials and reaction conditions, with significant abundances of both alpha- and non-alpha-amino acid isomers. These two sets of amino acids do not completely overlap; of the 23 spark discharge amino acids, only 11 are used in modern proteins. Furthermore, because our understanding of conditions on the early Earth is limited, it is unclear which set(s) of conditions employed in spark discharge or hydrothermal reactions are correct, leaving us with significant uncertainty about the amino acid alphabet available for the origins of life on Earth. Meteorites, the surviving remnants of asteroids and comets that fall to the Earth, offer the potential to study authentic samples of naturally-occurring abiotic chemistry, and thus can provide an alternative approach to constraining the amino acid library during the origins of life.

  14. Liquid–Liquid Mixing Studies in Annular Centrifugal Contactors Comparing Stationary Mixing Vane Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardle, Kent E.

    2015-11-10

    Comparative studies of multiphase operation of annular centrifugal contactors show the impact of housing stationary mixing vane configuration. A number of experimental results for several different mixing vane options are reported with selected measurements in a lab-scale 5 cm contactor and 12.5 cm engineering-scale unit. Fewer straight vanes give greater mixing-zone hold-up compared to curved vanes. Quantitative comparison of droplet size distribution also showed a significant decrease in mean diameter for four straight vanes versus eight curved vanes. This set of measurements gives a compelling case for careful consideration of mixing vane geometry when evaluating hydraulic operation and extraction process efficiency of annular centrifugal contactors.

  15. Analysis of survival data from telemetry projects

    USGS Publications Warehouse

    Bunck, C.M.; Winterstein, S.R.; Pollock, K.H.

    1985-01-01

    Telemetry techniques can be used to study the survival rates of animal populations and are particularly suitable for species or settings for which band recovery models are not. Statistical methods for estimating survival rates and parameters of survival distributions from observations of radio-tagged animals will be described. These methods have been applied to medical and engineering studies and to the study of nest success. Estimates and tests based on discrete models, originally introduced by Mayfield, and on continuous models, both parametric and nonparametric, will be described. Generalizations, including staggered entry of subjects into the study and identification of mortality factors will be considered. Additional discussion topics will include sample size considerations, relocation frequency for subjects, and use of covariates.
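
    One of the discrete (Mayfield-type) methods mentioned can be sketched in a few lines: daily survival is estimated as one minus deaths per exposure-day and then raised to the length of the period of interest. The figures below are hypothetical.

```python
# Hedged sketch of a Mayfield-style (discrete) survival estimate from hypothetical
# radio-tracking records: daily survival = 1 - deaths / exposure-days, raised to
# the length of the period of interest.
exposure_days = 1_500    # total days radio-tagged animals were monitored and at risk
deaths = 3               # mortalities observed during those exposure-days

daily_survival = 1.0 - deaths / exposure_days
annual_survival = daily_survival ** 365
print(f"daily survival {daily_survival:.4f}, implied annual survival {annual_survival:.2f}")
```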

  16. Liquid–liquid mixing studies in annular centrifugal contactors comparing stationary mixing vane options

    DOE PAGES

    Wardle, Kent E.

    2015-09-11

    Comparative studies of multiphase operation of an annular centrifugal contactor show the impact of housing stationary mixing vane configuration. A number of experimental results for several different mixing vane options are reported for operation of a 12.5 cm engineering-scale contactor unit. Fewer straight vanes give greater mixing-zone hold-up compared to curved vanes. Quantitative comparison of droplet size distribution also showed a significant decrease in mean diameter for four straight vanes versus eight curved vanes. This set of measurements gives a compelling case for careful consideration of mixing vane geometry when evaluating hydraulic operation and extraction process efficiency of annular centrifugal contactors.

  17. Selecting Question-Specific Genes to Reduce Incongruence in Phylogenomics: A Case Study of Jawed Vertebrate Backbone Phylogeny.

    PubMed

    Chen, Meng-Yun; Liang, Dan; Zhang, Peng

    2015-11-01

    Incongruence between different phylogenomic analyses is the main challenge faced by phylogeneticists in the genomic era. To reduce incongruence, phylogenomic studies normally adopt some data filtering approaches, such as reducing missing data or using slowly evolving genes, to improve the signal quality of data. Here, we assembled a phylogenomic data set of 58 jawed vertebrate taxa and 4682 genes to investigate the backbone phylogeny of jawed vertebrates under both concatenation and coalescent-based frameworks. To evaluate the efficiency of extracting phylogenetic signals among different data filtering methods, we chose six highly intractable internodes within the backbone phylogeny of jawed vertebrates as our test questions. We found that our phylogenomic data set exhibits substantial conflicting signal among genes for these questions. Our analyses showed that non-specific data sets that are generated without bias toward specific questions are not sufficient to produce consistent results when there are several difficult nodes within a phylogeny. Moreover, phylogenetic accuracy based on non-specific data is considerably influenced by the size of data and the choice of tree inference methods. To address such incongruences, we selected genes that resolve a given internode but not the entire phylogeny. Notably, not only can this strategy yield correct relationships for the question, but it also reduces inconsistency associated with data sizes and inference methods. Our study highlights the importance of gene selection in phylogenomic analyses, suggesting that simply using a large amount of data cannot guarantee correct results. Constructing question-specific data sets may be more powerful for resolving problematic nodes. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. 28 CFR 2.15 - Petition for consideration of parole prior to date set at hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... prior to date set at hearing. 2.15 Section 2.15 Judicial Administration DEPARTMENT OF JUSTICE PAROLE... hearing. When a prisoner has served the minimum term of imprisonment required by law, the Bureau of... consideration for parole prior to the date set by the Commission at the initial or review hearing. The petition...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.

    In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric–isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.

  20. Peptide Functionalized Gold Nanorods for the Sensitive Detection of a Cardiac Biomarker Using Plasmonic Paper Devices.

    PubMed

    Tadepalli, Sirimuvva; Kuang, Zhifeng; Jiang, Qisheng; Liu, Keng-Ku; Fisher, Marilee A; Morrissey, Jeremiah J; Kharasch, Evan D; Slocik, Joseph M; Naik, Rajesh R; Singamaneni, Srikanth

    2015-11-10

    The sensitivity of localized surface plasmon resonance (LSPR) of metal nanostructures to adsorbates lends itself to a powerful class of label-free biosensors. Optical properties of plasmonic nanostructures are dependent on the geometrical features and the local dielectric environment. The exponential decay of the sensitivity from the surface of the plasmonic nanotransducer calls for careful consideration in its design, with particular attention to the size of the recognition and analyte layers. In this study, we demonstrate that short peptides as biorecognition elements (BRE) offer several advantages compared with larger antibodies as target capture agents. Using a bioplasmonic paper device (BPD), we demonstrate the selective and sensitive detection of the cardiac biomarker troponin I (cTnI). The smaller sized peptide provides higher sensitivity and a lower detection limit using a BPD. Furthermore, the excellent shelf-life and thermal stability of peptide-based LSPR sensors, which preclude the need for special storage conditions, make them ideal for use in resource-limited settings.
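
    To make the distance argument concrete, a simple and widely used approximation (background only, not taken from the cited paper) relates the LSPR response to the thickness of the recognition-plus-analyte layer relative to the plasmonic decay length:

    ```latex
    % Common single-exponential model of the LSPR refractometric response.
    % m = bulk refractive-index sensitivity, n_layer and n_medium = refractive
    % indices of the adsorbate layer and the surrounding medium, d = layer
    % thickness, l_d = decay length of the plasmonic near field.
    \[
      \Delta\lambda \approx m \,(n_{\mathrm{layer}} - n_{\mathrm{medium}})
      \left[ 1 - e^{-2d/l_{d}} \right]
    \]
    % Because l_d is only tens of nanometres, a compact peptide recognition
    % layer leaves more of the decaying field available to sense the bound
    % analyte than a full-size antibody layer would.
    ```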

  1. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
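
    As a rough illustration of the kind of criterion involved, the sketch below applies an average length criterion to the simplest possible design: a single test with known sensitivity and specificity and a Beta prior on prevalence. The cited study instead handles combinations of several imperfect tests with priors on their accuracy, so all numbers and helper names here are placeholders.

    ```python
    # Minimal sketch of an average length criterion (ALC) for a prevalence
    # study using ONE imperfect test with known accuracy; illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    SENS, SPEC = 0.80, 0.95          # assumed (known) test accuracy
    PRIOR_A, PRIOR_B = 2.0, 8.0      # Beta prior on the prevalence
    GRID = np.linspace(0.001, 0.999, 999)
    LOG_PRIOR = (PRIOR_A - 1) * np.log(GRID) + (PRIOR_B - 1) * np.log(1 - GRID)

    def interval_length(n, y, cred=0.95):
        """Length of the central credible interval for prevalence (grid posterior)."""
        p_pos = GRID * SENS + (1 - GRID) * (1 - SPEC)   # P(test positive | prevalence)
        log_post = LOG_PRIOR + y * np.log(p_pos) + (n - y) * np.log(1 - p_pos)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        cdf = np.cumsum(post)
        lo = GRID[np.searchsorted(cdf, (1 - cred) / 2)]
        hi = GRID[np.searchsorted(cdf, 1 - (1 - cred) / 2)]
        return hi - lo

    def average_length(n, sims=500):
        """Average interval length over data sets drawn from the prior predictive."""
        lengths = []
        for _ in range(sims):
            pi = rng.beta(PRIOR_A, PRIOR_B)
            y = rng.binomial(n, pi * SENS + (1 - pi) * (1 - SPEC))
            lengths.append(interval_length(n, y))
        return float(np.mean(lengths))

    # Pick the smallest n whose average interval length meets a target precision;
    # multiplying n by a per-subject testing cost gives the total cost to compare.
    for n in (50, 100, 200, 400, 800):
        print(n, round(average_length(n), 3))
    ```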

  2. Binary mesh partitioning for cache-efficient visualization.

    PubMed

    Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno

    2010-01-01

    One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take into consideration the memory hierarchy to design cache efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance over previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces fewer than N/B + O(N/M^{1/d}) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.

  3. The contribution of stimulus frequency and recency to set-size effects.

    PubMed

    van 't Wout, Félice

    2018-06-01

    Hick's law describes the increase in choice reaction time (RT) with the number of stimulus-response (S-R) mappings. However, in choice RT experiments, set-size is typically confounded with stimulus recency and frequency: With a smaller set-size, each stimulus occurs on average more frequently and more recently than with a larger set-size. To determine to what extent stimulus recency and frequency contribute to the set-size effect, stimulus set-size was manipulated independently of stimulus recency and frequency, by keeping recency and frequency constant for a subset of the stimuli. Although this substantially reduced the set-size effect (by approximately two-thirds for these stimuli), it did not eliminate it. Thus, the time required to retrieve an S-R mapping from memory is (at least in part) determined by the number of alternatives. In contrast, a recent task switching study (Van 't Wout et al. in Journal of Experimental Psychology: Learning, Memory & Cognition., 41, 363-376, 2015) using the same manipulation found that the time required to retrieve a task-set from memory is not influenced by the number of alternatives per se. Hence, this experiment further supports a distinction between two levels of representation in task-set control: The level of task-sets, and the level of S-R mappings.
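
    For readers unfamiliar with the law itself, one common formulation (background only, not a result of the cited experiment) is:

    ```latex
    % Common statement of Hick's law: a = base (non-decision) time, b = time
    % cost per bit of stimulus uncertainty, n = number of equally likely
    % stimulus-response alternatives; the "+1" reflects uncertainty about
    % whether any stimulus will appear at all.
    \[
      \mathrm{RT}(n) = a + b \,\log_{2}(n + 1)
    \]
    ```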

  4. Three Years of TRMM Precipitation Features. Part 1; Radar, Radiometric, and Lightning Characteristics

    NASA Technical Reports Server (NTRS)

    Cecil, Daniel J.; Goodman, Steven J.; Boccippio, Dennis J.; Zipser, Edward J.; Nesbitt, Stephen W.

    2004-01-01

    During its first three years, the Tropical Rainfall Measuring Mission (TRMM) satellite observed nearly six million precipitation features. The population of precipitation features is sorted by lightning flash rate, minimum brightness temperature, maximum radar reflectivity, areal extent, and volumetric rainfall. For each of these characteristics, essentially describing the convective intensity or the size of the features, the population is broken into categories consisting of the top 0.001%, top 0.01%, top 0.1%, top 1%, top 2.4%, and remaining 97.6%. The set of 'weakest/smallest' features comprises 97.6% of the population because that fraction does not have detected lightning, with a minimum detectable flash rate of 0.7 fl/min. The greatest observed flash rate is 1351 fl/min; the lowest brightness temperatures are 42 K (85-GHz) and 69 K (37-GHz). The largest precipitation feature covers 335,000 sq km and the greatest rainfall from an individual precipitation feature exceeds 2 × 10^12 kg of water. There is considerable overlap between the greatest storms according to different measures of convective intensity. The largest storms are mostly independent of the most intense storms. The set of storms producing the most rainfall is a convolution of the largest and the most intense storms. This analysis is a composite of the global tropics and subtropics. Significant variability is known to exist between locations, seasons, and meteorological regimes. Such variability will be examined in Part II. In Part I, only a crude land/ocean separation is made. The known differences in bulk lightning flash rates over land and ocean result from at least two differences in the precipitation feature population: the frequency of occurrence of intense storms, and the magnitude of those intense storms that do occur. Even when restricted to storms with the same brightness temperature, same size, or same radar reflectivity aloft, the storms over water are considerably less likely to produce lightning than are comparable storms over land.

  5. Three Years of TRMM Precipitation Features. Part 1; Radar, Radiometric, and Lightning Characteristics

    NASA Technical Reports Server (NTRS)

    Cecil, Daniel J.; Goodman, Steven J.; Boccippio, Dennis J.; Zipser, Edward J.; Nesbitt, Stephen W.

    2005-01-01

    During its first three years, the Tropical Rainfall Measuring Mission (TRMM) satellite observed nearly six million precipitation features. The population of precipitation features is sorted by lightning flash rate, minimum brightness temperature, maximum radar reflectivity, areal extent, and volumetric rainfall. For each of these characteristics, essentially describing the convective intensity or the size of the features, the population is broken into categories consisting of the top 0.001%, top 0.01%, top 0.1%, top 1%, top 2.4%, and remaining 97.6%. The set of weakest/smallest features composes 97.6% of the population because that fraction does not have detected lightning, with a minimum detectable flash rate of 0.7 flashes (fl) per minute. The greatest observed flash rate is 1351 fl per minute; the lowest brightness temperatures are 42 K (85 GHz) and 69 K (37 GHz). The largest precipitation feature covers 335 000 square kilometers and the greatest rainfall from an individual precipitation feature exceeds 2 × 10^12 kg of water. There is considerable overlap between the greatest storms according to different measures of convective intensity. The largest storms are mostly independent of the most intense storms. The set of storms producing the most rainfall is a convolution of the largest and the most intense storms. This analysis is a composite of the global Tropics and subtropics. Significant variability is known to exist between locations, seasons, and meteorological regimes. Such variability will be examined in Part II. In Part I, only a crude land-ocean separation is made. The known differences in bulk lightning flash rates over land and ocean result from at least two differences in the precipitation feature population: the frequency of occurrence of intense storms and the magnitude of those intense storms that do occur. Even when restricted to storms with the same brightness temperature, same size, or same radar reflectivity aloft, the storms over water are considerably less likely to produce lightning than are comparable storms over land.

  6. Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

    PubMed Central

    Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.

    2012-01-01

    Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460

  7. Reconstruction of biofilm images: combining local and global structural parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk

    2014-10-20

    Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.

  8. Education of eye health professionals to meet the needs of the Pacific.

    PubMed

    du Toit, Renee; Brian, Garry; Palagyi, Anna; Williams, Carmel; Ramke, Jacqueline

    2009-03-13

    Vision impairment has significant impact on quality of life and substantial economic consequences. Yet, in the Pacific Islands, as in other low resource settings, it is predominantly caused by chronic conditions that can be treated or prevented. A whole of health approach is required to rectify this, and must include an increase in workforce capacity, both in size and effectiveness, by providing competency-based education for eye care professionals. Training in curative clinical skills is not sufficient: broader competencies--including those for chronic conditions, issues of care quality, integration into the wider health care system, and commitment to professionalism and life-long learning--need to be addressed. Using current best practice approaches in education, and taking into consideration local needs, The Pacific Eye Institute, an initiative of The Fred Hollows Foundation New Zealand, aims to produce graduates with these core competencies who are capable of effectively and acceptably working in community or hospital settings to provide sustainable high quality, comprehensive eye care with ongoing desirable and consistent eye health outcomes.

  9. Thermal Inspection of a Composite Fuselage Section Using a Fixed Eigenvector Principal Component Analysis Method

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Bolduc, Sean; Harman, Rebecca

    2017-01-01

    A composite fuselage aircraft forward section was inspected with flash thermography. The fuselage section is 24 feet long and approximately 8 feet in diameter. The structure is primarily configured with a composite sandwich structure of carbon fiber face sheets with a Nomex(Trademark) honeycomb core. The outer surface area was inspected. The thermal data consisted of 477 data sets totaling over 227 gigabytes. Principal component analysis (PCA) was used to process the data sets for substructure and defect detection. A fixed eigenvector approach using a global covariance matrix was used and compared to a varying eigenvector approach. The fixed eigenvector approach was demonstrated to be a practical analysis method for the detection and interpretation of various defects such as paint thickness variation, possible water intrusion damage, and delamination damage. In addition, inspection considerations are discussed including coordinate system layout, manipulation of the fuselage section, and the manual scanning technique used for full coverage.
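
    The sketch below illustrates the general idea of a fixed-eigenvector PCA, in which one global covariance matrix supplies temporal eigenvectors that are reused for every inspection data set rather than recomputed per set; the array shapes, synthetic data, and helper names are hypothetical and do not reproduce the authors' processing chain.

    ```python
    # Fixed-eigenvector PCA sketch for flash-thermography sequences: estimate a
    # single (global) frame-by-frame covariance from training sequences with the
    # same frame count, then project every new sequence onto those eigenvectors.
    import numpy as np

    def global_eigenvectors(training_sequences, n_components=5):
        """Average the temporal covariance over several (frames x H x W)
        sequences and keep the leading eigenvectors of that global matrix."""
        covs = []
        for seq in training_sequences:
            flat = seq.reshape(seq.shape[0], -1)
            flat = flat - flat.mean(axis=1, keepdims=True)
            covs.append(flat @ flat.T / flat.shape[1])
        cov = np.mean(covs, axis=0)
        vals, vecs = np.linalg.eigh(cov)
        order = np.argsort(vals)[::-1][:n_components]
        return vecs[:, order]                       # fixed temporal eigenvectors

    def project_sequence(seq, eigvecs):
        """Project a sequence onto the fixed eigenvectors, giving component
        images that can be screened for substructure and defect indications."""
        flat = seq.reshape(seq.shape[0], -1)
        flat = flat - flat.mean(axis=1, keepdims=True)
        comps = eigvecs.T @ flat                    # (n_components x pixels)
        return comps.reshape(-1, *seq.shape[1:])

    # Usage with synthetic stand-in data (60 frames of 64 x 64 pixels):
    train = [np.random.rand(60, 64, 64) for _ in range(3)]
    E = global_eigenvectors(train)
    component_images = project_sequence(np.random.rand(60, 64, 64), E)
    print(component_images.shape)                   # (5, 64, 64)
    ```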

  10. An aftereffect of adaptation to mean size

    PubMed Central

    Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David

    2013-01-01

    The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is adaptable and is therefore an explicitly encoded dimension of visual scenes. PMID:24348083

  11. Approximations to complete basis set-extrapolated, highly correlated non-covalent interaction energies.

    PubMed

    Mackie, Iain D; DiLabio, Gino A

    2011-10-07

    The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster single double, coupled-cluster single double (triple) (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)∕aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)∕aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Use of this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics
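
    The two ingredients described above, counterpoise/non-counterpoise averaging and a two-point extrapolation, can be sketched as follows; the X^-3 formula is the standard Helgaker-type expression commonly used for correlation energies, and the energies below are placeholders rather than values from the paper.

    ```python
    # Sketch of a two-point basis-set extrapolation combined with averaging of
    # counterpoise (CP) and non-CP corrected binding energies (illustrative).

    def cbs_two_point(e_small, e_large, x_small=3, x_large=4):
        """Two-point extrapolation assuming E(X) = E_CBS + A / X**3,
        e.g. aug-cc-pVTZ (X=3) and aug-cc-pVQZ (X=4) energies."""
        return (x_large**3 * e_large - x_small**3 * e_small) / (x_large**3 - x_small**3)

    def averaged_binding_energy(cp_corrected, uncorrected):
        """Average CP and non-CP binding energies, which the study reports
        converges toward the basis-set limit faster than either alone."""
        return 0.5 * (cp_corrected + uncorrected)

    # Hypothetical binding energies (kcal/mol) for one dimer:
    e_tz = averaged_binding_energy(-2.95, -3.25)   # aug-cc-pVTZ, CP and non-CP
    e_qz = averaged_binding_energy(-3.02, -3.18)   # aug-cc-pVQZ, CP and non-CP
    print("CBS-extrapolated estimate:", round(cbs_two_point(e_tz, e_qz), 3))
    ```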

  12. Ranked set sampling: cost and optimal set size.

    PubMed

    Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying

    2002-12-01

    McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
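
    A minimal simulation of balanced RSS against simple random sampling makes the set-size trade-off tangible; the set size, cycle count, and ranking-error model below are illustrative choices and do not implement the paper's cost models.

    ```python
    # Balanced ranked set sampling (RSS) versus simple random sampling (SRS)
    # for estimating a population mean, compared by Monte Carlo variance.
    import numpy as np

    rng = np.random.default_rng(1)

    def rss_mean(population, set_size, cycles, rank_noise=0.0):
        """One balanced RSS estimate: per cycle, draw set_size sets of set_size
        units, judgement-rank each set, measure the i-th ranked unit of set i."""
        measurements = []
        for _ in range(cycles):
            for i in range(set_size):
                units = rng.choice(population, size=set_size, replace=False)
                # judgement ranking = true value + noise (rank_noise=0 is perfect)
                judged = units + rng.normal(0.0, rank_noise, size=set_size)
                measurements.append(units[np.argsort(judged)[i]])
        return np.mean(measurements)

    pop = rng.normal(50.0, 10.0, size=10_000)
    k, m, reps = 4, 5, 2000                      # set size, cycles, replications
    n = k * m                                    # measured units per estimate

    rss_est = [rss_mean(pop, k, m) for _ in range(reps)]
    srs_est = [np.mean(rng.choice(pop, size=n, replace=False)) for _ in range(reps)]
    print("var(RSS) / var(SRS):", np.var(rss_est) / np.var(srs_est))  # < 1 expected
    ```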

  13. Comparisons of auction mechanisms in a multiple unit setting: A consideration for restructuring electric power markets

    NASA Astrophysics Data System (ADS)

    Bernard, John Charles

    The objective of this study was to compare the performance of five single-sided auctions that could be used in restructured electric power markets across different market sizes in a multiple unit setting. Auction selection would profoundly influence an industry over $200 billion in size in the United States, and the consequences of implementing an inappropriate mechanism would be great. Experimental methods were selected to analyze the auctions. Two rounds of experiments were conducted, the first testing the sealed offer last accepted offer (LAO) and first rejected offer (FRO), and the clock English (ENG) and sealed offer English (SOE) in markets of sizes two and six. The FRO, SOE, and ENG used the same pricing rule. Second round testing was on the LAO, FRO, and the nonuniform price multiple unit Vickrey (MUV) in markets of sizes two, four, and six. Experiments lasted 23 and 75 periods for rounds 1 and 2 respectively. Analysis of variance and contrast analysis were used to examine the data. The four performance measures used were price, efficiency, profits per unit, and supply revelation. Five basic principles were also assessed: no sales at losses, all low cost capacity should be offered and sold, no high cost capacity should sell, and the market should clear. It was expected that group size and auction type would affect performance. For all performance measures, group size was a significant variable, with smaller groups showing poorer performance. Auction type was significant only for the efficiency performance measure, where clock auctions outperformed the others. Clock auctions also proved superior for the first four principles. The FRO performed poorly in almost all situations, and should not be a preferred mechanism in any market. The ENG was highly efficient, but expensive for the buyer. The SOE appeared superior to the FRO and ENG: the clock format improved efficiency relative to the FRO, while the more limited information revelation kept prices below those under the ENG. The MUV was superior in revealing costs, but performed less well in other categories. While concerns existed for all the mechanisms investigated, the commonly proposed LAO was the best option for restructured electric power markets.

  14. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  15. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  16. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  17. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  18. 46 CFR 160.054-2 - Type and size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Inflatable Liferafts § 160.054-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight type... special consideration. (b) Size. First-aid kits shall be of a size adequate for packing 12 standard single...

  19. Microbiological testing of Skylab foods.

    NASA Technical Reports Server (NTRS)

    Heidelbaugh, N. D.; Mcqueen, J. L.; Rowley, D. B.; Powers, E. M.; Bourland, C. T.

    1973-01-01

    Review of some of the unique food microbiology problems and problem-generating circumstances the Skylab manned space flight program involves. The situations these problems arise from include: extended storage times, variations in storage temperatures, no opportunity to resupply or change foods after launch of the Skylab Workshop, first use of frozen foods in space, first use of a food-warming device in weightlessness, relatively small size of production lots requiring statistically valid sampling plans, and use of food as an accurately controlled part in a set of sophisticated life science experiments. Consideration of all of these situations produced the need for definite microbiological tests and test limits. These tests are described along with the rationale for their selection. Reported test results show good compliance with the test limits.

  20. Using mobile location data in biomedical research while preserving privacy.

    PubMed

    Goldenholz, Daniel M; Goldenholz, Shira R; Krishnamurthy, Kaarkuzhali B; Halamka, John; Karp, Barbara; Tyburski, Matthew; Wendler, David; Moss, Robert; Preston, Kenzie L; Theodore, William

    2018-06-07

    Location data are becoming easier to obtain and are now bundled with other metadata in a variety of biomedical research applications. At the same time, the level of sophistication required to protect patient privacy is also increasing. In this article, we provide guidance for institutional review boards (IRBs) to make informed decisions about privacy protections in protocols involving location data. We provide an overview of some of the major categories of technical algorithms and medical-legal tools at the disposal of investigators, as well as the shortcomings of each. Although there is no "one size fits all" approach to privacy protection, this article attempts to describe a set of practical considerations that can be used by investigators, journal editors, and IRBs.

  1. User Interface Considerations for Collecting Data at the Point of Care in the Tablet PC Computing Environment

    PubMed Central

    Silvey, Garry M.; Lobach, David F.; Macri, Jennifer M.; Hunt, Megan; Kacmaz, Roje O.; Lee, Paul P.

    2006-01-01

    Collecting clinical data directly from clinicians is a challenge. Many standard development environments designed to expedite the creation of user interfaces for electronic healthcare applications do not provide acceptable components for satisfying the requirements for collecting and displaying clinical data at the point of care on the tablet computer. Through an iterative design and testing approach using think-aloud sessions in the eye care setting, we were able to identify and resolve several user interface issues. Issues that we discovered and subsequently resolved included checkboxes that were too small to be selectable with a stylus, radio buttons that could not be unselected, and font sizes that were too small to be read at arm’s length. PMID:17238715

  2. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  3. Performance Comparison of a Set of Periodic and Non-Periodic Tridiagonal Solvers on SP2 and Paragon Parallel Computers

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Moitra, Stuti

    1996-01-01

    Various tridiagonal solvers have been proposed in recent years for different parallel platforms. In this paper, the performance of three tridiagonal solvers, namely, the parallel partition LU algorithm, the parallel diagonal dominant algorithm, and the reduced diagonal dominant algorithm, is studied. These algorithms are designed for distributed-memory machines and are tested on Intel Paragon and IBM SP2 machines. Measured results are reported in terms of execution time and speedup. Analytical studies are conducted for different communication topologies and for different tridiagonal systems. The measured results match the analytical results closely. In addition to addressing implementation issues, performance considerations such as problem sizes and models of speedup are also discussed.
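
    For orientation, the sequential baseline that such parallel solvers decompose across processors is the Thomas algorithm; the sketch below shows only this serial case, not the partition or diagonal-dominant variants benchmarked in the paper.

    ```python
    # Serial Thomas algorithm for a tridiagonal system (baseline, not the
    # parallel algorithms studied in the paper).
    import numpy as np

    def thomas_solve(lower, diag, upper, rhs):
        """Solve a tridiagonal system; lower[0] and upper[-1] are unused.
        Assumes a well-conditioned (e.g. diagonally dominant) system."""
        n = len(diag)
        a = np.array(lower, dtype=float)
        b = np.array(diag, dtype=float)
        c = np.array(upper, dtype=float)
        d = np.array(rhs, dtype=float)
        for i in range(1, n):                    # forward elimination
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        x = np.empty(n)
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):           # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

    # Quick check on a small diagonally dominant system:
    n = 6
    lo, di, up = np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0)
    rhs = np.arange(1.0, n + 1)
    x = thomas_solve(lo, di, up, rhs)
    A = np.diag(di) + np.diag(lo[1:], -1) + np.diag(up[:-1], 1)
    print(np.allclose(A @ x, rhs))               # expect True
    ```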

  4. Construction and application of a new dual-hybrid random phase approximation.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Kállay, Mihály

    2015-10-13

    The direct random phase approximation (dRPA) combined with Kohn-Sham reference orbitals is among the most promising tools in computational chemistry and applicable in many areas of chemistry and physics. The reason for this is that it scales as N^4 with the system size, which is a considerable advantage over the accurate ab initio wave function methods like standard coupled-cluster. dRPA also yields a considerably more accurate description of thermodynamic and electronic properties than standard density-functional theory methods. It is also able to describe strong static electron correlation effects even in large systems with a small or vanishing band gap missed by common single-reference methods. However, dRPA has several flaws due to its self-correlation error. In order to obtain accurate and precise reaction energies, barriers and noncovalent intra- and intermolecular interactions, we construct a new dual-hybrid dRPA (hybridization of exact and semilocal exchange in both the energy and the orbitals) and test the performance of this new functional on isogyric, isodesmic, hypohomodesmotic, homodesmotic, and hyperhomodesmotic reaction classes. We also use a test set of 14 Diels-Alder reactions, six atomization energies (AE6), 38 hydrocarbon atomization energies, and 100 reaction barrier heights (DBH24, HT-BH38, and NHT-BH38). For noncovalent complexes, we use the NCCE31 and S22 test sets. To test the intramolecular interactions, we use a set of alkane, cysteine, phenylalanine-glycine-glycine tripeptide, and monosaccharide conformers. We also discuss the delocalization and static correlation errors. We show that a universally accurate description of chemical properties can be provided by a large, 75% exact exchange mixing both in the calculation of the reference orbitals and the final energy.

  5. Measuring Spray Droplet Size from Agricultural Nozzles Using Laser Diffraction

    PubMed Central

    Fritz, Bradley K.; Hoffmann, W. Clint

    2016-01-01

    When making an application of any crop protection material such as an herbicide or pesticide, the applicator uses a variety of skills and information to make an application so that the material reaches the target site (i.e., plant). Information critical in this process is the droplet size that a particular spray nozzle, spray pressure, and spray solution combination generates, as droplet size greatly influences product efficacy and how the spray moves through the environment. Researchers and product manufacturers commonly use laser diffraction equipment to measure the spray droplet size in laboratory wind tunnels. The work presented here describes methods used in making spray droplet size measurements with laser diffraction equipment for both ground and aerial application scenarios that can be used to ensure inter- and intra-laboratory precision while minimizing sampling bias associated with laser diffraction systems. Maintaining critical measurement distances and concurrent airflow throughout the testing process is key to this precision. Real time data quality analysis is also critical to preventing excess variation in the data or extraneous inclusion of erroneous data. Some limitations of this method include atypical spray nozzles, spray solutions or application conditions that result in spray streams that do not fully atomize within the measurement distances discussed. Successful adaption of this method can provide a highly efficient method for evaluation of the performance of agrochemical spray application nozzles under a variety of operational settings. Also discussed are potential experimental design considerations that can be included to enhance functionality of the data collected. PMID:27684589

  6. How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants.

    PubMed

    Chapman, Simon N; Mumby, Hannah S; Crawley, Jennie A H; Mar, Khyne U; Htut, Win; Thura Soe, Aung; Aung, Htoo Htoo; Lummaa, Virpi

    2016-01-01

    Information on an organism's body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings.

  7. How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants

    PubMed Central

    Chapman, Simon N.; Mumby, Hannah S.; Crawley, Jennie A. H.; Mar, Khyne U.; Htut, Win; Thura Soe, Aung; Aung, Htoo Htoo; Lummaa, Virpi

    2016-01-01

    Information on an organism’s body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings. PMID:26938085

  8. Impact of wave action on the structure of material on the beach in Calypsobyen (Spitsbergen)

    NASA Astrophysics Data System (ADS)

    Mędrek, Karolina; Herman, Agnieszka; Moskalik, Mateusz; Rodzik, Jan; Zagórski, Piotr

    2015-04-01

    The research was conducted during the XXVI Polar Expedition of Maria Curie-Sklodowska University in Lublin on Spitsbergen. It involved recording water wave action in the Bellsund Strait, and taking daily photographs of the beach on its shore in Calypsobyen. The base of polar expeditions of UMCS, Calypsobyen, is located on the coast of Calypsostranda, developed by raised marine terraces. Weakly resistant Tertiary sandstones occur in the substrate, covered with glacigenic sediments and marine gravels. No skerries are encountered along this section of the accumulation coast. The shore is dominated by gravel deposits. The bottom slopes gently. The recording of wave action was performed from 8 July to 27 August 2014 by means of a pressure-based MIDAS WTR Wave and Tide Recorder set at a depth of 10 m at a distance of about 1 km from the shore. The obtained data provided the basis for the calculation of the significant wave height and the corresponding mean wave period. These parameters reflect wave energy and wave level, having a considerable impact on the dynamics of coastal processes and the type and grain size of sediments accumulated on the beach. Material consisting of medium gravel and seaweed appeared on the beach at high values of significant wave height and when the corresponding mean wave period showed average values. The contribution of fine, gravel-sandy material grew with an increase in mean period and a decrease in significant wave height. At maximum values of mean period and low values of significant wave height, the beach was dominated by well-sorted fine-grained gravel. The lowest mean periods resulted in the least degree of sorting of the sediment (from very coarse sand to medium gravel). The analysis of data from the wave and tide recorder set and their comparison with photographs of the beach suggest that wave action, and particularly wave energy manifested in significant wave height, has a considerable impact on the type and grain size of material occurring on the shore of the fjord. The mean period is mainly responsible for sorting out the sediment, and the size of gravels is associated with significant wave height. Project of National Science Centre no. DEC-2013/09/B/ST10/04141.

  9. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    PubMed

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very large scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using the reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate this threshold using continuous Lyapunov exponents from dynamical system theory. Furthermore, we render the diffusion in the system to be anisotropic, with the degree of anisotropy being set by the gradients of grayscale values in each image. The final step involves a simplification of the model that is achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good if not better than that of the previous methods without increasing the size of the network.
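
    A toy version of the underlying idea, diffusively coupled FitzHugh-Nagumo units driven by pixel intensities, is sketched below; the parameters, input scaling, and isotropic coupling are illustrative choices and do not reproduce the simplified anisotropic model developed in the paper.

    ```python
    # Grid of diffusively coupled FitzHugh-Nagumo units, one per pixel, with the
    # image intensity as input current; strong spatial changes in the settled
    # membrane potential serve as a crude edge map (illustration only).
    import numpy as np

    def laplacian(v):
        """4-neighbour Laplacian with replicated (zero-flux) boundaries."""
        p = np.pad(v, 1, mode="edge")
        return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * v

    def fhn_response(image, steps=400, dt=0.05, D=0.2, eps=0.08, a=0.7, b=0.8):
        """Integrate dv/dt = v - v^3/3 - w + I + D*lap(v), dw/dt = eps*(v + a - b*w)
        with the rescaled image as the input current I (explicit Euler)."""
        I = 1.2 * (image - image.mean()) / (image.std() + 1e-9)
        v = np.zeros_like(image, dtype=float)
        w = np.zeros_like(image, dtype=float)
        for _ in range(steps):
            dv = v - v**3 / 3.0 - w + I + D * laplacian(v)
            dw = eps * (v + a - b * w)
            v, w = v + dt * dv, w + dt * dw
        return v

    # Synthetic binary image with a square; flag pixels with large local changes.
    img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
    v = fhn_response(img)
    edges = np.abs(laplacian(v)) > 0.5 * np.abs(laplacian(v)).max()
    print(edges.sum(), "pixels flagged as edge-like")
    ```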

  10. Cryo-comminution of plastic waste.

    PubMed

    Gente, Vincenzo; La Marca, Floriana; Lucci, Federica; Massacci, Paolo; Pani, Eleonora

    2004-01-01

    Recycling of plastics is a big issue in terms of environmental sustainability and of waste management. The development of proper technologies for plastic recycling is recognised as a priority. To achieve this aim, the technologies applied in mineral processing can be adapted to recycling systems. In particular, the improvement of comminution technologies is one of the main actions to improve the quality of recycled plastics. The aim of this work is to point out suitable comminution processes for different types of plastic waste. Laboratory comminution tests have been carried out under different conditions of temperature and sample pre-conditioning, adopting CO2 and liquid nitrogen as refrigerant agents. The temperature has been monitored by thermocouples placed in the milling chamber. Different internal mill screens have also been adopted. A proper procedure has been set up in order to obtain a selective comminution and a size reduction suitable for further separation treatment. Tests have been performed on plastics coming from medical plastic waste and from a plant for spent lead batteries recycling. Results coming from different mill devices have been compared taking into consideration different indexes for representative size distributions. The results of the performed tests show that cryo-comminution improves the effectiveness of size reduction of plastics, promotes liberation of constituents and increases specific surface size of comminuted particles in comparison to a comminution process carried out at room temperature. Copyright 2004 Elsevier Ltd.

  11. Are Small Schools Better? School Size Considerations for Safety & Learning. Policy Brief.

    ERIC Educational Resources Information Center

    McRobbie, Joan

    New studies from the 1990s have strengthened an already notable consensus on school size: smaller is better. This policy brief outlines research findings on why size makes a difference, how small is small enough, effective approaches to downsizing, and key barriers. No agreement exists at present on optimal school size, but research suggests a…

  12. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×21/2″ inside) adequate for...

  13. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×21/2″ inside) adequate for...

  14. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″ × 9″ × 21/2″ inside) adequate for...

  15. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″ × 9″ × 21/2″ inside) adequate for...

  16. 46 CFR 160.041-2 - Type and size.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Kits, First-Aid, for Merchant Vessels § 160.041-2 Type and size. (a) Type. First-aid kits covered by this specification shall be of the water-tight cabinet carrying... consideration. (b) Size. First-aid kits shall be of a size (approximately 9″×9″×21/2″ inside) adequate for...

  17. Fuzzy set methods for object recognition in space applications

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1991-01-01

    Progress on the following tasks is reported: (1) fuzzy set-based decision making methodologies; (2) feature calculation; (3) clustering for curve and surface fitting; and (4) acquisition of images. The general structure for networks based on fuzzy set connectives which are being used for information fusion and decision making in space applications is described. The structure and training techniques for such networks consisting of generalized means and gamma-operators are described. The use of other hybrid operators in multicriteria decision making is currently being examined. Numerous classical features on image regions such as gray level statistics, edge and curve primitives, texture measures from the co-occurrence matrix, and size and shape parameters were implemented. Several fractal geometric features which may have a considerable impact on characterizing cluttered background, such as clouds, dense star patterns, or some planetary surfaces, were used. A new approach to a fuzzy C-shell algorithm is addressed. NASA personnel are in the process of acquiring suitable simulation data and hopefully videotaped actual shuttle imagery. Photographs have been digitized to use in the algorithms. Also, a model of the shuttle was assembled and a mechanism to orient this model in 3-D to digitize for experiments on pose estimation is being constructed.

  18. Choice Set Size and Decision-Making: The Case of Medicare Part D Prescription Drug Plans

    PubMed Central

    Bundorf, M. Kate; Szrek, Helena

    2013-01-01

    Background The impact of choice on consumer decision-making is controversial in U.S. health policy. Objective Our objective was to determine how choice set size influences decision-making among Medicare beneficiaries choosing prescription drug plans. Methods We randomly assigned members of an internet-enabled panel age 65 and over to sets of prescription drug plans of varying sizes (2, 5, 10, and 16) and asked them to choose a plan. Respondents answered questions about the plan they chose, the choice set, and the decision process. We used ordered probit models to estimate the effect of choice set size on the study outcomes. Results Both the benefits of choice, measured by whether the chosen plan is close to the ideal plan, and the costs, measured by whether the respondent found decision-making difficult, increased with choice set size. Choice set size was not associated with the probability of enrolling in any plan. Conclusions Medicare beneficiaries face a tension between not wanting to choose from too many options and feeling happier with an outcome when they have more alternatives. Interventions that reduce cognitive costs when choice sets are large may make this program more attractive to beneficiaries. PMID:20228281

  19. Signal detection theory applied to three visual search tasks--identification, yes/no detection and localization.

    PubMed

    Cameron, E Leslie; Tai, Joanna C; Eckstein, Miguel P; Carrasco, Marisa

    2004-01-01

    Adding distracters to a display impairs performance on visual tasks (i.e. the set-size effect). While keeping the display characteristics constant, we investigated this effect in three tasks: 2 target identification, yes-no detection with 2 targets, and 8-alternative localization. A Signal Detection Theory (SDT) model, tailored for each task, accounts for the set-size effects observed in identification and localization tasks, and slightly under-predicts the set-size effect in a detection task. Given that sensitivity varies as a function of spatial frequency (SF), we measured performance in each of these three tasks in neutral and peripheral precue conditions for each of six spatial frequencies (0.5-12 cpd). For all spatial frequencies tested, performance on the three tasks decreased as set size increased in the neutral precue condition, and the peripheral precue reduced the effect. Larger set-size effects were observed at low SFs in the identification and localization tasks. This effect can be described using the SDT model, but was not predicted by it. For each of these tasks we also established the extent to which covert attention modulates performance across a range of set sizes. A peripheral precue substantially diminished the set-size effect and improved performance, even at set size 1. These results provide support for distracter exclusion, and suggest that signal enhancement may also be a mechanism by which covert attention can impose its effect.
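
    The flavour of such an SDT account can be reproduced with a small Monte Carlo simulation of a maximum-rule yes/no decision over N monitored locations; the d', criterion, and trial counts below are illustrative rather than fitted to the reported data.

    ```python
    # Monte Carlo illustration of an SDT set-size effect in yes/no detection:
    # the observer takes the maximum familiarity across N locations and says
    # "yes" whenever it exceeds a fixed criterion.
    import numpy as np

    rng = np.random.default_rng(7)

    def percent_correct(set_size, d_prime=1.5, criterion=None, trials=20_000):
        if criterion is None:
            criterion = d_prime / 2.0            # simple fixed criterion
        # target-present trials: one location carries the signal
        present = rng.normal(0.0, 1.0, (trials, set_size))
        present[:, 0] += d_prime
        hits = (present.max(axis=1) > criterion).mean()
        # target-absent trials: all locations are noise
        absent = rng.normal(0.0, 1.0, (trials, set_size))
        correct_rejections = (absent.max(axis=1) <= criterion).mean()
        return 100.0 * 0.5 * (hits + correct_rejections)

    for n in (1, 2, 4, 8):
        print(n, round(percent_correct(n), 1))   # accuracy drops as set size grows
    ```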

  20. Increased body size along urbanization gradients at both community and intraspecific level in macro-moths.

    PubMed

    Merckx, Thomas; Kaiser, Aurélien; Van Dyck, Hans

    2018-05-23

    Urbanization involves a cocktail of human-induced rapid environmental changes and is forecasted to gain further importance. Urban-heat-island effects result in increased metabolic costs expected to drive shifts towards smaller body sizes. However, urban environments are also characterized by strong habitat fragmentation, often selecting for dispersal phenotypes. Here, we investigate to what extent, and at which spatial scale(s), urbanization drives body size shifts in macro-moths (an insect group characterized by positive size-dispersal links) at both the community and intraspecific level. Using light and bait trapping as part of a replicated, spatially nested sampling design, we show that despite the observed urban warming of their woodland habitat, macro-moth communities display considerable increases in community-weighted mean body size because of stronger filtering against small species along urbanization gradients. Urbanization drives intraspecific shifts towards increased body size too, at least for a third of species analysed. These results indicate that urbanization drives shifts towards larger, and hence, more mobile species and individuals in order to mitigate low connectivity of ecological resources in urban settings. Macro-moths are a key group within terrestrial ecosystems, and since body size is central to species interactions, such urbanization-driven phenotypic change may impact urban ecosystem functioning, especially in terms of nocturnal pollination and food web dynamics. Although we show that urbanization's size-biased filtering happens simultaneously and coherently at both the inter- and intraspecific level, we demonstrate that the impact at the community level is most pronounced at the 800 m radius scale, whereas species-specific size increases happen at local and landscape scales (50-3,200 m radius), depending on the species. Hence, measures (such as creating and improving urban green infrastructure) to mitigate the effects of urbanization on body size will have to be implemented at multiple spatial scales in order to be most effective. © 2018 John Wiley & Sons Ltd.

  1. An adaptive radiotherapy planning strategy for bladder cancer using deformation vector fields.

    PubMed

    Vestergaard, Anne; Kallehauge, Jesper Folsted; Petersen, Jørgen Breede Baltzer; Høyer, Morten; Søndergaard, Jimmi; Muren, Ludvig Paul

    2014-09-01

    Adaptive radiotherapy (ART) has considerable potential in treatment of bladder cancer due to large inter-fractional changes in shape and size of the target. The aim of this study was to compare our clinically applied method for plan library creation that involves manual bladder delineations (Clin-ART) with a method using the deformation vector fields (DVFs) resulting from intensity-based deformable image registrations (DVF-based ART). The study included thirteen patients with urinary bladder cancer who had daily cone beam CTs (CBCTs) acquired for set-up. In both ART strategies investigated, three plan selection volumes were generated using the CBCTs from the first four fractions; in Clin-ART boolean combinations of delineated bladders were used, while the DVF-based strategy applied combinations of the mean and standard deviation of patient-specific DVFs. The volume ratios (VRs) of the course-averaged PTV for the two ART strategies relative the non-adaptive PTV were calculated. Both Clin-ART and DVF-based ART considerably reduced the course-averaged PTV, compared to non-adaptive RT. The VR for DVF-based ART was lower than for Clin-ART (0.65 vs. 0.73; p<0.01). DVF-based ART for bladder irradiation has a considerable normal tissue sparing potential surpassing our already highly conformal clinically applied ART strategy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Penile size and penile enlargement surgery: a review.

    PubMed

    Dillon, B E; Chama, N B; Honig, S C

    2008-01-01

    Penile size is a considerable concern for men of all ages. Herein, we review the data on penile size and conditions that will result in penile shortening. Penile augmentation procedures are discussed, including indications, procedures and complications of penile lengthening procedures, penile girth enhancement procedures and penile skin reconstruction.

  3. 4 CFR 21.5 - Protest issues not for consideration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... official to file a protest or not to file a protest in connection with a public-private competition. [61 FR... business size standards and North American Industry Classification System (NAICS) standards. Challenges of established size standards or the size status of particular firms, and challenges of the selected NAICS code...

  4. Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.

    PubMed

    Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver

    2016-01-01

    The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.
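    As a rough illustration of the virtual-examination idea described above, the sketch below repeatedly samples frame-sized sub-windows from a large mosaic and records a simple density parameter for each window. The mosaic, frame size, and parameter are all hypothetical stand-ins for the reconstructed SNP images and morphometric parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a reconstructed SNP mosaic: a binary mask of "nerve-fiber" pixels.
# In the study this would come from the large-scale reconstructed CLSM image.
mosaic = rng.random((2000, 2200)) < 0.05   # hypothetical fiber-pixel mask

frame = 400          # side length of a virtual CLSM frame, in pixels (assumed)
n_runs = 2_000       # number of virtual examinations (the study used 10,000)

densities = np.empty(n_runs)
for i in range(n_runs):
    # Draw a random frame-sized sub-window from the mosaic.
    r = rng.integers(0, mosaic.shape[0] - frame + 1)
    c = rng.integers(0, mosaic.shape[1] - frame + 1)
    window = mosaic[r:r + frame, c:c + frame]
    # Toy density parameter: fraction of fiber pixels within the window.
    densities[i] = window.mean()

print(f"mean {densities.mean():.4f}, range {densities.min():.4f}-{densities.max():.4f}")
```

    The spread of the per-window values is the quantity of interest: the wider it is, the less representative any single conventionally sized frame can be.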

  5. Will Big Data Close the Missing Heritability Gap?

    PubMed

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I; Hsu, Stephen; de Los Campos, Gustavo

    2017-11-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23-0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. Copyright © 2017 by the Genetics Society of America.
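    A minimal sketch of the kind of saturating relationship between training-set size and prediction R-sq. described above, assuming illustrative (n, R-sq.) pairs and a simple hyperbolic curve rather than the Bayesian surface-response model used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (n, R^2) pairs such as those produced by refitting a genomic
# predictor at increasing training sizes; values are illustrative only.
n = np.array([5_000, 10_000, 20_000, 40_000, 80_000], dtype=float)
r2 = np.array([0.05, 0.09, 0.14, 0.19, 0.24])

def saturating(n, plateau, half_sat):
    """Monotone curve that rises with n and levels off at `plateau`."""
    return plateau * n / (n + half_sat)

(plateau, half_sat), _ = curve_fit(saturating, n, r2, p0=(0.4, 50_000))

for n_future in (200_000, 500_000, 1_000_000):
    print(f"n={n_future:>9,d}  forecast R^2 = {saturating(n_future, plateau, half_sat):.3f}")
print(f"estimated plateau (asymptotic R^2): {plateau:.3f}")
```

    The fitted plateau plays the role of the asymptote discussed in the abstract: even as n grows without bound, the forecast R-sq. never exceeds it.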

  6. Will Big Data Close the Missing Heritability Gap?

    PubMed Central

    Kim, Hwasoon; Grueneberg, Alexander; Vazquez, Ana I.; Hsu, Stephen; de los Campos, Gustavo

    2017-01-01

    Despite the important discoveries reported by genome-wide association (GWA) studies, for most traits and diseases the prediction R-squared (R-sq.) achieved with genetic scores remains considerably lower than the trait heritability. Modern biobanks will soon deliver unprecedentedly large biomedical data sets: Will the advent of big data close the gap between the trait heritability and the proportion of variance that can be explained by a genomic predictor? We addressed this question using Bayesian methods and a data analysis approach that produces a surface response relating prediction R-sq. with sample size and model complexity (e.g., number of SNPs). We applied the methodology to data from the interim release of the UK Biobank. Focusing on human height as a model trait and using 80,000 records for model training, we achieved a prediction R-sq. in testing (n = 22,221) of 0.24 (95% C.I.: 0.23–0.25). Our estimates show that prediction R-sq. increases with sample size, reaching an estimated plateau at values that ranged from 0.1 to 0.37 for models using 500 and 50,000 (GWA-selected) SNPs, respectively. Soon much larger data sets will become available. Using the estimated surface response, we forecast that larger sample sizes will lead to further improvements in prediction R-sq. We conclude that big data will lead to a substantial reduction of the gap between trait heritability and the proportion of interindividual differences that can be explained with a genomic predictor. However, even with the power of big data, for complex traits we anticipate that the gap between prediction R-sq. and trait heritability will not be fully closed. PMID:28893854

  7. Using simulation to aid trial design: Ring-vaccination trials.

    PubMed

    Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc

    2017-03-01

    The 2014-6 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
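    The sketch below illustrates the general idea of simulation-based power estimation for a two-arm ring trial. It uses independent binomial case counts per ring and a two-proportion z-test, so it deliberately ignores the within-ring transmission dynamics and indirect vaccine effects that the authors' compartmental model captures; all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trial_power(n_per_arm, ring_size=50, attack_rate=0.02,
                vaccine_effect=0.7, n_sims=2000, alpha=0.05):
    """Crude power estimate for a two-arm ring trial.

    Cases in each ring are drawn as independent binomials; this ignores
    intracluster correlation and indirect effects of vaccination.
    """
    rejections = 0
    for _ in range(n_sims):
        # Delayed (control) arm: unvaccinated attack rate.
        cases_ctrl = rng.binomial(ring_size, attack_rate, n_per_arm).sum()
        # Immediate arm: attack rate reduced by the total vaccine effect.
        cases_vacc = rng.binomial(ring_size, attack_rate * (1 - vaccine_effect),
                                  n_per_arm).sum()
        n_total = n_per_arm * ring_size
        # Two-proportion z-test on the attack rates.
        p_pool = (cases_ctrl + cases_vacc) / (2 * n_total)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_total)
        if se > 0:
            z = (cases_ctrl - cases_vacc) / n_total / se
            if 2 * stats.norm.sf(abs(z)) < alpha:
                rejections += 1
    return rejections / n_sims

for n_rings in (10, 20, 40, 80):
    print(f"{n_rings} rings/arm: power ≈ {trial_power(n_rings):.2f}")
```

    Replacing the binomial draws with a transmission model is exactly the step that, in the paper, shifted the required sample size from about 7,100 to 8,900 participants.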

  8. Fluctuations in energy loss and their implications for dosimetry and radiobiology

    NASA Technical Reports Server (NTRS)

    Baily, N. A.; Steigerwalt, J. E.

    1972-01-01

    Serious consideration of the physics of energy deposition indicates that a fundamental change in the interpretation of absorbed dose is required at least for considerations of effects in biological systems. In addition, theoretical approaches to radiobiology and microdosimetry seem to require statistical considerations incorporating frequency distributions of the magnitude of the event sizes within the volume of interest.

  9. Nurse practitioner caseload in primary health care: Scoping review.

    PubMed

    Martin-Misener, Ruth; Kilpatrick, Kelley; Donald, Faith; Bryant-Lukosius, Denise; Rayner, Jennifer; Valaitis, Ruta; Carter, Nancy; Miller, Patricia A; Landry, Véronique; Harbman, Patricia; Charbonneau-Smith, Renee; McKinlay, R James; Ziegler, Erin; Boesveld, Sarah; Lamb, Alyson

    2016-10-01

    To identify recommendations for determining patient panel/caseload size for nurse practitioners in community-based primary health care settings. Scoping review of the international published and grey literature. The search included electronic databases, international professional and governmental websites, contact with experts, and hand searches of reference lists. Eligible papers had to (a) address caseload or patient panels for nurse practitioners in community-based primary health care settings serving an all-ages population; and (b) be published in English or French between January 2000 and July 2014. Level one testing included title and abstract screening by two team members. Relevant papers were retained for full text review in level two testing, and reviewed by two team members. A third reviewer acted as a tiebreaker. Data were extracted using a structured extraction form by one team member and verified by a second member. Descriptive statistics were estimated. Content analysis was used for qualitative data. We identified 111 peer-reviewed articles and grey literature documents. Most of the papers were published in Canada and the United States after 2010. Current methods to determine panel/caseload size use large administrative databases, provider work hours and the average number of patient visits. Most of the papers addressing the topic of patient panel/caseload size in community-based primary health care were descriptive. The average number of patients seen by nurse practitioners per day varied considerably within and between countries; an average of 9-15 patients per day was common. Patient characteristics (e.g., age, gender) and health conditions (e.g., multiple chronic conditions) appear to influence patient panel/caseload size. Very few studies used validated tools to classify patient acuity levels or disease burden scores. The measurement of productivity and the determination of panel/caseload size are complex. Current metrics may not capture activities relevant to community-based primary health care nurse practitioners. Tools to measure all the components of this role are needed when determining panel/caseload size. Outcomes research is absent in the determination of panel/caseload size. There are few systems in place to track and measure community-based primary health care nurse practitioner activities. The development of such mechanisms is an important next step to assess community-based primary health care nurse practitioner productivity and determine patient panel/caseload size. Decisions about panel/caseload size must take into account the effects of nurse practitioner activities on outcomes of care. Copyright © 2016 Elsevier Ltd. All rights reserved.
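    As a rough illustration of the visit-supply versus visit-demand logic behind the panel-size methods cited in the review, the snippet below computes a crude panel estimate from provider work volume and an average per-patient visit rate. The numbers are illustrative, not drawn from the reviewed studies, and the calculation ignores the acuity and disease-burden adjustments the review calls for.

```python
def panel_size(visits_per_day, clinic_days_per_year, visits_per_patient_per_year):
    """Crude panel-size estimate: annual visit capacity divided by the
    average number of visits each patient needs per year."""
    annual_visit_capacity = visits_per_day * clinic_days_per_year
    return annual_visit_capacity / visits_per_patient_per_year

# e.g. 12 visits/day, 220 clinic days/year, 3.2 visits/patient/year (illustrative)
print(round(panel_size(12, 220, 3.2)))   # ≈ 825 patients
```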

  10. Multilevel Factorial Experiments for Developing Behavioral Interventions: Power, Sample Size, and Resource Considerations†

    PubMed Central

    Dziak, John J.; Nahum-Shani, Inbal; Collins, Linda M.

    2012-01-01

    Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions, by helping investigators to screen several candidate intervention components simultaneously and decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or employees within organizations). In this article we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements such as the number of clusters, the number of lower-level units, and the intraclass correlation affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes, because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation. PMID:22309956
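    A minimal Monte Carlo sketch of the kind of power calculation described above, assuming a 2x2 factorial with both factors assigned at the cluster level and analysing cluster means with a t-test rather than the full multilevel models used in the article; all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_power(n_clusters=32, cluster_size=25, icc=0.05,
                   effect=0.2, n_sims=1000, alpha=0.05):
    """Monte Carlo power for one factor of a 2x2 cluster-level factorial.

    The outcome has total variance 1, split into between- and within-cluster
    components according to the intraclass correlation (ICC). The analysis is
    a simple t-test on cluster means for factor A, collapsing over factor B,
    which is a crude stand-in for a multilevel model.
    """
    var_between = icc
    var_within = 1 - icc
    # Balanced 2x2 assignment of clusters to factor-A and factor-B levels.
    a = np.tile([0, 0, 1, 1], n_clusters // 4)
    b = np.tile([0, 1, 0, 1], n_clusters // 4)
    hits = 0
    for _ in range(n_sims):
        cluster_effects = rng.normal(0, np.sqrt(var_between), n_clusters)
        cluster_means = np.empty(n_clusters)
        for j in range(n_clusters):
            mu = effect * a[j] + effect * b[j] + cluster_effects[j]
            y = mu + rng.normal(0, np.sqrt(var_within), cluster_size)
            cluster_means[j] = y.mean()
        _, p = stats.ttest_ind(cluster_means[a == 1], cluster_means[a == 0])
        hits += p < alpha
    return hits / n_sims

for k in (16, 32, 64):
    print(f"{k} clusters: power ≈ {simulate_power(n_clusters=k):.2f}")
```

    Varying the number of clusters, the cluster size, and the ICC in such a simulation reproduces the qualitative trade-offs the article examines for multilevel factor-screening experiments.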

  11. Environmental heterogeneity, dispersal mode, and co-occurrence in stream macroinvertebrates

    PubMed Central

    Heino, Jani

    2013-01-01

    Both environmental heterogeneity and mode of dispersal may affect species co-occurrence in metacommunities. Aquatic invertebrates were sampled in 20–30 streams in each of three drainage basins, differing considerably in environmental heterogeneity. Each drainage basin was further divided into two equally sized sets of sites, again differing profoundly in environmental heterogeneity. Benthic invertebrate data were divided into three groups of taxa based on overland dispersal modes: passive dispersers with aquatic adults, passive dispersers with terrestrial winged adults, and active dispersers with terrestrial winged adults. The co-occurrence of taxa in each dispersal mode group, drainage basin, and heterogeneity site subset was measured using the C-score and its standardized effect size. The probability of finding high levels of species segregation tended to increase with environmental heterogeneity across the drainage basins. These patterns were, however, contingent on both dispersal mode and drainage basin. It thus appears that environmental heterogeneity and dispersal mode interact in affecting co-occurrence in metacommunities, with passive dispersers with aquatic adults showing random patterns irrespective of environmental heterogeneity, and active dispersers with terrestrial winged adults showing increasing segregation with increasing environmental heterogeneity. PMID:23467653

  12. Effect of the combination of different welding parameters on melting characteristics of grade 1 titanium with a pulsed Nd-Yag laser.

    PubMed

    Bertrand, C; Laplanche, O; Rocca, J P; Le Petitcorps, Y; Nammour, S

    2007-11-01

    The laser is a very attractive tool for joining dental metallic alloys. However, the choice of the setting parameters can strongly influence the welding performance. The aim of this research was to evaluate the impact of several parameters (pulse shaping, pulse frequency, focal spot size...) on the quality of the microstructure. Grade 1 titanium plates have been welded with a pulsed Nd-Yag laser. Suitable power, pulse duration, focal spot size, and flow of argon gas were fixed by the operator. Five different pulse shapes and three pulse frequencies were investigated. Two pulse shapes available on this laser unit were eliminated because they considerably hardened the metal. As the pulse frequency rose, more and more metal was ejected, and a plasma on the surface of the metal increased the oxygen contamination in the welded area. Frequencies of 1 or 2 Hz are optimal for dental use. Three pulse shapes can be used for titanium but the rectangular shape gives better results.

  13. Plant basket hydraulic structures (PBHS) as a new river restoration measure.

    PubMed

    Kałuża, Tomasz; Radecki-Pawlik, Artur; Szoszkiewicz, Krzysztof; Plesiński, Karol; Radecki-Pawlik, Bartosz; Laks, Ireneusz

    2018-06-15

    River restoration has become increasingly attractive worldwide as it provides considerable benefits to the environment as well as to the economy. This study focuses on changes in hydromorphological conditions in a small lowland river recorded during an experiment carried out in the Flinta River, central Poland. The proposed solution was a pilot project of the construction of vegetative sediment traps (plant basket hydraulic structures - PBHS). A set of three PBHS was installed in the riverbed in one row and a range of hydraulic parameters were recorded over a period of three years (six measurement sessions). Changes in sediment grain size were analysed, and the amount and size of plant debris in the plant barriers were recorded. Plant debris accumulation influencing flow hydrodynamics was detected as a result of the installation of vegetative sediment traps. Moreover, various hydromorphological processes in the river were initiated. Additional simulations based on the detected processes showed that the proposed plant basket hydraulic structures can improve the hydromorphological status of the river. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Sparse feature learning for instrument identification: Effects of sampling and pooling methods.

    PubMed

    Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu

    2016-05-01

    Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both of the proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47,000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are explored, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
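    The pooling step can be illustrated with a few lines of array code. The sketch below assumes a hypothetical matrix of frame-level sparse feature activations and contrasts max-, average- and standard-deviation pooling; it is not the authors' pipeline, only the aggregation idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse feature activations: one row per analysis frame of a
# recording, one column per dictionary atom.
activations = np.abs(rng.standard_normal((500, 1024))) * (rng.random((500, 1024)) < 0.1)

# Three ways of aggregating frame-level activations into one clip-level vector.
max_pooled = activations.max(axis=0)    # max-pooling
avg_pooled = activations.mean(axis=0)   # average-pooling
std_pooled = activations.std(axis=0)    # standard-deviation pooling (as in the paper)

print(max_pooled.shape, avg_pooled.shape, std_pooled.shape)  # (1024,) each
```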

  15. Imputing unobserved values with the EM algorithm under left and right-truncation, and interval censoring for estimating the size of hidden populations.

    PubMed

    Robb, Matthew L; Böhning, Dankmar

    2011-02-01

    Capture–recapture techniques have been used for a considerable time to predict population size. Estimators usually rely on frequency counts for numbers of trappings; however, it may be the case that these are not available for a particular problem, for example if the original data set has been lost and only a summary table is available. Here, we investigate techniques for specific examples; the motivating example is an epidemiology study by Mosley et al., which focussed on a cholera outbreak in East Pakistan. To demonstrate the wider range of the technique, we also look at a study for predicting the long-term outlook of the AIDS epidemic using information on the number of sexual partners. A new estimator is developed here which uses the EM algorithm to impute unobserved values and then uses these values in a similar way to the existing estimators. The results show that a truncated approach – mimicking the Chao lower bound approach – gives an improved estimate when population homogeneity is violated.
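    For context, the classical Chao lower bound that the truncated approach mimics can be computed directly from a frequency-of-capture table, as in the hedged sketch below; the EM-imputation step of the new estimator is not reproduced here, and the frequency table is invented rather than taken from the cholera or AIDS data.

```python
def chao_lower_bound(freq_counts):
    """Chao's lower-bound estimate of total population size.

    `freq_counts` maps k -> number of individuals observed exactly k times.
    """
    n_observed = sum(freq_counts.values())
    f1 = freq_counts.get(1, 0)
    f2 = freq_counts.get(2, 0)
    if f2 == 0:
        # Bias-corrected variant used when no individual is observed twice.
        return n_observed + f1 * (f1 - 1) / 2
    return n_observed + f1 ** 2 / (2 * f2)

# Illustrative frequency table (not from the motivating studies).
print(chao_lower_bound({1: 120, 2: 45, 3: 20, 4: 8}))   # 193 observed -> 353 estimated
```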

  16. Interactive archives of scientific data

    NASA Technical Reports Server (NTRS)

    Treinish, Lloyd A.

    1994-01-01

    A focus on qualitative methods of presenting data shows that visualization provides a mechanism for browsing independent of the source of data and is an effective alternative to traditional image-based browsing of image data. To be generally applicable, such visualization methods, however, must be based upon an underlying data model with support for a broad class of data types and structures. Interactive, near-real-time browsing for data sets of interesting size today requires a browse server of considerable power. A symmetric multi-processor with very high internal and external bandwidth demonstrates the feasibility of this concept. Although this technology is likely to be available on the desktop within a few years, the increase in the size and complexity of archived data will continue to exceed the capacity of 'workstation' systems. Hence, a higher class of performance, especially in bandwidth, will generally be required for on-demand browsing. A few experiments with differing digital compression techniques indicate that an MPEG-1 implementation within the context of a high-performance browse server (i.e., parallelized) is a practical method of converting a browse product to a form suitable for network or CD-ROM distribution.

  17. Designing to Save Energy

    ERIC Educational Resources Information Center

    Santamaria, Joseph W.

    1977-01-01

    While tripling the campus size of Alvin Community College in Texas, architects and engineers cut back on nonessential lighting, recaptured waste heat, insulated everything possible, and let energy considerations dictate the size and shape of the building. (Author/MLF)

  18. Ecological correlates of group-size variation in a resource-defense ungulate, the sedentary guanaco.

    PubMed

    Marino, Andrea; Baldi, Ricardo

    2014-01-01

    For large herbivores, predation-risk, habitat structure and population density are often reported as major determinants of group size variation within and between species. However, whether the underlying causes of these relationships imply an ecological adaptation or are the result of a purely mechanistic process in which fusion and fragmentation events only depend on the rate of group meeting, is still under debate. The aim of this study was to model guanaco family and bachelor group sizes in contrasting ecological settings in order to test hypotheses regarding the adaptive significance of group-size variation. We surveyed guanaco group sizes within three wildlife reserves located in eastern Patagonia where guanacos occupy a mosaic of grasslands and shrublands. Two of these reserves have been free from predators for decades while in the third, pumas often prey on guanacos. All locations have experienced important changes in guanaco abundance throughout the study offering the opportunity to test for density effects. We found that bachelor group size increased with increasing density, as expected by the mechanistic approach, but was independent of habitat structure or predation risk. In contrast, the smaller and territorial family groups were larger in the predator-exposed than in the predator-free locations, and were larger in open grasslands than in shrublands. However, the influence of population density on these social units was very weak. Therefore, family group data supported the adaptive significance of group-size variation but did not support the mechanistic idea. Yet, the magnitude of the effects was small and between-population variation in family group size after controlling for habitat and predation was negligible, suggesting that plasticity of these social units is considerably low. Our results showed that different social units might respond differentially to local ecological conditions, supporting two contrasting hypotheses in a single species, and highlight the importance of taking into account the proximate interests and constraints to which group members may be exposed to when deriving predictions about group-size variation.

  19. Comparing two periphyton collection methods commonly used for stream bioassessment and the development of numeric nutrient standards.

    PubMed

    Rodman, Ashley R; Scott, J Thad

    2017-07-01

    Periphyton is an important component of stream bioassessment, yet methods for quantifying periphyton biomass can differ substantially. A case study within the Arkansas Ozarks is presented to demonstrate the potential for linking chlorophyll-a (chl-a) and ash-free dry mass (AFDM) data sets amassed using two frequently used periphyton sampling protocols. Method A involved collecting periphyton from a known area on the top surface of variably sized rocks gathered from relatively swift-velocity riffles without discerning canopy cover. Method B involved collecting periphyton from the entire top surface of cobbles systematically gathered from riffle-run habitat where canopy cover was intentionally avoided. Chl-a and AFDM measurements were not different between methods (p = 0.123 and p = 0.550, respectively), and there was no interaction between method and time in the repeated measures structure of the study. However, significantly different seasonal distinctions were observed for chl-a and AFDM from all streams when data from the methods were combined (p < 0.001 and p = 0.012, respectively), with greater mean biomass in the cooler sampling months. Seasonal trends were likely the indirect results of varying temperatures. Although the size and range of this study were small, results suggest data sets collected using different methods may effectively be used together with some minor considerations due to potential confounding factors. This study provides motivation for the continued investigation of combining data sets derived from multiple methods of data collection, which could be useful in stream bioassessment and particularly important for the development of regional stream nutrient criteria for the southern Ozarks.

  20. Using fiberglass volumes for VPI of superconductive magnetic systems’ insulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreev, I. S.; Bezrukov, A. A.; Pischugin, A. B.

    2014-01-29

    The paper describes the method of manufacturing fiberglass molds for vacuum pressure impregnation (VPI) of high-voltage insulation of superconductive magnetic systems (SMS) with epoxidian hot-setting compounds. The basic advantages of using such vacuum volumes are improved quality of insulation impregnation in complex-shaped areas, and considerable cost-saving of preparing VPI of large-sized components due to dispensing with the stage of fabricating a metal impregnating volume. Such fiberglass vacuum molds were used for VPI of high-voltage insulation samples of an ITER reactor's PF1 poloidal coil. Electric insulation of these samples has successfully undergone a wide range of high-voltage and mechanical tests at room and cryogenic temperatures. Some results of the tests are also given in this paper.

  1. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), that extends the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction or limitations in spatial/spectral data size. The validation highlighted a sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  2. Macroscopic and mesoscopic approach to the alkali-silica reaction in concrete

    NASA Astrophysics Data System (ADS)

    Grymin, Witold; Koniorczyk, Marcin; Pesavento, Francesco; Gawin, Dariusz

    2018-01-01

    A model of the alkali-silica reaction, which takes into account couplings between thermal, hygral, mechanical and chemical phenomena in concrete, has been discussed. The ASR may be considered at the macroscopic or mesoscopic scale. The main features of each approach have been summarized and development of the model for both scales has been briefly described. Application of the model to experimental results for both scales has been presented. Even though good agreement of the model has been obtained for both approaches, consideration of the model at the mesoscopic scale makes it possible to model different mortar mixes, prepared with the same aggregate but of different grain size, using the same set of parameters. It also enables prediction of reaction development assuming different alkali sources, such as de-icing salts or alkali leaching.

  3. Enrollment in prescription drug insurance: the interaction of numeracy and choice set size.

    PubMed

    Szrek, Helena; Bundorf, M Kate

    2014-04-01

    To determine how choice set size affects decision quality among individuals of different levels of numeracy choosing prescription drug plans. Members of an Internet-enabled panel age 65 and over were randomly assigned to sets of prescription drug plans varying in size from 2 to 16 plans from which they made a hypothetical choice. They answered questions about enrollment likelihood and the costs and benefits of their choice. The measure of decision quality was enrollment likelihood among those for whom enrollment was beneficial. Enrollment likelihood by numeracy and choice set size was calculated. A model of moderated mediation was analyzed to understand the role of numeracy as a moderator of the relationship between the number of plans and the quality of the enrollment decision and the roles of the costs and benefits in mediating that relationship. More numerate adults made better decisions than less numerate adults when choosing among a small number of alternatives but not when choice sets were larger. Choice set size had little effect on decision making of less numerate adults. Differences in decision making costs between more and less numerate adults helped explain the effect of choice set size on decision quality. Interventions to improve decision making in the context of Medicare Part D may differentially affect lower and higher numeracy adults. The conflicting results on choice overload in the psychology literature may be explained in part by differences amongst individuals in how they respond to choice set size.

  4. Accuracy of stream habitat interpolations across spatial scales

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2013-01-01

    Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To address the utility of a single-scale set of stream habitat data used at varying scales, we examined the influence that data scaling had on accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of predictive maps showed a logarithmic linear decay pattern in RMSE values in interpolation accuracy for variables as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of interpolated models (r2) decreased, but it was maintained at up to 78% as interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
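    A rough sketch of the resampling-and-interpolation comparison is given below. Since common scientific Python libraries do not ship a natural neighbor interpolator, linear interpolation is used as a stand-in, and the "depth" surface is synthetic rather than the surveyed stream data; the point is only to show how prediction error grows as the sampling grid coarsens.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)

# Hypothetical fine-resolution depth grid standing in for the 0.33 m survey data.
ny, nx = 120, 240
yy, xx = np.mgrid[0:ny, 0:nx]
depth = np.sin(xx / 25.0) + 0.5 * np.cos(yy / 15.0) + 0.1 * rng.standard_normal((ny, nx))

fine_points = np.column_stack([yy.ravel(), xx.ravel()])

for step in (2, 4, 8, 16):          # progressively coarser sampling of the same data
    ys, xs = np.mgrid[0:ny:step, 0:nx:step]
    coarse_points = np.column_stack([ys.ravel(), xs.ravel()])
    coarse_values = depth[ys.ravel(), xs.ravel()]
    # scipy has no natural-neighbor method, so linear interpolation stands in here.
    predicted = griddata(coarse_points, coarse_values, fine_points, method="linear")
    ok = ~np.isnan(predicted)       # points outside the coarse convex hull are skipped
    rmse = np.sqrt(np.mean((predicted[ok] - depth.ravel()[ok]) ** 2))
    print(f"sampling step {step:2d}: RMSE = {rmse:.3f}")
```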

  5. Payer and Pharmaceutical Manufacturer Considerations for Outcomes-Based Agreements in the United States.

    PubMed

    Brown, Joshua D; Sheer, Rich; Pasquale, Margaret; Sudharshan, Lavanya; Axelsen, Kirsten; Subedi, Prasun; Wiederkehr, Daniel; Brownfield, Fred; Kamal-Bahl, Sachin

    2018-01-01

    Considerable interest exists among health care payers and pharmaceutical manufacturers in designing outcomes-based agreements (OBAs) for medications for which evidence on real-world effectiveness is limited at product launch. To build hypothetical OBA models in which both payer and manufacturer can benefit. Models were developed for a hypothetical hypercholesterolemia OBA, in which the OBA was assumed to increase market access for a newly marketed medication. Fixed inputs were drug and outcome event costs from the literature over a 1-year OBA period. Model estimates were developed using a range of inputs for medication effectiveness, medical cost offsets, and the treated population size. Positive or negative feedback to the manufacturer was incorporated on the basis of expectations of drug performance through changes in the reimbursement level. Model simulations demonstrated that parameters had the greatest impact on payer cost and manufacturer reimbursement. Models suggested that changes in the size of the population treated and drug effectiveness had the largest influence on reimbursement and costs. Despite sharing risk for potential product underperformance, manufacturer reimbursement increased relative to having no OBA, if the OBA improved market access for the new product. Although reduction in medical costs did not fully offset the cost of the medication, the payer could still save on net costs per patient relative to having no OBA by tying reimbursement to drug effectiveness. Pharmaceutical manufacturers and health care payers have demonstrated interest in OBAs, and under a certain set of assumptions both may benefit. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Metabolomics analysis: Finding out metabolic building blocks

    PubMed Central

    2017-01-01

    In this paper we propose a new methodology for the analysis of metabolic networks. We use the notion of strongly connected components of a graph, called in this context metabolic building blocks. Every strongly connected component is contracted to a single node in such a way that the resulting graph is a directed acyclic graph, called a metabolic DAG, with a considerably reduced number of nodes. The property of being a directed acyclic graph brings out a background graph topology that reveals the connectivity of the metabolic network, as well as bridges, isolated nodes and cut nodes. Altogether, this becomes key information for the discovery of functional metabolic relations. Our methodology has been applied to the glycolysis and the purine metabolic pathways for all organisms in the KEGG database, although it is general enough to work on any database. As expected, using the metabolic DAGs formalism, a considerable reduction in the size of the metabolic networks has been obtained, especially in the case of the purine pathway due to its relatively larger size. As a proof of concept, from the information captured by a metabolic DAG and its corresponding metabolic building blocks, we obtain the core of the glycolysis pathway and the core of the purine metabolism pathway and detect some essential metabolic building blocks that reveal the key reactions in both pathways. Finally, the application of our methodology to the glycolysis pathway and the purine metabolism pathway reproduces the tree of life for the whole set of organisms represented in the KEGG database, which supports the utility of this research. PMID:28493998
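    The contraction of strongly connected components into a DAG (graph condensation) is a standard graph operation. The toy sketch below, assuming the networkx library and an invented reaction graph, shows how each building block becomes a single node of an acyclic graph; it is an illustration of the operation, not the authors' pipeline.

```python
import networkx as nx

# Toy directed reaction graph; cycles play the role of "metabolic building blocks".
g = nx.DiGraph([
    ("A", "B"), ("B", "C"), ("C", "A"),   # one strongly connected component
    ("C", "D"), ("D", "E"), ("E", "D"),   # a second SCC reachable from the first
    ("E", "F"),
])

# Contract every strongly connected component to a single node; the result is
# a directed acyclic graph (the "metabolic DAG" of the paper).
dag = nx.condensation(g)

print(nx.is_directed_acyclic_graph(dag))        # True
for node, data in dag.nodes(data=True):
    print(node, sorted(data["members"]))         # building-block membership
```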

  7. The Comparability of the Standardized Mean Difference Effect Size across Different Measures of the Same Construct: Measurement Considerations

    ERIC Educational Resources Information Center

    Nugent, William R.

    2006-01-01

    One of the most important effect sizes used in meta-analysis is the standardized mean difference (SMD). In this article, the conditions under which SMD effect sizes based on different measures of the same construct are directly comparable are investigated. The results show that SMD effect sizes from different measures of the same construct are…

  8. Adolescent Sexual Health Communication and Condom Use: A Meta-Analysis

    PubMed Central

    Widman, Laura; Noar, Seth M.; Choukas-Bradley, Sophia; Francis, Diane

    2014-01-01

    Objective Condom use is critical for the health of sexually active adolescents, and yet many adolescents fail to use condoms consistently. One interpersonal factor that may be key to condom use is sexual communication between sexual partners; however, the association between communication and condom use has varied considerably in prior studies of youth. The purpose of this meta-analysis was to synthesize the growing body of research linking adolescents’ sexual communication to condom use, and to examine several moderators of this association. Methods A total of 41 independent effect sizes from 34 studies with 15,046 adolescent participants (Mage=16.8, age range=12–23) were meta-analyzed. Results Results revealed a weighted mean effect size of the sexual communication-condom use relationship of r = .24, which was statistically heterogeneous (Q=618.86, p<.001, I2 =93.54). Effect sizes did not differ significantly by gender, age, recruitment setting, country of study, or condom measurement timeframe; however, communication topic and communication format were statistically significant moderators (p<.001). Larger effect sizes were found for communication about condom use (r = .34) than communication about sexual history (r = .15) or general safer sex topics (r = .14). Effect sizes were also larger for communication behavior formats (r = .27) and self-efficacy formats (r = .28), than for fear/concern (r = .18), future intention (r = .15), or communication comfort (r = −.15) formats. Conclusions Results highlight the urgency of emphasizing communication skills, particularly about condom use, in HIV/STI prevention work for youth. Implications for the future study of sexual communication are discussed. PMID:25133828
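    A compact sketch of the Fisher-z, inverse-variance pooling that underlies a weighted mean effect size and the Q and I-squared heterogeneity statistics is given below. The per-study correlations and sample sizes are invented, not the 34 studies analysed here, and a fixed-effect weighting is used for brevity.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes (illustrative only).
r = np.array([0.34, 0.15, 0.27, 0.18, 0.28])
n = np.array([420, 310, 660, 220, 510])

z = np.arctanh(r)          # Fisher z transform of each correlation
w = n - 3                  # inverse-variance weights, since var(z) = 1/(n - 3)

z_bar = np.sum(w * z) / np.sum(w)
r_bar = np.tanh(z_bar)     # weighted mean effect size, back-transformed to r

q = np.sum(w * (z - z_bar) ** 2)                  # Cochran's Q statistic
i2 = max(0.0, (q - (len(r) - 1)) / q) * 100       # I-squared heterogeneity, in %

print(f"weighted mean r = {r_bar:.2f}, Q = {q:.1f}, I^2 = {i2:.0f}%")
```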

  9. Evaluating Unsupervised Methods to Size and Classify Suspended Particles Using Digital Holography

    NASA Astrophysics Data System (ADS)

    Davies, E. J.; Buscombe, D.; Graham, G.; Nimmo-Smith, A.

    2013-12-01

    The use of digital holography to image suspended particles in-situ using submersible systems is on the ascendancy. Such systems allow visualization of the in-focus particles without the depth-of-field issues associated with conventional imaging. The size and concentration of all particles, and each individual particle, can be rapidly and automatically assessed. The automated methods by which to extract these quantities can be readily evaluated using manual measurements. These methods are not possible using instruments based on optical and acoustic (back- or forward-) scattering, so-called 'sediment surrogate' methods, which are sensitive to the bulk quantities of all suspended particles in a sample volume, and rely on mathematically inverting a measured signal to derive the property of interest. Depending on the intended application, the number of holograms required to elucidate a process could range from tens to millions. Therefore manual particle extraction is not feasible for most data-sets. This has created a pressing need among the growing community of holography users, for accurate, automated processing which is comparable in output to more well-established in-situ sizing techniques such as laser diffraction. Here we discuss the computational considerations required to focus and segment individual particles from raw digital holograms, and then size and classify these particles by type; all using unsupervised (automated) image processing. To do so, we draw upon imagery from both controlled laboratory conditions to near-shore coastal environments, using different holographic system designs, and constituting a significant variety in particle types, sizes and shapes. We evaluate the success of these techniques, and suggest directions for future developments.

  10. ICP-MS based methods to characterize nanoparticles of TiO2 and ZnO in sunscreens with focus on regulatory and safety issues.

    PubMed

    Bocca, Beatrice; Caimi, Stefano; Senofonte, Oreste; Alimonti, Alessandro; Petrucci, Francesco

    2018-07-15

    This study sought to develop analytical methods to characterize titanium dioxide (TiO2) and zinc oxide (ZnO) nanoparticles (NPs), including the particle size distribution and concentration, in cream and spray sunscreens with different sun protection factor (SPF). Single Particle Inductively Coupled Plasma-Mass Spectrometry (SP ICP-MS) was used as a fast screening method to determine particle size and number. Asymmetric Flow Field-Flow Fractionation (AF4-FFF), used as a pre-separation technique, was coupled on-line to Multi-Angle Light Scattering (MALS) and ICP-MS to determine particle size distributions and size-dependent multi-elemental concentration. Both methods were optimized in sunscreens in terms of recovery, repeatability, limit of detection and linear dynamic range. Results showed that sunscreens contained TiO2 particles with an average size of ≤107 nm and also a minor number of ZnO particles sized ≤98 nm. The highest fraction of particles <100 nm was observed in sunscreens with SPF 50+ (ca. 80%); the lowest percentages (12-35%) in sunscreens with lower SPF values. The highest TiO2 (up to 24% by weight) and ZnO (ca. 0.25% by weight) concentrations were also found in SPF 50+ formulations. Cream sunscreens could be considered safe, containing TiO2 and ZnO NPs below the maximum allowable concentration of 25% by weight set by European legislation. On the contrary, spray products required additional considerations with regard to the potential inhalation of NPs. The developed methods can contribute to the current demand for regulatory control and safety assessment of metallic NPs in consumer products. Copyright © 2018 Elsevier B.V. All rights reserved.
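    In SP ICP-MS, the element mass detected in a single pulse is commonly converted to a spherical-equivalent particle diameter. The sketch below shows that standard conversion under assumed values for the element mass fraction and particle density; it is not the authors' calibration procedure, and all numbers are illustrative.

```python
import numpy as np

def sp_icpms_diameter_nm(element_mass_fg, mass_fraction, density_g_cm3):
    """Spherical-equivalent particle diameter from the element mass detected
    in one SP ICP-MS pulse.

    element_mass_fg : element mass per particle event, in femtograms
    mass_fraction   : element mass fraction in the particle compound
    density_g_cm3   : assumed particle density
    """
    particle_mass_g = element_mass_fg * 1e-15 / mass_fraction
    volume_cm3 = particle_mass_g / density_g_cm3
    diameter_cm = (6.0 * volume_cm3 / np.pi) ** (1.0 / 3.0)
    return diameter_cm * 1e7          # cm -> nm

# Illustrative: 1 fg of Ti per event, TiO2 (Ti mass fraction ~0.60), density ~4.2 g/cm3 assumed.
print(f"{sp_icpms_diameter_nm(1.0, 0.60, 4.2):.0f} nm")   # ≈ 90 nm
```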

  11. Twelve- to 14-Month-Old Infants Can Predict Single-Event Probability with Large Set Sizes

    ERIC Educational Resources Information Center

    Denison, Stephanie; Xu, Fei

    2010-01-01

    Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas "et al.", 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: A…

  12. Sizing ocean giants: patterns of intraspecific size variation in marine megafauna

    PubMed Central

    Balk, Meghan A.; Benfield, Mark C.; Branch, Trevor A.; Chen, Catherine; Cosgrove, James; Dove, Alistair D.M.; Gaskins, Lindsay C.; Helm, Rebecca R.; Hochberg, Frederick G.; Lee, Frank B.; Marshall, Andrea; McMurray, Steven E.; Schanche, Caroline; Stone, Shane N.; Thaler, Andrew D.

    2015-01-01

    What are the greatest sizes that the largest marine megafauna attain? This is a simple question with a difficult and complex answer. Many of the largest-sized species occur in the world’s oceans. For many of these, rarity, remoteness, and quite simply the logistics of measuring these giants have made obtaining accurate size measurements difficult. Inaccurate reports of maximum sizes run rampant through the scientific literature and popular media. Moreover, how intraspecific variation in the body sizes of these animals relates to sex, population structure, the environment, and interactions with humans remains underappreciated. Here, we review and analyze body size for 25 ocean giants ranging across the animal kingdom. For each taxon we document body size for the largest known marine species of several clades. We also analyze intraspecific variation and identify the largest known individuals for each species. Where data allows, we analyze spatial and temporal intraspecific size variation. We also provide allometric scaling equations between different size measurements as resources to other researchers. In some cases, the lack of data prevents us from fully examining these topics and instead we specifically highlight these deficiencies and the barriers that exist for data collection. Overall, we found considerable variability in intraspecific size distributions from strongly left- to strongly right-skewed. We provide several allometric equations that allow for estimation of total lengths and weights from more easily obtained measurements. In several cases, we also quantify considerable geographic variation and decreases in size likely attributed to humans. PMID:25649000

  13. Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence

    PubMed Central

    Claggett, B.; Lagakos, S.W.; Wang, R.

    2011-01-01

    Summary Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF Estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF Estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904

  14. Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.

    PubMed

    Claggett, B; Lagakos, S W; Wang, R

    2012-03-01

    Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The International Biometric Society.

  15. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    PubMed

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of the Quality by Design principles is one of the key issues of recent pharmaceutical development. In the past decade a lot of knowledge has been collected about the practical realization of the concept, but many questions remain unanswered. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. Complementing RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce these uncertainties. Nevertheless, since different studies focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2-level full factorial, Central Composite, Box-Behnken, 3-level fractional and 3-level full factorial design) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher-level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN-based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and design layouts in which the extreme values of the factors are more strongly represented are recommended. Copyright © 2016 Elsevier B.V. All rights reserved.
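    As a toy illustration of the RSM half of the comparison, the sketch below fits a quadratic response surface to a small, invented central-composite-style data set and maps the region of factor space meeting an arbitrary response limit, a crude analogue of delineating a design space; it does not reproduce the extrusion-spheronization data or the ANN models.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Hypothetical coded factor settings of a face-centred central composite design
# for two factors, with an invented response.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = 5 + 1.2 * X[:, 0] - 0.8 * X[:, 1] - 0.9 * X[:, 0] ** 2 + rng.normal(0, 0.1, len(X))

# Quadratic response-surface model (the RSM side of the paper's comparison).
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)

# Evaluate the fitted surface on a grid and flag the region meeting an
# arbitrary quality limit, a toy analogue of mapping out the design space.
grid = np.array([[a, b] for a in np.linspace(-1, 1, 21) for b in np.linspace(-1, 1, 21)])
pred = rsm.predict(grid)
print(f"fraction of the explored region with predicted response >= 5: {(pred >= 5).mean():.2f}")
```

    Refitting the same surface with a different design layout (e.g. dropping the axial points) is the kind of comparison whose effect on the estimated design space the study quantifies.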

  16. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline

    PubMed Central

    2013-01-01

    Background As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. Results We performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. Conclusions The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author’s publication website. PMID:24359104

  17. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    PubMed

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.

  18. SU-F-T-255: Accuracy and Precision of Dynamic Tracking Irradiation with VERO-4DRT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, N; Takada, Y; Mizuno, T

    2016-06-15

    Purpose: The VERO-4DRT system is able to provide dynamic tracking irradiation (DTI) for targets with respiratory motion. This technique requires thorough commissioning before clinical implementation. The purpose of this study is to verify the accuracy and precision of DTI using VERO-4DRT through commissioning, from fundamental evaluation to an end-to-end test. Method: We evaluated several items for DTI commissioning: the accuracy of the absorbed dose at the isocenter in DTI, the field size and penumbra of DTI, and the accuracy of 4D modeling in DTI. All evaluations were performed with a respiratory motion phantom (Quasar phantom), and results were compared between static irradiation and DTI. The radiation field was set to square shapes from 3 cm × 3 cm to 10 cm × 10 cm. The micro 3D chamber and Gafchromic EBT3 film were used for absorbed dose and relative dose distribution measurements, respectively. Sine and irregularly shaped waves were used to simulate respiratory motion. A Visicoil marker was implanted into the phantom to guide the respiratory motion. Respiratory frequency and motion amplitude were set to 10–15 BPM and 1–2 cm, respectively. Results: Compared with static irradiation, the average dose error at the isocenter for DTI was 0.5%, even though various respiratory patterns were applied. In the relative dose distributions, the field size (defined at the 50% dose line) did not change significantly for any respiratory pattern. However, the penumbra increased with greater respiratory motion (by up to 4.1 mm). The 4D modeling coincidence between the actual and created waves was within 1%. Conclusion: DTI using VERO-4DRT can provide sufficient accuracy and precision in absorbed dose and dose distribution. However, a patient-specific quantitative internal margin corresponding to respiratory motion should be taken into consideration together with image guidance.

  19. Some Considerations on the Dynamics of Nanometric Suspensions in Fluid Media

    NASA Astrophysics Data System (ADS)

    Lungu, Mihai; Neculae, Adrian; Bunoiu, Madalin

    2009-05-01

    Nano-sized particles have received considerable interest over the last decade. The manipulation of nanoparticles is becoming an important issue as they are increasingly produced as a result of material synthesis and combustion emissions. Nanometric particles represent an important threat to human health because they can readily enter the human body through inhalation, and their toxicity is relatively high due to their large specific surface area. The separation of nano-sized particles into distinct bands, spatially separated from one another, has also recently attracted considerable attention in many scientific areas; the uses of nanoparticles are very promising for new technologies. The behavior of a suspension of sub-micron particles under the action of a dielectrophoretic force is numerically investigated and a theoretical model is proposed.

  20. Population-Based Resequencing of Experimentally Evolved Populations Reveals the Genetic Basis of Body Size Variation in Drosophila melanogaster

    PubMed Central

    Turner, Thomas L.; Stewart, Andrew D.; Fields, Andrew T.; Rice, William R.; Tarone, Aaron M.

    2011-01-01

    Body size is a classic quantitative trait with evolutionarily significant variation within many species. Locating the alleles responsible for this variation would help understand the maintenance of variation in body size in particular, as well as quantitative traits in general. However, successful genome-wide association of genotype and phenotype may require very large sample sizes if alleles have low population frequencies or modest effects. As a complementary approach, we propose that population-based resequencing of experimentally evolved populations allows for considerable power to map functional variation. Here, we use this technique to investigate the genetic basis of natural variation in body size in Drosophila melanogaster. Significant differentiation of hundreds of loci in replicate selection populations supports the hypothesis that the genetic basis of body size variation is very polygenic in D. melanogaster. Significantly differentiated variants are limited to single genes at some loci, allowing precise hypotheses to be formed regarding causal polymorphisms, while other significant regions are large and contain many genes. By using significantly associated polymorphisms as a priori candidates in follow-up studies, these data are expected to provide considerable power to determine the genetic basis of natural variation in body size. PMID:21437274

  1. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    NASA Astrophysics Data System (ADS)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
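
    For reference, basis set superposition error is most often estimated with the Boys-Bernardi counterpoise scheme; the abstract does not state which correction was applied, so the expressions below are the generic textbook form rather than the authors' specific protocol. Superscripts denote the basis set used and the arguments denote the species evaluated.

```latex
\begin{align*}
  E_{\mathrm{int}}^{\mathrm{uncorr}} &= E_{AB}^{AB}(AB) - E_{A}^{A}(A) - E_{B}^{B}(B), \\
  \delta_{\mathrm{BSSE}} &= \bigl[E_{A}^{AB}(A) - E_{A}^{A}(A)\bigr]
                           + \bigl[E_{B}^{AB}(B) - E_{B}^{B}(B)\bigr] \le 0, \\
  E_{\mathrm{int}}^{\mathrm{CP}} &= E_{\mathrm{int}}^{\mathrm{uncorr}} - \delta_{\mathrm{BSSE}}
                                  = E_{AB}^{AB}(AB) - E_{A}^{AB}(A) - E_{B}^{AB}(B).
\end{align*}
```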

  2. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: dispersion, induction, and basis set superposition error.

    PubMed

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T; Dannenberg, J J

    2012-10-07

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.

  3. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    PubMed Central

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-01-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states. PMID:23039587

  4. Working memory for visual features and conjunctions in schizophrenia.

    PubMed

    Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J

    2003-02-01

    The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars, with testing of both single-feature (color, orientation) and feature-conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patient WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task-set maintenance; greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.
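
    A common way to summarize change-detection performance as a capacity estimate is Cowan's K = set size × (hit rate - false-alarm rate); the abstract does not state the estimator used, and the appropriate formula depends on the probe procedure, so the sketch below is a generic illustration with made-up group means rather than the study's data.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual working memory capacity for a
    single-probe change-detection task: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical group means at each set size (not data from the study).
for n, h, fa in [(2, 0.95, 0.05), (4, 0.88, 0.10), (6, 0.72, 0.18)]:
    print(f"set size {n}: K = {cowan_k(n, h, fa):.2f}")
```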

  5. Effect of deformation induced nucleation and phase mixing, a two phase model for the ductile deformation of rocks.

    NASA Astrophysics Data System (ADS)

    Bevillard, Benoit; Richard, Guillaume; Raimbourg, Hugues

    2017-04-01

    Rocks are complex materials, and their rheological behavior under geological stresses in particular remains a long-standing question in geodynamics. Numerical modeling is the main tool for testing large-scale lithosphere dynamics, but it encounters substantial difficulties in accounting for this complexity. One major unknown is the origin and development of the localization of deformation. This localization is observed over a large range of scales and is commonly characterized by sharp grain-size reduction. These considerations argue for a control of the microscopic scale over the larger ones through one predominant variable: the mean grain size. However, the presence of a second phase and a broad grain-size distribution may also have an important impact on this phenomenon. To address this question, we built a model for ductile rock deformation based on the two-phase damage theory of Bercovici & Ricard 2012. We aim to investigate the role of grain-size reduction, but also of phase mixing, in strain localization. Instead of considering a Zener-pinning effect on damage evolution, we propose to take into account the effect of the grain-boundary-sliding (GBS)-induced nucleation mechanism, which is better supported by experimental and natural observations (Precigout et al 2016). This continuum theory allows a two-mineral-phase aggregate with an explicit log-normal grain-size distribution to be represented, a reasonable approximation for polymineralic rocks. Quantifying microscopic variables using a statistical approach may allow for calibration at the small (experimental) scale. The general set of evolution equations remains up-scalable provided some conditions on the homogenization scale are met. Using the interface density as a measure of mixture quality, we assume, unlike Bercovici & Ricard 2012, that it may depend in part on grain size. The grain-size-independent part is represented by a "contact fraction" variable, whose evolution may be constrained by the dominant deformation mechanism. To derive the related evolution equations and account for the interdependence of thermodynamic state variables, we use Onsager's thermodynamic extremum principle. Finally, we solve our set of equations for an Anorthite/Pyroxene gabbroic composition. The results are used to discuss the interaction between grain-size reduction and phase mixing in strain localization for several simple cases. Bercovici D, Ricard Y (2012) Mechanisms for the generation of plate tectonics by two phase grain damage and pinning. Physics of the Earth and Planetary Interiors 202-203:27-55 Precigout J, Stunitz H (2016) Evidence of phase nucleation during olivine diffusion creep: A new perspective for mantle strain localisation. Earth and Planetary Science Letters 405:94-105

  6. How to Buy School Seating.

    ERIC Educational Resources Information Center

    Summerville, D.G.

    1966-01-01

    An expert tells what kind of furniture you need for the different rooms in your schools. Suggestions are made separately for both elementary and secondary classrooms emphasizing consideration for the student. General considerations are listed regarding durability, floor protection, storage, chair leg finish, wooden vs. fiberglass, size, and…

  7. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95 PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95 PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the requirements of minimum required PPD, maximum allowable POF, flaw-size tolerance about the mean flaw size, and flaw-size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
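
    The familiar "29 of 29" acceptance rule follows directly from the binomial model: if the true POD at the demonstrated flaw size were only 0.90, the chance of detecting all 29 flaws is 0.90^29 ≈ 0.047, which is why a fully successful demonstration supports 90% POD at 95% confidence. A minimal sketch of the corresponding probability of passing the demonstration (PPD) is shown below; the acceptance rule and the example POD value are assumptions for illustration, not values taken from the paper.

```python
from scipy.stats import binom

def prob_pass_demo(true_pod, n_flaws=29, min_hits=29):
    """Probability of passing a point-estimate POD demonstration in which
    at least `min_hits` of `n_flaws` flaws must be detected."""
    return binom.sf(min_hits - 1, n_flaws, true_pod)

# Confidence statement behind the 29/29 rule: if POD were only 0.90,
# the chance of detecting all 29 flaws is below 5%.
print(0.90 ** 29)            # ~0.047
print(prob_pass_demo(0.98))  # PPD if the technique's true POD is 0.98
```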

  8. Analysis of the scattering and absorption properties of ellipsoidal nanoparticle arrays for the design of full-color transparent screens

    NASA Astrophysics Data System (ADS)

    Monti, Alessio; Toscano, Alessandro; Bilotti, Filiberto

    2017-06-01

    The introduction of nanoparticle-based screens [C. W. Hsu, Nat. Commun. 5, 3152 (2014)] has paved the way to the realization of low-cost transparent displays with a wide viewing angle and scalability to large sizes. Despite the huge potential of this approach, the design of a nanoparticle array exhibiting a sharp scattering response in the optical spectrum is still a challenging task. In this manuscript, we investigate the suitability of ellipsoidal plasmonic nanoparticles for this purpose. First, we show that trade-offs apply between the sharpness of the scattering response of the array and its absorption level. Starting from these considerations, we prove that prolate nanoparticles may be a plausible candidate for achieving the peculiar features required in transparent screen applications. An example of a full-color and almost-isotropic transparent screen is finally proposed, and its robustness towards the geometrical inaccuracies that may arise during the fabrication process is assessed. All the analytical considerations, carried out with an analytical model that takes into account the surface dispersion effect affecting the nanoparticles, are supported by a proper set of full-wave simulations.

  9. Dynamic Task Allocation in Multi-Hop Multimedia Wireless Sensor Networks with Low Mobility

    PubMed Central

    Jin, Yichao; Vural, Serdar; Gluhak, Alexander; Moessner, Klaus

    2013-01-01

    This paper presents a task allocation-oriented framework to enable efficient in-network processing and cost-effective multi-hop resource sharing for dynamic multi-hop multimedia wireless sensor networks with low node mobility, e.g., pedestrian speeds. The proposed system incorporates a fast task reallocation algorithm to quickly recover from possible network service disruptions, such as node or link failures. An evolutional self-learning mechanism based on a genetic algorithm continuously adapts the system parameters in order to meet the desired application delay requirements, while also achieving a sufficiently long network lifetime. Since the algorithm runtime incurs considerable time delay while updating task assignments, we introduce an adaptive window size to limit the delay periods and ensure an up-to-date solution based on node mobility patterns and device processing capabilities. To the best of our knowledge, this is the first study that yields multi-objective task allocation in a mobile multi-hop wireless environment under dynamic conditions. Simulations are performed in various settings, and the results show considerable performance improvement in extending network lifetime compared to heuristic mechanisms. Furthermore, the proposed framework provides noticeable reduction in the frequency of missing application deadlines. PMID:24135992

  10. Occupational exposure limit for silver nanoparticles: considerations on the derivation of a general health-based value.

    PubMed

    Weldon, Brittany A; M Faustman, Elaine; Oberdörster, Günter; Workman, Tomomi; Griffith, William C; Kneuer, Carsten; Yu, Il Je

    2016-09-01

    With the increased production and widespread commercial use of silver nanoparticles (AgNPs), human and environmental exposures to silver nanoparticles are inevitably increasing. In particular, persons manufacturing and handling silver nanoparticles and silver nanoparticle-containing products are at risk of exposure, potentially resulting in health hazards. While silver dusts, consisting of micro-sized particles and soluble compounds, have established occupational exposure limits (OELs), silver nanoparticles exhibit different physicochemical properties from bulk materials. Therefore, we assessed silver nanoparticle exposure and related health hazards in order to determine whether an additional OEL may be needed. Dosimetric evaluations in our study identified the liver as the most sensitive target organ following inhalation exposure, and it therefore serves as the critical target organ for setting an occupational exposure standard for airborne silver nanoparticles. This study proposes an OEL of 0.19 μg/m(3) for silver nanoparticles derived from benchmark concentrations (BMCs) from subchronic rat inhalation toxicity assessments and the human equivalent concentration (HEC) with kinetic considerations and additional uncertainty factors. It is anticipated that this level will protect workers from potential health hazards, including lung, liver, and skin damage.

  11. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
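
    The core idea of the multinomial formulation can be sketched in a few lines of NumPy: draw multinomial count vectors, convert them to weight matrices, and obtain all bootstrap replicates with a single matrix product. The example below bootstraps the sample mean (the simplest moment-based statistic) rather than Pearson's correlation, and is a generic illustration rather than the authors' R implementation.

```python
import numpy as np

def vectorized_bootstrap_mean(x, n_boot=10_000, seed=0):
    """Non-parametric bootstrap of the sample mean via the multinomial
    formulation: draw count vectors, convert to weights, and compute all
    replicates with a single matrix-vector product."""
    rng = np.random.default_rng(seed)
    n = x.size
    counts = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)  # (n_boot, n)
    weights = counts / n                                           # rows sum to 1
    return weights @ x                                             # n_boot replicates

x = np.random.default_rng(1).normal(size=50)
reps = vectorized_bootstrap_mean(x)
print(x.mean(), reps.std())  # point estimate and bootstrap SE of the mean
```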

  12. Comparison of multi-arm VRX CT scanners through computer models

    NASA Astrophysics Data System (ADS)

    Rendon, David A.; DiBianca, Frank A.; Keyes, Gary S.

    2007-03-01

    Variable Resolution X-ray (VRX) CT scanners allow imaging of different-sized anatomy at the same level of detail using the same device. This is achieved by tilting the x-ray detectors so that the projected size of the detecting elements is varied, producing reconstructions of smaller fields of view with higher spatial resolution.1 The detector can be divided into two or more separate segments, called arms, which can be placed at different angles, allowing some flexibility for the scanner design. In particular, several arms can be set at different angles, creating a target region of considerably higher resolution that can be used to track the evolution of a previously diagnosed condition, while keeping the patient completely inside the field of view (FOV).2 This work presents newly-developed computer models of single-slice VRX scanners that allow us to study and compare different configurations (that is, various types of detectors arranged in any number of arms arranged in different geometries) in terms of spatial and contrast resolution. In particular, we are interested in comparing the performance of various geometric configurations that would otherwise be considered equivalent (using the same equipment, imaging FOVs of the same sizes, and having a similar overall scanner size). For this, a VRX simulator was developed, along with mathematical phantoms for spatial resolution and contrast analysis. These tools were used to compare scanner configurations that can be reproduced with materials presently available in our lab.

  13. Accuracy of iodine quantification in dual-layer spectral CT: Influence of iterative reconstruction, patient habitus and tube parameters.

    PubMed

    Sauter, Andreas P; Kopp, Felix K; Münzel, Daniela; Dangelmaier, Julia; Renz, Martin; Renger, Bernhard; Braren, Rickmer; Fingerle, Alexander A; Rummeny, Ernst J; Noël, Peter B

    2018-05-01

    Evaluation of the influence of iterative reconstruction, tube settings and patient habitus on the accuracy of iodine quantification with dual-layer spectral CT (DL-CT). A CT abdomen phantom with different extension rings and four iodine inserts (1, 2, 5 and 10 mg/ml) was scanned on a DL-CT. The phantom was scanned with tube voltages of 120 and 140 kVp and CTDIvol values of 2.5, 5, 10 and 20 mGy. Reconstructions were performed for eight levels of iterative reconstruction (i0-i7). Diagnostic dose levels were classified depending on patient size and radiation dose. Measurements of iodine concentration showed accurate and reliable results. Taking all CTDIvol levels into account, the mean absolute percentage difference (MAPD) showed less accuracy for low CTDIvol levels (2.5 mGy: 34.72%) than for high CTDIvol levels (20 mGy: 5.89%). At diagnostic dose levels, accurate quantification of iodine was possible (MAPD 3.38%). The level of iterative reconstruction did not significantly influence iodine measurements. Iodine quantification worked more accurately at a tube voltage of 140 kVp. Phantom size had a considerable effect only at low dose levels; at diagnostic dose levels the effect of phantom size decreased (MAPD <5% for all phantom sizes). With DL-CT, even low iodine concentrations can be accurately quantified. Accuracies are higher when diagnostic radiation doses are employed. Copyright © 2018 Elsevier B.V. All rights reserved.
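
    The mean absolute percentage difference (MAPD) used as the accuracy metric is straightforward to compute from measured versus nominal iodine concentrations; the sketch below uses hypothetical measurements for the four inserts, not values from the study.

```python
import numpy as np

def mapd(measured, nominal):
    """Mean absolute percentage difference between measured and nominal values."""
    measured = np.asarray(measured, dtype=float)
    nominal = np.asarray(nominal, dtype=float)
    return 100.0 * np.mean(np.abs(measured - nominal) / nominal)

# Hypothetical measurements for the 1, 2, 5 and 10 mg/ml inserts.
print(mapd([1.05, 1.93, 5.10, 10.3], [1, 2, 5, 10]))  # ~3.4%
```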

  14. Forest Fuels Management in Europe

    Treesearch

    Gavriil Xanthopoulos; David Caballero; Miguel Galante; Daniel Alexandrian; Eric Rigolot; Raffaella Marzano

    2006-01-01

    Current fuel management practices vary considerably between European countries. Topography, forest and forest fuel characteristics, size and compartmentalization of forests, forest management practices, land uses, land ownership, size of properties, legislation, and, of course, tradition, are reasons for these differences. Firebreak construction,...

  15. A parallel expert system for the control of a robotic air vehicle

    NASA Technical Reports Server (NTRS)

    Shakley, Donald; Lamont, Gary B.

    1988-01-01

    Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirement is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates a consideration of the rule-set/fact-set size along with representation issues. These issues are looked at in reference to information movement and various inference mechanisms. Also examined is the process involved with transporting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.

  16. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would make it possible to bring this technology, and with it knowledge, to low-resource settings to provide a state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the end-point device's system resources, computation time, or battery. Cloud environments were designed to address this by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device based around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare it to other image formats in terms of size, noise and correctness. We also present the cloud configuration used to segment the video into frames that can later be used for further analysis.
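
    A minimal sketch of the frame-segmentation step is shown below, using OpenCV to split an uploaded mp4 capture into individual frames for later analysis; the file names and the use of OpenCV are illustrative assumptions, since the abstract does not specify the cloud-side tooling.

```python
import cv2  # OpenCV; assumes the mp4 has already been uploaded to the worker

def extract_frames(video_path, out_pattern="frame_{:04d}.png"):
    """Split a captured multispectral video into individual frames so each
    spectral band / illumination step can be analyzed separately."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(out_pattern.format(idx), frame)
        idx += 1
    cap.release()
    return idx

print(extract_frames("capture.mp4"), "frames written")  # hypothetical file name
```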

  17. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  18. Evaluation of injectable strontium-containing borate bioactive glass cement with enhanced osteogenic capacity in a critical-sized rabbit femoral condyle defect model.

    PubMed

    Zhang, Yadong; Cui, Xu; Zhao, Shichang; Wang, Hui; Rahaman, Mohamed N; Liu, Zhongtang; Huang, Wenhai; Zhang, Changqing

    2015-02-04

    The development of a new generation of injectable bone cements that are bioactive and have enhanced osteogenic capacity for rapid osseointegration is receiving considerable interest. In this study, a novel injectable cement (designated Sr-BBG) composed of strontium-doped borate bioactive glass particles and a chitosan-based bonding phase was prepared and evaluated in vitro and in vivo. The bioactive glass provided the benefits of bioactivity, conversion to hydroxyapatite, and the ability to stimulate osteogenesis, while the chitosan provided a cohesive biocompatible and biodegradable bonding phase. The Sr-BBG cement showed the ability to set in situ (initial setting time = 11.6 ± 1.2 min) and a compressive strength of 19 ± 1 MPa. The Sr-BBG cement enhanced the proliferation and osteogenic differentiation of human bone marrow-derived mesenchymal stem cells in vitro when compared to a similar cement (BBG) composed of chitosan-bonded borate bioactive glass particles without Sr. Microcomputed tomography and histology of critical-sized rabbit femoral condyle defects implanted with the cements showed the osteogenic capacity of the Sr-BBG cement. New bone was observed at different distances from the Sr-BBG implants within eight weeks. The bone-implant contact index was significantly higher for the Sr-BBG implant than it was for the BBG implant. Together, the results indicate that this Sr-BBG cement is a promising implant for healing irregularly shaped bone defects using minimally invasive surgery.

  19. Special considerations--Induction of labor in low-resource settings.

    PubMed

    Smid, Marcela; Ahmed, Yusuf; Ivester, Thomas

    2015-10-01

    Induction of labor in resource-limited settings has the potential to significantly improve health outcomes for both mothers and infants. However, there are relatively little context-specific data to guide practice, and few specific guidelines. Also, there may be considerable issues regarding the facilities and organizational capacities necessary to support safe practices in many aspects of obstetrical practice, and for induction of labor in particular. Herein we describe the various opportunities as well as challenges presented by induction of labor in these settings. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Lack of Set Size Effects in Spatial Updating: Evidence for Offline Updating

    ERIC Educational Resources Information Center

    Hodgson, Eric; Waller, David

    2006-01-01

    Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these…

  1. Developing Foreign Language Curriculum in the Total School Setting: The Macro-Picture. The ACTFL Foreign Language Education Series, Vol. 10.

    ERIC Educational Resources Information Center

    Zais, Robert S.

    Four broad questions are addressed in this consideration of foreign language study in the total school setting. The first part deals with the broad perspective that is needed in order to integrate the educational enterprise and to form a community of educators each with a special contribution to make. Some considerations relevant to this question…

  2. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  3. Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

    ERIC Educational Resources Information Center

    Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.

    2013-01-01

    Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…

  4. Class Size Effects on Reading Achievement Using PIRLS Data: Evidence from Greece

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros; Traynor, Anne

    2014-01-01

    Background/Context: The effects of class size on student achievement have gained considerable attention in education research and policy, especially over the last 30 years. Perhaps the best evidence about the effects of class size thus far has been produced from analyses of Project STAR data, a large-scale experiment where students and teachers…

  5. 48 CFR 970.1100-2 - Additional considerations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Additional considerations. 970.1100-2 Section 970.1100-2 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY... considerations. (a) While it is not feasible to set forth standard language which would apply to every contract...

  6. 48 CFR 970.1100-2 - Additional considerations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Additional considerations. 970.1100-2 Section 970.1100-2 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY... considerations. (a) While it is not feasible to set forth standard language which would apply to every contract...

  7. 48 CFR 970.1100-2 - Additional considerations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Additional considerations. 970.1100-2 Section 970.1100-2 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY... considerations. (a) While it is not feasible to set forth standard language which would apply to every contract...

  8. 48 CFR 970.1100-2 - Additional considerations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Additional considerations. 970.1100-2 Section 970.1100-2 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY... considerations. (a) While it is not feasible to set forth standard language which would apply to every contract...

  9. 48 CFR 970.1100-2 - Additional considerations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Additional considerations. 970.1100-2 Section 970.1100-2 Federal Acquisition Regulations System DEPARTMENT OF ENERGY AGENCY... considerations. (a) While it is not feasible to set forth standard language which would apply to every contract...

  10. Historical changes in genotypic frequencies at the Pantophysin locus in Atlantic cod (Gadus morhua) in Icelandic waters: evidence of fisheries-induced selection?

    PubMed Central

    Jakobsdóttir, Klara B; Pardoe, Heidi; Magnússon, Árni; Björnsson, Höskuldur; Pampoulie, Christophe; Ruzzante, Daniel E; Marteinsdóttir, Guðrún

    2011-01-01

    The intense fishing mortality imposed on Atlantic cod in Icelandic waters during recent decades has resulted in marked changes in stock abundance, as well as in age and size composition. Using a molecular marker known to be under selection (Pan I) along with a suite of six neutral microsatellite loci, we analysed an archived data set and revealed evidence of distinct temporal changes in the frequencies of genotypes at the Pan I locus among spawning Icelandic cod, collected between 1948 and 2002, a period characterized by high fishing pressure. Concurrently, temporal stability in the composition of the microsatellite loci was established within the same data set. The frequency of the Pan I(BB) genotype decreased over a period of six decades, concomitant with considerable spatial and technical changes in fishing effort that resulted in the disappearance of older individuals from the fishable stock. Consequently, these changes have likely led to a change in the genotype frequencies at this locus in the spawning stock of Icelandic cod. The study highlights the value of molecular genetic approaches that combine functional and neutral markers examined in the same set of individuals for investigations of the selective effects of harvesting and reiterates the need for an evolutionary dimension to fisheries management. PMID:25568005

  11. Solvation effects on chemical shifts by embedded cluster integral equation theory.

    PubMed

    Frach, Roland; Kast, Stefan M

    2014-12-11

    The accurate computational prediction of nuclear magnetic resonance (NMR) parameters like chemical shifts represents a challenge if the species studied is immersed in strongly polarizing environments such as water. Common approaches to treating a solvent in the form of, e.g., the polarizable continuum model (PCM) ignore strong directional interactions such as H-bonds to the solvent which can have substantial impact on magnetic shieldings. We here present a computational methodology that accounts for atomic-level solvent effects on NMR parameters by extending the embedded cluster reference interaction site model (EC-RISM) integral equation theory to the prediction of chemical shifts of N-methylacetamide (NMA) in aqueous solution. We examine the influence of various so-called closure approximations of the underlying three-dimensional RISM theory as well as the impact of basis set size and different treatment of electrostatic solute-solvent interactions. We find considerable and systematic improvement over reference PCM and gas phase calculations. A smaller basis set in combination with a simple point charge model already yields good performance which can be further improved by employing exact electrostatic quantum-mechanical solute-solvent interaction energies. A larger basis set benefits more significantly from exact over point charge electrostatics, which can be related to differences of the solvent's charge distribution.

  12. Case Report of a Patient With Idiopathic Hypersomnia and a Family History of Malignant Hyperthermia Undergoing General Anesthesia: An Overview of the Anesthetic Considerations.

    PubMed

    Aflaki, Sena; Hu, Sally; Kamel, Rami A; Chung, Frances; Singh, Mandeep

    2017-05-01

    The pathophysiologic underpinnings of idiopathic hypersomnia and its interactions with anesthetic medications remain poorly understood. There is a scarcity of literature describing this patient population in the surgical setting. This case report outlines the anesthetic considerations and management plan for a 55-year-old female patient with a known history of idiopathic hypersomnia undergoing an elective shoulder arthroscopy in the ambulatory setting. In addition, this case offers a unique set of considerations and conflicts related to the patient having a family history of malignant hyperthermia. A combined technique of general and regional anesthesia was used. Anesthesia was maintained with total intravenous anesthesia via the use of propofol and remifentanil. The depth of anesthesia was monitored with entropy. There were no perioperative complications.

  13. Effects of Group Size on Students Mathematics Achievement in Small Group Settings

    ERIC Educational Resources Information Center

    Enu, Justice; Danso, Paul Amoah; Awortwe, Peter K.

    2015-01-01

    An ideal group size is hard to obtain in small group settings; hence there are groups with more members than others. The purpose of the study was to find out whether group size has any effects on students' mathematics achievement in small group settings. Two third year classes of the 2011/2012 academic year were selected from two schools in the…

  14. Neural Plasticity and Neurorehabilitation Following Traumatic Brain Injury

    DTIC Science & Technology

    2009-10-01

    Nissl. Using the Nissl-stained sections, Dorothy Kozlowski’s lab has analyzed the size of the contusions. Previous studies have shown that if...brains, staining one set with Nissl, saving the remaining sets for immunohistochemical staining. • Dr. Kozlowski’s lab is analyzing contusion size...serially and coronally into sets and immunohistochemically analyzed for the following: contusion size estimated as volume of remaining tissue in Nissl

  15. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  16. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    NASA Astrophysics Data System (ADS)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
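
    For orientation, the level-set evolution being solved is ∂φ/∂t + R|∇φ| = 0, with the fire perimeter tracked as the φ = 0 contour. The sketch below uses only a first-order Godunov upwind gradient with forward-Euler time stepping, i.e., the kind of low-order discretization whose rate-of-spread and fire-area errors the paper quantifies, not the fifth-order WENO/Runge-Kutta scheme it implements; grid size, spread rate, and ignition geometry are made up for illustration.

```python
import numpy as np

def advance_level_set(phi, rate, dx, dt):
    """One forward-Euler step of d(phi)/dt + rate * |grad phi| = 0 using a
    first-order Godunov upwind gradient (rate > 0, i.e. an expanding front).
    The fire perimeter is the phi = 0 contour."""
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx   # backward differences, x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward differences, x
    dym = (phi - np.roll(phi, 1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2 +
                   np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
    return phi - dt * rate * grad

# Signed distance to a small circular ignition; propagate at 1 m/s on a 1 m grid.
n, dx = 200, 1.0
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
phi = np.sqrt((x - 100)**2 + (y - 100)**2) - 5.0
for _ in range(100):
    phi = advance_level_set(phi, rate=1.0, dx=dx, dt=0.5)  # CFL-limited step
burned_area = (phi <= 0).sum() * dx**2
print(burned_area)  # compare with pi * (5 + 50)**2 for the analytic circle
```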

  17. 76 FR 23335 - Wilderness Stewardship Plan/Environmental Impact Statement, Sequoia and Kings Canyon National...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ... planning and environmental impact analysis process required to inform consideration of alternative... 5, 1996. Based on an analysis of the numerous scoping comments received, and with consideration of a... proper food storage; party size; camping and campsites; human waste management; stock use; meadow...

  18. Conceptual design considerations and neutronics of lithium fall laser fusion target chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, W.R.; Thomson, W.B.

    1978-05-31

    Atomics International and Lawrence Livermore Laboratory are involved in the conceptual design of a laser fusion power plant incorporating the lithium fall target chamber. In this paper we discuss some of the more important design considerations for the target chamber and evaluate its nuclear performance. Sizing and configuration of the fall, hydraulic effects, and mechanical design considerations are addressed. The nuclear aspects examined include tritium breeding, energy deposition, and radiation damage.

  19. Lead and Arsenic Bioaccessibility and Speciation as a Function of Soil Particle Size

    EPA Science Inventory

    Bioavailability research of soil metals has advanced considerably from default values to validated in vitro bioaccessibility (IVBA) assays for site-specific risk assessment. Previously, USEPA determined that the soil-size fraction representative of dermal adherence and consequent...

  20. Using specific volume increment (SVI) for quantifying growth responses in trees - theoretical and practical considerations

    Treesearch

    Eddie Bevilacqua

    2002-01-01

    Comparative analysis of growth responses among trees following natural or anthropogenic disturbances is often confounded when comparing trees of different size because of the high correlation between growth and initial tree size: large trees tend to have higher absolute growth rates. Relative growth rate (RGR) may not be the most suitable size-dependent measure of growth...

  1. Size and burden of mental disorders in Europe--a critical review and appraisal of 27 studies.

    PubMed

    Wittchen, Hans-Ulrich; Jacobi, Frank

    2005-08-01

    Epidemiological data on a wide range of mental disorders from community studies conducted in European countries are presented to determine the availability and consistency of prevalence, disability and treatment findings for the EU. Using a stepwise multimethod approach, 27 eligible studies with quite variable designs and methods including over 150,000 subjects from 16 European countries were identified. Prevalence: On the basis of meta-analytic techniques as well as on reanalyses of selected data sets, it is estimated that about 27% (equals 82.7 million; 95% CI: 78.5-87.1) of the adult EU population, 18-65 of age, is or has been affected by at least one mental disorder in the past 12 months. Taking into account the considerable degree of comorbidity (about one third had more than one disorder), the most frequent disorders are anxiety disorders, depressive, somatoform and substance dependence disorders. When taking into account design, sampling and other methodological differences between studies, little evidence seems to exist for considerable cultural or country variation. Disability and treatment: despite very divergent and fairly crude assessment strategies, the available data consistently demonstrate (a) an association of all mental disorders with a considerable disability burden in terms of number of work days lost (WLD) and (b) generally low utilization and treatment rates. Only 26% of all cases had any consultation with professional health care services, a finding suggesting a considerable degree of unmet need. The paper highlights considerable future research needs for coordinated EU studies across all disorders and age groups. As prevalence estimates could not simply be equated with defined treatment needs, such studies should determine the degree of met and unmet needs for services by taking into account severity, disability and comorbidity. These needs are most pronounced for the new EU member states as well as more generally for adolescent and older populations.

  2. The effects of delay duration on visual working memory for orientation.

    PubMed

    Shin, Hongsup; Zou, Qijia; Ma, Wei Ji

    2017-12-01

    We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
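
    A typical model-free dispersion metric for orientation report errors is the circular standard deviation computed on doubled angles (orientation being 180°-periodic); the abstract does not specify which of its four model-free metrics correspond to this, so the sketch below is a generic illustration on simulated errors, not the study's data.

```python
import numpy as np

def circ_sd_orientation(errors_deg):
    """Circular SD of orientation estimation errors (axial data, period 180 deg).
    Angles are doubled to map the 180-deg space onto the circle, the mean
    resultant length R is computed, and SD = sqrt(-2 ln R) / 2 (back in deg)."""
    doubled = np.deg2rad(2.0 * np.asarray(errors_deg, dtype=float))
    R = np.abs(np.mean(np.exp(1j * doubled)))
    return np.rad2deg(np.sqrt(-2.0 * np.log(R)) / 2.0)

rng = np.random.default_rng(0)
for set_size, sd in [(1, 6), (2, 9), (4, 14), (6, 18)]:  # hypothetical spreads
    errs = rng.normal(0, sd, size=500)
    print(set_size, round(circ_sd_orientation(errs), 1))
```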

  3. Adult and Child Semantic Neighbors of the Kroll and Potter (1984) Nonobjects

    PubMed Central

    Storkel, Holly L.; Adlof, Suzanne M.

    2008-01-01

    Purpose: The purpose was to determine the number of semantic neighbors, namely semantic set size, for 88 nonobjects (Kroll & Potter, 1984) and determine how semantic set size related to other measures and age. Method: Data were collected from 82 adults and 92 preschool children in a discrete association task. The nonobjects were presented via computer, and participants reported the first word that came to mind that was meaningfully related to the nonobject. Words reported by two or more participants were considered semantic neighbors. The strength of each neighbor was computed as the proportion of participants who reported the neighbor. Results: Results showed that semantic set size was not significantly correlated with objectlikeness ratings or object decision reaction times from Kroll and Potter (1984). However, semantic set size was significantly negatively correlated with the strength of the strongest neighbor(s). In terms of age effects, adult and child semantic set sizes were significantly positively correlated and the majority of numeric differences were on the order of 0–3 neighbors. Comparison of actual neighbors showed greater discrepancies; however, this varied by neighbor strength. Conclusions: Semantic set size can be determined for nonobjects. Specific guidelines are suggested for using these nonobjects in future research. PMID:19252127
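
    The semantic set size and neighbor strengths described in the Method section can be computed directly from the raw association responses: count each reported word, keep words reported by two or more participants as neighbors, and divide each count by the number of participants. The responses in the sketch below are invented for illustration.

```python
from collections import Counter

def semantic_set_size(responses):
    """Given one association response per participant for a single nonobject,
    return the semantic set size (words reported by >= 2 participants) and
    each neighbor's strength (proportion of participants reporting it)."""
    counts = Counter(responses)
    n = len(responses)
    neighbors = {w: c / n for w, c in counts.items() if c >= 2}
    return len(neighbors), neighbors

# Hypothetical responses from ten participants for one nonobject.
resp = ["hat", "hat", "shell", "hat", "bowl", "shell", "rock", "bowl", "hat", "lamp"]
print(semantic_set_size(resp))  # (3, {'hat': 0.4, 'shell': 0.2, 'bowl': 0.2})
```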

  4. Conducting meta-analyses of HIV prevention literatures from a theory-testing perspective.

    PubMed

    Marsh, K L; Johnson, B T; Carey, M P

    2001-09-01

    Using illustrations from HIV prevention research, the current article advocates approaching meta-analysis as a theory-testing scientific method rather than as merely a set of rules for quantitative analysis. Like other scientific methods, meta-analysis has central concerns with internal, external, and construct validity. The focus of a meta-analysis should only rarely be merely describing the effects of health promotion; rather, it should be on understanding and explaining phenomena and the processes underlying them. The methodological decisions meta-analysts make in conducting reviews should be guided by a consideration of the underlying goals of the review (e.g., simple effect size estimation or, preferably, theory testing). From the advocated perspective that a health behavior meta-analyst should test theory, the authors present a number of issues to be considered during the conduct of meta-analyses.

  5. Integrating the human element into the systems engineering process and MBSE methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tadros, Michael Samir

    In response to the challenges related to the increasing size and complexity of systems, organizations have recognized the need to integrate human considerations in the beginning stages of systems development. Human Systems Integration (HSI) seeks to accomplish this objective by incorporating human factors within systems engineering (SE) processes and methodologies, which is the focus of this paper. A representative set of HSI methods from multiple sources is organized, analyzed, and mapped to the systems engineering Vee-model. These methods are then consolidated and evaluated against the SE process and Model-Based Systems Engineering (MBSE) methodology to determine where and how they could integrate within systems development activities in the form of specific enhancements. Overall conclusions based on these evaluations are presented and future research areas are proposed.

  6. Attitude control requirements for various solar sail missions

    NASA Technical Reports Server (NTRS)

    Williams, Trevor

    1990-01-01

    The differences are summarized between the attitude control requirements for various types of proposed solar sail missions (Earth orbiting; heliocentric; asteroid rendezvous). In particular, it is pointed out that the most demanding type of mission is the Earth orbiting one, with the solar orbit case quite benign and asteroid station keeping only slightly more difficult. It is then shown, using numerical results derived for the British Solar Sail Group Earth orbiting design, that the disturbance torques acting on a realistic sail can completely dominate the torques required for nominal maneuvering of an 'ideal' sail. This is obviously an important consideration when sizing control actuators; not so obvious is the fact that it makes the standard rotating vane actuator unsatisfactory in practice. The reason for this is given, and a set of new actuators described which avoids the difficulty.

  7. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
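
    For background, the classical two-subclass change-in-ratio estimator (which assumes equal encounter probabilities for the two subclasses, the assumption the paper relaxes) illustrates the starting point. With $p_1$ and $p_2$ the proportions of subclass $x$ in samples taken before and after the removal, $R_x$ the known removal of subclass $x$, and $R$ the total removal, the pre-removal population size is estimated as

    $$\hat{N}_1 = \frac{R_x - p_2 R}{p_1 - p_2}.$$

    This display is a textbook form given here only for orientation; the estimators derived in the paper replace the equal-encounter-probability assumption with explicit models of how encounter probabilities vary among subclasses and over time.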

  8. Setting Up CD-ROM Work Areas. Part I: Ergonomic Considerations, User Furniture, Location.

    ERIC Educational Resources Information Center

    Vasi, John; LaGuardia, Cheryl

    1992-01-01

    The first of a two-part series on design of CD-ROM work areas in libraries discusses (1) space and location considerations; (2) ergonomics, including work surface, chairs, lighting, printers, other accessories, and security; and (3) other considerations, including staff assistance, reference tools, literature racks, and promotional materials. (MES)

  9. Comparison of the Exomes of Common Carp (Cyprinus carpio) and Zebrafish (Danio rerio)

    PubMed Central

    Henkel, Christiaan V.; Dirks, Ron P.; Jansen, Hans J.; Forlenza, Maria; Wiegertjes, Geert F.; Howe, Kerstin; van den Thillart, Guido E.E.J.M.

    2012-01-01

    Research on common carp, Cyprinus carpio, is beneficial for zebrafish research because of resources available owing to its large body size, such as the availability of sufficient organ material for transcriptomics, proteomics, and metabolomics. Here we describe the shotgun sequencing of a clonal double-haploid common carp line. The assembly consists of 511,891 scaffolds with an N50 of 17 kb, predicting a total genome size of 1.4–1.5 Gb. A detailed analysis of the ten largest scaffolds indicates that the carp genome has a considerably lower repeat coverage than zebrafish, whilst the average intron size is significantly smaller, making it comparable to the fugu genome. The quality of the scaffolding was confirmed by comparisons with RNA deep sequencing data sets and a manual analysis of synteny with the zebrafish, especially the Hox gene clusters. In the ten largest scaffolds analyzed, the synteny of genes is almost complete. Comparisons of predicted exons of common carp with those of the zebrafish revealed only a few genes specific for either zebrafish or carp, most of these being of unknown function. This supports the hypothesis of an additional genome duplication event in the carp evolutionary history, which—due to a higher degree of compactness—did not result in a genome larger than that of zebrafish. PMID:22715948

  10. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
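
    As a sketch of the construction requirements listed above (small set size, minimal network information, easy implementation), the following greedy routine builds a maximal independent set, which is by construction also a dominating set: every node left out has a neighbor inside the set. This is an illustrative sketch, not the authors' exact algorithm; the degree-ordering heuristic and the networkx-based example graph are assumptions of this example.

    ```python
    import networkx as nx

    def greedy_maximal_independent_set(G, high_degree_first=True):
        """Greedily build a maximal independent set (MIS) of G.

        Any node outside the MIS was blocked by a neighbor that joined it,
        so the MIS dominates the graph. Visiting high-degree nodes first
        tends to keep the resulting set small.
        """
        order = sorted(G.nodes, key=G.degree, reverse=high_degree_first)
        mis, blocked = set(), set()
        for v in order:
            if v not in blocked:
                mis.add(v)
                blocked.add(v)
                blocked.update(G.neighbors(v))   # neighbors may no longer join
        return mis

    # Example on a synthetic scale-free network
    G = nx.barabasi_albert_graph(1000, 3, seed=1)
    mis = greedy_maximal_independent_set(G)
    assert all(v in mis or any(u in mis for u in G.neighbors(v)) for v in G)  # dominating
    print("MIS / dominating set size:", len(mis))
    ```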

  11. Role of Beam Spot Size in Heating Targets at Depth.

    PubMed

    Ross, E Victor; Childs, James

    2015-12-01

    Wavelength, fluence and pulse width are primary device parameters for the treatment of skin and hair conditions. Wavelength selection is based on tissue scatter and target chromophores. Pulse width is chosen to optimize target heating. Energy absorbed by a target is determined by fluence and spot size of the light source as well as the depth of the target. We conducted an in vitro skin study and simulations to compare heating of a target at a particular depth versus spot size. Porcine skin and fat tissue were prepared and separated to form a 2 mm skin layer above a 1 cm thick fat layer. A 50 μm thermocouple was placed between the layers and centered beneath a 23 × 38 mm treatment window of an 805 nm diode laser device (Vectus, Cynosure, Westford, MA). Apertures provided various incident beam spot sizes and the temperature rise of the thermocouple was measured for a fixed fluence. The temperature rise of the 2 mm deep target versus treatment area showed two regimes with different positive slopes. The first regime, up to approximately 1 cm² in area, has a greater temperature rise versus area than the regime greater than 1 cm². The slope in the second regime is nonetheless appreciable and provides a fluence reduction factor for skin safety. The same temperature rise in a target at 2 mm depth (typical hair bulb depth in some areas) is realized by increasing the area from 1 to 4 cm² while reducing the fluence by half. The role of spot size and in situ beam divergence is an important consideration in determining optimum fluence settings that increase skin safety when treating deeper targets.

  12. Silage Collected from Dairy Farms Harbors an Abundance of Listeriaphages with Considerable Host Range and Genome Size Diversity

    PubMed Central

    Vongkamjan, Kitiya; Switt, Andrea Moreno; den Bakker, Henk C.; Fortes, Esther D.

    2012-01-01

    Since the food-borne pathogen Listeria monocytogenes is common in dairy farm environments, it is likely that phages infecting this bacterium (“listeriaphages”) are abundant on dairy farms. To better understand the ecology and diversity of listeriaphages on dairy farms and to develop a diverse phage collection for further studies, silage samples collected on two dairy farms were screened for L. monocytogenes and listeriaphages. While only 4.5% of silage samples tested positive for L. monocytogenes, 47.8% of samples were positive for listeriaphages, containing up to >1.5 × 10⁴ PFU/g. Host range characterization of the 114 phage isolates obtained, with a reference set of 13 L. monocytogenes strains representing the nine major serotypes and four lineages, revealed considerable host range diversity; phage isolates were classified into nine lysis groups. While one serotype 3c strain was not lysed by any phage isolates, serotype 4 strains were highly susceptible to phages and were lysed by 63.2 to 88.6% of phages tested. Overall, 12.3% of phage isolates showed a narrow host range (lysing 1 to 5 strains), while 28.9% of phages represented broad host range (lysing ≥11 strains). Genome sizes of the phage isolates were estimated to range from approximately 26 to 140 kb. The extensive host range and genomic diversity of phages observed here suggest an important role of phages in the ecology of L. monocytogenes on dairy farms. In addition, the phage collection developed here has the potential to facilitate further development of phage-based biocontrol strategies (e.g., in silage) and other phage-based tools. PMID:23042180

  13. Floral display size, conspecific density and florivory affect fruit set in natural populations of Phlox hirsuta, an endangered species

    PubMed Central

    Ruane, Lauren G.; Rotzin, Andrew T.; Congleton, Philip H.

    2014-01-01

    Background and Aims Natural variation in fruit and seed set may be explained by factors that affect the composition of pollen grains on stigmas. Self-incompatible species require compatible outcross pollen grains to produce seeds. The siring success of outcross pollen grains, however, can be hindered if self (or other incompatible) pollen grains co-occur on stigmas. This study identifies factors that determine fruit set in Phlox hirsuta, a self-sterile endangered species that is prone to self-pollination, and its associated fitness costs. Methods Multiple linear regressions were used to identify factors that explain variation in percentage fruit set within three of the five known populations of this endangered species. Florivorous beetle density, petal colour, floral display size, local conspecific density and pre-dispersal seed predation were quantified and their effects on the ability of flowers to produce fruits were assessed. Key Results In all three populations, percentage fruit set decreased as florivorous beetle density increased and as floral display size increased. The effect of floral display size on fruit set, however, often depended on the density of nearby conspecific plants. High local conspecific densities offset – even reversed – the negative effects of floral display size on percentage fruit set. Seed predation by mammals decreased fruit set in one population. Conclusions The results indicate that seed production in P. hirsuta can be maximized by selectively augmenting populations in areas containing isolated large plants, by reducing the population sizes of florivorous beetles and by excluding mammals that consume unripe fruits. PMID:24557879

  14. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
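
    Taxon coverage density, as used above, is simply the filled fraction of the taxon-by-gene presence matrix. A minimal sketch (the matrix below is hypothetical):

    ```python
    import numpy as np

    def coverage_density(presence):
        """Proportion of taxon-by-gene cells with any data present.

        presence: 2D array-like of 0/1 (rows = taxa, columns = genes/loci).
        """
        return np.asarray(presence, dtype=bool).mean()

    # Hypothetical 4-taxon by 5-gene sampling pattern
    M = [[1, 1, 1, 0, 1],
         [1, 0, 1, 1, 1],
         [1, 1, 0, 0, 1],
         [0, 1, 1, 1, 1]]
    print(f"taxon coverage density = {coverage_density(M):.2f}")   # 0.75
    ```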

  15. Considerations for throughfall chemistry sample-size determination

    Treesearch

    Pamela J. Edwards; Paul Mohai; Howard G. Halverson; David R. DeWalle

    1989-01-01

    Both the number of trees sampled per species and the number of sampling points under each tree are important throughfall sampling considerations. Chemical loadings obtained from an urban throughfall study were used to evaluate the relative importance of both of these sampling factors in tests for determining species' differences. Power curves for detecting...

  16. Scale considerations for ecosystem management

    Treesearch

    Jonathan B. Haufler; Thomas R. Crow; David Wilcove

    1999-01-01

    One of the difficult challenges facing ecosystem management is the determination of appropriate spatial and temporal scales to use. Scale in the spatial sense includes consideration of both the size (area or extent) of an ecosystem management activity and the degree of resolution of mapped or measured data. In the temporal sense, scale concerns the duration of both...

  17. What Are the Safety Considerations for Insulin Control for Athletes?

    ERIC Educational Resources Information Center

    McDaniel, Larry W.; Olson, Sara; Gaudet, Laura; Jackson, Allen

    2010-01-01

    Athletes diagnosed with diabetes may have difficulty with their blood sugar levels fluctuating during intense exercise. Considerations for athletes with insulin concerns may range anywhere from exercise rehabilitation to the use of an automatic insulin pump. The automatic insulin pump is a small battery-operated device about the size of a pager.…

  18. On the size of sports fields

    NASA Astrophysics Data System (ADS)

    Darbois Texier, Baptiste; Cohen, Caroline; Dupeux, Guillaume; Quéré, David; Clanet, Christophe

    2014-03-01

    The size of sports fields varies considerably, from a few meters for table tennis to hundreds of meters for golf. We first show that this size is mainly fixed by the range of the projectile, that is, by the aerodynamic properties of the ball (mass, surface area, drag coefficient) and its maximal velocity in the game. This allows us to propose general classifications for sports played with a ball.
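
    As a hedged aside on the scaling argument: in the drag-free limit a projectile launched at speed $U$ and angle $\theta$ has range $U^2 \sin 2\theta / g$, while at high speed aerodynamic drag makes the range saturate near a characteristic length set by the ball's mass, cross-section and drag coefficient,

    $$x_{\max}^{\text{no drag}} = \frac{U^2 \sin 2\theta}{g}, \qquad \ell_{\text{aero}} \sim \frac{2m}{\rho_{\text{air}} C_D S},$$

    where $m$ is the ball mass, $S$ its cross-sectional area, $C_D$ its drag coefficient and $\rho_{\text{air}}$ the air density. These are standard relations included here only to indicate the kind of quantities the classification rests on; they are not quoted from the paper.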

  19. Three-phase boundary length in solid-oxide fuel cells: A mathematical model

    NASA Astrophysics Data System (ADS)

    Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf

    A mathematical model to calculate the volume specific three-phase boundary length in the porous composite electrodes of solid-oxide fuel cell is presented. The model is exclusively based on geometrical considerations accounting for porosity, particle diameter, particle size distribution, and solids phase distribution. Results are presented for uniform particle size distribution as well as for non-uniform particle size distribution.

  20. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
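
    For context, the familiar large-sample (normal-approximation) formula for the per-group sample size of a two-sided two-sample t test, which exact power analyses of the kind discussed in the article refine, is

    $$n \approx \frac{2\sigma^2 \left(z_{1-\alpha/2} + z_{1-\beta}\right)^2}{\delta^2},$$

    where $\sigma$ is the common standard deviation, $\delta$ the scientifically important difference between the two population means, $\alpha$ the significance level and $1-\beta$ the desired power. This background formula is included here only for orientation; the article's concern is what happens when $\sigma^2$ is replaced by a pilot-sample estimate.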

  1. Genetic variation in tree structure and its relation to size in Douglas-fir: I. Biomass partitioning, foliage efficiency, stem form, and wood density.

    Treesearch

    J.B. St. Clair

    1994-01-01

    Genetic variation and covariation among traits of tree size and structure were assessed in an 18-year-old Douglas-fir (Pseudotsuga menziesii var. menziesii (Mirb.) Franco) genetic test in the Coast Range of Oregon. Considerable genetic variation was found in size, biomass partitioning, and wood density, and genetic gains may be...

  2. Digital dental photography. Part 6: camera settings.

    PubMed

    Ahmad, I

    2009-07-25

    Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.

  3. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfied robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the high reputation users' ratings have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and its robustness is considerably improved comparing with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors towards a better online user reputation evaluation.
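
    A simplified sketch of the iterative idea described above (not the authors' exact formulation): for each object, users who give it the same rating form a group; a user's reputation is the average reputation-weighted size of the groups the user belongs to, and the weights are recomputed from the updated reputations until the values stabilize. The variable names, normalization and convergence tolerance are choices of this sketch.

    ```python
    import numpy as np

    def iterative_group_reputation(ratings, n_iter=100, tol=1e-8):
        """ratings[u, o] = discrete rating of object o by user u (np.nan = not rated).

        Returns one reputation score per user (normalized to sum to 1).
        Assumes at least one rating is present.
        """
        n_users, n_objects = ratings.shape
        rep = np.ones(n_users) / n_users                  # start from uniform reputation
        for _ in range(n_iter):
            new_rep = np.zeros(n_users)
            counts = np.zeros(n_users)
            for o in range(n_objects):
                col = ratings[:, o]
                rated = ~np.isnan(col)
                for value in np.unique(col[rated]):
                    group = rated & (col == value)        # users agreeing on this rating
                    group_weight = rep[group].sum()       # reputation-weighted group size
                    new_rep[group] += group_weight
                    counts[group] += 1
            new_rep = np.where(counts > 0, new_rep / np.maximum(counts, 1), 0.0)
            new_rep /= new_rep.sum()                      # keep reputations comparable
            if np.abs(new_rep - rep).sum() < tol:
                break
            rep = new_rep
        return rep

    # Hypothetical 4-user by 3-object rating matrix
    R = np.array([[5, 4, np.nan],
                  [5, 4, 2],
                  [1, 4, 2],
                  [5, 1, 2]], dtype=float)
    print(iterative_group_reputation(R))
    ```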

  4. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions on the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and a choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.

  5. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal values of parameters within each scheme (parametric uncertainty). Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme (structural uncertainty). Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  6. Transport of Cryptosporidium oocysts in porous media: Role of straining and physicochemical filtration

    USGS Publications Warehouse

    Tufenkji, N.; Miller, G.F.; Ryan, J.N.; Harvey, R.W.; Elimelech, M.

    2004-01-01

    The transport and filtration behavior of Cryptosporidium parvum oocysts in columns packed with quartz sand was systematically examined under repulsive electrostatic conditions. An increase in solution ionic strength resulted in greater oocyst deposition rates despite theoretical predictions of a significant electrostatic energy barrier to deposition. Relatively high deposition rates obtained with both oocysts and polystyrene latex particles of comparable size at low ionic strength (1 mM) suggest that a physical mechanism may play a key role in oocyst removal. Supporting experiments conducted with latex particles of varying sizes, under very low ionic strength conditions where physicochemical filtration is negligible, clearly indicated that physical straining is an important capture mechanism. The results of this study indicate that irregularity of sand grain shape (verified by SEM imaging) contributes considerably to the straining potential of the porous medium. Hence, both straining and physicochemical filtration are expected to control the removal of C. parvum oocysts in settings typical of riverbank filtration, soil infiltration, and slow sand filtration. Because classic colloid filtration theory does not account for removal by straining, these observations have important implications with respect to predictions of oocyst transport.

  7. Small is beautiful: features of the smallest insects and limits to miniaturization.

    PubMed

    Polilov, Alexey A

    2015-01-07

    Miniaturization leads to considerable reorganization of structures in insects, affecting almost all organs and tissues. In the smallest insects, comparable in size to unicellular organisms, modifications arise not only at the level of organs, but also at the cellular level. Miniaturization is accompanied by allometric changes in many organ systems. The consequences of miniaturization displayed by different insect taxa include both common and unique changes. Because the smallest insects are among the smallest metazoans and have the most complex organization among organisms of the same size, their peculiar structural features and the factors that limit their miniaturization are of considerable theoretical interest to general biology.

  8. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
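
    For reference, the two weighting schemes contrasted above can be written as

    $$\bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i^{\mathrm{IV}} = \frac{1}{\hat{v}_i + \hat{\tau}^2}, \qquad w_i^{\mathrm{SS}} = n_i \ (\text{or a simple function of } n_i),$$

    where $d_i$ is the effect size of study $i$, $\hat{v}_i$ its estimated within-study variance, $\hat{\tau}^2$ the estimated between-study variance under the random-effects model, and $n_i$ the study sample size. The practical issue raised in the abstract is that $\hat{v}_i$ must itself be estimated and is therefore subject to sampling error, whereas $n_i$ is known exactly.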

  9. Effect of Feedstock Size and its Distribution on the Properties of Detonation Sprayed Coatings

    NASA Astrophysics Data System (ADS)

    Suresh Babu, P.; Rao, D. S.; Rao, G. V. N.; Sundararajan, G.

    2007-06-01

    Detonation spraying is one of the most promising thermal spray variants for depositing wear- and corrosion-resistant coatings. Commercially available ceramic (Al2O3), metallic (Ni-20 wt.% Cr), and cermet (WC-12 wt.% Co) powders were separated into coarser and finer size ranges with relatively narrow size distributions using a centrifugal air classifier. The coatings were deposited using the detonation spray technique, and the effect of particle size and its distribution on the coating properties was examined. Surface roughness and porosity increased consistently with increasing powder particle size for all coatings. The feedstock size was also found to influence the phase composition of the Al2O3 and WC-Co coatings, but not that of the Ni-Cr coatings. The associated phase changes and porosity imparted considerable variation in coating hardness, fracture toughness, and wear properties. The fine, narrow-size-range WC-Co coating exhibited superior wear resistance. The coarse, narrow-size-distribution Al2O3 coating performed better under abrasion and sliding wear, whereas the as-received Al2O3 coating performed better under erosion wear. For the metallic (Ni-Cr) coatings, those deposited from coarser powder exhibited marginally lower wear rates under abrasion and sliding wear, while under erosion wear the coating deposited from the finer powder exhibited a considerably lower wear rate.

  10. Neural activity in the hippocampus predicts individual visual short-term memory capacity.

    PubMed

    von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter

    2013-07-01

    Although the hippocampus has traditionally been thought to be exclusively involved in long-term memory, recent studies have raised controversial explanations for why hippocampal activity emerges during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K formula), two performance groups were formed (high and low performers). Within whole-brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent finite impulse response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental increase in activity with larger set size (maximal activation at set size six). Our data demonstrate that performance-related neural activity in the hippocampus emerged below the capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
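
    Cowan's K, the capacity estimate mentioned above, is conventionally computed from single-probe change-detection or match-to-sample performance as

    $$K = N \times (H - FA),$$

    where $N$ is the set size, $H$ the hit rate and $FA$ the false-alarm rate. For example, a participant with $H = 0.85$ and $FA = 0.20$ at set size 6 would be credited with $K = 6 \times 0.65 = 3.9$ items; these numbers are illustrative, not taken from the study.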

  11. Tariff Considerations for Micro-Grids in Sub-Saharan Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reber, Timothy J.; Booth, Samuel S.; Cutler, Dylan S.

    This report examines some of the key drivers and considerations policymakers and decision makers face when deciding if and how to regulate electricity tariffs for micro-grids. Presenting a range of tariff options, from mandating some variety of national (uniform) tariff to allowing micro-grid developers and operators to set fully cost-reflective tariffs, it examines the benefits and drawbacks of each. In addition, the report explores various types of cross-subsidies and other transitional forms of regulation that may offer a regulatory middle ground, helping to balance the often competing goals of providing price control on electricity service in the name of social good while still providing a means for investors to ensure high enough returns on their investment to attract the necessary capital financing to the market. Using the REopt tool developed by the U.S. Department of Energy's National Renewable Energy Laboratory, the authors modeled a few representative micro-grid systems and the resultant levelized cost of electricity, lending context and scale to the consideration of these tariff questions. This simple analysis provides an estimate of the gap between current tariff regimes and the tariffs that would be necessary for developers to recover costs and attract investment, offering further insight into the potential scale of subsidies or other grants that may be required to enable micro-grid development under current regulatory structures. The report explores potential options for addressing this gap while trying to balance stakeholder needs, from subsidized national tariffs to lightly regulated cost-reflective tariffs to more of a compromise approach, such as different standards of regulation based on the size of a micro-grid.

  12. Does precision decrease with set size?

    PubMed Central

    Mazyar, Helga; van den Berg, Ronald; Ma, Wei Ji

    2012-01-01

    The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations. PMID:22685337

  13. Effect of study design and setting on tuberculosis clustering estimates using Mycobacterial Interspersed Repetitive Units-Variable Number Tandem Repeats (MIRU-VNTR): a systematic review.

    PubMed

    Mears, Jessica; Abubakar, Ibrahim; Cohen, Theodore; McHugh, Timothy D; Sonnenberg, Pam

    2015-01-21

    To systematically review the evidence for the impact of study design and setting on the interpretation of tuberculosis (TB) transmission using clustering derived from Mycobacterial Interspersed Repetitive Units-Variable Number Tandem Repeats (MIRU-VNTR) strain typing. MEDLINE, EMBASE, CINAHL, Web of Science and Scopus were searched for articles published before 21st October 2014. Studies in humans that reported the proportion of clustering of TB isolates by MIRU-VNTR were included in the analysis. Univariable meta-regression analyses were conducted to assess the influence of study design and setting on the proportion of clustering. The search identified 27 eligible articles reporting clustering between 0% and 63%. The number of MIRU-VNTR loci typed, requiring consent to type patient isolates (as a proxy for sampling fraction), the TB incidence and the maximum cluster size explained 14%, 14%, 27% and 48% of between-study variation, respectively, and had a significant association with the proportion of clustering. Although MIRU-VNTR typing is being adopted worldwide, there is a paucity of data on how study design and setting may influence estimates of clustering. We have highlighted study design variables for consideration in the design and interpretation of future studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. Accurate and ergonomic method of registration for image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Henderson, Jaimie M.; Bucholz, Richard D.

    1994-05-01

    There has been considerable interest in the development of frameless stereotaxy based upon scalp-mounted fiducials. In practice we have experienced difficulty in relating markers to the image data sets in our series of 25 frameless cases, as well as inaccuracy due to scalp movement and the size of the markers. We have developed an alternative system for accurately and conveniently achieving surgical registration for image-guided neurosurgery based on alignment and matching of patient forehead contours. The system consists of a laser contour digitizer which is used in the operating room to acquire forehead contours, editing software for extracting contours from patient image data sets, and a contour-match algorithm for aligning the two contours and performing data set registration. The contour digitizer is tracked by a camera array which relates its position with respect to light-emitting diodes placed on the head clamp. Once registered, surgical instruments can be tracked throughout the procedure. Contours can be extracted from either CT or MRI image datasets. The system has proven to be robust in the laboratory setting. Overall error of registration is 1-2 millimeters in routine use. Image-to-patient registration can therefore be achieved quite easily and accurately, without the need for fixation of external markers to the skull, or manually finding markers on the scalp and image datasets. The system is unobtrusive and imposes little additional effort on the neurosurgeon, broadening the appeal of image-guided surgery.

  15. Understanding what matters: An exploratory study to investigate the views of the general public for priority setting criteria in health care.

    PubMed

    Ratcliffe, Julie; Lancsar, Emily; Walker, Ruth; Gu, Yuanyuan

    2017-06-01

    Health care policy makers internationally are increasingly expressing commitment to consultation with, and incorporation of, the views of the general public into the formulation of health policy and the process of setting health care priorities. In practice, however, there are relatively few opportunities for the general public to be involved in health care decision-making. In making resource allocation decisions, funders, tasked with managing scarce health care resources, are often faced with difficult decisions in balancing efficiency with equity considerations. A mixed methods (qualitative and quantitative) approach incorporating focus group discussions and a ranking exercise was utilised to develop a comprehensive list of potential criteria for setting priorities in health care formulated from the perspective of members of the general public in Australia. A strong level of congruence was found in terms of the rankings of the key criteria with the size of the health gain, clinical effectiveness, and the ability to provide quality of life improvements identified consistently as the three most important criteria for prioritising the funding of an intervention. Findings from this study will be incorporated into a novel DCE framework to explore how decision makers and members of the general public prioritize and trade off different types of health gain and to quantify the weights attached to specific efficiency and equity criteria in the priority setting process. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Efficiency and optimal size of hospitals: Results of a systematic search

    PubMed Central

    Guglielmo, Annamaria

    2017-01-01

    Background National Health Systems managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the opportunity to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings This paper analyses the stance of existing research on scale efficiency and optimal size of the hospital sector. We performed a systematic search of 45 past years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by the journal’s category, research topic, hospital setting, method and primary data analysis technique. Results showed that most of the studies were focussed on the analysis of technical and scale efficiency or on input / output ratio using Data Envelopment Analysis. We also find increasing interest concerning the effect of possible changes in hospital size on quality of care. Conclusions Studies analysed in this review showed that economies of scale are present for merging hospitals. Results supported the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies reported consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255

  17. Variability in the reported energy, total fat and saturated fat content in fast food products across ten countries

    PubMed Central

    Ziauddeen, Nida; Fitt, Emily; Edney, Louise; Dunford, Elizabeth; Neal, Bruce; Jebb, Susan A.

    2016-01-01

    Objective Fast foods are often energy dense and offered in large serving sizes. Observational data has linked the consumption of fast food to an increased risk of obesity and related diseases. Design We surveyed the reported energy, total fat and saturated fat contents, and serving sizes, of fast food items from five major chains across 10 countries, comparing product categories as well as specific food items available in most countries. Setting MRC Human Nutrition Research (HNR), Cambridge Subjects Data for 2961 food and drink products were collected, with most from Canada (n=550) and fewest from United Arab Emirates (n=106). Results There was considerable variability in energy and fat content of fast food across countries, reflecting both the portfolio of products, and serving size variability. Differences in total energy between countries were particularly noted for chicken dishes (649-1197kJ/100g) and sandwiches (552-1050kJ/100g). When comparing the same product between countries variations were consistently observed in total energy and fat content (g/100g) with extreme variation in McDonald’s Chicken McNuggets with 12g total fat (g/100g) in Germany compared to 21.1g in New Zealand. Conclusions These cross-country variations highlight the possibility for further product reformulation in many countries to reduce nutrients of concern and improve the nutritional profiles of fast food products around the world. Standardisation of serving sizes towards the lower end of the range would also help to reduce the risk of overconsumption. PMID:25702788

  18. Role of sediment size and biostratinomy on the development of biofilms in recent avian vertebrate remains

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph E.; Lenczewski, Melissa E.; Clawson, Steven R.; Warnock, Jonathan P.

    2017-04-01

    Microscopic soft tissues have been identified in fossil vertebrate remains collected from various lithologies. However, the diagenetic mechanisms to preserve such tissues have remained elusive. While previous studies have described infiltration of biofilms in Haversian and Volkmann’s canals, biostratinomic alteration (e.g., trampling), and iron derived from hemoglobin as playing roles in the preservation processes, the influence of sediment texture has not previously been investigated. This study uses a Kolmogorov Smirnov Goodness-of-Fit test to explore the influence of biostratinomic variability and burial media against the infiltration of biofilms in bone samples. Controlled columns of sediment with bone samples were used to simulate burial and subsequent groundwater flow. Sediments used in this study include clay-, silt-, and sand-sized particles modeled after various fluvial facies commonly associated with fossil vertebrates. Extant limb bone samples obtained from Gallus gallus domesticus (Domestic Chicken) buried in clay-rich sediment exhibit heavy biofilm infiltration, while bones buried in sands and silts exhibit moderate levels. Crushed bones exhibit significantly lower biofilm infiltration than whole bone samples. Strong interactions between biostratinomic alteration and sediment size are also identified with respect to biofilm development. Sediments modeling crevasse splay deposits exhibit considerable variability; whole-bone crevasse splay samples exhibit higher frequencies of high-level biofilm infiltration, and crushed-bone samples in modeled crevasse splay deposits display relatively high frequencies of low-level biofilm infiltration. These results suggest that sediment size, depositional setting, and biostratinomic condition play key roles in biofilm infiltration in vertebrate remains, and may influence soft tissue preservation in fossil vertebrates.

  19. Body growth and life history in wild mountain gorillas (Gorilla beringei beringei) from Volcanoes National Park, Rwanda.

    PubMed

    Galbany, Jordi; Abavandimwe, Didier; Vakiener, Meagan; Eckardt, Winnie; Mudakikwa, Antoine; Ndagijimana, Felix; Stoinski, Tara S; McFarlin, Shannon C

    2017-07-01

    Great apes show considerable diversity in socioecology and life history, but knowledge of their physical growth in natural settings is scarce. We characterized linear body size growth in wild mountain gorillas from Volcanoes National Park, Rwanda, a population distinguished by its extreme folivory and accelerated life histories. In 131 individuals (0.09-35.26 years), we used non-invasive parallel laser photogrammetry to measure body length, back width, arm length and two head dimensions. Nonparametric LOESS regression was used to characterize cross-sectional distance and velocity growth curves for males and females, and consider links with key life history milestones. Sex differences became evident between 8.5 and 10.0 years of age. Thereafter, female growth velocities declined, while males showed increased growth velocities until 10.0-14.5 years across dimensions. Body dimensions varied in growth; females and males reached 98% of maximum body length at 11.7 and 13.1 years, respectively. Females attained 95.3% of maximum body length by mean age at first birth. Neonates were 31% of maternal size, and doubled in size by mean weaning age. Males reached maximum body and arm length and back width before emigration, but experienced continued growth in head dimensions. While comparable data are scarce, our findings provide preliminary support for the prediction that mountain gorillas reach maximum body size at earlier ages compared to more frugivorous western gorillas. Data from other wild populations are needed to better understand comparative great ape development, and investigate links between trajectories of physical, behavioral, and reproductive maturation. © 2017 Wiley Periodicals, Inc.

  20. Processing statistics: an examination of focused and distributed attention using event related potentials.

    PubMed

    Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan

    2013-06-07

    Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Reduced-portion entrées in a worksite and restaurant setting: impact on food consumption and waste.

    PubMed

    Berkowitz, Sarah; Marquart, Len; Mykerezi, Elton; Degeneffe, Dennis; Reicks, Marla

    2016-11-01

    Large portion sizes in restaurants have been identified as a public health risk. The purpose of the present study was to determine whether customers in two different food-service operator segments (non-commercial worksite cafeteria and commercial upscale restaurant) would select reduced-portion menu items and the impact of selecting reduced-portion menu items on energy and nutrient intakes and plate waste. Consumption and plate waste data were collected for 5 weeks before and 7 weeks after introduction of five reduced-size entrées in a worksite lunch cafeteria and for 3 weeks before and 4 weeks after introduction of five reduced-size dinner entrées in a restaurant setting. Full-size entrées were available throughout the entire study periods. A worksite cafeteria and a commercial upscale restaurant in a large US Midwestern metropolitan area. Adult worksite employees and restaurant patrons. Reduced-size entrées accounted for 5·3-12·8 % and 18·8-31·3 % of total entrées selected in the worksite and restaurant settings, respectively. Food waste, energy intake and intakes of total fat, saturated fat, cholesterol, Na, fibre, Ca, K and Fe were significantly lower when both full- and reduced-size entrées were served in the worksite setting and in the restaurant setting compared with when only full-size entrées were served. A relatively small proportion of reduced-size entrées were selected but still resulted in reductions in overall energy and nutrient intakes. These outcomes could serve as the foundation for future studies to determine strategies to enhance acceptance of reduced-portion menu items in restaurant settings.

  2. Fluvial experiments using inertial sensors.

    NASA Astrophysics Data System (ADS)

    Maniatis, Georgios; Valyrakis, Manousos; Hodge, Rebecca; Drysdale, Tim; Hoey, Trevor

    2017-04-01

    During the last four years we have announced results on the development of a smart pebble that is constructed and calibrated specifically for capturing the dynamics of coarse sediment motion in river beds, at a grain scale. In this presentation we report details of our experimental validation across a range of flow regimes. The smart pebble contains Inertial Measurement Units (IMUs), which are sensors capable of recording the inertial acceleration and the angular velocity of the rigid bodies to which they are attached. IMUs are available across a range of performance levels, with commensurate increases in size, cost and performance as one progresses from integrated-circuit devices used in commercial applications such as gaming and mobile phones, to larger brick-sized systems sometimes found in industrial applications such as vibration monitoring and quality control, or even the rack-mount equipment used in some aerospace and navigation applications (which can go as far as to include lasers and optical components). In parallel with developments in commercial and industrial settings, geomorphologists have recently started to explore means of deploying IMUs in smart pebbles. The less expensive, chip-scale IMUs have been shown to have adequate performance for this application, as well as offering a sufficiently compact form factor. Four prototype sensors have been developed so far, and the latest (400 g acceleration range, 50-200 Hz sampling frequency) has been tested in fluvial laboratory experiments. We present results from three different experimental regimes designed for the evaluation of this sensor: (a) an entrainment threshold experiment; (b) a bed impact experiment; and (c) a rolling experiment. All experiments used a 100 mm spherical sensor, and set (a) was repeated using an equivalent-size elliptical sensor. The experiments were conducted in the fluvial laboratory of the University of Glasgow (0.9 m wide flume) under different hydraulic conditions. The use of IMUs allows direct parameterization of the inertial forces on the grains, which for the tested grain sizes were, as expected, always comparable to the independently measured hydrodynamic forces. However, the validity of IMU measurements is subject to specific design, processing and experimental considerations, and we present the results of our analysis of these.

  3. 78 FR 43205 - Proposed Substances To Be Evaluated for Set 27 Toxicological Profiles

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-19

    [Docket No. ATSDR-2013-0002] Proposed Substances To Be Evaluated for Set 27 Toxicological Profiles. AGENCY: ... ACTION: Request for comments on the proposed substances to be evaluated for Set 27 toxicological profiles... The Set 27 nomination process includes consideration of all substances on ATSDR's Priority List of...

  4. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    NASA Astrophysics Data System (ADS)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value for the rock block volume (Vb) is of considerable interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to estimate this parameter precisely from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not exist when more than three sets occur. In these cases, a consistent assessment of Vb can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. However, Palmström's empirical relationship, based on the volumetric joint count Jv and a block shape factor β, is commonly used in practice, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. Jv and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression was found between the mean block volume (Vbm) of the tested rock mass samples and the geometric mean of the spacings of the joint sets delimiting blocks, indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential to predict the mean block volume of multiple-set jointed rock masses with acceptable accuracy for most practical rock engineering applications.
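
    For context, Palmström's relationship referred to above estimates the block volume from the volumetric joint count and a block shape factor, and is strictly derived for three joint sets:

    $$V_b = \beta \, J_v^{-3},$$

    where $J_v$ is the number of joints per cubic metre and $\beta$ is a block shape factor (about 27 for roughly cubic blocks, larger for flat or elongated ones). The empirical alternative examined in the paper instead relates the mean block volume of four-set rock masses to the geometric mean of the joint-set spacings, $\bar{s} = (s_1 s_2 s_3 s_4)^{1/4}$, through a fitted regression; the exact fitted form is given in the paper and is not reproduced here.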

  5. Measuring missing heritability: Inferring the contribution of common variants

    PubMed Central

    Golan, David; Lander, Eric S.; Rosset, Saharon

    2014-01-01

    Genome-wide association studies (GWASs), also called common variant association studies (CVASs), have uncovered thousands of genetic variants associated with hundreds of diseases. However, the variants that reach statistical significance typically explain only a small fraction of the heritability. One explanation for the “missing heritability” is that there are many additional disease-associated common variants whose effects are too small to detect with current sample sizes. It therefore is useful to have methods to quantify the heritability due to common variation, without having to identify all causal variants. Recent studies applied restricted maximum likelihood (REML) estimation to case–control studies for diseases. Here, we show that REML considerably underestimates the fraction of heritability due to common variation in this setting. The degree of underestimation increases with the rarity of disease, the heritability of the disease, and the size of the sample. Instead, we develop a general framework for heritability estimation, called phenotype correlation–genotype correlation (PCGC) regression, which generalizes the well-known Haseman–Elston regression method. We show that PCGC regression yields unbiased estimates. Applying PCGC regression to six diseases, we estimate the proportion of the phenotypic variance due to common variants to range from 25% to 56% and the proportion of heritability due to common variants from 41% to 68% (mean 60%). These results suggest that common variants may explain at least half the heritability for many diseases. PCGC regression also is readily applicable to other settings, including analyzing extreme-phenotype studies and adjusting for covariates such as sex, age, and population structure. PMID:25422463
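
    A minimal sketch of the Haseman-Elston-style regression that PCGC regression generalizes is given below: products of standardized phenotypes for each pair of individuals are regressed on their genotypic correlations. The case-control (liability-scale) corrections that distinguish PCGC from plain Haseman-Elston are omitted here, so this is an illustrative simplification rather than the authors' estimator.

```python
import numpy as np

def he_style_estimate(phenotypes, genotype_corr):
    """Toy Haseman-Elston / PCGC-style regression (no case-control correction).

    phenotypes: standardized phenotype vector, shape (n,)
    genotype_corr: genetic relationship matrix, shape (n, n)
    Returns the slope of phenotype products on genotypic correlations, an
    (uncorrected) estimate of the variance explained by the genotypes.
    """
    n = len(phenotypes)
    iu = np.triu_indices(n, k=1)                  # all distinct pairs
    y = np.outer(phenotypes, phenotypes)[iu]      # pairwise phenotype products
    x = genotype_corr[iu]                         # pairwise genotypic correlations
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / np.dot(xc, xc))
```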

  6. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.

  7. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and capabilities they offer for carrying out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimation depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.

  8. Novel Control Strategy for Multiple Run-of-the-River Hydro Power Plants to Provide Grid Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanpurkar, Manish; Luo, Yusheng; Hovsapian, Rob

    Electricity generated by Hydropower Plants (HPPs) contributes a considerable portion of bulk electricity generation and delivers it with a low carbon footprint. In fact, HPP electricity generation provides the largest share from renewable energy resources, which include solar and wind energy. The increasing penetration of wind and solar leads to lowered inertia in the grid and hence poses stability challenges. In recent years, breakthroughs in energy storage technologies have demonstrated the economic and technical feasibility of extensive deployments in power grids. Multiple run-of-the-river (ROR) HPPs can be integrated with scalable, multi-time-step energy storage so that the total output can be controlled. Although the size of a single energy storage unit is far smaller than that of a typical reservoir, cohesively managing multiple sets of energy storage distributed in different locations is proposed. The combined ratings of the storage units and multiple ROR HPPs approximately equal the rating of a large, conventional HPP. The challenges associated with the system architecture and operation are described. Energy storage technologies such as supercapacitors, flywheels, and batteries can function as a dispatchable synthetic reservoir of scalable size. Supercapacitors, flywheels, and batteries are chosen to provide fast, medium, and slow responses, respectively, to support grid requirements. Various dynamic and transient power grid conditions are simulated, and the performance of ROR HPPs integrated with energy storage is presented. The end goal of this research is to investigate the inertial equivalence of a large, conventional HPP with a unique set of multiple ROR HPPs and optimally rated energy storage systems.

  9. Genome-wide association identifies candidate genes for ovulation rate in swine

    USDA-ARS?s Scientific Manuscript database

    Litter size is an economically important trait to producers that is lowly heritable, observable only after considerable investment has been made in gilt development, and responds slowly to selection. Ovulation rate, a component trait of litter size, is moderately heritable, sex limited, and should r...

  10. A Typology of Mixed Methods Sampling Designs in Social Science Research

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  11. 76 FR 72461 - Proposed Extension of Existing Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... size, employee benefits and overhead. In addition, approximately 1,500 broker-dealers must comply with... work-year and multiplied by 2.93 to account for bonuses, firm size, employee benefits and overhead. The... techniques or other forms of information technology. Consideration will be given to comments and suggestions...

  12. Nonword Reading across Orthographies: How Flexible Is the Choice of Reading Units?

    ERIC Educational Resources Information Center

    Goswami, Usha; Ziegler, Johannes C.; Dalton, Louise; Schnieder, Wolfgang

    2003-01-01

    Used cross-language blocking experiments to test the hypothesis that children learning to read inconsistent orthographies would show considerable flexibility in making use of spelling-sound correspondences at different unit sizes, whereas children learning to read consistent orthographies should mainly employ small-size grapheme-phoneme…

  13. 50 CFR 635.20 - Size limits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... damaged by shark bites may be retained only if the length of the remainder of the fish is equal to or... after consideration of additional scientific information and fish measurement data, and will be made... otherwise adjusted. (e) Sharks. The following size limits change depending on the species being caught and...

  14. General linear model-predicted and observed toxicity of three organo-coated silver nanoparticles: Impacts of particle size, surface charge and dose

    EPA Science Inventory

    Intrinsic to the myriad of nano-enabled products are atomic-size multifunctional engineered nanomaterials, which upon release contaminate the environments, raising considerable health and safety concerns. Despite global research efforts, mechanism underlying nanotoxicity has rema...

  15. You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.

    PubMed

    McShane, Blakeley B; Böckenholt, Ulf

    2014-11-01

    Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
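
    The following sketch illustrates the point numerically rather than reproducing the authors' formulae: when the study-specific effect size is assumed to vary around its mean with between-study standard deviation tau, the expected power is lower than the standard fixed-effect calculation suggests. The effect size, per-group sample size and tau are arbitrary illustrative values.

```python
import numpy as np
from scipy import stats

def power_fixed(d, n, alpha=0.05):
    """Standard two-sample power for a fixed standardized effect d (n per group)."""
    se = np.sqrt(2.0 / n)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_crit - d / se) + stats.norm.cdf(-z_crit - d / se)

def power_heterogeneous(d_mean, tau, n, alpha=0.05, draws=100000, seed=0):
    """Average power when the study-specific effect varies as N(d_mean, tau^2)."""
    rng = np.random.default_rng(seed)
    d = rng.normal(d_mean, tau, size=draws)
    return power_fixed(d, n, alpha).mean()

# With between-study variation the expected power drops below the standard value
print(power_fixed(0.5, n=64))                    # optimistic, ignores heterogeneity
print(power_heterogeneous(0.5, tau=0.2, n=64))   # generally lower
```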

  16. Drafting guidelines for occupational exposure to chemicals: the Dutch experience with the assessment of reproductive risks.

    PubMed

    Stijkel, A; van Eijndhoven, J C; Bal, R

    1996-12-01

    The Dutch procedure for standard setting for occupational exposure to chemicals, just like the European Union (EU) procedure, is characterized by an organizational separation between considerations of health on the one side, and of technology, economics, and policy on the other side. Health considerations form the basis for numerical guidelines. These guidelines are next combined with technical-economical considerations. Standards are then proposed, and are finally set by the Ministry of Social Affairs and Employment. An analysis of this procedure might be of relevance to the US, where other procedures are used and criticized. In this article we focus on the first stage of the standard-setting procedure. In this stage, the Dutch Expert Committee on Occupational Standards (DECOS) drafts a criteria document in which a health-based guideline is proposed. The drafting is based on a set of starting points for assessing toxicity. We raise the questions, "Does DECOS limit itself only to health considerations? And if not, what are the consequences of such a situation?" We discuss DECOS' starting points and analyze the relationships between those starting points, and then explore eight criteria documents where DECOS was considering reproductive risks as a possible critical effect. For various reasons, it will be concluded that the starting points leave much interpretative space, and that this space is widened further by the manner in which DECOS utilizes it. This is especially true in situations involving sex-specific risks and uncertainties in knowledge. Consequently, even at the first stage, where health considerations alone are intended to play a role, there is much room for other than health-related factors to influence decision making, although it is unavoidable that some interpretative space will remain. We argue that separating the various types of consideration should not be abandoned. Rather, through adjustments in the starting points and aspects of the procedure, clarity should be guaranteed about the way the interpretative space is being employed.

  17. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  18. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
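
    A minimal sketch of points (1) and (2) is shown below: a worker pool launches one single-threaded Vina process per core and records the seed passed to each docking so runs can be reproduced. The receptor and ligand paths and the pool size are placeholders, and the sketch assumes the vina executable is on the PATH; the command-line options used (--receptor, --ligand, --out, --seed, --exhaustiveness, --cpu) are standard Vina flags.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

RECEPTOR = "receptor.pdbqt"        # placeholder paths
LIGAND_DIR = Path("ligands")
OUT_DIR = Path("docked")

def dock(ligand, seed=42):
    """Run one Vina docking with an explicit seed so the run is reproducible."""
    out = OUT_DIR / ligand.name
    cmd = ["vina", "--receptor", RECEPTOR, "--ligand", str(ligand),
           "--out", str(out), "--seed", str(seed),
           "--exhaustiveness", "8", "--cpu", "1"]
    subprocess.run(cmd, check=True, capture_output=True)
    return ligand.name, seed

if __name__ == "__main__":
    OUT_DIR.mkdir(exist_ok=True)
    ligands = sorted(LIGAND_DIR.glob("*.pdbqt"))
    # One single-threaded Vina process per core: the extra level of parallelization
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name, seed in pool.map(dock, ligands):
            print(name, "docked with seed", seed)
```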

  19. ISPOR Code of Ethics 2017 (4th Edition).

    PubMed

    Santos, Jessica; Palumbo, Francis; Molsen-David, Elizabeth; Willke, Richard J; Binder, Louise; Drummond, Michael; Ho, Anita; Marder, William D; Parmenter, Louise; Sandhu, Gurmit; Shafie, Asrul A; Thompson, David

    2017-12-01

    As the leading health economics and outcomes research (HEOR) professional society, ISPOR has a responsibility to establish a uniform, harmonized international code for ethical conduct. ISPOR has updated its 2008 Code of Ethics to reflect the current research environment. This code addresses what is acceptable and unacceptable in research, from inception to the dissemination of its results. There are nine chapters: 1 - Introduction; 2 - Ethical Principles: respect, beneficence and justice, with reference to a non-exhaustive compilation of international, regional, and country-specific guidelines and standards; 3 - Scope: HEOR definitions and how HEOR and the Code relate to other research fields; 4 - Research Design Considerations: primary and secondary data related issues, e.g., participant recruitment, population and research setting, sample size/site selection, incentive/honorarium, administration databases, registration of retrospective observational studies and modeling studies; 5 - Data Considerations: privacy and data protection, combining, verification and transparency of research data, scientific misconduct, etc.; 6 - Sponsorship and Relationships with Others: roles of researchers, sponsors, key opinion leaders and advisory board members, research participants, and institutional review board (IRB) / independent ethics committee (IEC) approval and responsibilities; 7 - Patient Centricity and Patient Engagement: a new addition, with explanation and guidance; 8 - Publication and Dissemination; and 9 - Conclusion and Limitations. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  20. Lexical development in Korean: vocabulary size, lexical composition, and late talking.

    PubMed

    Rescorla, Leslie; Lee, Youn Mi Cathy; Lee, Youn Min Cathy; Oh, Kyung Ja; Kim, Young Ah

    2013-04-01

    In this study, the authors aimed to compare vocabulary size, lexical composition, and late talking in large samples of Korean and U.S. children ages 18-35 months. Data for 2,191 Korean children (211 children recruited "offline" through preschools, and 1,980 recruited "online" via the Internet) and 274 U.S. children were obtained using the Language Development Survey (LDS). Mean vocabulary size was slightly larger in the offline than the online group, but the groups were acquiring almost identical words. Mean vocabulary size did not differ by country; girls and older children had larger vocabularies in both countries. The Korean-U.S. Q correlations for percentage use of LDS words (.53 and .56) indicated considerable concordance across countries in lexical composition. Noun dominance was as large in Korean lexicons as in U.S. lexicons. About half of the most commonly reported words for the Korean and U.S. children were identical. Lexicons of late talkers resembled those of typically developing younger children in the same sample. Despite linguistic and discourse differences between Korean and English, LDS findings indicated considerable cross-linguistic similarity with respect to vocabulary size, lexical composition, and late talking.

  1. A harmonization effort for acceptable daily exposure application to pharmaceutical manufacturing - Operational considerations.

    PubMed

    Hayes, Eileen P; Jolly, Robert A; Faria, Ellen C; Barle, Ester Lovsin; Bercu, Joel P; Molnar, Lance R; Naumann, Bruce D; Olson, Michael J; Pecquet, Alison M; Sandhu, Reena; Shipp, Bryan K; Sussman, Robert G; Weideman, Patricia A

    2016-08-01

    A European Union (EU) regulatory guideline came into effect for all new pharmaceutical products on June 1st, 2015, and for all existing pharmaceutical products on December 1st, 2015. This guideline centers around the use of the Acceptable Daily Exposure (ADE) [synonymous with the Permitted Daily Exposure (PDE)] and operational considerations associated with implementation are outlined here. The EU guidance states that all active pharmaceutical ingredients (API) require an ADE; however, other substances such as starting materials, process intermediates, and cleaning agents may benefit from an ADE. Problems in setting ADEs for these additional substances typically relate to toxicological data limitations precluding the ability to establish a formal ADE. Established methodologies such as occupational exposure limits or bands (OELs or OEBs) and the threshold of toxicological concern (TTC) can be used or adjusted for use as interim ADEs when only limited data are available and until a more formal ADE can be established. Once formal ADEs are derived, it is important that the documents are routinely updated and that these updates are communicated to appropriate stakeholders. Another key operational consideration related to data-poor substances includes the use of maximum daily dose (MDD) in setting cross-contamination limits. The MDD is an important part of the maximum allowable/safe concentration (MAC/MSC) calculation and there are important considerations for its use and definition. Finally, other considerations discussed include operational aspects of setting ADEs for pediatrics, considerations for large molecules, and risk management in shared facilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Visual search for arbitrary objects in real scenes.

    PubMed

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.

  3. Method of particle trajectory recognition in particle flows of high particle concentration using a candidate trajectory tree process with variable search areas

    DOEpatents

    Shaffer, Franklin D.

    2013-03-12

    The application relates to particle trajectory recognition from a Centroid Population comprised of Centroids having an (x, y, t) or (x, y, f) coordinate. The method is applicable to visualization and measurement of particle flow fields of high particle concentration. In one embodiment, the centroids are generated from particle images recorded on camera frames. The application encompasses digital computer systems and distribution mediums implementing the method disclosed and is particularly applicable to recognizing trajectories of particles in particle flows of high particle concentration. The method accomplishes trajectory recognition by forming Candidate Trajectory Trees and repeated searches at varying Search Velocities, such that initial search areas are set to a minimum size in order to recognize only the slowest, least accelerating particles, which produce higher local concentrations. When a trajectory is recognized, the centroids in that trajectory are removed from consideration in future searches.
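
    The toy sketch below captures the variable-search-area idea in a much simplified form: a continuation of a trajectory is first sought within a small radius corresponding to a slow search velocity, and the radius is enlarged only if no candidate is found. It is an illustrative reading of the abstract, not the patented method; the velocities and tolerances are placeholders.

```python
import math

def find_next(centroid, candidates, dt, search_velocities=(0.5, 1.0, 2.0)):
    """Return the nearest candidate reachable at the smallest search velocity.

    centroid, candidates: (x, y, t) tuples; dt: frame interval.
    The search radius grows with the assumed velocity, so slow, weakly
    accelerating particles are matched first and can be removed from
    consideration in later, larger-area searches.
    """
    x, y, t = centroid
    for v in search_velocities:                 # smallest search area first
        radius = v * dt
        in_area = [c for c in candidates
                   if abs(c[2] - (t + dt)) < 1e-9
                   and math.hypot(c[0] - x, c[1] - y) <= radius]
        if in_area:
            return min(in_area, key=lambda c: math.hypot(c[0] - x, c[1] - y))
    return None
```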

  4. Proteins as sponges: a statistical journey along protein structure organization principles.

    PubMed

    Paola, Luisa Di; Paci, Paola; Santoni, Daniele; Ruvo, Micol De; Giuliani, Alessandro

    2012-02-27

    The analysis of a large database of protein structures by means of topological and shape indexes inspired by complex network and fractal analysis shed light on some organizational principles of proteins. Proteins appear much more similar to "fractal" sponges than to closely packed spheres, casting doubt on the tenability of the hydrophobic core concept. Principal component analysis highlighted three main order parameters shaping the protein universe: (1) "size", with the consequent generation of progressively less dense and more empty structures at an increasing number of residues, (2) "microscopic structuring", linked to the existence of a spectrum going from the prevalence of heterologous (different hydrophobicity) to the prevalence of homologous (similar hydrophobicity) contacts, and (3) "fractal shape", organizing the protein data set along a continuum going from approximately linear to very intermingled structures. Perhaps the time has come to seriously take into consideration the real relevance of time-honored principles like the hydrophobic core and hydrophobic effect.

  5. Aptamer-conjugated nanoparticles for cancer cell detection.

    PubMed

    Medley, Colin D; Bamrungsap, Suwussa; Tan, Weihong; Smith, Joshua E

    2011-02-01

    Aptamer-conjugated nanoparticles (ACNPs) have been used for a variety of applications, particularly dual nanoparticles for magnetic extraction and fluorescent labeling. In this type of assay, silica-coated magnetic and fluorophore-doped silica nanoparticles are conjugated to highly selective aptamers to detect and extract targeted cells in a variety of matrixes. However, considerable improvements are required in order to increase the selectivity and sensitivity of this two-particle assay to be useful in a clinical setting. To accomplish this, several parameters were investigated, including nanoparticle size, conjugation chemistry, use of multiple aptamer sequences on the nanoparticles, and use of multiple nanoparticles with different aptamer sequences. After identifying the best-performing elements, the improvements made to this assay's conditional parameters were combined to illustrate the overall enhanced sensitivity and selectivity of the two-particle assay using an innovative multiple aptamer approach, signifying a critical feature in the advancement of this technique.

  6. Designing a podiatry service to meet the needs of the population: a service simulation.

    PubMed

    Campbell, Jackie A

    2007-02-01

    A model of a podiatry service has been developed which takes into consideration the effect of changing access criteria, skill mix and staffing levels (among others), given fixed local staffing budgets and the foot-health characteristics of the local community. A spreadsheet-based deterministic model was chosen to allow maximum transparency of programming. This work models a podiatry service in England, but could be adapted for other settings and, with some modification, for other community-based services. This model enables individual services to see the effect of various service configurations on outcome parameters such as the number of patients treated, the number discharged and the size of waiting lists, given their individual local data profile. The process of designing the model has also had spin-off benefits for the participants in making explicit many of the implicit rules used in managing their services.

  7. Improved equivalent circuit for twin slot terahertz receivers

    NASA Technical Reports Server (NTRS)

    McGrath, W. R.

    2002-01-01

    Series-fed coplanar waveguide embedding circuits are being developed for terahertz mixers using, in particular, submicron-sized superconducting devices, such as hot electron bolometers, as the nonlinear element. Although these mixers show promising performance, they usually also show a considerable downward shift in the center frequency when compared with simulations obtained using simplified models. This makes it very difficult to design low-noise mixers for a given THz frequency. This shift is principally caused by parasitics due to the extremely small details (in terms of wavelength) of the device, and by the electrical properties of the RF choke filter in the DC/IF line. In this paper, we present an improved equivalent network model of such mixer circuits which agrees with measured results at THz frequencies, and we present a new set of THz bolometric mixers that have been fabricated and are currently being tested.

  8. How is the instrumental color of meat measured?

    PubMed

    Tapp, W N; Yancey, J W S; Apple, J K

    2011-09-01

    Peer-reviewed journal articles (n=1068) were used to gather instrumental color measurement information in meat science research. The majority of articles, published in 10 peer-reviewed journals, originated from European countries (44.8%) and North America (38.5%). The predominant species was pork (44.2%), and most researchers used Minolta (60.0%) over Hunter (31.6%) colorimeters. Much of the research was done using illuminant D65 (32.3%); nevertheless, almost half (48.9%) of the articles did not report the illuminant. Moreover, a majority of the articles did not report aperture size (73.6%) or the number of readings per sample (52.4%). Many factors influence meat color, and a considerable proportion of the peer-reviewed, published research articles failed to include information necessary to replicate and/or interpret instrumental color results; therefore, a standardized set of minimum reportable parameters for meat color evaluation should be identified. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. THE CASE FOR A TYPHOID VACCINE PROBE STUDY AND OVERVIEW OF DESIGN ELEMENTS

    PubMed Central

    Halloran, M. Elizabeth; Khan, Imran

    2015-01-01

    Recent advances in typhoid vaccine, and consideration of support from Gavi, the Vaccine Alliance, raise the possibility that some endemic countries will introduce typhoid vaccine into public immunization programs. This decision, however, is limited by lack of definitive information on disease burden. We propose use of a vaccine probe study approach. This approach would more clearly assess the total burden of typhoid across different syndromic groups and account for lack of access to care, poor diagnostics, incomplete laboratory testing, lack of mortality and intestinal perforation surveillance, and increasing antibiotic resistance. We propose a cluster randomized trial design using a mass immunization campaign among all age groups, with monitoring over a 4-year period of a variety of outcomes. The primary outcome would be the vaccine preventable disease incidence of prolonged fever hospitalization. Sample size calculations suggest that such a study would be feasible over a reasonable set of assumptions. PMID:25912286
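
    The sketch below shows a textbook cluster-randomized sample size calculation of the kind such a design would rest on: a two-proportion formula inflated by the design effect 1 + (m - 1) * ICC. The incidence, cluster size and ICC values are placeholders, not the assumptions used in the proposed study.

```python
from scipy import stats

def clusters_needed(p0, p1, m, icc, alpha=0.05, power=0.8):
    """Clusters per arm for comparing two incidence proportions.

    p0, p1: expected outcome proportions in control and vaccinated clusters
    m: individuals per cluster; icc: intracluster correlation coefficient.
    Standard two-proportion formula inflated by the design effect 1 + (m-1)*icc.
    """
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    p_bar = (p0 + p1) / 2
    n_individuals = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p0 - p1) ** 2
    deff = 1 + (m - 1) * icc
    return n_individuals * deff / m

# Placeholder assumptions: 1.0% vs 0.5% incidence, 5000 people per cluster, ICC 0.001
print(clusters_needed(0.010, 0.005, m=5000, icc=0.001))
```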

  10. [Innovative teleradiology network: concept and experience report].

    PubMed

    Kämmerer, M; Bethge, O T; Antoch, G

    2014-04-01

    DICOM E-MAIL provides a standardized way for exchanging DICOM objects (Digital Imaging and Communications in Medicine) and further relevant patient data for the treatment context reliably and securely via encrypted e-mails. The current version of the DICOM E-MAIL standard recommendations of the "Deutsche Röntgengesellschaft" (DRG, German Röntgen Society) defines for the first time options for setting up a special directory service for the provision and distribution of communication data of all participants in a network. By using such "telephone books", networks of any size can be operated independently of the provider. Compared to a Cross-Enterprise Document Sharing (XDS) scenario, the required infrastructure is considerably less complex and quicker to realize. Critical success factors are, in addition to the technology and effective support, that the participants themselves contribute to the further development of the network; in this way, the network approach can be put into practice.

  11. Polymeric Micelles and Alternative Nanonized Delivery Vehicles for Poorly Soluble Drugs

    PubMed Central

    Lu, Ying; Park, Kinam

    2013-01-01

    Poorly soluble drugs often encounter low bioavailability and erratic absorption patterns in the clinical setting. Due to the rising number of compounds having solubility issues, finding ways to enhance the solubility of drugs is one of the major challenges in the pharmaceutical industry today. Polymeric micelles, which form upon self-assembly of amphiphilic macromolecules, can act as solubilizing agents for delivery of poorly soluble drugs. This manuscript examines the fundamentals of polymeric micelles through reviews of representative literature and demonstrates possible applications through recent examples of clinical trial developments. In particular, the potential of polymeric micelles for delivery of poorly water-soluble drugs, especially in the areas of oral delivery and in cancer therapy, is discussed. Key considerations in utilizing polymeric micelles’ advantages and overcoming potential disadvantages have been highlighted. Lastly, other possible strategies related to particle size reduction for enhancing solubilization of poorly water-soluble drugs are introduced. PMID:22944304

  12. Optimally combining dynamical decoupling and quantum error correction.

    PubMed

    Paz-Silva, Gerardo A; Lidar, D A

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization.

  13. Optimally combining dynamical decoupling and quantum error correction

    PubMed Central

    Paz-Silva, Gerardo A.; Lidar, D. A.

    2013-01-01

    Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization. PMID:23559088

  14. Ecosystem vulnerability to climate change in the southeastern United States

    USGS Publications Warehouse

    Cartwright, Jennifer M.; Costanza, Jennifer

    2016-08-11

    Two recent investigations of climate-change vulnerability for 19 terrestrial, aquatic, riparian, and coastal ecosystems of the southeastern United States have identified a number of important considerations, including potential for changes in hydrology, disturbance regimes, and interspecies interactions. Complementary approaches using geospatial analysis and literature synthesis integrated information on ecosystem biogeography and biodiversity, climate projections, vegetation dynamics, soil and water characteristics, anthropogenic threats, conservation status, sea-level rise, and coastal flooding impacts. Across a diverse set of ecosystems—ranging in size from dozens of square meters to thousands of square kilometers—quantitative and qualitative assessments identified types of climate-change exposure, evaluated sensitivity, and explored potential adaptive capacity. These analyses highlighted key gaps in scientific understanding and suggested priorities for future research. Together, these studies help create a foundation for ecosystem-level analysis of climate-change vulnerability to support effective biodiversity conservation in the southeastern United States.

  15. Experimental stimulation of bone healing with teriparatide: histomorphometric and microhardness analysis in a mouse model of closed fracture.

    PubMed

    Mognetti, Barbara; Marino, Silvia; Barberis, Alessandro; Martin, Anne-Sophie Bravo; Bala, Yohann; Di Carlo, Francesco; Boivin, Georges; Barbos, Michele Portigliatti

    2011-08-01

    Fracture consolidation is a crucial goal to achieve as early as possible, but pharmacological stimulation has been neglected so far. Teriparatide has been considered for this purpose for its anabolic properties. We set up a murine model of closed tibial fracture on which different doses of teriparatide were tested. Closed fracture treatment avoids any bias introduced by surgical manipulations. Teriparatide's effect on callus formation was monitored during the first 4 weeks from fracture. Callus evolution was determined by histomorphometric and microhardness assessment. Daily administration of 40 μg/kg of teriparatide accelerated callus mineralization from day 9 onward without a significant increase in callus size, and at day 15 the microhardness properties of the treated callus were similar to those of bone tissue. Teriparatide considerably improved callus consolidation in the very early phases of bone healing.

  16. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    PubMed

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
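
    The sketch below illustrates two of the screened parameters, a radial (Morgan-type) fingerprint and the size of the addressable bit space, using RDKit with Tanimoto similarity. RDKit and the toy molecules stand in for the Canvas package and the targets used in the study.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")     # aspirin as a toy query
library = [Chem.MolFromSmiles(s) for s in
           ("c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC")]

def screen(query_mol, mols, n_bits):
    """Rank a library by Tanimoto similarity of radial (Morgan) fingerprints."""
    fp_q = AllChem.GetMorganFingerprintAsBitVect(query_mol, 2, nBits=n_bits)
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=n_bits) for m in mols]
    sims = [DataStructs.TanimotoSimilarity(fp_q, fp) for fp in fps]
    return sorted(enumerate(sims), key=lambda t: t[1], reverse=True)

# Small bit spaces cause bit collisions and can degrade the ranking
print(screen(query, library, n_bits=1024))
print(screen(query, library, n_bits=16384))
```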

  17. Application of spatially gridded temperature and land cover data sets for urban heat island analysis

    USGS Publications Warehouse

    Gallo, Kevin; Xian, George Z.

    2014-01-01

    Two gridded data sets that included (1) daily mean temperatures from 2006 through 2011 and (2) satellite-derived impervious surface area, were combined for a spatial analysis of the urban heat-island effect within the Dallas-Ft. Worth Texas region. The primary advantage of using these combined datasets included the capability to designate each 1 × 1 km grid cell of available temperature data as urban or rural based on the level of impervious surface area within the grid cell. Generally, the observed differences in urban and rural temperature increased as the impervious surface area thresholds used to define an urban grid cell were increased. This result, however, was also dependent on the size of the sample area included in the analysis. As the spatial extent of the sample area increased and included a greater number of rural defined grid cells, the observed urban and rural differences in temperature also increased. A cursory comparison of the spatially gridded temperature observations with observations from climate stations suggest that the number and location of stations included in an urban heat island analysis requires consideration to assure representative samples of each (urban and rural) environment are included in the analysis.
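
    A minimal sketch of the urban/rural designation step is given below: grid cells are classed as urban when their impervious surface area exceeds a threshold, and the urban heat island intensity is the resulting urban minus rural mean temperature. The temperature and impervious-surface arrays are synthetic placeholders, not the gridded data sets used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1 x 1 km grids: daily mean temperature (deg C) and impervious surface (%)
temperature = rng.normal(28.0, 1.0, size=(100, 100))
impervious = rng.uniform(0.0, 80.0, size=(100, 100))
temperature += 0.02 * impervious          # impose a weak built-up warming signal

def uhi_intensity(temperature, impervious, urban_threshold):
    """Mean urban minus rural temperature for a given impervious-area threshold."""
    urban = impervious >= urban_threshold
    rural = ~urban
    return temperature[urban].mean() - temperature[rural].mean()

# The apparent UHI grows as the threshold used to define 'urban' is raised
for threshold in (10, 25, 50):
    print(threshold, round(uhi_intensity(temperature, impervious, threshold), 2))
```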

  18. Numerical Investigation of the Influence of the Configuration Parameters of a Supersonic Passenger Aircraft on the Intensity of Sonic Boom

    NASA Astrophysics Data System (ADS)

    Volkov, V. F.; Mazhul', I. I.

    2018-01-01

    Results of calculations of the sonic boom produced by a supersonic passenger aircraft in a cruising regime of flight at the Mach number M = 2.03 are presented. Consideration is given to the influence of the lateral dihedral of the wings and the angle of their setting, and also of different locations of the aircraft engine nacelles on the wing. An analysis of parametric calculations has shown that the intensities of sonic boom generated by a configuration with a dihedral rear wing and by a configuration with set wings remain constant, in practice, and correspond to the intensity level created by the optimum configuration. Comparative assessments of sonic boom for tandem configurations with different locations of the engine nacelles on the wing surface have shown that the intensity of sonic boom generated by the configuration with an engine nacelle on the windward side can be reduced by 14% compared to the configuration without engine nacelles. In the case of the configuration with engine nacelles on the leeward side of the wing, the profile of the sonic-boom wave degenerates into an N-wave, in which the intensity of the bow shock is significantly reduced.

  19. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
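
    The sketch below follows the structure described above, a proximal step initialized by the Barzilai-Borwein rule and safeguarded by a monotone line search, but substitutes the convex L1 penalty, whose proximal operator is soft-thresholding, for the non-convex penalties treated in the paper; only the proximal map would change for those penalties.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def gist_lasso(A, b, lam, iters=200):
    """GIST-style iterations for 0.5*||Ax-b||^2 + lam*||x||_1.

    The L1 proximal step (soft-thresholding) stands in for the non-convex
    penalties treated in the paper; the BB rule initializes each step size.
    """
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    step = 1.0
    for _ in range(iters):
        while True:                                   # monotone line search
            x_new = soft_threshold(x - step * grad, step * lam)
            f_old = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
            f_new = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.abs(x_new).sum()
            if f_new <= f_old or step < 1e-12:
                break
            step *= 0.5
        grad_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, grad_new - grad
        step = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # Barzilai-Borwein step
        x, grad = x_new, grad_new
    return x

# Toy usage: recover a sparse coefficient vector from noiseless measurements
A = np.random.default_rng(0).normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true
print(np.round(gist_lasso(A, b, lam=0.1)[:5], 2))
```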

  20. Thinking within the box: The relational processing style elicited by counterfactual mind-sets.

    PubMed

    Kray, Laura J; Galinsky, Adam D; Wong, Elaine M

    2006-07-01

    By comparing reality to what might have been, counterfactuals promote a relational processing style characterized by a tendency to consider relationships and associations among a set of stimuli. As such, counterfactual mind-sets were expected to improve performance on tasks involving the consideration of relationships and associations but to impair performance on tasks requiring novel ideas that are uninfluenced by salient associations. The authors conducted several experiments to test this hypothesis. In Experiments 1a and 1b, the authors determined that counterfactual mind-sets increase mental states and preferences for thinking styles consistent with relational thought. Experiment 2 demonstrated a facilitative effect of counterfactual mind-sets on an analytic task involving logical relationships; Experiments 3 and 4 demonstrated that counterfactual mind-sets structure thought and imagination around salient associations and therefore impaired performance on creative generation tasks. In Experiment 5, the authors demonstrated that the detrimental effect of counterfactual mind-sets is limited to creative tasks involving novel idea generation; in a creative association task involving the consideration of relationships between task stimuli, counterfactual mind-sets improved performance. Copyright 2006 APA, all rights reserved.

  1. The influence of perceptual load on age differences in selective attention.

    PubMed

    Maylor, E A; Lavie, N

    1998-12-01

    The effect of perceptual load on age differences in visual selective attention was examined in 2 studies. In Experiment 1, younger and older adults made speeded choice responses indicating which of 2 target letters was present in a relevant set of letters in the center of the display while they attempted to ignore an irrelevant distractor in the periphery. The perceptual load of relevant processing was manipulated by varying the central set size. When the relevant set size was small, the adverse effect of an incompatible distractor was much greater for the older participants than for the younger ones. However, with larger relevant set sizes, this was no longer the case, with the distractor effect decreasing for older participants at lower levels of perceptual load than for younger ones. In Experiment 2, older adults were tested with the empty locations in the central set either unmarked (as in Experiment 1) or marked by small circles to form a group of 6 items irrespective of set size; the 2 conditions did not differ markedly, ruling out an explanation based entirely on perceptual grouping.

  2. Measuring the effect of attention on simple visual search.

    PubMed

    Palmer, J; Ames, C T; Lindsey, D T

    1993-02-01

    Set-size effects in visual search may be due to 1 or more of 3 factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
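
    The sketch below illustrates the decision-rule account in its simplest form: under an independent max-decision rule, accuracy falls with set size even though the perceptual quality (d') of each item is unchanged. The d' value and the forced-choice framing are illustrative assumptions, not the models fitted in the paper.

```python
import numpy as np

def percent_correct_max_rule(d_prime, set_size, trials=200000, seed=0):
    """Accuracy under an unlimited-capacity max-decision rule (2AFC framing).

    Target displays contain one target item (mean d') and set_size - 1
    distractors (mean 0); the observer compares the maximum samples of the
    target and blank displays, so perception per item is unaffected by set size.
    """
    rng = np.random.default_rng(seed)
    target_display = rng.normal(0.0, 1.0, size=(trials, set_size))
    target_display[:, 0] += d_prime
    blank_display = rng.normal(0.0, 1.0, size=(trials, set_size))
    return (target_display.max(axis=1) > blank_display.max(axis=1)).mean()

# Accuracy declines with set size from the decision rule alone
for n in (1, 2, 4, 8):
    print(n, round(percent_correct_max_rule(d_prime=1.5, set_size=n), 3))
```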

  3. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108

  4. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
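
    A minimal sketch of the predict-and-check idea is given below, with a least-squares AR(2) predictor standing in for the paper's set of ARMA models and a simple average standing in for the full fusion step; the grid-size threshold and the toy track are placeholders.

```python
import numpy as np

def ar2_predict(history):
    """One-step prediction from a least-squares AR(2) fit (stand-in for ARMA)."""
    y = np.asarray(history, dtype=float)
    X = np.column_stack([y[1:-1], y[:-2], np.ones(len(y) - 2)])
    coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return coef[0] * y[-1] + coef[1] * y[-2] + coef[2]

def fuse_position(history, measurement, grid_size=0.5):
    """Consensus check of a new GPS fix against the model prediction.

    history: recent 1-D position samples (e.g. easting in metres);
    grid_size: rejection threshold, here tied to the occupancy-grid cell size.
    """
    prediction = ar2_predict(history)
    if abs(measurement - prediction) > grid_size:
        return prediction                    # reject the fix as a multipath outlier
    return 0.5 * (prediction + measurement)  # simple average in place of full fusion

track = [100.0, 100.8, 101.7, 102.5, 103.3, 104.2, 105.0, 105.9]
print(fuse_position(track, measurement=111.4))   # jump -> fall back to prediction
print(fuse_position(track, measurement=106.9))   # consistent fix -> fused estimate
```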

  5. Late-life depression in the primary care setting: Challenges, collaborative care, and prevention

    PubMed Central

    Hall, Charles A.; Reynolds, Charles F.

    2014-01-01

    Late-life depression is highly prevalent worldwide. In addition to being a debilitating illness, it is a risk factor for excess morbidity and mortality. Older adults with depression are at risk for dementia, coronary heart disease, stroke, cancer and suicide. Individuals with late-life depression often have significant medical comorbidity and poor treatment adherence. Furthermore, psychosocial considerations such as gender, ethnicity, stigma and bereavement are necessary to understand the full context of late-life depression. The fact that most older adults seek treatment for depression in primary care settings led to the development of collaborative care interventions for depression. These interventions have consistently demonstrated clinically meaningful effectiveness in the treatment of late-life depression. We describe three pivotal studies detailing the management of depression in primary care settings in both high and low-income countries. Beyond effectively treating depression, collaborative care models address additional challenges associated with late-life depression. Although depression treatment interventions are effective compared to usual care, they exhibit relatively low remission rates and small to medium effect sizes. Several studies have demonstrated that depression prevention is possible and most effective in at-risk older adults. Given the relatively modest effects of treatment in averting years lived with disability, preventing late-life depression at the primary care level should be highly prioritized as a matter of health policy. PMID:24996484

  6. Threading dynamics of a polymer through parallel pores: Potential applications to DNA size separation

    NASA Astrophysics Data System (ADS)

    Åkerman, Björn

    1997-04-01

    DNA orientation measurements by linear dichroism (LD) spectroscopy and single molecule imaging by fluorescence microscopy are used to investigate the effect of DNA size (71-740 kilo base pairs) and field strength E (1-5.9 V/cm) on the conformation dynamics during the field-driven threading of DNA molecules through a set of parallel pores in agarose gels, with average pore radii between 380 Å and 1400 Å. Locally relaxed but globally oriented DNA molecules are subjected to a perpendicular field, and the observed LD time profile is compared with a recent theory for the threading [D. Long and J.-L. Viovy, Phys. Rev. E 53, 803 (1996)] which assumes the same initial state. As predicted the DNA is driven by the ends into a U-form, leading to an overshoot in the LD. The overshoot time scales as E^(-(1.2-1.4)) as predicted, but grows more slowly with DNA size than the predicted linear dependence. For long molecules loops form initially in the threading process but are finally consumed by the ends, and the process of transfer of DNA segments, from the loops to the arms of the U, leads to a shoulder in the LD as predicted. The critical size below which loops do not form (as indicated by the LD shoulder being absent) is between 71 and 105 kbp (0.5% agarose, 5.9 V/cm), and considerably larger than predicted because in the initial state the DNA molecules are housed in gel cavities with effective pore sizes about four times larger than the average pore size. From the data, the separation of DNA by exploiting the threading dynamics in pulsed fields [D. Long et al., CR Acad. Sci. Paris, Ser. IIb 321, 239 (1995)] is shown to be feasible in principle in an agarose-based system.

  7. Graphical Methods for Reducing, Visualizing and Analyzing Large Data Sets Using Hierarchical Terminologies

    PubMed Central

    Jing, Xia; Cimino, James J.

    2011-01-01

    Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds) as well as contributions of child level nodes to parent level nodes. Results: We found that our methods can reduce large data sets to manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach to reduce size and complexity of large data sets and to simplify visualization. This approach can be applied to any data sets that are coded with hierarchical terminologies. PMID:22195119
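    The reduction strategy sketched in this abstract — rolling counts up from child to parent nodes in a hierarchical terminology and pruning nodes whose aggregated counts fall below a threshold — can be illustrated in a few lines. The toy hierarchy, counts, and threshold below are hypothetical stand-ins for the terminologies and query-log data described above, not the authors' implementation.

    ```python
    # Minimal sketch of threshold-based reduction of data coded with a
    # hierarchical terminology: child counts contribute to their parents,
    # then nodes whose aggregated count falls below a threshold are pruned.
    # The toy hierarchy and counts are invented for illustration.

    from collections import defaultdict

    parent = {                      # child -> parent links of a tiny terminology
        "chest pain": "symptoms",
        "dyspnea": "symptoms",
        "aspirin": "drugs",
        "warfarin": "drugs",
        "symptoms": "root",
        "drugs": "root",
    }
    counts = {"chest pain": 12, "dyspnea": 3, "aspirin": 7, "warfarin": 1}

    def aggregate(parent, counts):
        """Propagate leaf-level counts up the hierarchy."""
        total = defaultdict(int, counts)
        for node, c in counts.items():
            p = parent.get(node)
            while p is not None:
                total[p] += c
                p = parent.get(p)
        return dict(total)

    def reduce_graph(total, threshold):
        """Keep only nodes whose aggregated count reaches the threshold."""
        return {n: c for n, c in total.items() if c >= threshold}

    totals = aggregate(parent, counts)
    print(reduce_graph(totals, threshold=5))
    # -> {'chest pain': 12, 'aspirin': 7, 'symptoms': 15, 'root': 23, 'drugs': 8}
    ```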

  8. Dissection of two QTL on SSC2 identifies candidate genes for ovulation rate in swine

    USDA-ARS?s Scientific Manuscript database

    Litter size is an economically important trait to producers that is lowly heritable, observable only after considerable investment has been made in gilt development and responds slowly to selection. Ovulation rate, a component trait of litter size, is moderately heritable, sex limited and should res...

  9. The Importance of Teaching Power in Statistical Hypothesis Testing

    ERIC Educational Resources Information Center

    Olinsky, Alan; Schumacher, Phyllis; Quinn, John

    2012-01-01

    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
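    As a minimal illustration of the power calculation being taught, the sketch below approximates the power of a two-sided, two-sample comparison of means using a normal approximation; the effect size (Cohen's d = 0.4), alpha, and group sizes are arbitrary illustrative values, not figures from the paper.

    ```python
    # Hedged sketch of a power calculation for a two-sided, two-sample
    # comparison of means, via a normal approximation.

    from scipy.stats import norm

    def power_two_sample(effect_size, n_per_group, alpha=0.05):
        """Approximate power of a two-sided two-sample z-test.

        effect_size is Cohen's d: the difference between the hypothesized
        null value and the true value, in standard-deviation units.
        """
        se_factor = (2.0 / n_per_group) ** 0.5        # SE of d-hat, equal n
        z_crit = norm.ppf(1 - alpha / 2)
        noncentrality = effect_size / se_factor
        # Probability of exceeding either critical value under the alternative
        return (norm.cdf(-z_crit - noncentrality)
                + 1 - norm.cdf(z_crit - noncentrality))

    for n in (20, 50, 100, 200):
        print(n, round(power_two_sample(effect_size=0.4, n_per_group=n), 3))
    # Power rises with sample size for a fixed effect size (d = 0.4).
    ```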

  10. Statistical Significance and Effect Size: Two Sides of a Coin.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…
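    A small Monte Carlo in the spirit of the simulation described above makes the complementarity concrete: with a fixed true effect, the share of significant p-values grows with sample size, while the effect-size estimate stays near its true value. The settings below (true d = 0.3, the group sizes, 2,000 replications) are illustrative assumptions, not the paper's design.

    ```python
    # Small Monte Carlo: p-values depend strongly on sample size,
    # while the effect-size estimate (Cohen's d) does not.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, reps = 0.3, 2000

    for n in (25, 100, 400):
        pvals, dhats = [], []
        for _ in range(reps):
            x = rng.normal(true_d, 1.0, n)   # treatment group
            y = rng.normal(0.0, 1.0, n)      # control group
            t, p = stats.ttest_ind(x, y)
            sp = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)  # pooled SD
            pvals.append(p)
            dhats.append((x.mean() - y.mean()) / sp)
        print(f"n={n:4d}  mean d-hat={np.mean(dhats):.2f}  "
              f"share of p<.05={np.mean(np.array(pvals) < 0.05):.2f}")
    ```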

  11. Determination by ray-tracing of the regions where mid-latitude whistlers exit from the lower ionosphere

    NASA Astrophysics Data System (ADS)

    Strangeways, H. J.

    1981-03-01

    The size and position of the regions in the bottomside ionosphere through which downcoming whistlers emerge are estimated using ray-tracing calculations in both summer day and winter night models of the magnetospheric plasma. Consideration is given to the trapping of upgoing whistler-mode waves through both the base and the side of ducts. It is found that for downcoming rays which were trapped in the duct in the summer day model, the limited range of wave-normal angles which can be transmitted from the lower ionosphere to free space below causes the size of the exit point to be considerably smaller than the region of incidence. The exit point is found to be approximately 100 km in size, which agrees with ground-based observations of fairly narrow trace whistlers. For rays trapped in the duct in the winter night model, it is found that the size of the exit point is more nearly the same as the range of final latitudes of the downcoming rays in the lower ionosphere.

  12. Two Echelon Supply Chain Integrated Inventory Model for Similar Products: A Case Study

    NASA Astrophysics Data System (ADS)

    Parjane, Manoj Baburao; Dabade, Balaji Marutirao; Gulve, Milind Bhaskar

    2017-06-01

    The purpose of this paper is to develop a mathematical model towards minimization of total cost across echelons in a multi-product supply chain environment. The scenario under consideration is a two-echelon supply chain system with one manufacturer, one retailer and M products. The retailer faces independent Poisson demand for each product. The retailer and the manufacturer are closely coupled in the sense that the information about any depletion in the inventory of a product at the retailer's end is immediately available to the manufacturer. Further, stock-out is backordered at the retailer's end. Thus the costs incurred at the retailer's end are the holding costs and the backorder costs. The manufacturer has only one processor which is time shared among the M products. Production changeover from one product to another entails a fixed setup cost and a fixed setup time. Each unit of a product has a production time. Considering the cost components, and assuming transportation time and cost to be negligible, the objective of the study is to minimize the expected total cost considering both the manufacturer and the retailer. In the process two aspects are to be defined. Firstly, every time a product is taken up for production, how much of it (the production batch size, q) should be produced: a large value of q favors the manufacturer, while a small value of q suits the retailer. Secondly, for a given batch size q, at what level of the retailer's inventory (the production queuing point, S) should a product be taken up for production by the manufacturer. A higher value of S incurs more holding cost, whereas a lower value of S increases the chance of backorder. A tradeoff between the holding and backorder cost must be taken into consideration while choosing an optimal value of S. It may be noted that, with multiple products and a single processor, a product taken up for production may not get the processor immediately and may have to wait in a queue; the choice of S should therefore also factor in this waiting time.
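    A minimal numerical sketch of the holding-versus-backorder trade-off in choosing S is given below. It assumes Poisson demand over a replenishment cycle and simply enumerates candidate queuing points; the cost rates and demand mean are invented for illustration and are not taken from the case study.

    ```python
    # Hedged sketch of choosing the production queuing point S: assuming demand
    # during the replenishment interval is Poisson with a known mean, expected
    # holding and backorder costs are computed for each candidate S and the
    # cheapest S is picked. Parameters are illustrative only.

    from scipy.stats import poisson

    def expected_cost(S, lam, h=1.0, b=5.0, kmax=100):
        """Expected holding + backorder cost per cycle for queuing point S."""
        holding = sum((S - k) * poisson.pmf(k, lam) for k in range(0, S + 1))
        backorder = sum((k - S) * poisson.pmf(k, lam) for k in range(S + 1, kmax))
        return h * holding + b * backorder

    lam = 8.0                       # mean demand during a replenishment cycle
    costs = {S: expected_cost(S, lam) for S in range(0, 30)}
    best_S = min(costs, key=costs.get)
    print("best S:", best_S, "expected cost:", round(costs[best_S], 2))
    # A higher S raises holding cost; a lower S raises the backorder penalty.
    ```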

  13. 28 CFR 2.15 - Petition for consideration of parole prior to date set at hearing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... hearing. When a prisoner has served the minimum term of imprisonment required by law, the Bureau of... extraordinary circumstances that would warrant consideration of early parole. [42 FR 39809, Aug. 5, 1977, as...

  14. 28 CFR 2.15 - Petition for consideration of parole prior to date set at hearing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... hearing. When a prisoner has served the minimum term of imprisonment required by law, the Bureau of... extraordinary circumstances that would warrant consideration of early parole. [42 FR 39809, Aug. 5, 1977, as...

  15. 28 CFR 2.15 - Petition for consideration of parole prior to date set at hearing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... hearing. When a prisoner has served the minimum term of imprisonment required by law, the Bureau of... extraordinary circumstances that would warrant consideration of early parole. [42 FR 39809, Aug. 5, 1977, as...

  16. 28 CFR 2.15 - Petition for consideration of parole prior to date set at hearing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... hearing. When a prisoner has served the minimum term of imprisonment required by law, the Bureau of... extraordinary circumstances that would warrant consideration of early parole. [42 FR 39809, Aug. 5, 1977, as...

  17. Implementation of Size-Dependent Local Diagnostic Reference Levels for CT Angiography.

    PubMed

    Boere, Hub; Eijsvoogel, Nienke G; Sailer, Anna M; Wildberger, Joachim E; de Haan, Michiel W; Das, Marco; Jeukens, Cecile R L P N

    2018-05-01

    Diagnostic reference levels (DRLs) are established for standard-sized patients; however, patient dose in CT depends on patient size. The purpose of this study was to introduce a method for setting size-dependent local diagnostic reference levels (LDRLs) and to evaluate these LDRLs in comparison with size-independent LDRLs and with respect to image quality. One hundred eighty-four aortic CT angiography (CTA) examinations performed on either a second-generation or third-generation dual-source CT scanner were included; we refer to the second-generation dual-source CT scanner as "CT1" and the third-generation dual-source CT scanner as "CT2." The volume CT dose index (CTDIvol) and patient diameter (i.e., the water-equivalent diameter) were retrieved by dose-monitoring software. Size-dependent DRLs based on a linear regression of the CTDIvol versus patient size were set by scanner type. Size-independent DRLs were set by the 5th and 95th percentiles of the CTDIvol values. Objective image quality was assessed using the signal-to-noise ratio (SNR), and subjective image quality was assessed using a 4-point Likert scale. The CTDIvol depended on patient size and scanner type (R² = 0.72 and 0.78, respectively; slope = 0.05 and 0.02 mGy/mm; p < 0.001). Of the outliers identified by size-independent DRLs, 30% (CT1) and 67% (CT2) were adequately dosed when considering patient size. Alternatively, 30% (CT1) and 70% (CT2) of the outliers found with size-dependent DRLs were not identified using size-independent DRLs. A negative correlation was found between SNR and CTDIvol (R² = 0.36 for CT1 and 0.45 for CT2). However, all outliers had a subjective image quality score of sufficient or better. We introduce a method for setting size-dependent LDRLs in CTA. Size-dependent LDRLs are relevant for assessing the appropriateness of the radiation dose for an individual patient on a specific CT scanner.
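    The size-dependent approach can be sketched as a linear regression of CTDIvol on water-equivalent diameter with a tolerance band for flagging exams. The synthetic data, the 25% band, and the flagging rule below are illustrative assumptions, not the dose levels or exact method of this study.

    ```python
    # Minimal sketch of size-dependent local reference levels: regress CTDIvol
    # on patient water-equivalent diameter and flag exams far above the line.
    # All numbers are synthetic and illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    diameter = rng.uniform(220, 380, 150)                      # mm, water-equivalent
    ctdi_vol = 0.04 * diameter - 4 + rng.normal(0, 1.0, 150)   # mGy, synthetic

    slope, intercept = np.polyfit(diameter, ctdi_vol, 1)       # linear size dependence
    predicted = slope * diameter + intercept

    # Size-dependent "reference band": flag exams more than 25% above prediction
    overdosed = ctdi_vol > 1.25 * predicted
    print(f"slope = {slope:.3f} mGy/mm, flagged exams: {overdosed.sum()}")
    ```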

  18. Costs of storing colour and complex shape in visual working memory: Insights from pupil size and slow waves.

    PubMed

    Kursawe, Michael A; Zimmer, Hubert D

    2015-06-01

    We investigated the impact of perceptual processing demands on visual working memory of coloured complex random polygons during change detection. Processing load was assessed by pupil size (Exp. 1) and, additionally, by slow wave potentials (Exp. 2). Task difficulty was manipulated by presenting different set sizes (1, 2, 4 items) and by making different features (colour, shape, or both) task-relevant. Memory performance in the colour condition was better than in the shape and both-features conditions, which did not differ from each other. Pupil dilation and the posterior N1 increased with set size independent of the type of feature. In contrast, slow waves and a posterior P2 component showed set size effects but only if shape was task-relevant. In the colour condition slow waves did not vary with set size. We suggest that pupil size and N1 indicate different states of attentional effort corresponding to the number of presented items. In contrast, slow waves reflect processes related to encoding and maintenance strategies. The observation that their potentials vary with the type of feature (simple colour versus complex shape) indicates that perceptual complexity already influences encoding and storage and not only comparison of targets with memory entries at the moment of testing. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. From the point-of-purchase perspective: a qualitative study of the feasibility of interventions aimed at portion-size.

    PubMed

    Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C

    2009-04-01

    Food portion-sizes might be a promising starting point for interventions targeting obesity. The purpose of this qualitative study was to assess how representatives of point-of-purchase settings perceived the feasibility of interventions aimed at portion-size. Semi-structured interviews were conducted with 22 representatives of various point-of-purchase settings. Constructs derived from the diffusion of innovations theory were incorporated into the interview guide. Each interview was recorded and transcribed verbatim. Data were coded and analysed with Atlas.ti 5.2 using the framework approach. According to the participants, offering a larger variety of portion-sizes had the most relative advantages, and reducing portions was the most disadvantageous. The participants also considered portion-size reduction and linear pricing of portion-sizes to be risky. Lastly, a larger variety of portion-sizes, pricing strategies and portion-size labelling were seen as the most complex interventions. In general, participants considered offering a larger variety of portion-sizes, portion-size labelling and, to a lesser extent, pricing strategies with respect to portion-sizes as most feasible to implement. Interventions aimed at portion-size were seen as innovative by most participants. Developing adequate communication strategies about portion-size interventions with both decision-makers in point-of-purchase settings and the general public is crucial for successful implementation.

  20. Moments of catchment storm area

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.; Wang, Q.

    1985-01-01

    The portion of a catchment covered by a stationary rainstorm is modeled by the common area of two overlapping circles. Given that rain occurs within the catchment and conditioned by fixed storm and catchment sizes, the first two moments of the distribution of the common area are derived from purely geometrical considerations. The variance of the wetted fraction is shown to peak when the catchment size is equal to the size of the predominant storm. The conditioning on storm size is removed by assuming a probability distribution based upon the observed fractal behavior of cloud and rainstorm areas.
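    The geometric core of this model — the common area of two overlapping circles — has a closed form, and a Monte Carlo over storm-center positions yields conditional moments of the wetted fraction. The fixed radii and uniform storm-center distribution below are simplifying assumptions for illustration; the paper's derivation is analytical and additionally removes the conditioning on storm size.

    ```python
    # Sketch: closed-form area common to two circles (catchment and storm) plus a
    # Monte Carlo estimate of the first two moments of the wetted fraction,
    # conditioned on rain occurring inside the catchment. Illustrative only.

    import numpy as np

    def circle_overlap_area(d, r1, r2):
        """Area common to two circles with radii r1, r2 and center distance d."""
        if d >= r1 + r2:
            return 0.0
        if d <= abs(r1 - r2):
            return np.pi * min(r1, r2) ** 2
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                            * (d - r1 + r2) * (d + r1 + r2))
        return a1 + a2 - tri

    rng = np.random.default_rng(0)
    r_catch, r_storm, n = 1.0, 0.8, 20_000
    # Storm centers uniform over the disk where the storm can touch the catchment
    d = (r_catch + r_storm) * np.sqrt(rng.uniform(0, 1, n))
    frac = np.array([circle_overlap_area(x, r_catch, r_storm)
                     for x in d]) / (np.pi * r_catch**2)
    wet = frac > 0
    print("E[wetted fraction | rain]   =", round(frac[wet].mean(), 3))
    print("Var[wetted fraction | rain] =", round(frac[wet].var(), 4))
    ```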

  1. Effect of limb regeneration on size increase at molt of the shore crabs Hemigrapsus oregonensis and Pachygrapsus crassipes.

    PubMed

    Kuris, A M; Mager, M

    1975-09-01

    Size increase at molt is reduced following multiple limb regeneration in the shore crabs, Hemigrapsus oregonensis and Pachygrapsus crassipes. Limb loss per se does not influence postmolt size. Effect of increasing number of regenerating limbs is additive. Postmolt size is programmed early in the premolt period of the preceding instar and is probably not readily influenced by water uptake mechanics at ecdysis. A simple model for growth, molting, and regeneration in heavily calcified Crustacea is developed from the viewpoint of adaptive strategies and energetic considerations.

  2. Computer Series, 107.

    ERIC Educational Resources Information Center

    Birk, James P., Ed.

    1989-01-01

    Presented is a simple laboratory set-up for teaching microprocessor-controlled data acquisition as a part of an instrumental analysis course. Discussed are the experimental set-up, experimental procedures, and technical considerations for this technique. (CW)

  3. Adopting Cut Scores: Post-Standard-Setting Panel Considerations for Decision Makers

    ERIC Educational Resources Information Center

    Geisinger, Kurt F.; McCormick, Carina M.

    2010-01-01

    Standard-setting studies utilizing procedures such as the Bookmark or Angoff methods are just one component of the complete standard-setting process. Decision makers ultimately must determine what they believe to be the most appropriate standard or cut score to use, employing the input of the standard-setting panelists as one piece of information…

  4. Animal social networks as substrate for cultural behavioural diversity.

    PubMed

    Whitehead, Hal; Lusseau, David

    2012-02-07

    We used individual-based stochastic models to examine how social structure influences the diversity of socially learned behaviour within a non-human population. For continuous behavioural variables we modelled three forms of dyadic social learning, averaging the behavioural value of the two individuals, random transfer of information from one individual to the other, and directional transfer from the individual with highest behavioural value to the other. Learning had potential error. We also examined the transfer of categorical behaviour between individuals with random directionality and two forms of error, the adoption of a randomly chosen existing behavioural category or the innovation of a new type of behaviour. In populations without social structuring the diversity of culturally transmitted behaviour increased with learning error and population size. When the populations were structured socially either by making individuals members of permanent social units or by giving them overlapping ranges, behavioural diversity increased with network modularity under all scenarios, although the proportional increase varied considerably between continuous and categorical behaviour, with transmission mechanism, and population size. Although functions of the form e^(c1·m − c2) + c3·Log(N) predicted the mean increase in diversity with modularity (m) and population size (N), behavioural diversity could be highly unpredictable both between simulations with the same set of parameters, and within runs. Errors in social learning and social structuring generally promote behavioural diversity. Consequently, social learning may be considered to produce culture in populations whose social structure is sufficiently modular. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Effects of geomorphology, habitat, and spatial location on fish assemblages in a watershed in Ohio, USA.

    PubMed

    D'Ambrosio, Jessica L; Williams, Lance R; Witter, Jonathan D; Ward, Andy

    2009-01-01

    In this paper, we evaluate relationships between in-stream habitat, water chemistry, spatial distribution within a predominantly agricultural Midwestern watershed and geomorphic features and fish assemblage attributes and abundances. Our specific objectives were to: (1) identify and quantify key environmental variables at reach and system wide (watershed) scales; and (2) evaluate the relative influence of those environmental factors in structuring and explaining fish assemblage attributes at reach scales to help prioritize stream monitoring efforts and better incorporate all factors that influence aquatic biology in watershed management programs. The original combined data set consisted of 31 variables measured at 32 sites, which was reduced to 9 variables through correlation and linear regression analysis: stream order, percent wooded riparian zone, drainage area, in-stream cover quality, substrate quality, gradient, cross-sectional area, width of the flood prone area, and average substrate size. Canonical correspondence analysis (CCA) and variance partitioning were used to relate environmental variables to fish species abundance and assemblage attributes. Fish assemblages and abundances were explained best by stream size, gradient, substrate size and quality, and percent wooded riparian zone. Further data are needed to investigate why water chemistry variables had insignificant relationships with IBI scores. Results suggest that more quantifiable variables and consideration of spatial location of a stream reach within a watershed system should be standard data incorporated into stream monitoring programs to identify impairments that, while biologically limiting, are not fully captured or elucidated using current bioassessment methods.

  6. Lattice Boltzmann simulation of the gas-solid adsorption process in reconstructed random porous media.

    PubMed

    Zhou, L; Qu, Z G; Ding, T; Miao, J Y

    2016-04-01

    The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.

  7. Lattice Boltzmann simulation of the gas-solid adsorption process in reconstructed random porous media

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Qu, Z. G.; Ding, T.; Miao, J. Y.

    2016-04-01

    The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.

  8. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research three well-known mechanistic equations attributed to Rittinger, Kick, and Bond available for predicting energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wet basis) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. Geometric mean size of particles was calculated using two methods: (1) Tyler sieves with particle size analysis and (2) Sauter mean diameter calculated from the ratio of volume to surface, estimated from measured length and width. The two mean diameters agreed well, pointing to the fact that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.
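    Rittinger's equation, which fit these data best, states that specific grinding energy is proportional to the new surface created, E = C·(1/x2 − 1/x1), with x1 and x2 the feed and product mean particle sizes. The sketch below fits the constant C by least squares to a few invented (x1, x2, E) points; the numbers are not the measurements reported in this study.

    ```python
    # Hedged sketch of fitting Rittinger's size-reduction law,
    # E = C * (1/x2 - 1/x1), to invented feed/product size and energy data.

    import numpy as np

    x1 = np.array([25.4, 10.0, 6.4])     # feed geometric mean size, mm
    x2 = np.array([5.9,  2.1,  1.3])     # product geometric mean size, mm
    E  = np.array([11.5, 34.0, 55.0])    # specific grinding energy, J/g

    feature = 1.0 / x2 - 1.0 / x1        # Rittinger's "new surface" term
    # Least-squares slope through the origin: C = sum(f*E) / sum(f*f)
    C = float(feature @ E) / float(feature @ feature)
    print(f"Rittinger constant ≈ {C:.1f} J·mm/g")
    print("predicted E:", np.round(C * feature, 1))
    ```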

  9. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE PAGES

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao; ...

    2016-01-05

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research three well-known mechanistic equations attributed to Rittinger, Kick, and Bond available for predicting energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wet basis) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. Geometric mean size of particles was calculated using two methods: (1) Tyler sieves with particle size analysis and (2) Sauter mean diameter calculated from the ratio of volume to surface, estimated from measured length and width. The two mean diameters agreed well, pointing to the fact that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t⁻¹ (5.2 J g⁻¹) for the large 25.1-mm screen to 25 kWh t⁻¹ (90.4 J g⁻¹) for the small 3.2-mm screen.

  10. Child t-shirt size data set from 3D body scanner anthropometric measurements and a questionnaire.

    PubMed

    Pierola, A; Epifanio, I; Alemany, S

    2017-04-01

    A dataset of a fit assessment study in children is presented. Anthropometric measurements of 113 children were obtained using a 3D body scanner. Children tested a t-shirt of different sizes and a different model for boys and girls, and their fit was assessed by an expert. This expert labeled the fit as 0 (correct), -1 (if the garment was small for that child), or 1 (if the garment was large for that child) in an ordered factor called Size-fit. Moreover, the fit was numerically assessed from 1 (very poor fit) to 10 (perfect fit) in a variable called Expert evaluation. This data set contains the differences between the reference mannequin of the evaluated size and the child's anthropometric measurements for 27 variables. Besides these variables, in the data set, we can also find the gender, the size evaluated, and the size recommended by the expert, including if an intermediate, but nonexistent size between two consecutive sizes would have been the right size. In total, there are 232 observations. The analysis of these data can be found in Pierola et al. (2016) [2].

  11. One portion size of foods frequently consumed by Korean adults

    PubMed Central

    Choi, Mi-Kyeong; Hyun, Wha-Jin; Lee, Sim-Yeol; Park, Hong-Ju; Kim, Se-Na

    2010-01-01

    This study aimed to define a one-portion size for food items frequently consumed by Koreans, for convenient use in food selection, diet planning, and nutritional evaluation. We analyzed the original data on 5,436 persons (60.87%) aged 20-64 years among the 8,930 persons in NHANES 2005, and selected food items with an intake frequency of 30 or higher among the 500 most frequently consumed food items. A total of 374 varieties of food items of regular use were selected. The portion size of each food item was set on the basis of the median (50th percentile) of the amount consumed in a single intake by a single person. In cereals, the portion size of well-polished rice was 80 g. In meats, the portion size of Korean beef cattle was 25 g. Among vegetable items, the portion size of Baechukimchi was 40 g. The portion sizes of the food items of regular use set in this study can be conveniently and effectively used by general consumers in selecting food items for a nutritionally balanced diet. In addition, they will serve as basic data for setting serving sizes in meal planning. PMID:20198213

  12. Consideration of Materials for Aircraft Brakes

    NASA Technical Reports Server (NTRS)

    Peterson, M. B.; Ho, T.

    1972-01-01

    An exploratory investigation was conducted concerning materials and their properties for use in aircraft brakes. Primary consideration was given to the heat dissipation and the frictional behavior of materials. Used brake pads and rotors were analyzed as part of the investigation. A simple analysis was conducted in order to determine the most significant factors which affect surface temperatures. It was found that where size and weight restrictions are necessary, the specific heat of the material, and maintaining uniform contact area are the most important factors. A criterion was suggested for optimum sizing of the brake disks. Bench friction tests were run with brake materials. It was found that there is considerable friction variation due to the formation and removal of surface oxide films. Other causes of friction variations are surface softening and melting. The friction behavior at high temperature was found to be more characteristic of the steel surface rather than the copper brake material. It is concluded that improved brake materials are feasible.

  13. 16 CFR 1012.1 - General policy considerations; scope.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Policy, sets forth requirements for advance public notice, public attendance, and recordkeeping for... 16 Commercial Practices 2 2010-01-01 2010-01-01 false General policy considerations; scope. 1012.1 Section 1012.1 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION GENERAL MEETINGS POLICY-MEETINGS...

  14. Setting conservation priorities.

    PubMed

    Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P

    2009-04-01

    A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework the broad array of approaches for setting conservation priorities are reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be aided by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.

  15. Survival of white-tailed deer neonates in Minnesota and South Dakota

    USGS Publications Warehouse

    Grovenburg, T.W.; Swanson, C.C.; Jacques, C.N.; Klaver, R.W.; Brinkman, T.J.; Burris, B.M.; Deperno, C.S.; Jenks, J.A.

    2011-01-01

    Understanding the influence of intrinsic (e.g., age, birth mass, and sex) and habitat factors on survival of neonate white-tailed deer improves understanding of population ecology. During 2002–2004, we captured and radiocollared 78 neonates in eastern South Dakota and southwestern Minnesota, of which 16 died before 1 September. Predation accounted for 80% of mortality; the remaining 20% was attributed to starvation. Canids (coyotes [Canis latrans], domestic dogs) accounted for 100% of predation on neonates. We used known fate analysis in Program MARK to estimate survival rates and investigate the influence of intrinsic and habitat variables on survival. We developed 2 a priori model sets, including intrinsic variables (model set 1) and habitat variables (model set 2; forested cover, wetlands, grasslands, and croplands). For model set 1, model {Sage-interval} had the lowest AICc (Akaike's information criterion for small sample size) value, indicating that age at mortality (3-stage age-interval: 0–2 weeks, 2–8 weeks, and >8 weeks) best explained survival. Model set 2 indicated that habitat variables did not further influence survival in the study area; β-estimates and 95% confidence intervals for habitat variables in competing models encompassed zero; thus, we excluded these models from consideration. Overall survival rate using model {Sage-interval} was 0.87 (95% CI = 0.83–0.91); 61% of mortalities occurred at 0–2 weeks of age, 26% at 2–8 weeks of age, and 13% at >8 weeks of age. Our results indicate that variables influencing survival may be area specific. Region-specific data are needed to determine influences of intrinsic and habitat variables on neonate survival before wildlife managers can determine which habitat management activities influence neonate populations.

  16. Growth promotion in plants by rice necrosis mosaic virus.

    PubMed

    Ghosh, S K

    1982-08-01

    Ludwigia perennis L. infected with rice necrosis mosaic virus (RNMV) showed an increase in both shoot growth and leaf size, along with characteristic chlorotic lesions on leaves. The promotion of growth over the controls extended over a considerable period of time (70 d). Inoculation with RNMV resulted in increased plant height, leaf size, stem diameter, and number and size of fiber bundles in Corchorus olitorius L., C. capsularis L., Hibiscus sabdariffa L. and H. cannabinus L.

  17. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  18. Planning Community-Based Assessments of HIV Educational Intervention Programs in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Kelcey, Ben; Shen, Zuchao

    2017-01-01

    A key consideration in planning studies of community-based HIV education programs is identifying a sample size large enough to ensure a reasonable probability of detecting program effects if they exist. Sufficient sample sizes for community- or group-based designs are proportional to the correlation or similarity of individuals within communities.…
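    The dependence on within-community similarity is usually expressed through the design effect 1 + (m − 1)·ICC, which inflates the sample size required by an individually randomized design. The sketch below turns that into an approximate count of communities per arm; the effect size, ICC values, cluster size, alpha, and power are hypothetical planning inputs, not estimates from any particular program.

    ```python
    # Sketch of cluster-randomized sample-size planning via the design effect
    # 1 + (m - 1) * ICC, using a normal approximation. Illustrative values only.

    from math import ceil
    from scipy.stats import norm

    def clusters_per_arm(d, icc, m, alpha=0.05, power=0.80):
        """Approximate number of communities per arm for a two-arm comparison."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_individual = 2 * (z / d) ** 2          # per arm, ignoring clustering
        deff = 1 + (m - 1) * icc                 # design effect
        return ceil(n_individual * deff / m)

    for icc in (0.01, 0.05, 0.10):
        print(f"ICC={icc:.2f}: clusters/arm =",
              clusters_per_arm(d=0.25, icc=icc, m=30))
    ```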

  19. 29 CFR 579.5 - Determining the amount of the penalty and assessing the penalty.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... evidence of the violation or violations and will take into consideration the size of the business of the... penalty to the size of the business of the person charged with the violation or violations, taking into...-days of hired farm labor used in pertinent calendar quarters), dollar volume of sales or business done...

  20. A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models

    ERIC Educational Resources Information Center

    Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.

    2013-01-01

    The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…

  1. A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the MidAtlantic Region

    NASA Technical Reports Server (NTRS)

    Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.

    2016-01-01

    The spatial variability of parameters of the raindrop size distribution and its derivatives is investigated through a field study where collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99 and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error, after fitting it to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h⁻¹ for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected Global Precipitation Measurement mission's spaceborne radar minimum detectable signals. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
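    The three-parameter exponential model referred to above, corr(d) = nugget·exp(−(d/d0)^s0), can be fit to correlation-versus-separation pairs with a standard least-squares routine, holding the nugget at 0.99 as described. The sample correlations in the sketch below are synthetic stand-ins, not the Wallops Flight Facility measurements.

    ```python
    # Sketch of fitting corr(d) = nugget * exp(-(d / d0) ** s0) with the nugget
    # fixed at 0.99. The correlation-distance pairs are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def corr_model(d, d0, s0, nugget=0.99):
        return nugget * np.exp(-(d / d0) ** s0)

    # Synthetic pair correlations at a few separation distances (km)
    dist = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 2.3])
    corr = np.array([0.96, 0.92, 0.84, 0.76, 0.68, 0.64])

    (d0, s0), _ = curve_fit(corr_model, dist, corr, p0=[4.0, 1.0],
                            bounds=([0.1, 0.1], [20.0, 5.0]))
    print(f"correlation distance d0 ≈ {d0:.1f} km, shape s0 ≈ {s0:.2f}")
    ```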

  2. Determination of the Thermal Properties of Sands as Affected by Water Content, Drainage/Wetting, and Porosity Conditions for Sands With Different Grain Sizes

    NASA Astrophysics Data System (ADS)

    Smits, K. M.; Sakaki, T.; Limsuwat, A.; Illangasekare, T. H.

    2009-05-01

    It is widely recognized that liquid water, water vapor and temperature movement in the subsurface near the land/atmosphere interface are strongly coupled, influencing many agricultural, biological and engineering applications such as irrigation practices, the assessment of contaminant transport and the detection of buried landmines. In these systems, a clear understanding of how variations in water content, soil drainage/wetting history, porosity conditions and grain size affect the soil's thermal behavior is needed, however, the consideration of all factors is rare as very few experimental data showing the effects of these variations are available. In this study, the effect of soil moisture, drainage/wetting history, and porosity on the thermal conductivity of sandy soils with different grain sizes was investigated. For this experimental investigation, several recent sensor based technologies were compiled into a Tempe cell modified to have a network of sampling ports, continuously monitoring water saturation, capillary pressure, temperature, and soil thermal properties. The water table was established at mid elevation of the cell and then lowered slowly. The initially saturated soil sample was subjected to slow drainage, wetting, and secondary drainage cycles. After liquid water drainage ceased, evaporation was induced at the surface to remove soil moisture from the sample to obtain thermal conductivity data below the residual saturation. For the test soils studied, thermal conductivity increased with increasing moisture content, soil density and grain size while thermal conductivity values were similar for soil drying/wetting behavior. Thermal properties measured in this study were then compared with independent estimates made using empirical models from literature. These soils will be used in a proposed set of experiments in intermediate scale test tanks to obtain data to validate methods and modeling tools used for landmine detection.

  3. A cross-platform survey of CT image quality and dose from routine abdomen protocols and a method to systematically standardize image quality

    PubMed Central

    Favazza, Christopher P.; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M.; Bruesewitz, Michael R.; McCollough, Cynthia H.

    2015-01-01

    Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice’s routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE’s “Plus” mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp/cm, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5–13.3 HU (noise) and 4.8–13.3 mGy (CTDIvol) for the smallest phantom; 9.1–18.4 HU and 9.3–28.8 mGy for the medium phantom; and 7.8–23.4 HU and 16.0–48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes. PMID:26459751

  4. The Physics of Protoplanetary Dust Agglomerates. X. High-velocity Collisions between Small and Large Dust Agglomerates as a Growth Barrier

    NASA Astrophysics Data System (ADS)

    Schräpler, Rainer; Blum, Jürgen; Krijt, Sebastiaan; Raabe, Jan-Hendrik

    2018-01-01

    In a protoplanetary disk, dust aggregates in the μm to mm size range possess mean collision velocities of 10–60 m s⁻¹ with respect to dm- to m-sized bodies. We performed laboratory collision experiments to explore this parameter regime and found a size- and velocity-dependent threshold between erosion and growth. Using a local Monte Carlo coagulation calculation along with a simple semi-analytical timescale approach, we show that erosion considerably limits particle growth in protoplanetary disks and leads to a steady-state dust-size distribution from μm- to dm-sized particles.

  5. Some controversial multiple testing problems in regulatory applications.

    PubMed

    Hung, H M James; Wang, Sue-Jane

    2009-01-01

    Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gate keeping strategies often impose too many logical restrictions to make sense, particularly to deal with the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinical relevant orderings or sets can be a viable alternative to resolving the illogical problems requiring more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism as a result of this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.

  6. Use of fees to fund local public health services in Western Massachusetts.

    PubMed

    Shila Waritu, A; Bulzacchelli, Maria T; Begay, Michael E

    2015-01-01

    Recent budget cuts have forced many local health departments (LHDs) to cut staff and services. Setting fees that cover the cost of service provision is one option for continuing to fund certain activities. To describe the use of fees by LHDs in Western Massachusetts and determine whether fees charged cover the cost of providing selected services. A cross-sectional descriptive analysis was used to identify the types of services for which fees are charged and the fee amounts charged. A comparative cost analysis was conducted to compare fees charged with estimated costs of service provision. Fifty-nine LHDs in Western Massachusetts. Number of towns charging fees for selected types of services; minimum, maximum, and mean fee amounts; estimated cost of service provision; number of towns experiencing a surplus or deficit for each service; and average size of deficits experienced. Enormous variation exists both in the types of services for which fees are charged and fee amounts charged. Fees set by most health departments did not cover the cost of service provision. Some fees were set as much as $600 below estimated costs. These results suggest that considerations other than costs of service provision factor into the setting of fees by LHDs in Western Massachusetts. Given their limited and often uncertain funding, LHDs could benefit from examining their fee schedules to ensure that the fee amounts charged cover the costs of providing the services. Cost estimates should include at least the health agent's wage and time spent performing inspections and completing paperwork, travel expenses, and cost of necessary materials.

  7. Thyroid Surgery in a Resource-Limited Setting.

    PubMed

    Jafari, Aria; Campbell, David; Campbell, Bruce H; Ngoitsi, Henry Nono; Sisenda, Titus M; Denge, Makaya; James, Benjamin C; Cordes, Susan R

    2017-03-01

    Objective The present study reviews a series of patients who underwent thyroid surgery in Eldoret, Kenya, to demonstrate the feasibility of conducting long-term (>1 year) outcomes research in a resource-limited setting, impact on the quality of life of the recipient population, and inform future humanitarian collaborations. Study Design Case series with chart review. Setting Tertiary public referral hospital in Eldoret, Kenya. Subjects and Methods Twenty-one patients were enrolled during the study period. A retrospective chart review was performed for all adult patients who underwent thyroid surgery during humanitarian trips (2010-2015). Patients were contacted by mobile telephone. Medical history and physical examination, including laryngoscopy, were performed, and the SF-36 was administered (a quality-of-life questionnaire). Laboratory measurements of thyroid function and neck ultrasound were obtained. Results The mean follow-up was 33.6 ± 20.2 months after surgery: 37.5% of subtotal thyroidectomy patients and 15.4% of lobectomy patients were hypothyroid postoperatively according to serologic studies. There were no cases of goiter recurrence or malignancy. All patients reported postoperative symptomatic improvement and collectively showed positive pre- and postoperative score differences on the SF-36. Conclusion Although limited by a small sample size and the retrospective nature, our study demonstrates the feasibility of long-term surgical and quality-of-life outcomes research in a resource-limited setting. The low complication rates suggest minimal adverse effects of performing surgery in this context. Despite a considerable rate of postoperative hypothyroidism, it is in accordance with prior studies and emphasizes the need for individualized, longitudinal, and multidisciplinary care. Quality-of-life score improvements suggest benefit to the recipient population.

  8. Fast maximum intensity projections of large medical data sets by exploiting hierarchical memory architectures.

    PubMed

    Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen

    2006-04-01

    Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
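    The core of an MIP is a maximum reduction along the projection axis; the cache-related point above can be illustrated by folding the volume into a running maximum one slab at a time, so each chunk stays resident in fast memory while it is processed. The slab size and random test volume below are arbitrary, and the sketch is only a stand-in for the CPU/GPU optimizations investigated in the paper.

    ```python
    # Minimal sketch of a maximum intensity projection (MIP) computed slab by
    # slab, a simple stand-in for the cache-blocking idea discussed above.
    # The volume is random data, not a clinical scan.

    import numpy as np

    def mip_slabwise(volume, axis=0, slab=16):
        """Maximum intensity projection along `axis`, computed in slabs."""
        vol = np.moveaxis(volume, axis, 0)           # put projection axis first
        out = np.full(vol.shape[1:], -np.inf, dtype=np.float64)
        for start in range(0, vol.shape[0], slab):
            chunk = vol[start:start + slab]           # contiguous slab of slices
            np.maximum(out, chunk.max(axis=0), out=out)
        return out

    volume = np.random.default_rng(0).random((256, 128, 128)).astype(np.float32)
    mip = mip_slabwise(volume, axis=0, slab=32)
    assert np.allclose(mip, volume.max(axis=0))       # matches the naive reduction
    print(mip.shape)
    ```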

  9. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.

  10. Power Distribution System Planning with GIS Consideration

    NASA Astrophysics Data System (ADS)

    Wattanasophon, Sirichai; Eua-Arporn, Bundhit

    This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, routing of feeders, and sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested with an actual distribution system, and the results indicate that it can provide satisfactory plans.

  11. Hydraulic Performance of Set-Back Curb Inlets

    DOT National Transportation Integrated Search

    1998-06-01

    The objective of this study was to develop hydraulic design charts for the location and sizing of set-back curb inlets. An extensive program of hydraulic model testing was conducted to evaluate the performance of various inlet opening sizes. The grad...

  12. General proactive interference and the N450 response.

    PubMed

    Tays, William J; Dywan, Jane; Segalowitz, Sidney J

    2009-10-25

    Strategic repetition of verbal stimuli can effectively produce proactive interference (PI) effects in the Sternberg working memory task. Unique fronto-cortical activation to PI-eliciting letter probes has been interpreted as reflecting brain responses to PI. However, the use of only a small set of stimuli (e.g., letters and digits) requires constant repetition of stimuli in both PI and baseline trials, potentially creating a general PI effect in all conditions. We used event-related potentials to examine general PI effects by contrasting the interference-related frontal N450 response in two Sternberg tasks using a small versus large set size. We found that the N450 response differed significantly from baseline during the small set-size task only for response-conflict PI trials but not when PI was created solely from stimulus repetition. During the large set-size task N450 responses in both the familiarity-based and response-conflict PI conditions differed from baseline but not from each other. We conclude that the general stimulus repetition inherent in small set-size conditions can mask effects of familiarity-based PI and complicate the interpretation of any associated neural response.

  13. Generating a taxonomy of spatially cued attention for visual discrimination: Effects of judgment precision and set size on attention

    PubMed Central

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-01-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234

  14. Generating a taxonomy of spatially cued attention for visual discrimination: effects of judgment precision and set size on attention.

    PubMed

    Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin

    2014-11-01

    Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy.

  15. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    NASA Technical Reports Server (NTRS)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.

  16. Density effect on great tit (Parus major) clutch size intensifies in a polluted environment.

    PubMed

    Eeva, Tapio; Lehikoinen, Esa

    2013-12-01

    Long-term data on a great tit (Parus major) population breeding in a metal-polluted zone around a copper-nickel smelter indicate that, against expectations, the clutch size of this species is decreasing even though metal emissions in the area have decreased considerably over the past two decades. Here, we document long-term population-level changes in the clutch size of P. major and explore if changes in population density, population numbers of competing species, timing of breeding, breeding habitat, or female age distribution can explain decreasing clutch sizes. Clutch size of P. major decreased by one egg in the polluted zone during the past 21 years, while there was no significant change in clutch size in the unpolluted reference zone over this time period. Density of P. major nests was similar in both environments but increased threefold during the study period in both areas (from 0.8 to 2.4 nest/ha). In the polluted zone, clutch size has decreased as a response to a considerable increase in population density, while a corresponding density change in the unpolluted zone did not have such an effect. The other factors studied did not explain the clutch size trend. Fledgling numbers in the polluted environment have been relatively low since the beginning of the study period, and they do not show a corresponding decrease to that noted for the clutch size over the same time period. Our study shows that responses of commonly measured life-history parameters to anthropogenic pollution depend on the structure of the breeding population. Interactions between pollution and intrinsic population characters should therefore be taken into account in environmental studies.

  17. THE EFFECT OF PROJECTION ON DERIVED MASS-SIZE AND LINEWIDTH-SIZE RELATIONSHIPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shetty, Rahul; Kauffmann, Jens; Goodman, Alyssa A.

    2010-04-01

    Power-law mass-size and linewidth-size correlations, two of 'Larson's laws', are often studied to assess the dynamical state of clumps within molecular clouds. Using the result of a hydrodynamic simulation of a molecular cloud, we investigate how geometric projection may affect the derived Larson relationships. We find that large-scale structures in the column density map have similar masses and sizes to those in the three-dimensional simulation (position-position-position, PPP). Smaller scale clumps in the column density map are measured to be more massive than the PPP clumps, due to the projection of all emitting gas along lines of sight. Further, due to projection effects, structures in a synthetic spectral observation (position-position-velocity, PPV) may not necessarily correlate with physical structures in the simulation. In considering the turbulent velocities only, the linewidth-size relationship in the PPV cube is appreciably different from that measured from the simulation. Including thermal pressure in the simulated line widths imposes a minimum line width, which results in a better agreement in the slopes of the linewidth-size relationships, though there are still discrepancies in the offsets, as well as considerable scatter. Employing commonly used assumptions in a virial analysis, we find similarities in the computed virial parameters of the structures in the PPV and PPP cubes. However, due to the discrepancies in the linewidth-size and mass-size relationships in the PPP and PPV cubes, we caution that applying a virial analysis to observed clouds may be misleading due to geometric projection effects. We speculate that consideration of physical processes beyond kinetic and gravitational pressure would be required for accurately assessing whether complex clouds, such as those with highly filamentary structure, are bound.
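
    The virial analysis referred to above rests on a single standard expression, sketched below with the commonly used virial parameter alpha = 5 sigma^2 R / (G M); the clump values plugged in are illustrative and are not taken from the cited simulation.

    ```python
    # Hedged sketch: standard virial parameter used when testing Larson-type
    # linewidth-size and mass-size relations. Input values are illustrative.
    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30         # kg
    PC = 3.086e16            # m

    def virial_parameter(sigma_kms, radius_pc, mass_msun):
        sigma = sigma_kms * 1e3
        R = radius_pc * PC
        M = mass_msun * M_SUN
        return 5.0 * sigma**2 * R / (G * M)

    # The same structure "measured" with two different linewidth/mass estimates
    # (e.g. PPP vs. PPV) can yield rather different inferred dynamical states.
    print(virial_parameter(sigma_kms=1.0, radius_pc=1.0, mass_msun=500))
    print(virial_parameter(sigma_kms=1.5, radius_pc=1.0, mass_msun=800))
    ```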

  18. A category-specific advantage for numbers in verbal short-term memory: evidence from semantic dementia.

    PubMed

    Jefferies, Elizabeth; Patterson, Karalyn; Jones, Roy W; Bateman, David; Lambon Ralph, Matthew A

    2004-01-01

    This study explored possible reasons for the striking difference between digit span and word span in patients with semantic dementia. Immediate serial recall (ISR) of number and non-number words was examined in four patients. For every case, the recall of single-digit numbers was normal whereas the recall of non-number words was impaired relative to controls. This difference extended to multi-digit numbers, and remained even when frequency, imageability, word length, set size and size of semantic category were matched for the numbers and words. The advantage for number words also applied to the patients' reading performance. Previous studies have suggested that semantic memory plays a critical role in verbal short-term memory (STM) and reading: patients with semantic dementia show superior recall and reading of words that are still relatively well known compared to previously known but now semantically degraded words. Additional assessments suggested that this semantic locus was the basis of the patients' category-specific advantage for numbers. Comprehension was considerably better for number than non-number words. Number knowledge may be relatively preserved in semantic dementia because the cortical atrophy underlying the condition typically spares the areas of the parietal lobes thought to be crucial in numerical cognition but involves the inferolateral temporal-lobes known to support general conceptual knowledge.

  19. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
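
    A brief illustration of the comparison at issue, using standard SciPy routines; the distributions, sample sizes and effect size below are arbitrary assumptions rather than the settings used in the cited simulations.

    ```python
    # Hedged sketch: ANOVA vs. Kruskal-Wallis on skewed (non-normal) samples.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Three log-normal groups; the third has a shifted location parameter.
    groups = [rng.lognormal(mean=m, sigma=1.0, size=30) for m in (0.0, 0.0, 0.4)]

    f_stat, p_anova = stats.f_oneway(*groups)    # assumes normality
    h_stat, p_kw = stats.kruskal(*groups)        # rank-based, distribution-free
    print(f"ANOVA p = {p_anova:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
    ```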

  20. Radio for hidden-photon dark matter detection

    DOE PAGES

    Chaudhuri, Saptarshi; Graham, Peter W.; Irwin, Kent; ...

    2015-10-08

    We propose a resonant electromagnetic detector to search for hidden-photon dark matter over an extensive range of masses. Hidden-photon dark matter can be described as a weakly coupled "hidden electric field," oscillating at a frequency fixed by the mass, and able to penetrate any shielding. At low frequencies (compared to the inverse size of the shielding), we find that the observable effect of the hidden photon inside any shielding is a real, oscillating magnetic field. We outline experimental setups designed to search for hidden-photon dark matter, using a tunable, resonant LC circuit designed to couple to this magnetic field. Our "straw man" setups take into consideration resonator design, readout architecture and noise estimates. At high frequencies, there is an upper limit to the useful size of a single resonator set by 1/ν. However, many resonators may be multiplexed within a hidden-photon coherence length to increase the sensitivity in this regime. Hidden-photon dark matter has an enormous range of possible frequencies, but current experiments search only over a few narrow pieces of that range. As a result, we find the potential sensitivity of our proposal is many orders of magnitude beyond current limits over an extensive range of frequencies, from 100 Hz up to 700 GHz and potentially higher.

  1. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur

    PubMed Central

    Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E. J.; Shefelbine, S. J.; Hutchinson, J. R.

    2012-01-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
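
    The convergence analysis described above is essentially a refinement loop; the sketch below shows only that loop, with `run_fe_model` as a hypothetical stand-in for the actual solver call (no real FE API is implied), returning a toy response that settles as the element size shrinks.

    ```python
    # Hedged sketch of a mesh-convergence loop (not the authors' code).
    def run_fe_model(element_size_mm):
        # hypothetical stand-in for driving an FE solver and returning a
        # monitored quantity (e.g. peak von Mises stress); assumed behaviour
        return 100.0 + 1.0 * element_size_mm ** 1.5

    def mesh_convergence(sizes_mm=(8, 6, 4, 3, 2, 1.5), tol=0.02):
        previous = None
        for h in sizes_mm:
            value = run_fe_model(h)
            if previous is not None and abs(value - previous) / abs(previous) < tol:
                return h, value          # change from previous refinement < 2%
            previous = value
        return sizes_mm[-1], previous    # tolerance never met: report finest mesh

    print(mesh_convergence())
    ```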

  2. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
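
    As a rough illustration of the ABC component only (not MECCA itself), the sketch below proposes a Brownian-motion rate, simulates tip values on a star phylogeny, and keeps proposals whose summary statistic is close to the observed one; the star tree, prior, tolerance and summary statistic are all simplifying assumptions.

    ```python
    # Hedged sketch of ABC rejection sampling for a trait-evolution rate.
    import numpy as np

    rng = np.random.default_rng(0)
    tree_depth, n_tips = 10.0, 40
    # "Observed" tip values generated under a true rate of 0.5 (for the demo).
    observed = rng.normal(0.0, np.sqrt(0.5 * tree_depth), size=n_tips)
    obs_stat = observed.var()

    accepted = []
    for _ in range(20000):
        rate = rng.uniform(0.01, 2.0)                 # assumed prior on the rate
        sim = rng.normal(0.0, np.sqrt(rate * tree_depth), size=n_tips)
        if abs(sim.var() - obs_stat) < 0.5:           # rejection tolerance (assumed)
            accepted.append(rate)

    print(np.mean(accepted), np.percentile(accepted, [2.5, 97.5]))
    ```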

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Critchlow, Terence J.; Abdulla, Ghaleb; Becla, Jacek

    Data management is the organization of information to support efficient access and analysis. For data intensive computing applications, the speed at which relevant data can be accessed is a limiting factor in terms of the size and complexity of computation that can be performed. Data access speed is impacted by the size of the relevant subset of the data, the complexity of the query used to define it, and the layout of the data relative to the query. As the underlying data sets become increasingly complex, the questions asked of it become more involved as well. For example, geospatial data associated with a city is no longer limited to the map data representing its streets, but now also includes layers identifying utility lines, key points, locations and types of businesses within the city limits, tax information for each land parcel, satellite imagery, and possibly even street-level views. As a result, queries have gone from simple questions, such as "how long is Main Street?", to much more complex questions such as "taking all other factors into consideration, are the property values of houses near parks higher than those under power lines, and if so, by what percentage". Answering these questions requires a coherent infrastructure, integrating the relevant data into a format optimized for the questions being asked.

  4. Overcoming the winner's curse: estimating penetrance parameters from case-control data.

    PubMed

    Zollner, Sebastian; Pritchard, Jonathan K

    2007-04-01

    Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
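
    The ascertainment bias itself is easy to reproduce numerically; the sketch below (illustrative parameters, not the authors' correction method) shows how conditioning on a significant test inflates the average reported effect.

    ```python
    # Hedged sketch of the "winner's curse": effect estimates reported only
    # when significant overestimate the true effect, especially at low power.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    true_effect, n, alpha = 0.2, 100, 0.05
    reported = []
    for _ in range(5000):
        x = rng.normal(true_effect, 1.0, size=n)
        t, p = stats.ttest_1samp(x, 0.0)
        if p < alpha and t > 0:            # only "discovered" studies report
            reported.append(x.mean())

    print(f"true effect = {true_effect}, mean reported = {np.mean(reported):.3f}")
    ```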

  5. Educational And Public Outreach Software On Planet Detection For The Macintosh (TM)

    NASA Technical Reports Server (NTRS)

    Koch, David; Brady, Victoria; Cannara, Rachel; Witteborn, Fred C. (Technical Monitor)

    1996-01-01

    The possibility of extra-solar planets has been a very popular topic with the general public for years. Considerable media coverage of recent detections has only heightened the interest in the topic. School children are particularly interested in learning about space. Astronomers have the knowledge and responsibility to present this information in both an understandable and interesting format. Since most classrooms and homes are now equipped with computers this media can be utilized to provide more than a traditional "flat" presentation. An interactive "stack" has been developed using Hyperstudio (TM). The major topics include: "1996 - The Break Through Year In Planet Detection"; "What Determines If A Planet Is Habitable?"; "How Can We Find Other Planets (Search Methods)"; "All About the Kepler Mission: How To Find Earth-Sized Planets"; and "A Mission Simulator". Using the simulator, the student records simulated observations and then analyzes and interprets the data within the program stacks to determine the orbit and planet size, the planet's temperature and surface gravity, and finally determines if the planet is habitable. Additional related sections are also included. Many of the figures are animated to assist in comprehension of the material. A set of a dozen lesson plans for the middle school has also been drafted.

  6. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides the natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system on the GPU-accelerated render cluster can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  7. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    PubMed

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied on the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. Implications of simultaneous monitoring on trial decision making are yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
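
    A compact sketch of the kind of simulation described: two correlated outcomes monitored independently at an interim and a final look, using the usual approximate O'Brien-Fleming critical values for two equally spaced analyses (two-sided alpha = 0.05). The correlation, effect sizes and sample sizes are arbitrary assumptions, not the cited study's settings.

    ```python
    # Hedged sketch: simultaneous sequential monitoring of two correlated outcomes.
    import numpy as np

    rng = np.random.default_rng(7)
    bounds = [2.80, 1.98]                      # approx. O'Brien-Fleming, 2 looks
    rho, d_eff, d_saf, n_per_look = 0.5, 0.3, 0.3, 100
    cov = [[1.0, rho], [rho, 1.0]]

    def z_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))

    stopped_for = {"efficacy": 0, "safety": 0, "none": 0}
    for _ in range(2000):
        trt = rng.multivariate_normal([d_eff, d_saf], cov, size=2 * n_per_look)
        ctl = rng.multivariate_normal([0.0, 0.0], cov, size=2 * n_per_look)
        outcome = "none"
        for look, z_crit in enumerate(bounds, start=1):
            m = look * n_per_look
            if abs(z_stat(trt[:m, 0], ctl[:m, 0])) > z_crit:
                outcome = "efficacy"; break      # stops the trial, masking safety
            if abs(z_stat(trt[:m, 1], ctl[:m, 1])) > z_crit:
                outcome = "safety"; break
        stopped_for[outcome] += 1
    print(stopped_for)
    ```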

  8. Analyzing coastal environments by means of functional data analysis

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.

    2017-07-01

    Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grainsize of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after adapting a log-normal distribution to each PSD and resuming each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered-log-ratio (clr) to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, as a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
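
    As a rough sketch of the dimension-reduction step, the code below applies plain PCA to particle-size distribution curves sampled on a common grid, standing in for full functional PCA with basis smoothing; the synthetic log-normal curves and grid are assumptions for illustration only.

    ```python
    # Hedged sketch: PCA on discretised grain-size frequency curves as a proxy
    # for functional PCA (FPCA) on particle-size distributions.
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.linspace(0, 4, 200)                   # illustrative size axis

    def psd(mu, sigma):                             # synthetic frequency curve
        c = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
        return c / c.sum()

    curves = np.array([psd(rng.normal(2.0, 0.3), rng.uniform(0.2, 0.5))
                       for _ in range(60)])
    centered = curves - curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ Vt[:2].T                    # first two PC scores per sample
    explained = s[:2] ** 2 / (s ** 2).sum()
    print(explained, scores.shape)
    ```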

  9. Is High Resolution Melting Analysis (HRMA) Accurate for Detection of Human Disease-Associated Mutations? A Meta Analysis

    PubMed Central

    Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao

    2011-01-01

    Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we have conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I2 = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95%CI: 97.7–99.3; I2 = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95%CI: 98.1–99.8; I2 = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the techniques was influenced by sample size and instrument type but by not sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806

  10. Generalized index for spatial data sets as a measure of complete spatial randomness

    NASA Astrophysics Data System (ADS)

    Hackett-Jones, Emily J.; Davies, Kale J.; Binder, Benjamin J.; Landman, Kerry A.

    2012-06-01

    Spatial data sets, generated from a wide range of physical systems can be analyzed by counting the number of objects in a set of bins. Previous work has been limited to equal-sized bins, which are inappropriate for some domains (e.g., circular). We consider a nonequal size bin configuration whereby overlapping or nonoverlapping bins cover the domain. A generalized index, defined in terms of a variance between bin counts, is developed to indicate whether or not a spatial data set, generated from exclusion or nonexclusion processes, is at the complete spatial randomness (CSR) state. Limiting values of the index are determined. Using examples, we investigate trends in the generalized index as a function of density and compare the results with those using equal size bins. The smallest bin size must be much larger than the mean size of the objects. We can determine whether a spatial data set is at the CSR state or not by comparing the values of a generalized index for different bin configurations—the values will be approximately the same if the data is at the CSR state, while the values will differ if the data set is not at the CSR state. In general, the generalized index is lower than the limiting value of the index, since objects do not have access to the entire region due to blocking by other objects. These methods are applied to two applications: (i) spatial data sets generated from a cellular automata model of cell aggregation in the enteric nervous system and (ii) a known plant data distribution.
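
    The paper's exact index is not reproduced here; the sketch below computes a generic variance-based departure-from-CSR measure over unequal bins, which conveys the same idea under the Poisson approximation (counts standardised by their expected values should have variance near 1 at CSR).

    ```python
    # Hedged sketch: a variance-based CSR index computed from unequal bins.
    import numpy as np

    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 1, size=(500, 2))          # CSR point pattern, unit square

    # Unequal bins: four vertical strips of different widths covering the domain.
    edges = np.array([0.0, 0.1, 0.3, 0.6, 1.0])
    areas = np.diff(edges)                          # strip areas (unit height)
    counts = np.histogram(pts[:, 0], bins=edges)[0]
    expected = len(pts) * areas

    index = np.var((counts - expected) / np.sqrt(expected), ddof=1)
    print(counts, index)      # values near 1 are consistent with CSR
    ```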

  11. Fish movement and habitat use depends on water body size and shape

    USGS Publications Warehouse

    Woolnough, D.A.; Downing, J.A.; Newton, T.J.

    2009-01-01

    Home ranges are central to understanding habitat diversity, effects of fragmentation and conservation. The distance that an organism moves yields information on life history, genetics and interactions with other organisms. Present theory suggests that home range is set by body size of individuals. Here, we analyse estimates of home ranges in lakes and rivers to show that body size of fish and water body size and shape influence home range size. Using 71 studies including 66 fish species on five continents, we show that home range estimates increased with increasing water body size across water body shapes. This contrasts with past studies concluding that body size sets home range. We show that water body size was a consistently significant predictor of home range. In conjunction, body size and water body size can provide improved estimates of home range than just body size alone. As habitat patches are decreasing in size worldwide, our findings have implications for ecology, conservation and genetics of populations in fragmented ecosystems. ?? 2008 Blackwell Munksgaard.

  12. Introduction to SIMRAND: Simulation of research and development project

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1982-01-01

    SIMRAND: SIMulation of Research ANd Development Projects is a methodology developed to aid the engineering and management decision process in the selection of the optimal set of systems or tasks to be funded on a research and development project. A project may have a set of systems or tasks under consideration for which the total cost exceeds the allocated budget. Other factors such as personnel and facilities may also enter as constraints. Thus the project's management must select, from among the complete set of systems or tasks under consideration, a partial set that satisfies all project constraints. The SIMRAND methodology uses analytical techniques and probability theory, decision analysis of management science, and computer simulation, in the selection of this optimal partial set. The SIMRAND methodology is truly a management tool. It initially specifies the information that must be generated by the engineers, thus providing information for the management direction of the engineers, and it ranks the alternatives according to the preferences of the decision makers.

  13. Determination of hydrogen abundance in selected lunar soils

    NASA Technical Reports Server (NTRS)

    Bustin, Roberta

    1987-01-01

    Hydrogen was implanted in lunar soil through solar wind activity. In order to determine the feasibility of utilizing this solar wind hydrogen, it is necessary to know not only hydrogen abundances in bulk soils from a variety of locations but also the distribution of hydrogen within a given soil. Hydrogen distribution in bulk soils, grain size separates, mineral types, and core samples was investigated. Hydrogen was found in all samples studied. The amount varied considerably, depending on soil maturity, mineral types present, grain size distribution, and depth. Hydrogen implantation is definitely a surface phenomenon. However, as constructional particles are formed, previously exposed surfaces become embedded within particles, causing an enrichment of hydrogen in these species. In view of possibly extracting the hydrogen for use on the lunar surface, it is encouraging to know that hydrogen is present to a considerable depth and not only in the upper few millimeters. Based on these preliminary studies, extraction of solar wind hydrogen from lunar soil appears feasible, particularly if some kind of grain size separation is possible.

  14. Grays Harbor and Chehalis River Improvements to Navigation Environmental Studies. Grays Harbor Ocean Disposal Study. Literature Review and Preliminary Benthic Sampling,

    DTIC Science & Technology

    1980-05-01

    Sampling was carried out along transects extending approximately 16 kilometers from the mouth of Grays Harbor. Sub-samples were taken for grain size analysis and wood content. The samples were then washed on a 1.0 mm screen to separate benthic organisms from non-living materials. Consideration of the grain size analysis ... Contents include: Nutrients; Field Study; Methods; Grain Size Analysis; Wood Analysis; Wood Fragments; Sediment Types; Discussion; Biological.

  15. Preparing the Production of a New Product in Small and Medium-Sized Enterprises by Using the Method of Projects Management

    NASA Astrophysics Data System (ADS)

    Bijańska, Jolanta; Wodarski, Krzysztof; Wójcik, Janusz

    2016-06-01

    Efficient and effective preparation of the production of new products is an important requirement for the functioning and development of small and medium-sized enterprises. Project management is one of the methods that support the fulfilment of this requirement. This publication presents the results of considerations aimed at developing a project management model for preparing the production of a new product, adapted to the specificity of small and medium-sized enterprises.

  16. Primary lithium batteries, some consumer considerations

    NASA Technical Reports Server (NTRS)

    Bro, P.

    1983-01-01

    In order to determine whether larger size lithium batteries would be commercially marketable, the performance of several D size lithium batteries was compared with that of an equivalent alkaline manganese battery, and the relative costs of the different systems were compared. It is concluded that opportunities exist in the consumer market for the larger sizes of the low rate and moderate rate lithium batteries, and that the high rate lithium batteries need further improvements before they can be recommended for consumer applications.

  17. Influence of multidroplet size distribution on icing collection efficiency

    NASA Technical Reports Server (NTRS)

    Chang, H.-P.; Kimble, K. R.; Frost, W.; Shaw, R. J.

    1983-01-01

    Calculation of collection efficiencies of two-dimensional airfoils for a monodispersed droplet icing cloud and a multidispersed droplet is carried out. Comparison is made with the experimental results reported in the NACA Technical Note series. The results of the study show considerably improved agreement with experiment when multidroplet size distributions are employed. The study then investigates the effect of collection efficiency on airborne particle droplet size sampling instruments. The biased effect introduced due to sampling from different collection volumes is predicted.

  18. Electrical Sensing Zone Particle Analyzer for Measuring Germination of Fungal Spores in the Presence of Other Particles1

    PubMed Central

    Santoro, T.; Stotzky, G.; Rem, L. T.

    1967-01-01

    Microscopic, respirometric, and electronic sizing methods for measuring germination of fungal spores were compared. With the electronic sizing method, early stages of germination (i.e., spore swelling) were detected long before germ tube emergence or significant changes in respiratory rates were observed. This method, which is rapid, easy, sensitive, and reproducible, also permits measuring the germination of spores when similar-size particles are present in concentrations considerably in excess of the number of spores. PMID:6069161

  19. 13 CFR 121.412 - What are the size procedures for partial small business set-asides?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Requirements for Government Procurement § 121.412 What are the size procedures for partial small business set... portion of a procurement, and is not required to qualify as a small business for the unrestricted portion. ...

  20. Rational Risk-Benefit Decision-Making in the Setting of Military Mefloquine Policy.

    PubMed

    Nevin, Remington L

    2015-01-01

    Mefloquine is an antimalarial drug that has been commonly used in military settings since its development by the US military in the late 1980s. Owing to the drug's neuropsychiatric contraindications and its high rate of inducing neuropsychiatric symptoms, which are contraindications to the drug's continued use, the routine prescribing of mefloquine in military settings may be problematic. Due to these considerations and to recent concerns of chronic and potentially permanent psychiatric and neurological sequelae arising from drug toxicity, military prescribing of mefloquine has recently decreased. In settings where mefloquine remains available, policies governing prescribing should reflect risk-benefit decision-making informed by the drug's perceived benefits and by consideration both of the risks identified in the drug's labeling and of specific military risks associated with its use. In this review, these risks are identified and recommendations are made for the rational prescribing of the drug in light of current evidence.

  1. Molecular Transporters for Desalination Applications

    DTIC Science & Technology

    2014-08-02

    Among the approaches examined was zeolite-template-based synthesis, with the ... size setting CNT diameter. The tightest distribution of SWCNTs reported (Lu group, Duke Univ.) was achieved by loading catalyst into zeolite, with the pore size nominally acting to set the size of the catalyst on the surface. However, nanoparticles and CNTs grow on the surface of the zeolite, thus ...

  2. Recognition-induced forgetting is not due to category-based set size.

    PubMed

    Maxcey, Ashleigh M

    2016-01-01

    What are the consequences of accessing a visual long-term memory representation? Previous work has shown that accessing a long-term memory representation via retrieval improves memory for the targeted item and hurts memory for related items, a phenomenon called retrieval-induced forgetting. Recently we found a similar forgetting phenomenon with recognition of visual objects. Recognition-induced forgetting occurs when practice recognizing an object during a two-alternative forced-choice task, from a group of objects learned at the same time, leads to worse memory for objects from that group that were not practiced. An alternative explanation of this effect is that category-based set size is inducing forgetting, not recognition practice as claimed by some researchers. This alternative explanation is possible because during recognition practice subjects make old-new judgments in a two-alternative forced-choice task, and are thus exposed to more objects from practiced categories, potentially inducing forgetting due to set-size. Herein I pitted the category-based set size hypothesis against the recognition-induced forgetting hypothesis. To this end, I parametrically manipulated the amount of practice objects received in the recognition-induced forgetting paradigm. If forgetting is due to category-based set size, then the magnitude of forgetting of related objects will increase as the number of practice trials increases. If forgetting is recognition induced, the set size of exemplars from any given category should not be predictive of memory for practiced objects. Consistent with this latter hypothesis, additional practice systematically improved memory for practiced objects, but did not systematically affect forgetting of related objects. These results firmly establish that recognition practice induces forgetting of related memories. Future directions and important real-world applications of using recognition to access our visual memories of previously encountered objects are discussed.

  3. Parietal blood oxygenation level-dependent response evoked by covert visual search reflects set-size effect in monkeys.

    PubMed

    Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P

    2014-03-01

    Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify if additional other cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant as was expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. 75 FR 1418 - Implementation of Open Government Directive

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-11

    ... three high-value data sets by January 22, 2010, and an Open Government Plan by April 7, 2010. While the... and publish high-value data sets and draft an Open Government Plan, and the NRC is now inviting public...-value data sets as soon as possible to assure consideration for purposes of the Open Government...

  5. In-situ High Temperature Phase Transformations in Ceramics

    DTIC Science & Technology

    2009-07-28

    ... (scanning electron microscopy - SEM and transmission electron microscopy - TEM) have identified important microstructural considerations, such as the critical ... particularly with judicious design of the critical particle size and microstructure [12, 47, 48]. Likewise, preliminary work indicates the possibility of high ... toughening of fiber-reinforced, fibrous monolithic or laminated ceramic matrix composites [49, 50]. Enstatite was above a 7 μm critical grain size.

  6. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  7. Limits of optical transmission measurements with application to particle sizing techniques.

    PubMed

    Swanson, N L; Billard, B D; Gennaro, T L

    1999-09-20

    Considerable confusion exists regarding the applicability limits of the Bouguer-Lambert-Beer law of optical transmission. We review the derivation of the law and discuss its application to the optical thickness of the light-scattering medium. We demonstrate the range of applicability by presenting a method for determining particle size by measuring optical transmission at two wavelengths.
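
    For reference, the single-scattering form of the law under discussion, in textbook notation (the symbols below are standard conventions, not necessarily those of the cited paper):

    ```latex
    % Bouguer-Lambert-Beer law, single-scattering regime (textbook notation):
    \[
      T(\lambda) \;=\; \frac{I(\lambda)}{I_0(\lambda)} \;=\; e^{-\tau(\lambda)},
      \qquad
      \tau(\lambda) \;=\; N\, C_{\mathrm{ext}}(\lambda)\, L ,
    \]
    % where N is the particle number density, C_ext the extinction cross-section
    % and L the path length. Transmission measured at two wavelengths yields
    % tau(lambda_1)/tau(lambda_2) = C_ext(lambda_1)/C_ext(lambda_2), a ratio that
    % carries the particle-size information and can be inverted (e.g. via Mie
    % theory), provided the optical thickness stays where the law remains valid.
    ```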

  8. Dissection of genetic factors underlying wheat kernel shape and size in an elite x nonadapted cross using a high density SNP linkage map

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel shape and size has been under selection since early domestication. Kernel morphology is a major consideration in wheat breeding, as it impacts grain yield and quality. A population of 160 recombinant inbred lines (RIL), developed using an elite (ND 705) and a nonadapted genotype (PI 414...

  9. High Energy Colliders

    NASA Astrophysics Data System (ADS)

    Palmer, R. B.; Gallardo, J. C.

    Contents: Introduction; Physics Considerations (General; Required Luminosity for Lepton Colliders; The Effective Physics Energies of Hadron Colliders); Hadron-Hadron Machines (Luminosity; Size and Cost); Circular e+e- Machines (Luminosity; Size and Cost); e+e- Linear Colliders (Luminosity; Conventional RF; Superconducting RF; At Higher Energies); γ-γ Colliders; μ+μ- Colliders (Advantages and Disadvantages; Design Studies; Status and Required R and D); Comparison of Machines; Conclusions; Discussion.

  10. Genetic variation in tree structure and its relation to size in Douglas-fir: II. crown form, branch characters, and foliage characters.

    Treesearch

    J.B. St. Clair

    1994-01-01

    Genetic variation and covariation among traits of tree size and structure were assessed in an 18-year-old Douglas-fir (Pseudotsuga menziesii var. menziesii (Mirb.) Franco) genetic test in the Coast Range of Oregon. Considerable genetic variation was found for relative crown width; stem increment per crown projection area; leaf...

  11. Modulation aware cluster size optimisation in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Sriram Naik, M.; Kumar, Vinay

    2017-07-01

    Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we focus on energy minimisation through cluster size optimisation, taking the effect of modulation into account when the nodes are unable to communicate using a baseband technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it improves energy efficiency, network scalability, network lifetime and latency. We propose an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (e.g., BPSK, QPSK, 16-QAM, 64-QAM), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are deployed randomly and uniformly. It is also observed that placing the base station at the centre of the field allows only a small number of modulation schemes to operate in an energy-efficient manner, whereas placing it at the corner of the sensing field allows a larger number of modulation schemes to do so.
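
    A sketch of the cluster-size sweep idea under the standard first-order radio model; the energy constants, field geometry and mean-square member-to-head distance approximation are common textbook assumptions, and the modulation-dependent terms of the cited paper are not modelled here.

    ```python
    # Hedged sketch: sweep the number of clusters and pick the one minimising
    # per-round energy under a first-order radio model (illustrative constants).
    import numpy as np

    E_ELEC, EPS_FS = 50e-9, 10e-12          # J/bit and J/bit/m^2 (assumed)
    N_NODES, SIDE, BITS, D_BS = 200, 100.0, 4000, 75.0

    def round_energy(k_clusters):
        n_per = N_NODES / k_clusters
        d_to_head2 = SIDE**2 / (2 * np.pi * k_clusters)   # mean-square member-to-head distance
        member = (n_per - 1) * BITS * (E_ELEC + EPS_FS * d_to_head2)
        head = BITS * (n_per * E_ELEC + E_ELEC + EPS_FS * D_BS**2)
        return k_clusters * (member + head)

    ks = np.arange(1, 41)
    energies = [round_energy(k) for k in ks]
    print("best cluster count:", ks[int(np.argmin(energies))])
    ```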

  12. LABORATORY DESIGN CONSIDERATIONS FOR SAFETY.

    ERIC Educational Resources Information Center

    National Safety Council, Chicago, IL. Campus Safety Association.

    This set of considerations has been prepared to provide persons working on the design of new or remodeled laboratory facilities with a suitable reference guide to design safety. There is no distinction between types of laboratory, and the emphasis is on giving guides and alternatives rather than detailed specifications. Areas covered include--(1)…

  13. The Social Meaning of Leisure in Uganda and America.

    ERIC Educational Resources Information Center

    Crandall, Rich; Thompson, Richard W.

    1978-01-01

    This paper analyzes cross-culturally the importance of social contact for leisure. The general findings of considerable similarity in evaluating preferences and the importance of social considerations provide a basis for preliminary comparisons and suggest that similar factors can affect leisure preferences in different cultural settings.…

  14. Seven Measures of the Ways That Deciders Frame Their Career Decisions.

    ERIC Educational Resources Information Center

    Cochran, Larry

    1983-01-01

    Illustrates seven different measures of the ways people structure a career decision. Given sets of occupational alternatives and considerations, the career grid is a decisional balance sheet that indicates the way each occupation is judged on each consideration. It can be used to correct faulty decision schemes. (JAC)

  15. Safety in the Chemical Laboratory: Handling of Oxygen in Research Experiments.

    ERIC Educational Resources Information Center

    Burnett, R. J.; Cole, J. E., Jr.

    1985-01-01

    Examines some of the considerations involved in setting up a typical oxygen/organic reaction. These considerations (including protection for personnel/equipment, adequate ventilation, reactor design, maximum reactor charge, operating procedures, and others) influence how the reaction is to be conducted and what compromises the scientist must…

  16. NASA Super Pressure Balloon

    NASA Technical Reports Server (NTRS)

    Fairbrother, Debbie

    2017-01-01

    NASA is in the process of qualifying the mid-size Super Pressure Balloon (SPB) to provide constant density altitude flight for science investigations at polar and mid-latitudes. The status of the development of the 18.8 million cubic foot SPB capable of carrying one-tonne of science to 110,000 feet will be given. In addition, operating considerations such as launch sites, flight safety considerations, and recovery will be discussed.

  17. NASA Super Pressure Balloon

    NASA Technical Reports Server (NTRS)

    Fairbrother, Debbie

    2016-01-01

    NASA is in the process of qualifying the mid-size Super Pressure Balloon (SPB) to provide constant density altitude flight for science investigations at polar and mid-latitudes. The status of the development of the 18.8 million cubic foot SPB capable of carrying one-tonne of science to 110,000 feet will be given. In addition, operating considerations such as launch sites, flight safety considerations, and recovery will be discussed.

  18. Conducting Field Research in a Primary School Setting: Methodological Considerations for Maximizing Response Rates, Data Quality and Quantity

    ERIC Educational Resources Information Center

    Trapp, Georgina; Giles-Corti, Billie; Martin, Karen; Timperio, Anna; Villanueva, Karen

    2012-01-01

    Background: Schools are an ideal setting in which to involve children in research. Yet for investigators wishing to work in these settings, there are few method papers providing insights into working efficiently in this setting. Objective: The aim of this paper is to describe the five strategies used to increase response rates, data quality and…

  19. Limited capacity for contour curvature in iconic memory.

    PubMed

    Sakai, Koji

    2006-06-01

    We measured the difference threshold for contour curvature in iconic memory by using the cued discrimination method. The study stimulus consisting of 2 to 6 curved contours was briefly presented in the fovea, followed by two lines as cues. Subjects discriminated the curvature of two cued curves. The cue delays were 0 msec. and 300 msec. in Exps. 1 and 2, respectively, and 50 msec. before the study offset in Exp. 3. Analysis of data from Exps. 1 and 2 showed that the Weber fraction rose monotonically with the increase in set size. Clear set-size effects indicate that iconic memory has a limited capacity. Moreover, clear set-size effect in Exp. 3 indicates that perception itself has a limited capacity. Larger set-size effects in Exp. 1 than in Exp. 3 suggest that iconic memory after perceptual process has limited capacity. These properties of iconic memory at threshold level are contradictory to the traditional view that iconic memory has a high capacity both at suprathreshold and categorical levels.

  20. Conspicuity of renal calculi at unenhanced CT: effects of calculus composition and size and CT technique.

    PubMed

    Tublin, Mitchell E; Murphy, Michael E; Delong, David M; Tessler, Franklin N; Kliewer, Mark A

    2002-10-01

    To determine the effects of calculus size, composition, and technique (kilovolt and milliampere settings) on the conspicuity of renal calculi at unenhanced helical computed tomography (CT). The authors performed unenhanced CT of a phantom containing 188 renal calculi of varying size and chemical composition (brushite, cystine, struvite, weddellite, whewellite, and uric acid) at 24 combinations of four kilovolt (80-140 kV) and six milliampere (200-300 mA) levels. Two radiologists, who were unaware of the location and number of calculi, reviewed the CT images and recorded where stones were detected. These observations were compared with the known positions of calculi to generate true-positive and false-positive rates. Logistic regression analysis was performed to investigate the effects of stone size, composition, and technique and to generate probability estimates of detection. Interobserver agreement was estimated with kappa statistics. Interobserver agreement was high: the mean kappa value for the two observers was 0.86. The conspicuity of stone fragments increased with increasing kilovolt and milliampere levels for all stone types. At the highest settings (140 kV and 300 mA), the detection threshold size (ie, the size of calculus that had a 50% probability of being detected) ranged from 0.81 mm ± 0.03 (weddellite) to 1.3 mm ± 0.1 (uric acid). Detection threshold size for each type of calculus increased up to 1.17-fold at lower kilovolt settings and up to 1.08-fold at lower milliampere settings. The conspicuity of small renal calculi at CT increases with higher kilovolt and milliampere settings, with higher kilovolts being particularly important. Small uric acid calculi may be imperceptible, even with maximal CT technique.

  1. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs, (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For n=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in the clinical practice, a statistically efficient observer model, that can predict performance from few samples, is needed. Our results identified two observer models that may be suited for this task.
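
    A minimal sketch of a channelized Hotelling observer of the kind evaluated above, assuming simple Gaussian radial channels in place of Gabor or Laguerre-Gauss channels and synthetic white-noise ROIs; the 2AFC proportion correct is estimated from the empirical separation of the two decision-variable distributions.

    ```python
    # Hedged sketch of a channelized Hotelling observer (CHO) on synthetic ROIs.
    import numpy as np

    def gaussian_channels(size, widths=(2, 4, 8, 16)):
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        r2 = x**2 + y**2
        U = np.stack([np.exp(-r2 / (2 * w**2)).ravel() for w in widths], axis=1)
        return U / np.linalg.norm(U, axis=0)          # unit-norm channel vectors

    def cho_pc(signal_imgs, absent_imgs, U):
        vs = signal_imgs.reshape(len(signal_imgs), -1) @ U    # channel outputs
        vn = absent_imgs.reshape(len(absent_imgs), -1) @ U
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))       # Hotelling template
        t_s, t_n = vs @ w, vn @ w
        # 2AFC proportion correct = probability a signal trial outscores a noise trial
        return np.mean(t_s[:, None] > t_n[None, :])

    rng = np.random.default_rng(0)
    size = 32
    signal = 2.0 * gaussian_channels(size, widths=(3,))[:, 0].reshape(size, size)
    absent = rng.normal(0.0, 1.0, (100, size, size))
    present = rng.normal(0.0, 1.0, (100, size, size)) + signal
    print(cho_pc(present, absent, gaussian_channels(size)))
    ```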

  2. Trends in access of plant biodiversity data revealed by Google Analytics

    PubMed Central

    Baxter, David G.; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E.

    2014-01-01

    The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria. They were studied at both one year and total years. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominates with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, 5) and web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides a clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment regardless of size, with metrics about usability, design, and possible directions for future development. PMID:25425933

  3. Towards the hand-held mass spectrometer: design considerations, simulation, and fabrication of micrometer-scaled cylindrical ion traps

    NASA Astrophysics Data System (ADS)

    Blain, Matthew G.; Riter, Leah S.; Cruz, Dolores; Austin, Daniel E.; Wu, Guangxiang; Plass, Wolfgang R.; Cooks, R. Graham

    2004-08-01

    Breakthrough improvements in simplicity and reductions in the size of mass spectrometers are needed for high-consequence fieldable applications, including error-free detection of chemical/biological warfare agents, medical diagnoses, and explosives and contraband discovery. These improvements are most likely to be realized with the reconceptualization of the mass spectrometer, rather than by incremental steps towards miniaturization. Microfabricated arrays of mass analyzers represent such a conceptual advance. A massively parallel array of micrometer-scaled mass analyzers on a chip has the potential to set the performance standard for hand-held sensors due to the inherent selectivity, sensitivity, and universal applicability of mass spectrometry as an analytical method. While the effort to develop a complete micro-MS system must include innovations in ultra-small-scale sample introduction, ion sources, mass analyzers, detectors, and vacuum and power subsystems, the first step towards radical miniaturization lies in the design, fabrication, and characterization of the mass analyzer itself. In this paper we discuss design considerations and results from simulations of ion trapping behavior for a micrometer-scale cylindrical ion trap (CIT) mass analyzer (internal radius r0 = 1 μm). We also present a description of the design and microfabrication of a 0.25 cm² array of 10^6 one-micrometer CITs, including integrated ion detectors, constructed in tungsten on a silicon substrate.

  4. Fractal Characteristics of the Pore Network in Diatomites Using Mercury Porosimetry and Image Analysis

    NASA Astrophysics Data System (ADS)

    Stańczak, Grażyna; Rembiś, Marek; Figarska-Warchoł, Beata; Toboła, Tomasz

    The complex pore space considerably affects the unique properties of diatomite and its significant potential for many industrial applications. The pore network in the diatomite from the Lower Miocene strata of the Skole nappe (the Jawornik deposit, SE Poland) has been investigated using a fractal approach. The fractal dimension of the pore-space volume was calculated using the Menger sponge as a model of a porous body and the mercury porosimetry data in a pore-throat diameter range between 10,000 and 10 nm. Based on digital analyses of two-dimensional images from thin sections taken under a scanning electron microscope in backscattered electron mode at different magnifications, the authors sought to quantify the pore spaces of the diatomites using the box counting method. The results derived from the analyses of the pore-throat diameter distribution using mercury porosimetry reveal that the pore space of the diatomite has a bifractal structure in two separate ranges of pore-throat diameters considerably smaller than the pore-throat sizes corresponding to threshold pressures. Assuming that the fractal dimensions identified for the ranges of the smaller pore-throat diameters characterize the overall pore-throat network in the Jawornik diatomite, we can set apart the distribution of the pore-throat volume (necks) and the pore volume from the distribution of the pore-space volume (pores and necks together).
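
    The box-counting step described above can be sketched in a few lines; a hedged example (Python/NumPy, with a random binary array standing in for a thresholded backscattered-electron image) is:

      import numpy as np

      def box_counting_dimension(pore_map, box_sizes=(2, 4, 8, 16, 32)):
          """Estimate the fractal dimension of a 2-D binary pore map by box counting."""
          counts = []
          for s in box_sizes:
              n = 0
              for i in range(0, pore_map.shape[0], s):
                  for j in range(0, pore_map.shape[1], s):
                      if pore_map[i:i + s, j:j + s].any():   # box contains at least one pore pixel
                          n += 1
              counts.append(n)
          # the dimension is the negative slope of log(count) against log(box size)
          slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
          return -slope

      rng = np.random.default_rng(1)
      pore_map = rng.random((256, 256)) < 0.3              # hypothetical thresholded image
      print(f"box-counting dimension ~ {box_counting_dimension(pore_map):.2f}")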

  5. Trends in access of plant biodiversity data revealed by Google Analytics.

    PubMed

    Jones, Timothy Mark; Baxter, David G; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E

    2014-01-01

    The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria. They were studied both over a single year and over all available years. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominate with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, and 5) web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site, making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment regardless of size, with metrics about usability, design, and possible directions for future development.

  6. Quantifying ADHD classroom inattentiveness, its moderators, and variability: a meta-analytic review.

    PubMed

    Kofler, Michael J; Rapport, Mark D; Alderson, R Matt

    2008-01-01

    Most classroom observation studies have documented significant deficiencies in the classroom attention of children with attention-deficit/hyperactivity disorder (ADHD) compared to their typically developing peers. The magnitude of these differences, however, varies considerably and may be influenced by contextual, sampling, diagnostic, and observational differences. A meta-analysis of 23 between-group classroom observation studies was conducted using weighted regression, publication bias, goodness-of-fit, best case, and original metric analyses. Across studies, a large effect size (ES = .73) was found prior to consideration of potential moderators. Weighted regression, best case, and original metric estimation indicate that this effect may be an underestimation of the classroom visual attention deficits of children with ADHD. Several methodological factors (classroom environment, sample characteristics, diagnostic procedures, and observational coding schema) differentially affect observed rates of classroom attentive behavior for children with ADHD and typically developing children. After accounting for these factors, children with ADHD were on-task approximately 75% of the time compared to 88% for their classroom peers (ES = 1.40). Children with ADHD were also more variable in their attentive behavior across studies. The present study confirmed that children with ADHD exhibit deficient and more variable visual attending to required stimuli in classroom settings and provided an aggregate estimation of the magnitude of these deficits at the group level. It also demonstrated the impact of situational, sampling, diagnostic, and observational variables on observed rates of on-task behavior.

  7. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
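
    The efficiency measure used throughout these experiments is simply the slope of the RT × Set Size function; a sketch of that calculation (Python/NumPy, with invented reaction times, not the study's data) is:

      import numpy as np

      # Hypothetical mean reaction times (ms) at each surrogate set size (labeled regions)
      set_sizes = np.array([10, 20, 30, 40])
      rt_ms = np.array([620, 680, 730, 790])

      # Search efficiency: slope of the RT x set-size function, in ms per item
      slope, intercept = np.polyfit(set_sizes, rt_ms, 1)
      print(f"search slope ~ {slope:.1f} ms/item, intercept ~ {intercept:.0f} ms")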

  8. Maintenance of Velocity and Power With Cluster Sets During High-Volume Back Squats.

    PubMed

    Tufano, James J; Conlon, Jenny A; Nimphius, Sophia; Brown, Lee E; Seitz, Laurent B; Williamson, Bryce D; Haff, G Gregory

    2016-10-01

    To compare the effects of a traditional set structure and 2 cluster set structures on force, velocity, and power during back squats in strength-trained men. Twelve men (25.8 ± 5.1 y, 1.74 ± 0.07 m, 79.3 ± 8.2 kg) performed 3 sets of 12 repetitions at 60% of 1-repetition maximum using 3 different set structures: traditional sets (TS), cluster sets of 4 (CS4), and cluster sets of 2 (CS2). When averaged across all repetitions, peak velocity (PV), mean velocity (MV), peak power (PP), and mean power (MP) were greater in CS2 and CS4 than in TS (P < .01), with CS2 also resulting in greater values than CS4 (P < .02). When examining individual sets within each set structure, PV, MV, PP, and MP decreased during the course of TS (effect sizes 0.28-0.99), whereas no decreases were noted during CS2 (effect sizes 0.00-0.13) or CS4 (effect sizes 0.00-0.29). These results demonstrate that CS structures maintain velocity and power, whereas TS structures do not. Furthermore, increasing the frequency of intraset rest intervals in CS structures maximizes this effect and should be used if maximal velocity is to be maintained during training.

  9. Rapid Increase in Genome Size as a Consequence of Transposable Element Hyperactivity in Wood-White (Leptidea) Butterflies

    PubMed Central

    Talla, Venkat; Suh, Alexander; Kalsoom, Faheema; Dincă, Vlad; Vila, Roger; Friberg, Magne; Wiklund, Christer

    2017-01-01

    Abstract Characterizing and quantifying genome size variation among organisms and understanding if genome size evolves as a consequence of adaptive or stochastic processes have been long-standing goals in evolutionary biology. Here, we investigate genome size variation and association with transposable elements (TEs) across lepidopteran lineages using a novel genome assembly of the common wood-white (Leptidea sinapis) and population re-sequencing data from both L. sinapis and the closely related L. reali and L. juvernica together with 12 previously available lepidopteran genome assemblies. A phylogenetic analysis confirms established relationships among species, but identifies previously unknown intraspecific structure within Leptidea lineages. The genome assembly of L. sinapis is one of the largest of any lepidopteran taxon so far (643 Mb) and genome size is correlated with abundance of TEs, both in Lepidoptera in general and within Leptidea where L. juvernica from Kazakhstan has considerably larger genome size than any other Leptidea population. Specific TE subclasses have been active in different Lepidoptera lineages with a pronounced expansion of predominantly LINEs, DNA elements, and unclassified TEs in the Leptidea lineage after the split from other Pieridae. The rate of genome expansion in Leptidea in general has been in the range of four Mb/Million year (My), with an increase in a particular L. juvernica population to 72 Mb/My. The considerable differences in accumulation rates of specific TE classes in different lineages indicate that TE activity plays a major role in genome size evolution in butterflies and moths. PMID:28981642

  10. Hypothermia after cardiac arrest: expanding the therapeutic scope.

    PubMed

    Bernard, Stephen

    2009-07-01

    Therapeutic hypothermia for 12 to 24 hrs following resuscitation from out-of-hospital cardiac arrest is now recommended by the American Heart Association for the treatment of neurological injury when the initial cardiac rhythm is ventricular fibrillation. However, the role of therapeutic hypothermia is uncertain when the initial cardiac rhythm is asystole or pulseless electrical activity, or when the cardiac arrest is primarily due to a noncardiac cause, such as asphyxia or drug overdose. Given that survival rate in these latter conditions is very low, it is unlikely that clinical trials will be undertaken to test the efficacy of therapeutic hypothermia in this setting because of the very large sample size that would be required to detect a significant difference in outcomes. Therefore, in patients with anoxic brain injury after nonventricular fibrillation cardiac arrest, clinicians will need to balance the possible benefit of therapeutic hypothermia with the possible side effects of this therapy. Given that the side effects of therapeutic hypothermia are generally easily managed in the critical care setting, and there is benefit for anoxic brain injury demonstrated in laboratory studies, consideration may be given to treat comatose post-cardiac arrest patients with therapeutic hypothermia in this setting. Because the induction of therapeutic hypothermia has become more feasible with the development of simple intravenous cooling techniques and specialized equipment for improved temperature control in the critical care unit, it is expected that therapeutic hypothermia will become more widely used in the management of anoxic neurological injury whatever the presenting cardiac rhythm.

  11. Band selection method based on spectrum difference in targets of interest in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohan; Yang, Guang; Yang, Yongbo; Huang, Junhua

    2016-10-01

    While hyperspectral data carry rich spectral information, they comprise many bands with high correlation coefficients, causing considerable data redundancy. A reasonable band selection is therefore important for subsequent processing: bands with a large amount of information and low correlation should be selected. On this basis, and according to the needs of target detection applications, the spectral characteristics of the objects of interest are taken into consideration in this paper, and a new method based on spectrum differences is proposed. First, according to the spectrum differences of the targets of interest, a difference matrix is constructed that represents the differing spectral reflectance of the targets in each band. By setting a threshold, the bands satisfying the condition are retained, constituting a subset of bands. Then, the correlation coefficients between bands are calculated and a correlation matrix is formed; according to the size of the correlation coefficients, the bands are divided into several groups. Finally, the normalized variance is used to represent the information content of each band, and the bands are sorted by their normalized variance. Given the required number of bands, the optimal band combination is obtained by these three steps. This method retains the greatest degree of difference between the targets of interest and is easy to automate. In addition, false-color image synthesis experiments were carried out using the bands selected by this method, as well as by three other methods, to show its performance.
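
    One plausible reading of the three steps (difference-matrix thresholding, correlation-based redundancy control, normalized-variance ranking) is sketched below in Python/NumPy; the thresholds, the greedy redundancy check, and the data shapes are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def select_bands(target_spectra, cube, diff_thresh=0.05, corr_thresh=0.9, n_bands=3):
          """target_spectra: (n_targets, n_bands) reflectance of the targets of interest.
          cube: (n_pixels, n_bands) hyperspectral image, flattened spatially."""
          # 1) difference matrix: keep bands where some pair of targets differs enough
          diff = np.abs(target_spectra[:, None, :] - target_spectra[None, :, :]).max(axis=(0, 1))
          candidates = np.where(diff > diff_thresh)[0]

          # 2) correlation matrix between the surviving bands (used to limit redundancy)
          corr = np.abs(np.corrcoef(cube[:, candidates].T))

          # 3) rank candidates by normalized variance; greedily skip highly correlated bands
          cols = cube[:, candidates]
          norm_var = cols.var(axis=0) / (cols.mean(axis=0) ** 2 + 1e-12)
          chosen = []
          for i in np.argsort(norm_var)[::-1]:
              if all(corr[i, j] < corr_thresh for j in chosen):
                  chosen.append(i)
              if len(chosen) == n_bands:
                  break
          return candidates[np.array(chosen)]

      # toy data: 4 targets, 5000 pixels, 50 bands
      rng = np.random.default_rng(2)
      spectra = rng.random((4, 50))
      cube = rng.random((5000, 50))
      print("selected bands:", select_bands(spectra, cube))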

  12. Correlation between human maternal-fetal placental transfer and molecular weight of PCB and dioxin congeners/isomers.

    PubMed

    Mori, Chisato; Nakamura, Noriko; Todaka, Emiko; Fujisaki, Takeyoshi; Matsuno, Yoshiharu; Nakaoka, Hiroko; Hanazato, Masamichi

    2014-11-01

    Establishing methods for the assessment of fetal exposure to chemicals is important for the prevention or prediction of the child's future disease risk. In the present study, we aimed to determine the influence of molecular weight on the likelihood of chemical transfer from mother to fetus via the placenta. The correlation between molecular weight and placental transfer rates of congeners/isomers of polychlorinated biphenyls (PCBs) and dioxins was examined. Twenty-nine sample sets of maternal blood, umbilical cord, and umbilical cord blood were used to measure PCB concentration, and 41 sample sets were used to analyze dioxins. Placental transfer rates were calculated using the concentrations of PCBs, dioxins, and their congeners/isomers within these sample sets. Transfer rate correlated negatively with molecular weight for PCB congeners, normalized using wet and lipid weights. The transfer rates of PCB or dioxin congeners differed from those of total PCBs or dioxins. The transfer rate for dioxin congeners did not always correlate significantly with molecular weight, perhaps because of the small sample size or other factors. Further improvement of the analytical methods for dioxin congeners is required. The findings of the present study suggested that PCBs, dioxins, or their congeners with lower molecular weights are more likely to be transferred from mother to fetus via the placenta. Consideration of chemical molecular weight and transfer rate could therefore contribute to the assessment of fetal exposure. Copyright © 2014 Elsevier Ltd. All rights reserved.
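
    The transfer-rate calculation and its correlation with molecular weight can be sketched as below (Python/SciPy). The congener values are hypothetical placeholders, and the cord-to-maternal concentration ratio is assumed here as the working definition of transfer rate.

      import numpy as np
      from scipy import stats

      # Hypothetical paired congener data (placeholders, not study data)
      mol_weight = np.array([292.0, 326.4, 360.9, 395.3, 429.8])   # g/mol
      maternal = np.array([120.0, 80.0, 60.0, 40.0, 25.0])         # conc. in maternal blood
      cord = np.array([95.0, 55.0, 35.0, 20.0, 10.0])              # conc. in cord blood

      transfer_rate = cord / maternal                               # assumed definition
      r, p = stats.pearsonr(mol_weight, transfer_rate)
      print("transfer rates:", np.round(transfer_rate, 2))
      print(f"correlation with molecular weight: r = {r:.2f}, p = {p:.3f}")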

  13. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    PubMed

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
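
    The locus-of-slack logic boils down to comparing set-size effects at short versus long SOA; a minimal sketch of that comparison (Python, with invented cell means rather than the reported data) is:

      # Hypothetical mean Task-2 search times (ms) for two set sizes at two SOAs
      rt = {("short", 4): 900, ("short", 8): 960,    # 60 ms set-size effect at short SOA
            ("long", 4): 650, ("long", 8): 790}      # 140 ms set-size effect at long SOA

      effect_short = rt[("short", 8)] - rt[("short", 4)]
      effect_long = rt[("long", 8)] - rt[("long", 4)]

      # An underadditive interaction (smaller effect at short SOA) suggests search overlapped
      # with Task-1 response selection; additive effects would suggest a central bottleneck.
      label = "underadditive" if effect_short < effect_long else "additive"
      print(f"{label}: {effect_short} ms at short SOA vs {effect_long} ms at long SOA")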

  14. Female reproductive success decreases with display size in monkshood, Aconitum kusnezoffii (Ranunculaceae)

    PubMed Central

    Liao, Wan-Jin; Hu, Yi; Zhu, Bi-Ru; Zhao, Xia-Qing; Zeng, Yan-Fei; Zhang, Da-Yong

    2009-01-01

    Background and Aims Reduction in female fitness in large clones can occur as a result of increased geitonogamous self-fertilization and its influence through inbreeding depression. This possibility was investigated in the self-compatible, bee-pollinated perennial herb Aconitum kusnezoffii which varies in clone size. Methods Field investigations were conducted on pollinator behaviour, flowering phenology and variation in seed set. The effects of self-pollination following controlled self- and cross-pollination were also examined. Selfing rates of differently sized clones were assessed using allozyme markers. Key Results High rates of geitonogamous pollination were associated with large display size. Female fitness at the ramet level decreased with clone size. Fruit and seed set under cross-pollination were significantly higher than those under self-pollination. The pre-dispersal inbreeding depression was estimated as 0·502 based on the difference in seed set per flower between self- and cross-pollinated flowers. Selfing rates of differently sized clones did not differ. Conclusions It is concluded that in A. kusnezoffii the negative effects of self-pollination causing reduced female fertility with clone size arise primarily from a strong early-acting inbreeding depression leading to the abortion of selfed embryos prior to seed maturation. PMID:19767308
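
    The inbreeding depression figure quoted above follows the usual relative-fitness definition; written out (the implied seed-set ratio is inferred from the reported value, not stated directly in the abstract):

      \delta = 1 - \frac{w_{\mathrm{self}}}{w_{\mathrm{cross}}}
             = 1 - \frac{\text{seed set per flower, self-pollinated}}{\text{seed set per flower, cross-pollinated}}
             \approx 1 - 0.498 = 0.502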

  15. Sexual attitudes, preferences and infections in Ancient Greece: has antiquity anything useful for us today?

    PubMed Central

    Morton, R S

    1991-01-01

    Modern society bears a heavy burden of medico-social pathology particularly amongst its young. The size, nature and costs of the sexually transmitted disease element is now considerable and dwarfs such successes as have been achieved. In the belief that the structure of a society and the way that structure functions determines the size of its STD problem, a review of Ancient Greek society has been undertaken. Greek society, not least concerning all aspects of sex, was well ordered, frank and tolerant. Some of the areas of Greek society's structure and functioning which differ most markedly from ours, and seem to have determined a modest STD problem, are highlighted and discussed. Greek ideas that might be adapted to match today's needs are presented for consideration. PMID:1916781

  16. Large-volume protein crystal growth for neutron macromolecular crystallography.

    PubMed

    Ng, Joseph D; Baird, James K; Coates, Leighton; Garcia-Ruiz, Juan M; Hodge, Teresa A; Huang, Sijay

    2015-04-01

    Neutron macromolecular crystallography (NMC) is the prevailing method for the accurate determination of the positions of H atoms in macromolecules. As neutron sources are becoming more available to general users, finding means to optimize the growth of protein crystals to sizes suitable for NMC is extremely important. Historically, much has been learned about growing crystals for X-ray diffraction. However, owing to new-generation synchrotron X-ray facilities and sensitive detectors, protein crystal sizes as small as in the nano-range have become adequate for structure determination, lessening the necessity to grow large crystals. Here, some of the approaches, techniques and considerations for the growth of crystals to significant dimensions that are now relevant to NMC are revisited. These include experimental strategies utilizing solubility diagrams, ripening effects, classical crystallization techniques, microgravity and theoretical considerations.

  17. A constant radius of curvature model for the organization of DNA in toroidal condensates.

    PubMed Central

    Hud, N V; Downing, K H; Balhorn, R

    1995-01-01

    Toroidal DNA condensates have received considerable attention for their possible relationship to the packaging of DNA in viruses and in general as a model of ordered DNA condensation. A spool-like model has primarily been supported for DNA organization within toroids. However, our observations suggest that the actual organization may be considerably different. We present an alternate model in which DNA for a given toroid is organized within a series of equally sized contiguous loops that precess about the toroid axis. A related model for the toroid formation process is also presented. This kinetic model predicts a distribution of toroid sizes for DNA condensed from solution that is in good agreement with experimental data. PMID:7724602

  18. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
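
    A hedged sketch of the preprocessing idea (Python/NumPy): a truncated singular value decomposition of the mode kernel matrix replaces thousands of modes with a much smaller set of effective constraints before the OLA matrices are formed. The shapes, tolerance, and variable names are illustrative assumptions.

      import numpy as np

      def reduce_mode_set(K, d, tol=1e-6):
          """Compress an (n_modes x n_grid) kernel matrix K and the matching data
          vector d so that the OLA inversion operates on far fewer rows."""
          U, s, Vt = np.linalg.svd(K, full_matrices=False)
          k = int(np.sum(s > tol * s[0]))           # number of significant singular values
          K_red = np.diag(s[:k]) @ Vt[:k]           # k effective modes replace n_modes rows
          d_red = U[:, :k].T @ d                    # data rotated into the same basis
          return K_red, d_red

      rng = np.random.default_rng(3)
      K = rng.normal(size=(2000, 50))               # toy kernels: 2000 modes, 50 grid points
      d = rng.normal(size=2000)                     # toy frequency data
      K_red, d_red = reduce_mode_set(K, d)
      print(K.shape, "->", K_red.shape)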

  19. Summary of: radiation protection in dental X-ray surgeries--still rooms for improvement.

    PubMed

    Walker, Anne

    2013-03-01

    To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milliGray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy.cm(2) for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding. Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.

  20. Radiation protection in dental X-ray surgeries--still rooms for improvement.

    PubMed

    Hart, G; Dugdale, M

    2013-03-01

    To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milliGray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy.cm(2) for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding. Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.

  1. Set size manipulations reveal the boundary conditions of perceptual ensemble learning.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-11-01

    Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Predicting phenolic acid absorption in Caco-2 cells: a theoretical permeability model and mechanistic study.

    PubMed

    Farrell, Tracy L; Poquet, Laure; Dew, Tristan P; Barber, Stuart; Williamson, Gary

    2012-02-01

    There is a considerable need to rationalize the membrane permeability and mechanism of transport for potential nutraceuticals. The aim of this investigation was to develop a theoretical permeability equation, based on a reported descriptive absorption model, enabling calculation of the transcellular component of absorption across Caco-2 monolayers. Published data for Caco-2 permeability of 30 drugs transported by the transcellular route were correlated with the descriptors 1-octanol/water distribution coefficient (log D, pH 7.4) and size, based on molecular mass. Nonlinear regression analysis was used to derive a set of model parameters a', β', and b' with an integrated molecular mass function. The new theoretical transcellular permeability (TTP) model obtained a good fit of the published data (R² = 0.93) and predicted reasonably well (R² = 0.86) the experimental apparent permeability coefficient (P(app)) for nine non-training set compounds reportedly transported by the transcellular route. For the first time, the TTP model was used to predict the absorption characteristics of six phenolic acids, and this original investigation was supported by in vitro Caco-2 cell mechanistic studies, which suggested that deviation of the P(app) value from the predicted transcellular permeability (P(app)(trans)) may be attributed to involvement of active uptake, efflux transporters, or paracellular flux.
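
    A sketch of how such a permeability model can be fitted to training data is given below (Python/SciPy). The functional form, descriptor values, and parameter names are placeholders; the abstract states only that the published model integrates a molecular-mass function with log D, and that form is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical training set: log D (pH 7.4), molecular mass, and measured log Papp
      rng = np.random.default_rng(4)
      logd = np.array([-1.0, 0.2, 1.1, 2.3, 3.0, 1.8, 0.5, -0.4, 2.8, 1.3])
      mass = np.array([180.0, 250.0, 310.0, 400.0, 450.0, 350.0, 220.0, 200.0, 480.0, 300.0])
      log_papp = -6.0 + 0.4 * logd - 0.002 * mass + rng.normal(0.0, 0.1, logd.size)

      def permeability_model(X, a, beta, b):
          """Placeholder form: log Papp from log D plus a molecular-mass term."""
          ld, m = X
          return a + beta * ld + b * m

      params, _ = curve_fit(permeability_model, (logd, mass), log_papp, p0=(-6.0, 0.5, -0.001))
      print("fitted a', beta', b':", np.round(params, 4))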

  3. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species.

    PubMed

    Irizarry, Kristopher J L; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L; Barrett, Gini; Barr, Margaret C

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management.

  4. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species

    PubMed Central

    Irizarry, Kristopher J. L.; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L.; Barrett, Gini; Barr, Margaret C.

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management. PMID:27376076

  5. Expertise for upright faces improves the precision but not the capacity of visual working memory.

    PubMed

    Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank

    2014-10-01

    Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.

  6. Effects of the built environment on physical activity of adults living in rural settings.

    PubMed

    Frost, Stephanie S; Goins, R Turner; Hunter, Rebecca H; Hooker, Steven P; Bryant, Lucinda L; Kruger, Judy; Pluto, Delores

    2010-01-01

    To conduct a systematic review of the literature to examine the influence of the built environment (BE) on the physical activity (PA) of adults in rural settings. Key word searches of Academic Search Premier, PubMed, CINAHL, Web of Science, and Sport Discus were conducted. Studies published prior to June 2008 were included if they assessed one or more elements of the BE, examined relationships between the BE and PA, and focused on rural locales. Studies only reporting descriptive statistics or assessing the reliability of measures were excluded. Objective(s), sample size, sampling technique, geographic location, and definition of rural were extracted from each study. Methods of assessment and outcomes were extracted from the quantitative literature, and overarching themes were identified from the qualitative literature. Key characteristics and findings from the data are summarized in Tables 1 through 3. Twenty studies met inclusion and exclusion criteria. Positive associations were found among pleasant aesthetics, trails, safety/crime, parks, and walkable destinations. Research in this area is limited. Associations among elements of the BE and PA among adults appear to differ between rural and urban areas. Considerations for future studies include identifying parameters used to define rural, longitudinal research, and more diverse geographic sampling. Development and refinement of BE assessment tools specific to rural locations are also warranted.

  7. How Does One "Open" Science? Questions of Value in Biological Research.

    PubMed

    Levin, Nadine; Leonelli, Sabina

    2017-03-01

    Open Science policies encourage researchers to disclose a wide range of outputs from their work, thus codifying openness as a specific set of research practices and guidelines that can be interpreted and applied consistently across disciplines and geographical settings. In this paper, we argue that this "one-size-fits-all" view of openness sidesteps key questions about the forms, implications, and goals of openness for research practice. We propose instead to interpret openness as a dynamic and highly situated mode of valuing the research process and its outputs, which encompasses economic as well as scientific, cultural, political, ethical, and social considerations. This interpretation creates a critical space for moving beyond the economic definitions of value embedded in the contemporary biosciences landscape and Open Science policies, and examining the diversity of interests and commitments that affect research practices in the life sciences. To illustrate these claims, we use three case studies that highlight the challenges surrounding decisions about how--and how best--to make things open. These cases, drawn from ethnographic engagement with Open Science debates and semistructured interviews carried out with UK-based biologists and bioinformaticians between 2013 and 2014, show how the enactment of openness reveals judgments about what constitutes a legitimate intellectual contribution, for whom, and with what implications.

  8. How Does One “Open” Science? Questions of Value in Biological Research

    PubMed Central

    Levin, Nadine

    2016-01-01

    Open Science policies encourage researchers to disclose a wide range of outputs from their work, thus codifying openness as a specific set of research practices and guidelines that can be interpreted and applied consistently across disciplines and geographical settings. In this paper, we argue that this “one-size-fits-all” view of openness sidesteps key questions about the forms, implications, and goals of openness for research practice. We propose instead to interpret openness as a dynamic and highly situated mode of valuing the research process and its outputs, which encompasses economic as well as scientific, cultural, political, ethical, and social considerations. This interpretation creates a critical space for moving beyond the economic definitions of value embedded in the contemporary biosciences landscape and Open Science policies, and examining the diversity of interests and commitments that affect research practices in the life sciences. To illustrate these claims, we use three case studies that highlight the challenges surrounding decisions about how––and how best––to make things open. These cases, drawn from ethnographic engagement with Open Science debates and semistructured interviews carried out with UK-based biologists and bioinformaticians between 2013 and 2014, show how the enactment of openness reveals judgments about what constitutes a legitimate intellectual contribution, for whom, and with what implications. PMID:28232768

  9. Anesthesia Practices for Interventional Radiology in Europe.

    PubMed

    Vari, Alessandra; Gangi, Afshin

    2017-06-01

    The Cardiovascular and Interventional Radiological Society of Europe (CIRSE) prompted an initiative to frame the current European status of anesthetic practices for interventional radiology, in consideration of the variability of IR suite settings, staffing and anesthetic practices reported in the literature, and of the growing debate in Europe on sedation administered by non-anesthesiologists. An anonymous online survey was made available to all European CIRSE members to assess IR setting, demographics, peri-procedural care, anesthetic management, resources and staffing, pain management, data collection, safety, management of emergencies and personal opinions on the role CIRSE should have in promoting anesthetic care for interventional radiology. Predictable differences between countries and national regulations were confirmed, showing how significantly many "local" factors (type and size of centers, the availability of dedicated inpatient beds, availability of anesthesia staff) can affect the routine practice and the expansion of IR as a subspecialty. In addition, the perception of the need for IR to acquire more sedation-related skills is clearly stronger for those who practice with the lowest availability of anesthesia care. Significant country variations and regulations, along with a controversial position of the anesthesia community on the issue of sedation administered by non-anesthesiologists, substantially represent the biggest drawbacks for the expansion of peri-procedural anesthetic care for IR and for potential initiatives at a European level.

  10. Reliability-Related Issues in the Context of Student Evaluations of Teaching in Higher Education

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2015-01-01

    Student evaluations of teaching (SET) have been the principal instrument to elicit students' opinions in higher education institutions. Many decisions, including high-stakes ones, are made based on SET scores reported by students. In this respect, the reliability of SET scores is of considerable importance. This paper argues that there are…

  11. 48 CFR 970.1504-1-9 - Special considerations: Cost-plus-award-fee.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....e., nuclear energy processing, industrial environmental cleanup); (iii) Construction of facilities... industrial/DOE settings (i.e., nuclear energy, chemical or petroleum processing, industrial environmental... industrial/DOE settings (i.e., nuclear energy, chemical processing, industrial environmental cleanup); (ii...

  12. 48 CFR 970.1504-1-9 - Special considerations: Cost-plus-award-fee.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....e., nuclear energy processing, industrial environmental cleanup); (iii) Construction of facilities... industrial/DOE settings (i.e., nuclear energy, chemical or petroleum processing, industrial environmental... industrial/DOE settings (i.e., nuclear energy, chemical processing, industrial environmental cleanup); (ii...

  13. 48 CFR 970.1504-1-9 - Special considerations: Cost-plus-award-fee.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....e., nuclear energy processing, industrial environmental cleanup); (iii) Construction of facilities... industrial/DOE settings (i.e., nuclear energy, chemical or petroleum processing, industrial environmental... industrial/DOE settings (i.e., nuclear energy, chemical processing, industrial environmental cleanup); (ii...

  14. 48 CFR 970.1504-1-9 - Special considerations: Cost-plus-award-fee.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....e., nuclear energy processing, industrial environmental cleanup); (iii) Construction of facilities... industrial/DOE settings (i.e., nuclear energy, chemical or petroleum processing, industrial environmental... industrial/DOE settings (i.e., nuclear energy, chemical processing, industrial environmental cleanup); (ii...

  15. 48 CFR 970.1504-1-9 - Special considerations: Cost-plus-award-fee.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....e., nuclear energy processing, industrial environmental cleanup); (iii) Construction of facilities... industrial/DOE settings (i.e., nuclear energy, chemical or petroleum processing, industrial environmental... industrial/DOE settings (i.e., nuclear energy, chemical processing, industrial environmental cleanup); (ii...

  16. Urbanism from the Perspective of Ecological Psychologists

    ERIC Educational Resources Information Center

    Gump, Paul V.; Adelgerg, Bettina

    1978-01-01

    The perspective of ecological psychology on urbanism is presented, with emphasis on the following premises: (1) the urban milieu consists of complexes of behavior settings; and (2) the size of a city should be understood in relation to the number and variety of its settings, rather than in terms of population size. (Author/MA)

  17. Reduction of Marine Magnetic Data for Modeling the Main Field of the Earth

    NASA Technical Reports Server (NTRS)

    Baldwin, R. T.; Ridgway, J. R.; Davis, W. M.

    1992-01-01

    The marine data set archived at the National Geophysical Data Center (NGDC) consists of shipborne surveys conducted by various institutes worldwide. This data set spans four decades (1953, 1958, 1960-1987) and contains almost 13 million total intensity observations, often less than 1 km apart. These observations typically measure seafloor spreading anomalies with amplitudes of several hundred nanotesla (nT) which, since they originate in the crust, interfere with main field modeling. The sources of these short wavelength features are confined to the magnetic crust (i.e., sources above the Curie isotherm). The main field, on the other hand, is of much longer wavelengths and originates within the earth's core. It is desirable to extract the long wavelength information from the marine data set for use in modeling the main field. This can be accomplished by averaging the data along the track. In addition, those data which are measured during periods of magnetic disturbance can be identified and eliminated. Thus, it should be possible to create a data set which has worldwide data distribution, spans several decades, is not contaminated with short wavelengths of the crustal field or with magnetic storm noise, and which is limited enough in size to be manageable for main field modeling. The along track filtering described above has proved to be an effective means of condensing large volumes of shipborne magnetic data into a manageable and meaningful data set for main field modeling. Its simplicity and ability to adequately handle varying spatial and sampling constraints have outweighed consideration of more sophisticated approaches. This filtering technique also provides the benefits of smoothing out short wavelength crustal anomalies, discarding data recorded during magnetically noisy periods, and assigning reasonable error estimates to be used in the least-squares modeling. A useful data set now exists which spans 1953-1987.
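
    A minimal sketch of the along-track averaging described above (Python/NumPy). The window length, the error estimate, and the synthetic track are assumptions for illustration, not the archive's actual processing parameters.

      import numpy as np

      def along_track_average(distance_km, field_nt, window_km=100.0):
          """Average shipborne total-intensity readings in fixed along-track windows,
          suppressing short-wavelength crustal signal before main-field modeling."""
          edges = np.arange(distance_km.min(), distance_km.max() + window_km, window_km)
          bins = np.digitize(distance_km, edges)
          means, errors = [], []
          for b in np.unique(bins):
              vals = field_nt[bins == b]
              means.append(vals.mean())
              errors.append(vals.std(ddof=1) if vals.size > 1 else np.nan)  # crude error estimate
          return np.array(means), np.array(errors)

      # toy track: a slowly varying field plus a few-hundred-nT spreading anomaly
      x = np.linspace(0.0, 1000.0, 2000)                     # km along track
      field = 45000.0 + 0.5 * x + 300.0 * np.sin(2 * np.pi * x / 30.0)
      m, e = along_track_average(x, field)
      print(f"{x.size} observations -> {m.size} along-track averages")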

  18. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
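
    A hedged sketch of combining several published density estimates into a single mixture distribution (Python/NumPy). Only the 6.9 and 60.9 million per hectare endpoints come from the abstract; the intermediate values and the assumed coefficient of variation are placeholders, not the published estimates.

      import numpy as np

      rng = np.random.default_rng(5)

      # Overwinter density estimates, millions of monarchs per hectare
      # (only the endpoints are taken from the abstract; the rest are placeholders)
      estimates = [6.9, 12.0, 20.0, 30.0, 45.0, 60.9]
      cv = 0.3                                   # assumed coefficient of variation per estimate

      sigma = np.sqrt(np.log(1.0 + cv ** 2))     # log-normal sigma matching that CV
      draws = np.concatenate([rng.lognormal(np.log(m), sigma, 100_000) for m in estimates])

      print(f"mixture mean   ~ {draws.mean():.1f} million per ha")
      print(f"mixture median ~ {np.median(draws):.1f} million per ha")
      print(f"95% interval   ~ {np.percentile(draws, [2.5, 97.5]).round(1)} million per ha")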

  19. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  20. Pocket-Size Interferometric Systems

    NASA Astrophysics Data System (ADS)

    Waters, James P.; Fernald, Mark R.

    1990-04-01

    Optical sensors have the intrinsic advantages over electronic sensors of complete safety in hazardous areas and absolute immunity from either transmitting or picking up electromagnetic radiation. However, adoption of optical sensors in real-world applications requires a sensor design which has a sensitivity, resolution, and dynamic range comparable to an equivalent electronic sensor and at the same time must fulfill the practical considerations of small size and low cost. While sensitivity, resolution and dynamic range can be easily achieved with optical heterodyne sensors, the practical considerations make their near-term adoption unlikely. Significant improvements to optical heterodyne vibration and velocity sensors (flexibility, reliability and environmental immunity) have been realized with the use of semiconductor lasers, optical fibers and fiber-optic components. In fact, all of the discrete optical components in a heterodyne interferometer have been replaced with much smaller and more rugged devices except for the optical frequency shifter, the acousto-optic modulator (AOM). The AOM and associated power supply, however, account for a substantial portion of both the size and cost. Previous work has shown that an integrated-optic, serrodyne phase modulator with an inexpensive drive circuit can be used for single sideband heterodyne detection. This paper describes the next step, design and implementation of a heterodyne interferometer using integrated-optic technology to provide the polarization maintaining couplers and phase modulator. The couplers were made using a proton exchange process which produced devices with an extinction ratio of better than 40 dB. The serrodyne phase modulator had the advantage over an AOM of being considerably smaller and having a drive power of less than a milliwatt. The results of this work show that this technology is an effective way of reducing the size of the system and the cost of multiple units without sacrificing performance.

  1. A standardized sampling protocol for channel catfish in prairie streams

    USGS Publications Warehouse

    Vokoun, Jason C.; Rabeni, Charles F.

    2001-01-01

    Three alternative gears—an AC electrofishing raft, bankpoles, and a 15-hoop-net set—were used in a standardized manner to sample channel catfish Ictalurus punctatus in three prairie streams of varying size in three seasons. We compared these gears as to time required per sample, size selectivity, mean catch per unit effort (CPUE) among months, mean CPUE within months, effect of fluctuating stream stage, and sensitivity to population size. According to these comparisons, the 15-hoop-net set used during stable water levels in October had the most desirable characteristics. Using our catch data, we estimated the precision of CPUE and size structure by varying sample sizes for the 15-hoop-net set. We recommend that 11–15 repetitions of the 15-hoop-net set be used for most management activities. This standardized basic unit of effort will increase the precision of estimates and allow better comparisons among samples as well as increased confidence in management decisions.

  2. Patterns of natural and human-caused mortality factors of a rare forest carnivore, the fisher (Pekania pennanti) in California

    Treesearch

    Mourad W. Gabriel; Leslie W. Woods; Greta M. Wengert; Nicole Stephenson; J. Mark Higley; Craig Thompson; Sean M. Matthews; Rick A. Sweitzer; Kathryn Purcell; Reginald H. Barrett; Stefan M. Keller; Patricia Gaffney; Megan Jones; Robert Poppenga; Janet E. Foley; Richard N. Brown; Deana L. Clifford; Benjamin N. Sacks

    2015-01-01

    Wildlife populations of conservation concern are limited in distribution, population size and persistence by various factors, including mortality. The fisher (Pekania pennanti), a North American mid-sized carnivore whose range in the western Pacific United States has retracted considerably in the past century, was proposed for threatened status...

  3. The Effects of Class Size in Online College Courses: Experimental Evidence. CEPA Working Paper No. 15-14

    ERIC Educational Resources Information Center

    Bettinger, Eric; Doss, Christopher; Loeb, Susanna; Taylor, Eric

    2015-01-01

    Class size is a first-order consideration in the study of education production and education costs. How larger or smaller classes affect student outcomes is especially relevant to the growth and design of online classes. We study a field experiment in which college students were quasi-randomly assigned to either a large or a small class. All…

  4. Statistical Criteria for Setting Thresholds in Medical School Admissions

    ERIC Educational Resources Information Center

    Albanese, Mark A.; Farrell, Philip; Dottl, Susan

    2005-01-01

    In 2001, Dr. Jordan Cohen, President of the AAMC, called for medical schools to consider using a Medical College Admission Test (MCAT) threshold to eliminate high-risk applicants from consideration and then to use non-academic qualifications for further consideration. This approach would seem to be consistent with the recent Supreme Court ruling…

  5. Challenge Activities for the Physical Education Classroom: Considerations

    ERIC Educational Resources Information Center

    McKenzie, Emily; Tapps, Tyler; Fink, Kevin; Symonds, Matthew L.

    2018-01-01

    The purpose of this article is to provide physical education teachers with the tools to develop and implement challenge course-like activities in their physical education classes. The article also covers environmental considerations for teachers who have the desire to create a challenge-based classroom setting in order to reach a wider and more…

  6. Early Childhood Development Cultural Considerations--Commonalities, Variables, and Local Community Determinants for Program Modules.

    ERIC Educational Resources Information Center

    Taylor, Anne P.; Warren, Dave

    The paper discusses cultural commonality and variability considerations of the Native American populations served by the Federation of Rocky Mountain States Educational Technical Development (ETD) Project. Section I explores important factors to consider when setting up an Early Childhood Development program module for Indian people, such as…

  7. 17 CFR 201.431 - Commission consideration of actions made pursuant to delegated authority.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Commission Review § 201.431 Commission consideration of actions made pursuant to delegated authority. (a) Scope of review. The Commission may affirm, reverse, modify, set aside or remand for further proceedings... of this chapter. (b) Standards for granting review pursuant to a petition for review—(1) Mandatory...

  8. Investigating motivating factors for sound hospital waste management.

    PubMed

    Ali, Mustafa; Wang, Wenping; Chaudhry, Nawaz

    2016-08-01

    Sustainable management of hospital waste requires an active involvement of all key players. This study aims to test the hypothesis that three motivating factors, namely, Reputation, Liability, and Expense, influence hospital waste management. The survey for this study was conducted in two phases, with the pilot study used for exploratory factor analysis and the subsequent main survey used for cross-validation using confirmatory factor analysis. The hypotheses were validated through one-sample t tests. Correlations were established between the three motivating factors and organizational characteristics of hospital type, location, category, and size. It was found that the factors of Liability and Expense varied considerably with the location and size of a hospital. The factor of Reputation, however, did not exhibit significant variation. In conclusion, concerns about the reputation of a facility and an apprehension of liability act as incentives for sound hospital waste management, whereas concerns about financial costs and perceived overburden on staff act as disincentives. This paper identifies the non-economic motivating factors that can be used to encourage behavioral changes regarding waste management at hospitals in resource-constrained environments. This study discovered that organizational characteristics such as hospital size and location cause the responses to vary among the subjects. Hence, a policy maker must take into account the institutional setting before introducing a change geared towards better waste management outcomes across hospitals. This study covers a topic that has hitherto been neglected in resource-constrained countries. Thus, it can be used as one of the first steps to highlight and tackle the issue.

  9. Lack of evidence and standardization in care pathway documents for patients with ST-elevated myocardial infarction.

    PubMed

    Aeyels, Daan; Van Vugt, Stijn; Sinnaeve, Peter R; Panella, Massimiliano; Van Zelm, Ruben; Sermeus, Walter; Vanhaecht, Kris

    2016-04-01

    Clinical practice variation and the subsequent burden on health care quality have been documented for patients with ST-elevated myocardial infarction (STEMI). Reduction of clinical practice variation is possible by increasing guideline adherence. Care pathway documents can increase guideline adherence by implementing evidence-based key interventions and quality indicators in daily practice. This study aims to examine guideline adherence of care pathway documents for patients with STEMI. Lay-out, size and timeframe of submitted care pathway documents were analysed. Two independent reviewers used a checklist to systematically assess the guideline adherence of care pathway documents. The checklist comprised a set of key interventions and quality indicators extracted from evidence and international guidelines. The checklist distinguished the evidence level for each item and was validated by expert consensus. Results were verified by inviting participating hospitals to provide feedback. Fifteen out of 25 invited hospitals submitted care pathway documents for STEMI. The care pathway documents differed in timeframe, lay-out and size. Analysis of the care pathway documents showed important variation in formalizing adherence to evidence: between hospitals, inclusion of 24 key interventions in care pathway documents varied from 13 to 97%. Inclusion of 11 essential quality indicators varied from 0 to 40%. Care pathway documents for patients with STEMI differ considerably in lay-out, timeframe and size. This study showed variation in, and suboptimal inclusion of, evidence-based key interventions and quality indicators in care pathway documents. The use of these care pathway documents might result in suboptimal quality of care for STEMI patients. © The European Society of Cardiology 2015.

  10. A fast three-dimensional gamma evaluation using a GPU utilizing texture memory for on-the-fly interpolations.

    PubMed

    Persoon, Lucas C G G; Podesta, Mark; van Elmpt, Wouter J C; Nijsten, Sebastiaan M J J G; Verhaegen, Frank

    2011-07-01

    A widely accepted method to quantify differences in dose distributions is the gamma (γ) evaluation. Currently, almost all gamma implementations utilize the central processing unit (CPU). Recently, the graphics processing unit (GPU) has become a powerful platform for specific computing tasks. In this study, we describe the implementation of a 3D gamma evaluation using a GPU to improve calculation time. The gamma evaluation algorithm was implemented on an NVIDIA Tesla C2050 GPU using the compute unified device architecture (CUDA). First, several cubic virtual phantoms were simulated. These phantoms were tested with varying dose cube sizes and set-ups, introducing artificial dose differences. Second, to show applicability in clinical practice, five patient cases have been evaluated using the 3D dose distribution from a treatment planning system as the reference and the delivered dose determined during treatment as the comparison. A calculation time comparison between the CPU and GPU was made with varying thread-block sizes including the option of using texture or global memory. A GPU over CPU speed-up of 66 ± 12 was achieved for the virtual phantoms. For the patient cases, a speed-up of 57 ± 15 using the GPU was obtained. A thread-block size of 16 × 16 performed best in all cases. The use of texture memory improved the total calculation time, especially when interpolation was applied. Differences between the CPU and GPU gammas were negligible. The GPU and its features, such as texture memory, decreased the calculation time for gamma evaluations considerably without loss of accuracy.
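
    The gamma index itself is straightforward to state; the contribution of the cited work is the fast 3D GPU implementation. For orientation only, a brute-force 1-D CPU sketch of the concept is shown below, assuming a global 3%/3 mm criterion; the function name and the toy dose profiles are illustrative, not taken from the paper.

      import numpy as np

      def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
          """Return the gamma index at every reference point (brute force, 1-D)."""
          x = np.arange(len(ref_dose)) * spacing_mm
          dose_norm = dose_crit * ref_dose.max()           # global dose criterion
          gammas = np.empty(len(ref_dose))
          for i, (xi, di) in enumerate(zip(x, ref_dose)):
              dose_term = (eval_dose - di) / dose_norm     # dose-difference term
              dist_term = (x - xi) / dist_crit_mm          # distance-to-agreement term
              gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
          return gammas

      ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)   # toy reference profile
      ev = np.roll(ref, 2) * 1.02                          # shifted, rescaled comparison
      g = gamma_1d(ref, ev, spacing_mm=1.0)
      print(f"gamma pass rate (gamma <= 1): {(g <= 1).mean():.2%}")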

  11. Noise pollution mapping approach and accuracy on landscape scales.

    PubMed

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and the cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of road traffic noise mapping at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
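
    As an illustration of the kind of accuracy measure mentioned above, the sketch below computes Cohen's kappa between a fine-grid noise map and a coarser resampled version of it. The grids, class breaks, and 4× resampling factor are synthetic placeholders, not the study's models or data.

      import numpy as np

      def cohens_kappa(a, b, n_classes):
          """Cohen's kappa for two equally sized arrays of class labels."""
          confusion = np.zeros((n_classes, n_classes))
          for i, j in zip(a.ravel(), b.ravel()):
              confusion[i, j] += 1
          total = confusion.sum()
          p_obs = np.trace(confusion) / total                          # observed agreement
          p_exp = (confusion.sum(0) * confusion.sum(1)).sum() / total ** 2  # chance agreement
          return (p_obs - p_exp) / (1.0 - p_exp)

      rng = np.random.default_rng(1)
      fine = rng.normal(55, 8, size=(120, 120))               # synthetic dB(A) surface
      coarse = fine[::4, ::4].repeat(4, 0).repeat(4, 1)       # same surface on a 4x coarser grid
      breaks = [45, 55, 65]                                   # noise classes (placeholders)
      ref_cls = np.digitize(fine, breaks)
      map_cls = np.digitize(coarse, breaks)
      print(f"kappa = {cohens_kappa(ref_cls, map_cls, len(breaks) + 1):.3f}")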

  12. Auditory proactive interference in monkeys: The role of stimulus set size and intertrial interval

    PubMed Central

    Bigelow, James; Poremba, Amy

    2013-01-01

    We conducted two experiments to examine the influence of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) on auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (non-match trials). In Experiment 1, we randomly assigned a stimulus set size of 2, 4, 8, 16, 32, 64, or 192 (trial unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect “same” responses on non-match trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved by increasing the ITI from 5 to 10 s, but did not differ between the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false “match” responses on non-match trials. Taken together, Experiments 1 and 2 show that auditory short-term memory in monkeys is highly susceptible to proactive interference (PI) caused by stimulus repetition. Additional analyses from Experiment 1 suggest that monkeys may make same/different judgments based on a familiarity criterion that is adjusted by error-related feedback. PMID:23526232

  13. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219
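
    A common formalization of this issue is the design effect, DEFF = 1 + (m − 1) × ICC, which inflates an individually randomized sample size when groups of size m are the unit of randomization. The minimal sketch below uses hypothetical values for the ICC, cluster size, and per-arm sample size; it is not drawn from the cited program's data.

      import math

      def clusters_needed(n_individual, cluster_size, icc):
          """Number of clusters per arm after applying the design effect."""
          deff = 1.0 + (cluster_size - 1.0) * icc
          n_adjusted = n_individual * deff
          return math.ceil(n_adjusted / cluster_size), deff

      n_per_arm = 128   # sample size per arm under individual randomization (example)
      k, deff = clusters_needed(n_per_arm, cluster_size=10, icc=0.05)
      print(f"design effect = {deff:.2f}; clusters per arm = {k}")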

  14. Transcriptional mapping of rabies virus in vivo. [UV radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flamand, A.; Delagneau, J.F.

    1978-11-01

    Synthesis of the proteins of rabies virus was studied in hamster cells infected with UV-irradiated virus. The UV target size of genes L, N, M1, and M2 was measured during primary transcription. Except for N, the target sizes of the remaining genes were considerably larger than their physical sizes. The data fit the hypothesis that the four genes occupy a single transcriptional unit and that transcription of rabies virus proceeds in the order N, M1, M2, and L.

  15. Guidance on Selecting Age Groups for Monitoring and Assessing Childhood Exposures to Environmental Contaminants

    EPA Pesticide Factsheets

    This document recommends a set of age groupings based on current understanding of differences in lifestage behavior and anatomy and physiology that can serve as a starting set for consideration by Agency risk assessors and researchers.

  16. Using Large Data Sets to Study College Education Trajectories

    ERIC Educational Resources Information Center

    Oseguera, Leticia; Hwang, Jihee

    2014-01-01

    This chapter presents various considerations researchers undertook to conduct a quantitative study on low-income students using a national data set. Specifically, it describes how a critical quantitative scholar approaches guiding frameworks, variable operationalization, analytic techniques, and result interpretation. Results inform how…

  17. PREDICTING ER BINDING AFFINITY FOR EDC RANKING AND PRIORITIZATION: MODEL I

    EPA Science Inventory

    A Common Reactivity Pattern (COREPA) model, based on consideration of multiple energetically reasonable conformations of flexible chemicals was developed using a training set of 232 rat estrogen receptor (rER) relative binding affinity (RBA) measurements. The training set include...

  18. Class-Size Effects in Secondary School

    ERIC Educational Resources Information Center

    Krassel, Karl Fritjof; Heinesen, Eskil

    2014-01-01

    We analyze class-size effects on academic achievement in secondary school in Denmark exploiting an institutional setting where pupils cannot predict class size prior to enrollment, and where post-enrollment responses aimed at affecting realized class size are unlikely. We identify class-size effects combining a regression discontinuity design with…

  19. Effect of attention on the detection and identification of masked spatial patterns.

    PubMed

    Põder, Endel

    2005-01-01

    The effect of attention on the detection and identification of vertically and horizontally oriented Gabor patterns in the condition of simultaneous masking with obliquely oriented Gabors was studied. Attention was manipulated by varying the set size in a visual-search experiment. In the first experiment, small target Gabors were presented on the background of larger masking Gabors. In the detection task, the effect of set size was as predicted by unlimited-capacity signal detection theory. In the orientation identification task, increasing the set size from 1 to 8 resulted in a much larger decline in performance. The results of the additional experiments suggest that attention can reduce the crowding effect of maskers.

  20. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  1. Strain Localization on Different Scales and their Related Microstructures - Comparison of Microfabrics of Calcite Mylonites from Naxos (Greece) and Helvetic Nappes (Switzerland)

    NASA Astrophysics Data System (ADS)

    Ebert, A.; Herwegh, M.; Karl, R.; Edwin, G.; Decrouez, D.

    2007-12-01

    In the upper crust, shear zones are widespread and appear at different scales. Although deformation conditions, shear zone history, and displacements vary in time and space between shear zones and also within them, similar trends in the evolution of large- to micro-scale fabrics can be observed in all shear zones. The microstructural analyses of calcite mylonites from Naxos and various Helvetic nappes show that microstructures from different metamorphic zones vary considerably on the outcrop and even on the sample scale. However, grain sizes tend to increase with metamorphic degree in the case of both Naxos and the Helvetic nappes. Although deformation conditions (e.g. deformation temperature, strain rate, and shear zone geometry, i.e. shear zone width and rock type above/below thrust) vary between the different tectonic settings, microstructural trends (e.g. grain size) correlate with each other. This is in contrast to many previous studies, where no corrections for second phase contents have been applied. In an Arrhenius-type diagram, the grain growth data of calcite from all studied shear zones fall on a single trend, independent of the dimensions of the localized large-scale structures, which are in the dm-to-m range for the Helvetic thrusts and in the km range for the marble suite of Naxos. The calcite grain size increases continuously from a few μm to >2 mm with a temperature increase from <300°C to >700°C. From a field geologist's point of view, this is an important observation because it shows that natural dynamically stabilized steady-state microfabrics can be used to estimate temperature conditions during deformation, although the tectonic settings are different (e.g. strain rate, fluid flow). The reason for this agreement might be related to a scale dependence of the shear zone dimensions, where the widths increase with increasing metamorphic conditions. In this sense, the deformation volumes affected by localization must be closely linked to the strength of the affected rocks. In comparison to experiments, similar microstructural trends are observed. Here, however, shifts of these trends occur due to the higher strain rates.

  2. Phenotypic integration among trabecular and cortical bone traits establishes mechanical functionality of inbred mouse vertebrae.

    PubMed

    Tommasini, Steven M; Hu, Bin; Nadeau, Joseph H; Jepsen, Karl J

    2009-04-01

    Conventional approaches to identifying quantitative trait loci (QTLs) regulating bone mass and fragility are limited because they examine cortical and trabecular traits independently. Prior work examining long bones from young adult mice and humans indicated that skeletal traits are functionally related and that compensatory interactions among morphological and compositional traits are critical for establishing mechanical function. However, it is not known whether trait covariation (i.e., phenotypic integration) also is important for establishing mechanical function in more complex, corticocancellous structures. Covariation among trabecular, cortical, and compositional bone traits was examined in the context of mechanical functionality for L4 vertebral bodies across a panel of 16-wk-old female AXB/BXA recombinant inbred (RI) mouse strains. The unique pattern of randomization of the A/J and C57BL/6J (B6) genome among the RI panel provides a powerful tool that can be used to measure the tendency for different traits to covary and to study the biology of complex traits. We tested the hypothesis that genetic variants affecting vertebral size and mass are buffered by changes in the relative amounts of cortical and trabecular bone and overall mineralization. Despite inheriting random sets of A/J and B6 genomes, the RI strains inherited nonrandom sets of cortical and trabecular bone traits. Path analysis, which is a multivariate analysis that shows how multiple traits covary simultaneously when confounding variables like body size are taken into consideration, showed that RI strains that tended to have smaller vertebrae relative to body size achieved mechanical functionality by increasing mineralization and the relative amounts of cortical and trabecular bone. The interdependence among corticocancellous traits in the vertebral body indicated that variation in trabecular bone traits among inbred mouse strains, which is often thought to arise from genetic factors, is also determined in part by the adaptive response to variation in traits describing the cortical shell. The covariation among corticocancellous traits has important implications for genetic analyses and for interpreting the response of bone to genetic and environmental perturbations.

  3. How the Assumed Size Distribution of Dust Minerals Affects the Predicted Ice Forming Nuclei

    NASA Technical Reports Server (NTRS)

    Perlwitz, Jan P.; Fridlind, Ann M.; Garcia-Pando, Carlos Perez; Miller, Ron L.; Knopf, Daniel A.

    2015-01-01

    The formation of ice in clouds depends on the availability of ice forming nuclei (IFN). Dust aerosol particles are considered the most important source of IFN at a global scale. Recent laboratory studies have demonstrated that the mineral feldspar provides the most efficient dust IFN for immersion freezing and together with kaolinite for deposition ice nucleation, and that the phyllosilicates illite and montmorillonite (a member of the smectite group) are of secondary importance. A few studies have applied global models that simulate mineral-specific dust to predict the number and geographical distribution of IFN. These studies have been based on the simple assumption that the mineral composition of soil as provided in data sets from the literature translates directly into the mineral composition of the dust aerosols. However, these tables are based on measurements of wet-sieved soil where dust aggregates are destroyed to a large degree. In consequence, the size distribution of dust is shifted to smaller sizes, and phyllosilicates like illite, kaolinite, and smectite are only found in the size range smaller than 2 μm. In contrast, in measurements of the mineral composition of dust aerosols, the largest mass fraction of these phyllosilicates is found in the size range larger than 2 μm as part of dust aggregates. Conversely, the mass fraction of feldspar is smaller in this size range, varying with the geographical location. This may have a significant effect on the predicted IFN number and its geographical distribution. An improved mineral-specific dust aerosol module has been recently implemented in the NASA GISS Earth System ModelE2. The dust module takes into consideration the disaggregated state of wet-sieved soil, on which the tables of soil mineral fractions are based. To simulate the atmospheric cycle of the minerals, the mass size distribution of each mineral in aggregates that are emitted from undispersed parent soil is reconstructed. In the current study, we test the null hypothesis that simulating the presence of a large mass fraction of phyllosilicates in dust aerosols in the size range larger than 2 μm, in comparison to a simple model assumption where this is neglected, does not yield a significant effect on the magnitude and geographical distribution of the predicted IFN number. Results from sensitivity experiments are presented as well.

  4. SDTM - SYSTEM DESIGN TRADEOFF MODEL FOR SPACE STATION FREEDOM RELEASE 1.1

    NASA Technical Reports Server (NTRS)

    Chamberlin, R. G.

    1994-01-01

    Although extensive knowledge of space station design exists, the information is widely dispersed. The Space Station Freedom Program (SSFP) needs policies and procedures that ensure the use of consistent design objectives throughout its organizational hierarchy. The System Design Tradeoff Model (SDTM) produces information that can be used for this purpose. SDTM is a mathematical model of a set of possible designs for Space Station Freedom. Using the SDTM program, one can find the particular design which provides specified amounts of resources to Freedom's users at the lowest total (or life cycle) cost. One can also compare alternative design concepts by changing the set of possible designs, while holding the specified user services constant, and then comparing costs. Finally, both costs and user services can be varied simultaneously when comparing different designs. SDTM selects its solution from a set of feasible designs. Feasibility constraints include safety considerations, minimum levels of resources required for station users, budget allocation requirements, time limitations, and Congressional mandates. The total, or life cycle, cost includes all of the U.S. costs of the station: design and development, purchase of hardware and software, assembly, and operations throughout its lifetime. The SDTM development team has identified, for a variety of possible space station designs, the subsystems that produce the resources to be modeled. The team has also developed formulas for the cross consumption of resources by other resources, as functions of the amounts of resources produced. SDTM can find the values of station resources, so that subsystem designers can choose new design concepts that further reduce the station's life cycle cost. The fundamental input to SDTM is a set of formulas that describe the subsystems which make up a reference design. Most of the formulas identify how the resources required by each subsystem depend upon the size of the subsystem. Some of the formulas describe how the subsystem costs depend on size. The formulas can be complicated and nonlinear (if nonlinearity is needed to describe how designs change with size). SDTM's outputs are amounts of resources, life-cycle costs, and marginal costs. SDTM will run on IBM PC/XTs, ATs, and 100% compatibles with 640K of RAM and at least 3Mb of fixed-disk storage. A printer which can print in 132-column mode is also required, and a mathematics co-processor chip is highly recommended. This code is written in Turbo C 2.0. However, since the developers used a modified version of the proprietary Vitamin C source code library, the complete source code is not available. The executable is provided, along with all non-proprietary source code. This program was developed in 1989.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, G.A.; Commer, M.

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  6. Phonon bottleneck identification in disordered nanoporous materials

    NASA Astrophysics Data System (ADS)

    Romano, Giuseppe; Grossman, Jeffrey C.

    2017-09-01

    Nanoporous materials are a promising platform for thermoelectrics in that they offer high thermal conductivity tunability while preserving good electrical properties, a crucial requirement for high-efficiency thermal energy conversion. Understanding the impact of the pore arrangement on thermal transport is pivotal to engineering realistic materials, where pore disorder is unavoidable. Although there has been considerable progress in modeling thermal size effects in nanostructures, it has remained a challenge to screen such materials over a large phase space due to the slow simulation time required for accurate results. We use density functional theory in connection with the Boltzmann transport equation to perform calculations of thermal conductivity in disordered porous materials. By leveraging graph theory and regression analysis, we identify the set of pores representing the phonon bottleneck and obtain a descriptor for thermal transport, based on the sum of the pore-pore distances between such pores. This approach provides a simple tool to estimate phonon suppression in realistic porous materials for thermoelectric applications and enhances our understanding of heat transport in disordered materials.

  7. Fundamental differences between wildlife and biomedical research.

    PubMed

    Sikes, Robert S; Paul, Ellen

    2013-01-01

    Non-human animals have starred in countless productions of biological research. Whether they play the lead or supporting role depends on the nature of the investigation. These differences in the roles of animals affect nearly every facet of animal involvement, including the choice of species, the sample size, the source of individuals, and the settings in which the animals are used. These roles establish different baselines for animal use that require substantially different ethical considerations. Efficient and appropriate oversight of wildlife research benefits the animals and their investigators. Toward that end, Institutional Animal Care and Use Committees (IACUCs) must appreciate the profound differences between biomedical and wildlife research and recognize the value of the state and federal permitting processes required for wildlife studies. These processes assure us that potential impacts beyond the level of the individual are minimal or are justified. Most importantly, IACUCs must recognize that they, and their investigators, have an obligation to use appropriate guidelines for evaluating wildlife research.

  8. Clustering Categorical Data Using Community Detection Techniques

    PubMed Central

    2017-01-01

    With the advent of the k-modes algorithm, the toolbox for clustering categorical data has an efficient tool that scales linearly in the number of data items. However, random initialization of cluster centers in k-modes makes it hard to reach a good clustering without resorting to many trials. Recently proposed methods for better initialization are deterministic and reduce the clustering cost considerably. A variety of initialization methods differ in how the heuristic chooses the set of initial centers. In this paper, we address the clustering problem for categorical data from the perspective of community detection. Instead of initializing k modes and running several iterations, our scheme, CD-Clustering, builds an unweighted graph and detects highly cohesive groups of nodes using a fast community detection technique. The top-k detected communities by size will define the k modes. Evaluation on ten real categorical datasets shows that our method outperforms the existing initialization methods for k-modes in terms of accuracy, precision, and recall in most of the cases. PMID:29430249
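
    A rough sketch of the general idea follows, assuming a simple attribute-agreement graph and networkx's greedy modularity community detection; the graph construction, agreement threshold, and toy data are illustrative assumptions, not the authors' exact CD-Clustering procedure.

      import numpy as np
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      def initial_modes(data, k):
          """Pick k initial modes: one per-attribute mode for each of the k largest communities."""
          n = len(data)
          g = nx.Graph()
          g.add_nodes_from(range(n))
          for i in range(n):
              for j in range(i + 1, n):
                  # connect items that agree on a majority of categorical attributes
                  if np.mean(data[i] == data[j]) > 0.5:
                      g.add_edge(i, j)
          communities = sorted(greedy_modularity_communities(g), key=len, reverse=True)
          modes = []
          for comm in communities[:k]:
              members = data[list(comm)]
              modes.append([np.bincount(col).argmax() for col in members.T])
          return np.array(modes)

      rng = np.random.default_rng(2)
      toy = rng.integers(0, 3, size=(60, 5))   # 60 items, 5 categorical attributes
      print(initial_modes(toy, k=3))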

  9. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
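
    For a simplified variant of the problem, a chain of tasks in series whose summed response time is to be minimized, the dynamic program below illustrates the O(np²) flavor of such assignments. The response-time table and the objective are illustrative assumptions; the cited work treats richer precedence structures and throughput constraints.

      def assign_processors(t, p):
          """t[i][q-1] = response time of task i on q processors; minimize the total."""
          n = len(t)
          INF = float("inf")
          best = [[INF] * (p + 1) for _ in range(n + 1)]   # best[i][q]: first i tasks, q processors
          best[0][0] = 0.0
          pick = [[0] * (p + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              for q in range(1, p + 1):
                  for k in range(1, q + 1):                # processors given to task i
                      cand = best[i - 1][q - k] + t[i - 1][k - 1]
                      if cand < best[i][q]:
                          best[i][q] = cand
                          pick[i][q] = k
          # recover an assignment that uses all p processors
          alloc, q = [], p
          for i in range(n, 0, -1):
              alloc.append(pick[i][q])
              q -= pick[i][q]
          return best[n][p], alloc[::-1]

      # toy measurements: 3 tasks, response times on 1..4 processors
      times = [[8, 5, 4, 3.5], [6, 3.5, 3, 2.8], [9, 5, 4.2, 4]]
      print(assign_processors(times, p=4))   # -> (19.0, [1, 1, 2])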

  10. Nonsurgical Medical Penile Girth Augmentation: Experience-Based Recommendations.

    PubMed

    Oates, Jayson; Sharp, Gemma

    2017-10-01

    Penile augmentation is increasingly sought by men who are dissatisfied with the size and/or appearance of their penis. However, augmentation procedures are still considered to be highly controversial with no standardized recommendations reported in the medical literature and limited outcome data. Nevertheless, these procedures continue to be performed in increasing numbers in private settings. Therefore, there is a need for safe, effective, and minimally invasive procedures to be developed, evaluated, and reported in the research literature. In this article, we focus particularly on girth enhancement procedures rather than lengthening procedures as penile girth appears to be particularly important for sexual satisfaction. We discuss the advantages and disadvantages of the common techniques to date, with a focus on the minimally invasive injectable girth augmentation techniques. Based on considerable operative experience, we offer our own suggestions for patient screening, technique selection, and perioperative care. © 2017 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  11. Forensic analysis of mtDNA haplotypes from two rural communities in Haiti reflects their population history.

    PubMed

    Wilson, Jamie L; Saint-Louis, Vertus; Auguste, Jensen O; Jackson, Bruce A

    2012-11-01

    Very little genetic data exist on Haitians, an estimated 1.2 million of whom, not including illegal immigrants, reside in the United States. The absence of genetic data on a population of this size reduces the discriminatory power of criminal and missing-person DNA databases in the United States and Caribbean. We present a forensic population study that provides the first genetic data set for Haiti. This study uses hypervariable segment one (HVS-1) mitochondrial DNA (mtDNA) nucleotide sequences from 291 subjects primarily from rural areas of northern and southern Haiti, where admixture would be minimal. Our results showed that the African maternal genetic component of Haitians had slightly higher West-Central African admixture than African-Americans and Dominicans, but considerably less than Afro-Brazilians. These results lay the foundation for further forensic genetics studies in the Haitian population and serve as a model for forensic mtDNA identification of individuals in other isolated or rural communities. © 2012 American Academy of Forensic Sciences.

  12. Correlations and flow of information between the New York Times and stock markets

    NASA Astrophysics Data System (ADS)

    García-Medina, Andrés; Sandoval, Leonidas; Bañuelos, Efraín Urrutia; Martínez-Argüello, A. M.

    2018-07-01

    We use Random Matrix Theory (RMT) and information theory to analyze the correlations and flow of information between 64,939 news items from The New York Times and 40 world financial indices during 10 months in the period 2015-2016. The news items are quantified and transformed into daily polarity time series using tools from sentiment analysis. The results show that a common factor influences the world indices and news, which even share the same dynamics. Furthermore, the global correlation structure is found to be preserved when adding white noise, which indicates that the correlations are not due to sample-size effects. Likewise, we find a considerable amount of information flowing from news to world indices at specific delays. This is of practical interest for trading purposes. Our results suggest a deep relationship between news and world indices, and show a situation where news drive world market movements, giving new evidence to support behavioral finance as the current economic paradigm.

  13. 22.7% efficient PERL silicon solar cell module with a textured front surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, J.; Wang, A.; Campbell, P.

    1997-12-31

    This paper describes a solar cell module efficiency of 22.7% independently measured at Sandia National Laboratories. This is the highest ever confirmed efficiency for a photovoltaic module of this size achieved by cells made from any material. This 778-cm² module used 40 large-area double-layer antireflection-coated PERL (passivated emitter, rear locally-diffused) silicon cells of average efficiency of 23.1%. A textured front module surface considerably improves the module efficiency. Also reported is an independently confirmed efficiency of 23.7% for a 21.6-cm² cell of the type used in the module. Using these PERL cells in the 1996 World Solar Challenge solar car race from Darwin to Adelaide across Australia, Honda's Dream and Aisin Seiki's Aisol III were placed first and third, respectively. Honda also set a new record by reaching Adelaide in four days with an average speed of 90 km/h over the 3010 km course.

  14. From Cleanroom to Desktop: Emerging Micro-Nanofabrication Technology for Biomedical Applications

    PubMed Central

    Wang, Wei

    2010-01-01

    This review is motivated by the growing demand for low-cost, easy-to-use, compact-size yet powerful micro-nanofabrication technology to address emerging challenges of fundamental biology and translational medicine in regular laboratory settings. Recent advancements in the field benefit considerably from rapidly expanding material selections, ranging from inorganics to organics and from nanoparticles to self-assembled molecules. Meanwhile, a great number of novel methodologies, employing off-the-shelf consumer electronics, intriguing interfacial phenomena, bottom-up self-assembly principles, etc., have been implemented to transition micro-nanofabrication from a cleanroom environment to a desktop setup. Furthermore, the latest application of micro-nanofabrication to emerging biomedical research will be presented in detail, which includes point-of-care diagnostics, on-chip cell culture, as well as bio-manipulation. While significant progress has been made in the rapidly growing field, both apparent and unrevealed roadblocks will need to be addressed in the future. We conclude this review by offering our perspectives on the current technical challenges and future research opportunities. PMID:21161384

  15. Theoretical study of mixing in liquid clouds – Part 1: Classical concepts

    DOE PAGES

    Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...

    2016-07-28

    The present study considers the final stages of in-cloud mixing in the framework of the classical concepts of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on adiabatic considerations. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplet size distribution hold only during the primary stage of mixing. Subsequent random mixing between already mixed parcels and undiluted cloud parcels breaks these relationships. However, during extreme inhomogeneous mixing the functional relationships between the microphysical parameters hold both for primary and subsequent mixing. The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for the specific set of in situ measurements the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.

  16. Complete mitochondrial genome of the brown alga Sargassum fusiforme (Sargassaceae, Phaeophyceae): genome architecture and taxonomic consideration.

    PubMed

    Liu, Feng; Pang, Shaojun; Luo, Minbo

    2016-01-01

    Sargassum fusiforme (Harvey) Setchell (=Hizikia fusiformis (Harvey) Okamura) is one of the most important economic seaweeds for mariculture in China. In this study, we present the complete mitochondrial genome of S. fusiforme. The genome is 34,696 bp in length with circular organization, encoding the standard set of three ribosomal RNA (rRNA) genes, 25 transfer RNA (tRNA) genes, 35 protein-coding genes, and two conserved open reading frames (ORFs). Its total AT content is 62.47%, lower than that of other brown algae except Pylaiella littoralis. The mitogenome carries 1571 bp of intergenic region constituting 4.53% of the genome, and 13 pairs of overlapping genes with overlap sizes ranging from 1 to 90 bp. The phylogenetic analyses based on 35 protein-coding genes reveal that S. fusiforme has a closer evolutionary relationship with Sargassum muticum than Sargassum horneri, indicating that Hizikia is not a distinct evolutionary entity and should be reduced to synonymy with Sargassum.

  17. Model reduction of multiscale chemical langevin equations: a numerical case study.

    PubMed

    Sotiropoulos, Vassilios; Contou-Carrere, Marie-Nathalie; Daoutidis, Prodromos; Kaznessis, Yiannis N

    2009-01-01

    Two very important characteristics of biological reaction networks need to be considered carefully when modeling these systems. First, models must account for the inherent probabilistic nature of systems far from the thermodynamic limit. Often, biological systems cannot be modeled with traditional continuous-deterministic models. Second, models must take into consideration the disparate spectrum of time scales observed in biological phenomena, such as slow transcription events and fast dimerization reactions. In the last decade, significant efforts have been expended on the development of stochastic chemical kinetics models to capture the dynamics of biomolecular systems, and on the development of robust multiscale algorithms, able to handle stiffness. In this paper, the focus is on the dynamics of reaction sets governed by stiff chemical Langevin equations, i.e., stiff stochastic differential equations. These are particularly challenging systems to model, requiring prohibitively small integration step sizes. We describe and illustrate the application of a semianalytical reduction framework for chemical Langevin equations that results in significant gains in computational cost.
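
    To illustrate why stiffness is the central difficulty, the sketch below performs a toy Euler-Maruyama integration of a chemical Langevin equation with one fast reversible reaction and one slow reaction; the rate constants, molecule counts, and step size are arbitrary assumptions, and the paper's semianalytical reduction framework is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)

      def cle_step(x, dt):
          """One Euler-Maruyama step for A <-> B (fast) and B -> C (slow); x = counts of A, B, C."""
          a = np.array([1e3 * x[0],        # A -> B, fast
                        1e3 * x[1],        # B -> A, fast
                        1.0 * x[1]])       # B -> C, slow
          s = np.array([[-1, 1, 0],        # stoichiometry of A -> B
                        [1, -1, 0],        # B -> A
                        [0, -1, 1]])       # B -> C
          dw = rng.normal(0.0, np.sqrt(dt), size=3)
          return x + (a * dt) @ s + (np.sqrt(a) * dw) @ s

      x = np.array([1000.0, 0.0, 0.0])     # initial counts of A, B, C
      dt = 1e-5                            # the fast reactions force a tiny step size
      for _ in range(20000):               # integrate to t = 0.2
          x = np.maximum(cle_step(x, dt), 0.0)
      print(x)                             # A and B near equilibrium, some C produced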

  18. Noble Gas Inventory of Micrometeorites Collected at the Transantarctic Mountains (TAM) and Indications for Their Provenance

    NASA Technical Reports Server (NTRS)

    Ott, U.; Baecker, B.; Folco, L.; Cordier, C.

    2016-01-01

    A variety of processes have been considered as possibly contributing the volatiles, including noble gases, to the atmospheres of the terrestrial planets (e.g., [1-3]). Special consideration has been given to the concept of accretion of volatile-rich materials by the forming planets. This might include infalling planetesimals and dust, and could include material from the outer asteroid belt, as well as cometary material from the outer solar system. Currently, the dominant source of extraterrestrial material accreted by the Earth is represented by micrometeorites (MMs) with sizes mostly in the 100-300 micron range [3, 4]. Their role has been assessed by [3], who conclude that accretion of early micrometeorites played a major role in the formation of the terrestrial atmosphere and oceans. We have therefore set out to investigate in more detail the inventory of noble gases in MMs. Here we summarize some of our results obtained on MMs collected in micrometeorite traps of the Transantarctic Mountains [5].

  19. Possible consequences of severe accidents at the Lubiatowo site, Poland

    NASA Astrophysics Data System (ADS)

    Seibert, Petra; Philipp, Anne; Hofman, Radek; Gufler, Klaus; Sholly, Steven

    2014-05-01

    The construction of a nuclear power plant is under consideration in Poland. One of the sites under discussion is near Lubiatowo, located on the coast of the Baltic Sea northwest of Gdansk. An assessment of possible environmental consequences is carried out for 88 real meteorological cases with the Lagrangian particle dispersion model FLEXPART. Based on literature research, three reactor designs (ABWR, EPR, AP 1000) were identified as being under discussion in Poland. For each of the designs, a set of accident scenarios was evaluated and two source terms per reactor design were selected for analysis. One of the selected source terms was a relatively large release, while the second one was a severe accident with an intact containment. Considered endpoints of the calculations are ground contamination with Cs-137 and time-integrated concentrations of I-131 in air, as well as committed doses. They are evaluated on a grid of ca. 3 km mesh size covering eastern Central Europe.

  20. Independent data monitoring committees: Preparing a path for the future

    PubMed Central

    Hess, Connie N.; Roe, Matthew T.; Gibson, C. Michael; Temple, Robert J.; Pencina, Michael J.; Zarin, Deborah A.; Anstrom, Kevin J.; Alexander, John H.; Sherman, Rachel E.; Fiedorek, Fred T.; Mahaffey, Kenneth W.; Lee, Kerry L.; Chow, Shein-Chung; Armstrong, Paul W.; Califf, Robert M.

    2014-01-01

    Independent data monitoring committees (IDMCs) were introduced to monitor patient safety and study conduct in randomized clinical trials (RCTs), but certain challenges regarding the utilization of IDMCs have developed. First, the roles and responsibilities of IDMCs are expanding, perhaps due to increasing trial complexity and heterogeneity regarding medical, ethical, legal, regulatory, and financial issues. Second, no standard for IDMC operating procedures exists, and there is uncertainty about who should determine standards and whether standards should vary with trial size and design. Third, considerable variability in communication pathways exist across IDMC interfaces with regulatory agencies, academic coordinating centers, and sponsors. Finally, there has been a substantial increase in the number of RCTs using IDMCs, yet there is no set of qualifications to help guide the training and development of the next generation of IDMC members. Recently, an expert panel of representatives from government, industry, and academia assembled at the Duke Clinical Research Institute to address these challenges and to develop recommendations for the future utilization of IDMCs in RCTs. PMID:25066551
