Sample records for factoring large numbers

  1. The factorization of large composite numbers on the MPP

    NASA Technical Reports Server (NTRS)

    Mckurdy, Kathy J.; Wunderlich, Marvin C.

    1987-01-01

    The continued fraction method for factoring large integers (CFRAC) was an ideal algorithm to be implemented on a massively parallel computer such as the Massively Parallel Processor (MPP). After much effort, the first 60-digit number was factored on the MPP using about 6 1/2 hours of array time. Although this result added about 10 digits to the size of number that could be factored using CFRAC on a serial machine, it was already badly beaten by the implementation of Davis and Holdridge on the CRAY-1 using the quadratic sieve, an algorithm which is clearly superior to CFRAC for large numbers. An algorithm is illustrated which is ideally suited to the single instruction multiple data (SIMD) massively parallel architecture, and some of the modifications which were needed to make the parallel implementation effective and efficient are described.
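
    At its core, CFRAC expands √N as a continued fraction; the convergent numerators A_n satisfy A_{n-1}^2 ≡ (−1)^n Q_n (mod N) with residues Q_n ≤ 2√N, and combining pairs whose Q_n factor over a small factor base yields a congruence of squares and hence a factor of N. A minimal Python sketch of the expansion step (illustrative only; the MPP implementation parallelized the trial division of the residues across the processor array):

```python
import math

def cfrac_pairs(N, count):
    """Continued-fraction expansion of sqrt(N) for a non-square N, returning
    pairs (A, v) with A^2 ≡ v (mod N) and |v| <= 2*sqrt(N).  Recurrences:
      P_{n+1} = a_n*Q_n - P_n,   Q_{n+1} = (N - P_{n+1}^2) / Q_n,
      a_{n+1} = (a_0 + P_{n+1}) // Q_{n+1},
    with convergent numerators A_n = a_n*A_{n-1} + A_{n-2} (mod N)."""
    a0 = math.isqrt(N)
    P, Q, a = 0, 1, a0
    A_prev, A_cur = 1, a0 % N          # A_{-1}, A_0
    pairs = []
    for n in range(1, count + 1):
        P = a * Q - P                  # P_n
        Q = (N - P * P) // Q           # Q_n (division is exact)
        a = (a0 + P) // Q              # a_n
        pairs.append((A_cur, (-1) ** n * Q))   # A_{n-1}^2 ≡ (-1)^n * Q_n (mod N)
        A_prev, A_cur = A_cur, (a * A_cur + A_prev) % N
    return pairs
```

    CFRAC then sieves these residues for smooth values; on a SIMD machine every processor can trial-divide a different residue in lockstep, which is what makes the algorithm such a natural fit for the MPP.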

  2. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  3. Estimating large numbers.

    PubMed

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-07-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions predict a log-to-linear shift: People will either place numbers linearly or will place numbers according to a compressive logarithmic or power-shaped function (Barth & Paladino; Siegler & Opfer). While about half of people did estimate numbers linearly over this range, nearly all the remaining participants placed 1 million approximately halfway between 1 thousand and 1 billion, but placed numbers linearly across each half, as though they believed that the number words "thousand, million, billion, trillion" constitute a uniformly spaced count list. Participants in this group also tended to be optimistic in evaluations of largely ineffective political strategies, relative to linear number-line placers. The results indicate that the surface structure of number words can heavily influence processes for dealing with numbers in this range, and it can amplify the possibility that analogous surface regularities are partially responsible for parallel phenomena in children. In addition, these results have direct implications for lawmakers and scientists hoping to communicate effectively with the public. Copyright © 2013 Cognitive Science Society, Inc.
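
    The two placement models the study contrasts can be stated in a few lines. A sketch (the endpoints 1 thousand and 1 billion follow the abstract; the exact stimuli are an assumption):

```python
def linear_place(x, lo=1e3, hi=1e9):
    """Proportional (linear) placement of x on a 0-1 number line from lo to hi."""
    return (x - lo) / (hi - lo)

def count_list_place(x):
    """Segmented placement described in the abstract: 'thousand' at 0,
    'million' at the midpoint, 'billion' at 1, linear within each half --
    as if the number words formed a uniformly spaced count list."""
    if x <= 1e6:
        return 0.5 * (x - 1e3) / (1e6 - 1e3)
    return 0.5 + 0.5 * (x - 1e6) / (1e9 - 1e6)
```

    A linear placer puts 1 million at about a tenth of a percent of the way along the line; a count-list placer puts it at the halfway mark, which is the signature behavior the study detects.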

  4. High Resolution Mapping of Genetic Factors Affecting Abdominal Bristle Number in Drosophila Melanogaster

    PubMed Central

    Long, A. D.; Mullaney, S. L.; Reid, L. A.; Fry, J. D.; Langley, C. H.; Mackay, T. F. C.

    1995-01-01

    Factors responsible for selection response for abdominal bristle number and correlated responses in sternopleural bristle number were mapped to the X and third chromosome of Drosophila melanogaster. Lines divergent for high and low abdominal bristle number were created by 25 generations of artificial selection from a large base population, with an intensity of 25 individuals of each sex selected from 100 individuals of each sex scored per generation. Isogenic chromosome substitution lines in which the high (H) X or third chromosome were placed in an isogenic low (L) background were derived from the selection lines, and from the 93 recombinant isogenic (RI) HL X and 67 RI chromosome 3 lines constructed from them. Highly polymorphic neutral roo transposable elements were hybridized in situ to the polytene chromosomes of the RI lines to create a set of cytogenetic markers. These techniques yielded a dense map with an average spacing of 4 cM between informative markers. Factors affecting bristle number, and relative viability of the chromosome 3 RI lines, were mapped using a multiple regression interval mapping approach, conditioning on all markers ≥10 cM from the tested interval. Two factors with large effects on abdominal bristle number were mapped on the X chromosome and five factors on the third chromosome. One factor with a large effect on sternopleural bristle number was mapped to the X and two were mapped to the third chromosome; all factors with sternopleural effects corresponded to those with effects on abdominal bristle number. Two of the chromosome 3 factors with large effects on abdominal bristle number were also associated with reduced viability. Significant sex-specific effects and epistatic interactions between mapped factors of the same order of magnitude as the additive effects were observed.
All factors mapped to the approximate positions of likely candidate loci (ASC, bb, emc, h, mab, Dl and E(spl)), previously characterized by mutations with large…

  5. Theoretical and experimental study of a new algorithm for factoring numbers

    NASA Astrophysics Data System (ADS)

    Tamma, Vincenzo

    The security of codes, for example in credit card and government information, relies on the fact that the factorization of a large integer N is a rather costly process on a classical digital computer. Such security is endangered by Shor's algorithm, which employs entangled quantum systems to find, with a polynomial number of resources, the period of a function which is connected with the factors of N. We can surely expect a possible future realization of such a method for large numbers, but so far the period of Shor's function has only been computed for the number 15. Inspired by Shor's idea, our work aims at methods of factorization based on the periodicity measurement of a given continuous periodic "factoring function" which is physically implementable using an analogue computer. In particular, we have focused on both the theoretical and the experimental analysis of Gauss sums with continuous arguments, leading to a new factorization algorithm. The procedure allows, for the first time, the factoring of several numbers by measuring the periodicity of Gauss sums performing first-order "factoring" interference processes. We experimentally implemented this idea by exploiting polychromatic optical interference in the visible range with a multi-path interferometer, and achieved the factorization of seven-digit numbers. The physical principle behind this "factoring" interference procedure can potentially be exploited also on entangled systems, such as multi-photon entangled states, in order to achieve a polynomial scaling in the number of resources.
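
    The Gauss-sum criterion is easy to state digitally: for a trial divisor ℓ of N, the normalized sum over quadratic phases has unit magnitude exactly when ℓ divides N, and interferes destructively otherwise. A toy numerical analogue of the optical interference experiment (the 0.9 threshold and the restriction to odd trial divisors are my simplifications; truncated sums in practice must also contend with so-called ghost factors):

```python
import cmath
import math

def gauss_sum(N, ell):
    """Normalized complete Gauss sum (1/ell) * sum_{m=0}^{ell-1} exp(2*pi*i*m^2*N/ell).
    If ell divides N every phase is a multiple of 2*pi and |A| = 1; otherwise
    the quadratic phases interfere destructively and |A| stays well below 1."""
    return sum(cmath.exp(2j * cmath.pi * (m * m * N) / ell)
               for m in range(ell)) / ell

def trial_factors(N, threshold=0.9):
    """Flag odd trial divisors whose interference signal is near 1 -- a toy
    digital stand-in for the optical measurement in the abstract."""
    return [ell for ell in range(3, math.isqrt(N) + 1, 2)
            if abs(gauss_sum(N, ell)) > threshold]
```

    For example, trial_factors(105) returns [3, 5, 7]: the factors produce constructive interference while non-factors such as 9 give a signal of about 0.58.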

  6. Large number discrimination by mosquitofish.

    PubMed

    Agrillo, Christian; Piffer, Laura; Bisazza, Angelo

    2010-12-22

    Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities but that they preferentially use cumulative surface area as a proxy of number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.

  7. Generation of large numbers of dendritic cells from mouse bone marrow cultures supplemented with granulocyte/macrophage colony-stimulating factor

    PubMed Central

    1992-01-01

    Antigen-presenting, major histocompatibility complex (MHC) class II-rich dendritic cells are known to arise from bone marrow. However, marrow lacks mature dendritic cells, and substantial numbers of proliferating less-mature cells have yet to be identified. The methodology for inducing dendritic cell growth that was recently described for mouse blood now has been modified to MHC class II-negative precursors in marrow. A key step is to remove the majority of nonadherent, newly formed granulocytes by gentle washes during the first 2-4 d of culture. This leaves behind proliferating clusters that are loosely attached to a more firmly adherent "stroma." At days 4-6 the clusters can be dislodged, isolated by 1-g sedimentation, and upon reculture, large numbers of dendritic cells are released. The latter are readily identified on the basis of their distinct cell shape, ultrastructure, and repertoire of antigens, as detected with a panel of monoclonal antibodies. The dendritic cells express high levels of MHC class II products and act as powerful accessory cells for initiating the mixed leukocyte reaction. Neither the clusters nor mature dendritic cells are generated if macrophage colony-stimulating factor rather than granulocyte/macrophage colony-stimulating factor (GM-CSF) is applied. Therefore, GM-CSF generates all three lineages of myeloid cells (granulocytes, macrophages, and dendritic cells). Since > 5 x 10^6 dendritic cells develop in 1 wk from precursors within the large hind limb bones of a single animal, marrow progenitors can act as a major source of dendritic cells. This feature should prove useful for future molecular and clinical studies of this otherwise trace cell type. PMID:1460426

  8. The Intuitiveness of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Lem, Stephanie

    2015-01-01

    In this paper two studies are reported in which two contrasting claims concerning the intuitiveness of the law of large numbers are investigated. While Sedlmeier and Gigerenzer ("J Behav Decis Mak" 10:33-51, 1997) claim that people have an intuition that conforms to the law of large numbers, but that they can only employ this intuition…

  9. Forecasting distribution of numbers of large fires

    Treesearch

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  10. Reading the World through Very Large Numbers

    ERIC Educational Resources Information Center

    Greer, Brian; Mukhopadhyay, Swapna

    2010-01-01

    One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…

  11. Determining the Number of Factors in P-Technique Factor Analysis

    ERIC Educational Resources Information Center

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still question of how these methods perform in within-subjects P-technique factor analysis. A…

  12. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  13. Holographic turbulence in a large number of dimensions

    NASA Astrophysics Data System (ADS)

    Rozali, Moshe; Sabag, Evyatar; Yarom, Amos

    2018-04-01

    We consider relativistic hydrodynamics in the limit where the number of spatial dimensions is very large. We show that under certain restrictions, the resulting equations of motion simplify significantly. Holographic theories in a large number of dimensions satisfy the aforementioned restrictions and their dynamics are captured by hydrodynamics with a naturally truncated derivative expansion. Using analytic and numerical techniques we analyze two and three-dimensional turbulent flow of such fluids in various regimes and its relation to geometric data.

  14. Complex networks with large numbers of labelable attractors

    NASA Astrophysics Data System (ADS)

    Mi, Yuanyuan; Zhang, Lisheng; Huang, Xiaodong; Qian, Yu; Hu, Gang; Liao, Xuhong

    2011-09-01

    Information storage in many functional subsystems of the brain is regarded by theoretical neuroscientists to be related to attractors of neural networks. The number of attractors is large and each attractor can be temporarily represented or suppressed easily by corresponding external stimulus. In this letter, we discover that complex networks consisting of excitable nodes have similar fascinating properties of coexistence of large numbers of oscillatory attractors, most of which can be labeled with a few nodes. According to a simple labeling rule, different attractors can be identified and the number of labelable attractors can be predicted from the analysis of network topology. With the cues of the labeling association, these attractors can be conveniently retrieved or suppressed on purpose.

  15. Large numbers hypothesis. II - Electromagnetic radiation

    NASA Technical Reports Server (NTRS)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t to the 1/4, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  16. Stochastic Reconnection for Large Magnetic Prandtl Numbers

    NASA Astrophysics Data System (ADS)

    Jafari, Amir; Vishniac, Ethan T.; Kowal, Grzegorz; Lazarian, Alex

    2018-06-01

    We consider stochastic magnetic reconnection in high-β plasmas with large magnetic Prandtl numbers, Pr_m > 1. For large Pr_m, field line stochasticity is suppressed at very small scales, impeding diffusion. In addition, viscosity suppresses very small-scale differential motions and therefore also the local reconnection. Here we consider the effect of high magnetic Prandtl numbers on the global reconnection rate in a turbulent medium and provide a diffusion equation for the magnetic field lines considering both resistive and viscous dissipation. We find that the width of the outflow region is unaffected unless Pr_m is exponentially larger than the Reynolds number Re. The ejection velocity of matter from the reconnection region is also unaffected by viscosity unless Re ∼ 1. By these criteria the reconnection rate in typical astrophysical systems is almost independent of viscosity. This remains true for reconnection in quiet environments where current sheet instabilities drive reconnection. However, if Pr_m > 1, viscosity can suppress small-scale reconnection events near and below the Kolmogorov or viscous damping scale. This will produce a threshold for the suppression of large-scale reconnection by viscosity when Pr_m > √Re. In any case, for Pr_m > 1 this leads to a flattening of the magnetic fluctuation power spectrum, so that its spectral index is ∼ −4/3 for length scales between the viscous dissipation scale and eddies larger by roughly Pr_m^(3/2). Current numerical simulations are insensitive to this effect. We suggest that the dependence of reconnection on viscosity in these simulations may be due to insufficient resolution for the turbulent inertial range rather than a guide to the large Re limit.

  17. Lepton number violation in theories with a large number of standard model copies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-03-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since due to the low quantum gravity scale, black holes may induce TeV scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed with rates far beyond experimental reach.

  18. Fatal crashes involving large numbers of vehicles and weather.

    PubMed

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied because of the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
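
    The quoted multipliers are risk ratios: the share of fatal crashes involving many vehicles under a given weather condition, divided by the same share in good weather. A sketch with invented counts (not the actual FARS values):

```python
def risk_ratio(n_many_adverse, n_total_adverse, n_many_good, n_total_good):
    """Ratio of the proportion of many-vehicle fatal crashes in adverse
    weather to that proportion in good weather (counts are illustrative)."""
    return (n_many_adverse / n_total_adverse) / (n_many_good / n_total_good)
```

    With, say, 30 such crashes out of 10,000 fatal crashes in rain against 10 out of 10,000 in good weather, the ratio is 3, the kind of figure the abstract reports for rain.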

  19. Categories of Large Numbers in Line Estimation

    ERIC Educational Resources Information Center

    Landy, David; Charlesworth, Arthur; Ottmar, Erin

    2017-01-01

    How do people stretch their understanding of magnitude from the experiential range to the very large quantities and ranges important in science, geopolitics, and mathematics? This paper empirically evaluates how and whether people make use of numerical categories when estimating relative magnitudes of numbers across many orders of magnitude. We…

  20. Question Number Two: How Many Factors?

    ERIC Educational Resources Information Center

    Goodwyn, Fara

    2012-01-01

    Exploratory factor analysis involves five key decisions. The second decision, how many factors to retain, is the focus of the current paper. Extracting too many or too few factors often leads to devastating effects on study results. The advantages and disadvantages of the most effective and/or most utilized strategies to determine the number of…

  1. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

    One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
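
    The additive-quantum-number idea can be sketched in miniature: group proton and neutron Slater determinants by their total angular-momentum projection M, store only those two factors, and pair blocks with complementary M on the fly. The m labels and two-particle sectors below are invented for illustration; BIGSTICK's actual factorization scheme is far more elaborate:

```python
from itertools import combinations
from collections import defaultdict

def blocks(m_values, n_particles):
    """Group Slater determinants (occupation tuples) by the additive quantum
    number M = sum of single-particle m values -- the key to factorization."""
    groups = defaultdict(list)
    for occ in combinations(range(len(m_values)), n_particles):
        groups[sum(m_values[i] for i in occ)].append(occ)
    return groups

def factorized_dimension(proton_blocks, neutron_blocks, M_total=0):
    """Dimension of the full basis at fixed total M, computed from the two
    factors alone; product states are never stored, only counted or
    recreated on the fly when matrix elements are needed."""
    return sum(len(p) * len(neutron_blocks.get(M_total - M, []))
               for M, p in proton_blocks.items())
```

    For four orbitals with m = −3, −1, 1, 3 and two protons plus two neutrons, each species has only 6 determinants to store, yet the M = 0 space of dimension 8 is recovered without ever materializing the 36 product states.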

  2. Rotating thermal convection at very large Rayleigh numbers

    NASA Astrophysics Data System (ADS)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport at these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and a height of L = 2.24 m. The gas is heated from below and cooled from above and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  3. On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai

    2007-01-01

    In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…

  4. Factors associated with the number of calves born to Norwegian beef suckler cows.

    PubMed

    Holmøy, Ingrid H; Nelson, Sindre T; Martin, Adam D; Nødtvedt, Ane

    2017-05-01

    A retrospective cohort study was performed to evaluate factors associated with the number of calves born to Norwegian beef suckler cows. Production data from 20,541 cows in 2210 herds slaughtered over a three-year period (1st of January 2010 to 23rd of January 2013) were extracted from the national beef cattle registry. This study's inclusion criteria were met for 16,917 cows (from 1858 herds) which gave birth to 50,578 calves. The median number of calves born per cow was 2 (min 1, max 18). Two multilevel Poisson regression models with herd random effects showed that early maturing breeds (Hereford and Aberdeen Angus) gave birth to more calves than late maturing breeds (Charolais and Limousin) in four out of five areas of Norway. The significant breed-region interaction indicated that the coastal South East region of Norway, which has a relatively long growing season and gentle topography, yielded the highest number of calves born for all but one breed (Simmental). Cows that needed assistance or experienced dystocia at their first calving produced fewer calves than those that did not: incidence rate ratio 0.87 (95% confidence interval (CI) 0.84-0.91) for assistance and 0.70 (95% CI: 0.66-0.75) for dystocia, respectively. Cows in larger herds (>30 cows) produced 11% more calves in their lifetime compared to cows in smaller herds (≤30 cows) (P<0.001). The herd random effects were highly significant, suggesting that unmeasured factors at the herd level were responsible for a large amount of the unexplained variation in the number of calves born. The large inter-herd variation indicates systematic differences in herd level factors influencing the number of calves born to each cow. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Fault-tolerant control of large space structures using the stable factorization approach

    NASA Technical Reports Server (NTRS)

    Razavi, H. C.; Mehra, R. K.; Vidyasagar, M.

    1986-01-01

    Large space structures are characterized by the following features: they are in general infinite-dimensional systems, and have large numbers of undamped or lightly damped poles. Any attempt to apply linear control theory to large space structures must therefore take into account these features. Phase I consisted of an attempt to apply the recently developed Stable Factorization (SF) design philosophy to problems of large space structures, with particular attention to the aspects of robustness and fault tolerance. The final report on the Phase I effort consists of four sections, each devoted to one task. The first three sections report theoretical results, while the last consists of a design example. Significant results were obtained in all four tasks of the project. More specifically, an innovative approach to order reduction was obtained, stabilizing controller structures for plants with an infinite number of unstable poles were determined under some conditions, conditions for simultaneous stabilizability of an infinite number of plants were explored, and a fault tolerance controller design that stabilizes a flexible structure model was obtained which is robust against one failure condition.

  6. Large number limit of multifield inflation

    NASA Astrophysics Data System (ADS)

    Guo, Zhong-Kai

    2017-12-01

    We compute the tensor and scalar spectral indices n_t and n_s, the tensor-to-scalar ratio r, and the consistency relation n_t/r in general monomial multifield slow-roll inflation models with potentials V ~ Σ_i λ_i |ϕ_i|^(p_i). The general models give a novel relation: n_t, n_s and n_t/r are all proportional to the logarithm of the number of fields N_f when N_f becomes extremely large, with the order of magnitude around O(10^40). An upper bound N_f ≲ N_* e^(Z N_*) is given by requiring the slow-variation parameter to be small enough, where N_* is the e-folding number and Z is a function of the distributions of λ_i and p_i. Besides, n_t/r differs from the single-field result −1/8 with substantial probability except for a few very special cases. Finally, we derive theoretical bounds r > 2/N_* (r ≳ 0.03) and for n_t, which can be tested by observation in the near future.

  7. Viscous instabilities in the q-vortex at large swirl numbers

    NASA Astrophysics Data System (ADS)

    Fabre, David; Jacquin, Laurent

    2002-11-01

    This communication deals with the temporal stability of the q-vortex trailing line vortex model. We describe a family of viscous instabilities existing in a range of parameters which is usually assumed to be stable, namely large swirl parameters (q > 1.5) and large Reynolds numbers. These instabilities affect negative azimuthal wavenumbers (m < 0) and take the form of centre-modes (i.e. with a structure concentrated along the vortex centerline). They are related to a family of viscous modes described by Stewartson, Ng & Brown (1988) in swirling Poiseuille flow, and are the temporal counterparts of weakly amplified spatial modes recently computed by Olendraru & Sellier (2002). These instabilities are studied numerically using an original and highly accurate Chebyshev collocation method, which allows a mapping of the unstable regions up to Re ≈ 10^6 and q ≈ 7. Our results indicate that in the limit of very large Reynolds numbers, trailing vortices are affected by this kind of instability whatever the value of the swirl number.

  8. Using Horn's Parallel Analysis Method in Exploratory Factor Analysis for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Çokluk, Ömay; Koçak, Duygu

    2016-01-01

    In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
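    The parallel-analysis procedure compared in this study can be sketched compactly. The following is a minimal illustration, not the authors' exact protocol: it retains factors whose observed correlation-matrix eigenvalues exceed the mean eigenvalues of random normal data of the same shape (the iteration count and retention criterion are illustrative assumptions).

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis (sketch): keep the factors whose sample
    eigenvalues exceed the mean eigenvalues obtained from random normal
    data of the same shape as the observed data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues over n_iter random data sets of the same shape.
    rand_mean = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_mean += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    rand_mean /= n_iter
    return int(np.sum(obs > rand_mean))
```

    On data generated from two strong latent factors, this sketch recovers two factors, whereas an eigenvalue-greater-than-one rule can over- or under-extract on the same data.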

  9. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.

  10. Forecasting distribution of numbers of large fires

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, together with satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1-acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
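    Probabilities of "at least 1, 2, 3, or 4 large fires" in a week could, for illustration, come from a simple count model. This is a hedged sketch assuming a Poisson distribution for the weekly number of large fires; the paper's actual statistical model is more elaborate and conditioned on the Fire Potential Index.

```python
import math

def prob_at_least(k, lam):
    """P(N >= k) when the weekly number of large fires N is modeled as
    Poisson with mean lam (an illustrative model assumption)."""
    p_below = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_below
```

    With lam = 0.5 expected large fires per week, the chance of at least one is 1 - e^(-0.5), about 39%, and the probabilities for at least 2, 3, 4 fires decrease from there.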

  11. All Numbers Are Not Equal: An Electrophysiological Investigation of Small and Large Number Representations

    ERIC Educational Resources Information Center

    Hyde, Daniel C.; Spelke, Elizabeth S.

    2009-01-01

    Behavioral and brain imaging research indicates that human infants, human adults, and many nonhuman animals represent large nonsymbolic numbers approximately, discriminating between sets with a ratio limit on accuracy. Some behavioral evidence, especially with human infants, suggests that these representations differ from representations of small…

  12. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  13. Small and Large Number Processing in Infants and Toddlers with Williams Syndrome

    ERIC Educational Resources Information Center

    Van Herwegen, Jo; Ansari, Daniel; Xu, Fei; Karmiloff-Smith, Annette

    2008-01-01

    Previous studies have suggested that typically developing 6-month-old infants are able to discriminate between small and large numerosities. However, discrimination between small numerosities in young infants is only possible when variables continuous with number (e.g. area or circumference) are confounded. In contrast, large number discrimination…

  14. Spreadsheet Simulation of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Boger, George

    2005-01-01

    If larger and larger samples are successively drawn from a population and a running average calculated after each sample has been drawn, the sequence of averages will converge to the mean, [mu], of the population. This remarkable fact, known as the law of large numbers, holds true if samples are drawn from a population of discrete or continuous…
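    The spreadsheet experiment described above can be reproduced in a few lines of code; the die-roll population (mean μ = 3.5) and sample size used here are illustrative assumptions.

```python
import random

def running_averages(n, seed=42):
    """Successively draw n fair die rolls (population mean 3.5) and
    record the running average after each draw."""
    rng = random.Random(seed)
    total = 0.0
    averages = []
    for i in range(1, n + 1):
        total += rng.randint(1, 6)
        averages.append(total / i)
    return averages
```

    Plotting the returned sequence shows large swings early on that settle toward 3.5 as the number of draws grows, which is exactly the convergence the law of large numbers guarantees.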

  15. Work-Related Musculoskeletal Symptoms and Job Factors Among Large-Herd Dairy Milkers.

    PubMed

    Douphrate, David I; Nonnenmann, Matthew W; Hagevoort, Robert; Gimeno Ruiz de Porras, David

    2016-01-01

    Dairy production in the United States is moving towards large-herd milking operations, resulting in an increase in task specialization and work demands. The objective of this project was to provide preliminary evidence of the association of a number of specific job conditions that commonly characterize large-herd parlor milking operations with work-related musculoskeletal symptoms (MSS). A modified version of the Standardized Nordic Questionnaire was administered to assess MSS prevalence among 450 US large-herd parlor workers. Worker demographics and MSS prevalences were generated. Prevalence ratios were also generated to determine associations of a number of specific job conditions that commonly characterize large-herd parlor milking operations with work-related MSS. Work-related MSS are prevalent among large-herd parlor workers, since nearly 80% report 12-month prevalences of one or more symptoms, which are primarily located in the upper extremities, specifically shoulders and wrist/hand. Specific large-herd milking parlor job conditions are associated with MSS in multiple body regions, including performing the same task repeatedly, insufficient rest breaks, working when injured, static postures, adverse environmental conditions, and reaching overhead. These findings support the need for administrative and engineering solutions aimed at reducing exposure to job risk factors for work-related MSS among large-herd parlor workers.

  16. Numbers Defy the Law of Large Numbers

    ERIC Educational Resources Information Center

    Falk, Ruma; Lann, Avital Lavie

    2015-01-01

    As the number of independent tosses of a fair coin grows, the rates of heads and tails tend to equality. This is misinterpreted by many students as being true also for the absolute numbers of the two outcomes, which, conversely, depart unboundedly from each other in the process. Eradicating that misconception, as by coin-tossing experiments,…
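    A short simulation makes both points at once: averaged over many runs, the gap between the heads *rate* and 1/2 shrinks as n grows, while the gap between the absolute *counts* of heads and tails grows (on the order of √n). All parameters here are illustrative.

```python
import random

def average_gaps(n, trials=200, seed=1):
    """Toss a fair coin n times per trial; return the mean absolute
    deviation of the heads rate from 0.5, and the mean absolute
    difference between the counts of heads and tails."""
    rng = random.Random(seed)
    rate_gap = 0.0
    count_gap = 0.0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(n))
        rate_gap += abs(heads / n - 0.5)
        count_gap += abs(2 * heads - n)  # |heads - tails|
    return rate_gap / trials, count_gap / trials
```

    Comparing n = 100 with n = 10,000 shows the rate gap falling by roughly a factor of ten while the count gap grows by roughly the same factor, which is the misconception-busting contrast described above.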

  17. [Dual process in large number estimation under uncertainty].

    PubMed

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies of number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process in large number estimation. First, we described an estimation process based on participants' verbal reports. The task, corresponding to a problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  18. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the scalar PDFs to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  19. Automatic trajectory measurement of large numbers of crowded objects

    NASA Astrophysics Data System (ADS)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain optimal segmentation results. For tracking, the cost matrix for assignment between consecutive frames is learned via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.

  20. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  1. Large-Eddy Simulation of the Flat-plate Turbulent Boundary Layer at High Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Inoue, Michio

    The near-wall, subgrid-scale (SGS) model [Chung and Pullin, "Large-eddy simulation and wall-modeling of turbulent channel flow", J. Fluid Mech. 631, 281-309 (2009)] is used to perform large-eddy simulations (LES) of the incompressible, developing, smooth-wall, flat-plate turbulent boundary layer. In this model, the stretched-vortex SGS closure is utilized in conjunction with a tailored near-wall model designed to incorporate anisotropic vorticity scales in the presence of the wall. The composite SGS-wall model is presently incorporated into a computer code suitable for the LES of developing flat-plate boundary layers. This is then used to study several aspects of zero- and adverse-pressure-gradient turbulent boundary layers. First, LES of the zero-pressure-gradient turbulent boundary layer are performed at Reynolds numbers Reθ, based on the free-stream velocity and the momentum thickness, in the range Reθ = 10^3-10^12. Results include the inverse skin-friction coefficient, 2/Cf, velocity profiles, the shape factor H, the Karman "constant", and the Coles wake factor as functions of Reθ. Comparisons with some direct numerical simulation (DNS) and experiment are made, including turbulent intensity data from atmospheric-layer measurements at Reθ = O(10^6). At extremely large Reθ, the empirical Coles-Fernholz relation for the skin-friction coefficient provides a reasonable representation of the LES predictions. While the present LES methodology cannot of itself probe the structure of the near-wall region, the present results show turbulence intensities that scale on the wall-friction velocity and on the Clauser length scale over almost all of the outer boundary layer. It is argued that the LES is suggestive of the asymptotic, infinite-Reynolds-number limit for the smooth-wall turbulent boundary layer, and different ways in which this limit can be approached are discussed. The maximum Reθ of the present simulations appears to be limited by machine

  2. Number of Family Members, a New Influencing Factor to Affect the Risk of Melamine-Associated Urinary Stones

    PubMed Central

    LIU, Changjiang; LI, Hui; YANG, Kedi; YANG, Haixia

    2013-01-01

    Melamine is a new risk factor for urinary stones. Gansu province is a heavily affected area with a large population and an underdeveloped economy. We hypothesized that the number of family members and family income may play significant roles in the formation of urinary stones. A case-control study was performed among 190 infants. Results showed that the case group had fewer family members than the control group (4.4 vs. 5.6, respectively). The multivariate logistic regression analysis indicated that the number of family members was an independent influencing factor associated with urinary stones (OR, 0.606; 95% CI, 0.411-0.893; P = 0.011). Family income, however, did not exhibit a significant difference. These results suggest that the number of family members is a new and significant factor affecting the risk of melamine-associated urinary stones. PMID:23967433

  3. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  4. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small-scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle': large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.

  5. Solar concentration properties of flat fresnel lenses with large F-numbers

    NASA Technical Reports Server (NTRS)

    Cosby, R. M.

    1978-01-01

    The solar concentration performances of flat, line-focusing, sun-tracking Fresnel lenses with selected f-numbers between 0.9 and 2.0 were analyzed. Lens transmittance was found to have a weak dependence on f-number, with a 2% increase occurring as the f-number is increased from 0.9 to 2.0. The geometric concentration ratio for perfectly tracking lenses peaked for an f-number near 1.35. Intensity profiles were more uniform over the image extent for large f-number lenses when compared to the f/0.9 lens results. Substantial decreases in geometric concentration ratios were observed for transverse tracking errors equal to or below 1 degree for all f-number lenses. With respect to tracking errors, the solar performance is optimum for f-numbers between 1.25 and 1.5.

  6. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the scalar PDFs to be predicted in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  7. A full picture of large lepton number asymmetries of the Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barenboim, Gabriela; Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr

    A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing an O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2-10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  8. Asymptotic properties of entanglement polytopes for large number of qubits

    NASA Astrophysics Data System (ADS)

    Maciążek, Tomasz; Sawicki, Adam

    2018-02-01

    Entanglement polytopes have been recently proposed as a way of witnessing the stochastic local operations and classical communication (SLOCC) multipartite entanglement classes using single-particle information. We present the first asymptotic results concerning the feasibility of this approach for a large number of qubits. In particular, we show that the entanglement polytopes of the L-qubit system accumulate within a distance O(1/√L) of the point corresponding to the maximally mixed reduced one-qubit density matrices. This implies the existence of a possibly large region where many entanglement polytopes overlap, i.e. where the witnessing power of entanglement polytopes is weak. Moreover, we argue that the witnessing power cannot be strengthened by any entanglement distillation protocol, since for large L the required purity is above current capability.

  9. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901
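    The AUC objective these combination methods maximize is itself simple to evaluate empirically. This sketch computes the rank-based (Mann-Whitney) AUC estimate for a given vector of combined scores; it illustrates the objective only, not the authors' pairwise combination method.

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC of a score: the probability that a randomly chosen
    positive case outscores a randomly chosen negative one, with ties
    counting one half (the Mann-Whitney statistic)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

    A linear combination w·x of biomarkers is scored by passing the combined values of cases and controls to this function; an optimizer then searches over w to maximize the returned value.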

  10. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Factors influencing large wildland fire suppression expenditures

    Treesearch

    Jingjing Liang; Dave E. Calkin; Krista M. Gebert; Tyron J. Venn; Robin P. Silverstein

    2008-01-01

    There is an urgent and immediate need to address the excessive cost of large fires. Here, we studied large wildland fire suppression expenditures by the US Department of Agriculture Forest Service. Among 16 potential nonmanagerial factors, which represented fire size and shape, private properties, public land attributes, forest and fuel conditions, and geographic...

  12. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    NASA Astrophysics Data System (ADS)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to make predictions. The larger the population considered, the more accurate the predictions. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here the law of large numbers applies: it states that as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss. The use of the law of large numbers thus allows losses to be predicted better.
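    The 1-in-100 claim example above can be illustrated with a toy simulation: as the insured pool grows, the observed claim rate converges to the underlying 1% probability, so the premium implied by the observed rate stabilizes. The claim probability and pool sizes are illustrative assumptions.

```python
import random

def observed_claim_rate(n_insured, p_claim=0.01, seed=7):
    """Simulate a pool of n_insured policies, each filing a claim with
    probability p_claim; return the observed claim rate."""
    rng = random.Random(seed)
    claims = sum(rng.random() < p_claim for _ in range(n_insured))
    return claims / n_insured
```

    For a small pool of 100 policies the observed rate can easily be 0% or 3%; for a pool of 200,000 it sits very close to 1%, which is why larger pools allow more precise premiums.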

  13. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    ERIC Educational Resources Information Center

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  14. Large Enhancement of Thermal Conductivity and Lorenz Number in Topological Insulator Thin Films.

    PubMed

    Luo, Zhe; Tian, Jifa; Huang, Shouyuan; Srinivasan, Mithun; Maassen, Jesse; Chen, Yong P; Xu, Xianfan

    2018-02-27

    Topological insulators (TI) have attracted extensive research effort due to their insulating bulk states but conducting surface states. However, investigation and understanding of thermal transport in topological insulators, particularly the effect of surface states, are lacking. In this work, we studied thickness-dependent in-plane thermal and electrical conductivity of Bi2Te2Se TI thin films. A large enhancement in both thermal and electrical conductivity was observed for films with thicknesses below 20 nm, which is attributed to the surface states and bulk-insulating nature of these films. Moreover, a surface Lorenz number much larger than the Sommerfeld value was found. Systematic transport measurements indicated that the Fermi surface is located near the charge neutrality point (CNP) when the film thickness is below 20 nm. Possible reasons for the large Lorenz number include electrical and thermal current decoupling in the surface state Dirac fluid, and bipolar diffusion transport. A simple computational model indicates that the surface states and bipolar diffusion indeed can lead to enhanced electrical and thermal transport and a large Lorenz number.

  15. Power-law scaling in Bénard-Marangoni convection at large Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Boeck, Thomas; Thess, André

    2001-08-01

    Bénard-Marangoni convection at large Prandtl numbers is found to exhibit steady (nonturbulent) behavior in numerical experiments over a very wide range of Marangoni numbers Ma, far away from the primary instability threshold. A phenomenological theory, taking into account the different character of the thermal boundary layers at the bottom and at the free surface, is developed. It predicts power-law scalings for the nondimensional velocity (Peclet number) and heat flux (Nusselt number) of the form Pe ~ Ma^(2/3), Nu ~ Ma^(2/9). This prediction is in good agreement with two-dimensional direct numerical simulations up to Ma = 3.2×10^5.

  16. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only O(N) storage for N reactions, rather than the O(N^2) required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
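    For context, the baseline being accelerated is the direct-method Gillespie algorithm. This minimal sketch (not the LOLCAT implementation) handles only unimolecular mass-action reactions and recomputes every propensity at each step, which is exactly the per-step cost that propensity factoring and a bipartite reaction-species dependency graph reduce.

```python
import random

def gillespie(rates, stoich, state, t_max, seed=0):
    """Minimal direct-method Gillespie SSA for unimolecular mass-action
    reactions. stoich[j] = (reactant_index, state_change_vector).
    Returns (final_time, final_state)."""
    rng = random.Random(seed)
    state = list(state)
    t = 0.0
    while True:
        # Naive step: recompute all propensities from scratch.
        props = [rates[j] * state[r] for j, (r, _) in enumerate(stoich)]
        total = sum(props)
        if total == 0.0:              # no reaction can fire
            return t, state
        t += rng.expovariate(total)   # exponential waiting time
        if t >= t_max:
            return t_max, state
        u = rng.random() * total      # pick reaction j with prob props[j]/total
        for j, (_, change) in enumerate(stoich):
            u -= props[j]
            if u <= 0.0:
                for i, d in enumerate(change):
                    state[i] += d
                break
```

    For example, pure decay A → ∅ at unit rate, encoded as rates=[1.0] and stoich=[(0, (-1,))], runs 100 molecules to extinction in a few time units.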

  17. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  18. P14.21 Can vascular risk factors influence number of brain metastases?

    PubMed Central

    Berk, B.; Nagel, S.; Kortmann, R.; Hoffmann, K.; Gaudino, C.; Seidel, C.

    2017-01-01

    Abstract BACKGROUND: Up to 30-40% of patients with solid tumors develop cerebral metastases. The number of cerebral metastases is relevant for treatment and prognosis. However, factors that determine the number of metastases are not well defined. Distribution of metastases is influenced by blood vessels, and cerebral small vessel disease can reduce the number of metastases. The aim of this pilot study was to analyze the influence of vascular risk factors (arterial hypertension, diabetes mellitus, smoking, hypercholesterolemia) and of peripheral arterial occlusive disease (PAOD) on the number of brain metastases. METHODS: 200 patients with pre-therapeutic 3D-brain MRI and available clinical data were analyzed retrospectively. Number of metastases (NoM) was compared between patients with/without vascular risk factors (vasRF). RESULTS: Patients with PAOD had significantly fewer brain metastases than patients without PAOD (NoM=4.43 vs. 6.02, p=0.043); no other single vasRF conferred a significant effect on NoM. NoM differed significantly between different tumor entities. CONCLUSION: Presence of PAOD showed some effect on the number of brain metastases, implying that tumor-independent vascular factors can influence brain metastasis formation.

  19. Large- and small-scale environmental factors drive distributions of cool-adapted plants in karstic microrefugia

    PubMed Central

    Vojtkó, András; Farkas, Tünde; Szabó, Anna; Havadtői, Krisztina; Vojtkó, Anna E.; Tölgyesi, Csaba; Cseh, Viktória; Erdős, László; Maák, István Elek; Keppel, Gunnar

    2017-01-01

    Background and aims Dolines are small- to large-sized bowl-shaped depressions of karst surfaces. They may constitute important microrefugia, as thermal inversion often maintains cooler conditions within them. This study aimed to identify the effects of large- (macroclimate) and small-scale (slope aspect and vegetation type) environmental factors on cool-adapted plants in karst dolines of East-Central Europe. We also evaluated the potential of these dolines to be microrefugia that mitigate the effects of climate change on cool-adapted plants in both forest and grassland ecosystems. Methods We compared surveys of plant species composition that were made between 2007 and 2015 in 21 dolines distributed across four mountain ranges (sites) in Hungary and Romania. We examined the effects of environmental factors on the distribution and number of cool-adapted plants on three scales: (1) regional (all sites); (2) within sites; and (3) within dolines. Generalized linear models and non-parametric tests were used for the analyses. Key Results Macroclimate, vegetation type and aspect were all significant predictors of the diversity of cool-adapted plants. More cool-adapted plants were recorded in the coolest site, with only a few found in the warmest site. At the warmest site, the distribution of cool-adapted plants was restricted to the deepest parts of dolines. Within sites of intermediate temperature and humidity, the effect of vegetation type and aspect on the diversity of cool-adapted plants was often significant, with more taxa being found in grasslands (versus forests) and on north-facing slopes (versus south-facing slopes). Conclusions There is large variation in the number and spatial distribution of cool-adapted plants in karst dolines, which is related to large- and small-scale environmental factors. Both macro- and microrefugia are therefore likely to play important roles in facilitating the persistence of cool-adapted plants under global warming. PMID:28025290

  20. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technology for color balance between images. It works not only for a small number of images but also for an unlimited, large number of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st-, 2nd- and 3rd-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically, based either on all source images or on an external target image. Some special objects such as water and snow are filtered by percentage cut or a given mask. The performance is extremely fast, supporting on-the-fly color balancing for large numbers of images (possibly hundreds of thousands). The detailed algorithm and formulae are described, and rich examples are given, including big mosaic datasets (e.g., one containing 36,006 images). The results show that this technology can be successfully used on various imagery to obtain color-seamless mosaics. The algorithm has been successfully used in ESRI ArcGIS.
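    The gamma-based adjustment described in this record can be illustrated with a minimal sketch: treating intensities on [0, 1], solve for the exponent g with which the source tile's local mean is mapped onto the target color. The adaptive dodging-window statistics and polynomial target surfaces are omitted, and the sample means below are invented; this shows only the general shape of such a gamma correction, not the paper's exact formulae.

```python
import math

def gamma_for_target(source_mean, target_mean):
    """Solve target = source ** g on [0, 1] intensities, so that applying
    v -> v ** g maps the source tile's local mean onto the target color.
    (Illustrative only; adaptive-window statistics are omitted.)"""
    s = min(max(source_mean, 1e-6), 1.0 - 1e-6)  # clamp away from 0 and 1
    t = min(max(target_mean, 1e-6), 1.0 - 1e-6)
    return math.log(t) / math.log(s)

def apply_gamma(pixels, g):
    """Apply the gamma curve to a list of [0, 1] intensities."""
    return [p ** g for p in pixels]

# Hypothetical example: a dark source tile (local mean 0.3) balanced
# toward a target color of 0.5.
tile = [0.1, 0.3, 0.5]
g = gamma_for_target(0.3, 0.5)
balanced = apply_gamma(tile, g)
```

A gamma curve (rather than a linear gain) preserves the black and white endpoints while shifting the mid-tones, which is why it is a common choice for seamless mosaicking.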

  1. Operational momentum in large-number addition and subtraction by 9-month-olds.

    PubMed

    McCrink, Koleen; Wynn, Karen

    2009-08-01

    Recent studies on nonsymbolic arithmetic have illustrated that under conditions that prevent exact calculation, adults display a systematic tendency to overestimate the answers to addition problems and underestimate the answers to subtraction problems. It has been suggested that this operational momentum results from exposure to a culture-specific practice of representing numbers spatially; alternatively, the mind may represent numbers in spatial terms from early in development. In the current study, we asked whether operational momentum is present during infancy, prior to exposure to culture-specific representations of numbers. Infants (9-month-olds) were shown videos of events involving the addition or subtraction of objects with three different types of outcomes: numerically correct, too large, and too small. Infants looked significantly longer only at those incorrect outcomes that violated the momentum of the arithmetic operation (i.e., at too-large outcomes in subtraction events and too-small outcomes in addition events). The presence of operational momentum during infancy indicates developmental continuity in the underlying mechanisms used when operating over numerical representations.

  2. Modified large number theory with constant G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Recami, E.

    1983-03-01

    The inspiring "numerology" uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the "gravitational world" (cosmos) with the "strong world" (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the "Large Number Theory," cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the "cyclical big-bang" hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  3. Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number

    NASA Astrophysics Data System (ADS)

    Smith, W. R.; Wang, Q. X.

    2017-08-01

    The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution of the bubble radius history is obtained to the Rayleigh-Plesset equation in a closed form including both viscous and surface tension effects. Some important formulae are derived including the following: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formula for the energy of the bubble system, frequency of oscillation, and minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
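    The oscillation-and-decay behaviour analysed in this record can be reproduced numerically. The sketch below integrates a common nondimensional form of the Rayleigh-Plesset equation with a fourth-order Runge-Kutta step; the parameter values (Reynolds number Re, Weber number We, scaled gas pressure pg0, polytropic index k) and the initial radius are invented for demonstration and are not taken from the paper's parametric study.

```python
# Nondimensional Rayleigh-Plesset equation:
#   R*R'' + 1.5*R'^2 = pg0*R**(-3k) - 1 - 2/(We*R) - 4*R'/(Re*R)
# with ambient pressure scaled to 1. All parameter values are invented.
Re, We, pg0, k = 100.0, 20.0, 0.8, 1.4

def accel(R, V):
    """Bubble-wall acceleration R'' from the equation above."""
    return (pg0 * R ** (-3.0 * k) - 1.0
            - 2.0 / (We * R) - 4.0 * V / (Re * R)
            - 1.5 * V * V) / R

def rk4_step(R, V, dt):
    """Classical RK4 step for the system (R' = V, V' = accel)."""
    k1r, k1v = V, accel(R, V)
    k2r, k2v = V + 0.5 * dt * k1v, accel(R + 0.5 * dt * k1r, V + 0.5 * dt * k1v)
    k3r, k3v = V + 0.5 * dt * k2v, accel(R + 0.5 * dt * k2r, V + 0.5 * dt * k2v)
    k4r, k4v = V + dt * k3v, accel(R + dt * k3r, V + dt * k3v)
    R += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6.0
    V += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return R, V

R, V = 1.2, 0.0          # expanded bubble released from rest
dt, radii = 1e-3, []
for _ in range(20000):
    R, V = rk4_step(R, V, dt)
    radii.append(R)
# The radius oscillates about its equilibrium while the viscous term
# 4*R'/(Re*R) slowly drains energy, shrinking the oscillation envelope.
```

At large Re the damping term is small per cycle, which is exactly the two-time-scale structure (fast inertial oscillation, slow viscous decay) that the paper's multi-scaled perturbation method exploits.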

  4. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only storage linear in the number of reactions and species, rather than the quadratic storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048
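    The propensity-update idea in this abstract can be sketched in miniature. The snippet below is a plain direct-method Gillespie simulation that keeps a species-to-reactions dependency map (the bipartite-graph idea), so firing a reaction recomputes only the propensities of reactions touching a changed species. The two-reaction toy system and its rates are invented for illustration; this is not the authors' LOLCAT implementation.

```python
import random

# Toy reversible system A <-> B; species, rates and counts are invented.
reactions = [
    {"rate": 1.0, "reactants": {"A": 1}, "products": {"B": 1}},
    {"rate": 0.5, "reactants": {"B": 1}, "products": {"A": 1}},
]
state = {"A": 100, "B": 0}

# Bipartite dependency map: species -> indices of reactions whose
# propensity depends on that species (its reactants).
deps = {}
for i, r in enumerate(reactions):
    for s in r["reactants"]:
        deps.setdefault(s, set()).add(i)

def propensity(r):
    a = r["rate"]
    for s in r["reactants"]:
        a *= state[s]  # mass-action, unit stoichiometry
    return a

props = [propensity(r) for r in reactions]

def step(rng):
    """One direct-method Gillespie step; returns the time increment."""
    total = sum(props)
    if total == 0.0:
        return None
    tau = rng.expovariate(total)
    x = rng.uniform(0.0, total)
    for i, a in enumerate(props):   # pick a reaction ~ its propensity
        x -= a
        if x <= 0.0:
            break
    changed = set()
    for s, n in reactions[i]["reactants"].items():
        state[s] -= n
        changed.add(s)
    for s, n in reactions[i]["products"].items():
        state[s] += n
        changed.add(s)
    # Update only the propensities that depend on a changed species.
    for j in set().union(*(deps.get(s, set()) for s in changed)):
        props[j] = propensity(reactions[j])
    return tau

rng = random.Random(42)
t = 0.0
for _ in range(1000):
    tau = step(rng)
    if tau is None:
        break
    t += tau
```

For large networks the per-step update cost is proportional to the number of dependent reactions rather than to the full reaction count, which is where the speedup comes from.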

  5. Loss of locality in gravitational correlators with a large number of insertions

    NASA Astrophysics Data System (ADS)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d−2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  6. A large number of stepping motor network construction by PLC

    NASA Astrophysics Data System (ADS)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automatic line the equipment is complex and the control modes are varied, so realizing orderly control and information exchange among a large number of stepper and servo motors becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of network strategies. An Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which effectively improves the data-exchange efficiency of the equipment and stabilizes the exchanged data.

  7. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    PubMed

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  8. Large- and small-scale environmental factors drive distributions of cool-adapted plants in karstic microrefugia.

    PubMed

    Bátori, Zoltán; Vojtkó, András; Farkas, Tünde; Szabó, Anna; Havadtői, Krisztina; Vojtkó, Anna E; Tölgyesi, Csaba; Cseh, Viktória; Erdős, László; Maák, István Elek; Keppel, Gunnar

    2017-01-01

    Dolines are small- to large-sized bowl-shaped depressions of karst surfaces. They may constitute important microrefugia, as thermal inversion often maintains cooler conditions within them. This study aimed to identify the effects of large- (macroclimate) and small-scale (slope aspect and vegetation type) environmental factors on cool-adapted plants in karst dolines of East-Central Europe. We also evaluated the potential of these dolines to be microrefugia that mitigate the effects of climate change on cool-adapted plants in both forest and grassland ecosystems. We compared surveys of plant species composition that were made between 2007 and 2015 in 21 dolines distributed across four mountain ranges (sites) in Hungary and Romania. We examined the effects of environmental factors on the distribution and number of cool-adapted plants on three scales: (1) regional (all sites); (2) within sites; and (3) within dolines. Generalized linear models and non-parametric tests were used for the analyses. Macroclimate, vegetation type and aspect were all significant predictors of the diversity of cool-adapted plants. More cool-adapted plants were recorded in the coolest site, with only a few found in the warmest site. At the warmest site, the distribution of cool-adapted plants was restricted to the deepest parts of dolines. Within sites of intermediate temperature and humidity, the effect of vegetation type and aspect on the diversity of cool-adapted plants was often significant, with more taxa being found in grasslands (versus forests) and on north-facing slopes (versus south-facing slopes). There is large variation in the number and spatial distribution of cool-adapted plants in karst dolines, which is related to large- and small-scale environmental factors. Both macro- and microrefugia are therefore likely to play important roles in facilitating the persistence of cool-adapted plants under global warming. © The Author 2016. Published by Oxford University Press on behalf of

  9. Exploratory factor analysis of self-reported symptoms in a large, population-based military cohort

    PubMed Central

    2010-01-01

    Background US military engagements have consistently raised concern over the array of health outcomes experienced by service members postdeployment. Exploratory factor analysis has been used in studies of 1991 Gulf War-related illnesses, and may increase understanding of symptoms and health outcomes associated with current military conflicts in Iraq and Afghanistan. The objective of this study was to use exploratory factor analysis to describe the correlations among numerous physical and psychological symptoms in terms of a smaller number of unobserved variables or factors. Methods The Millennium Cohort Study collects extensive self-reported health data from a large, population-based military cohort, providing a unique opportunity to investigate the interrelationships of numerous physical and psychological symptoms among US military personnel. This study used data from the Millennium Cohort Study, a large, population-based military cohort. Exploratory factor analysis was used to examine the covariance structure of symptoms reported by approximately 50,000 cohort members during 2004-2006. Analyses incorporated 89 symptoms, including responses to several validated instruments embedded in the questionnaire. Techniques accommodated the categorical and sometimes incomplete nature of the survey data. Results A 14-factor model accounted for 60 percent of the total variance in symptoms data and included factors related to several physical, psychological, and behavioral constructs. A notable finding was that many factors appeared to load in accordance with symptom co-location within the survey instrument, highlighting the difficulty in disassociating the effects of question content, location, and response format on factor structure. Conclusions This study demonstrates the potential strengths and weaknesses of exploratory factor analysis to heighten understanding of the complex associations among symptoms. Further research is needed to investigate the relationship between

  10. Graph Embedding Techniques for Bounding Condition Numbers of Incomplete Factor Preconditioning

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen

    1997-01-01

    We extend graph embedding techniques for bounding the spectral condition number of preconditioned systems involving symmetric, irreducibly diagonally dominant M-matrices to systems where the preconditioner is not diagonally dominant. In particular, this allows us to bound the spectral condition number when the preconditioner is based on an incomplete factorization. We provide a review of previous techniques, describe our extension, and give examples both of a bound for a model problem and of ways in which our techniques give an intuitive way of looking at incomplete factor preconditioners.

  11. Factors governing particle number emissions in a waste-to-energy plant.

    PubMed

    Ozgen, Senem; Cernuschi, Stefano; Giugliano, Michele

    2015-05-01

    Particle number concentration and size distribution measurements were performed on the stack gas of a waste-to-energy plant which co-incinerates municipal solid waste, sewage sludge and clinical waste in two lines. Average total number of particles was found to be 4.0·10⁵ cm⁻³ and 1.9·10⁵ cm⁻³ for the line equipped with a wet flue gas cleaning process and a dry cleaning system, respectively. Ultrafine particles (dp < 100 nm) accounted for about 97% of total number concentration for both lines, whereas the nanoparticle (dp < 50 nm) contribution differed slightly between the lines (87% and 84%). The experimental data is explored statistically through some multivariate pattern identifying methods such as factor analysis and cluster analysis to help the interpretation of the results regarding the origin of the particles in the flue gas with the objective of determining the factors governing the particle number emissions. The higher moisture of the flue gas in the wet cleaning process was found to increase the particle number emissions on average by a factor of about 2 due to increased secondary formation of nanoparticles through nucleation of gaseous precursors such as sulfuric acid, ammonia and water. The influence of flue gas dilution and cooling monitored through the variation of the sampling conditions also confirms the potential effect of the secondary new particle formation in increasing the particle number emissions. This finding shows the importance of reporting the experimental conditions in detail to enable the comparison and interpretation of particle number emissions. Regarding the fuel characteristics no difference was observed in terms of particle number concentration and size distributions between the clinical waste feed and the municipal solid waste co-incineration with sludge. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    NASA Astrophysics Data System (ADS)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. 
We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  13. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    PubMed Central

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  14. Phylogenetic Copy-Number Factorization of Multiple Tumor Samples.

    PubMed

    Zaccaria, Simone; El-Kebir, Mohammed; Klau, Gunnar W; Raphael, Benjamin J

    2018-04-16

    Cancer is an evolutionary process driven by somatic mutations. This process can be represented as a phylogenetic tree. Constructing such a phylogenetic tree from genome sequencing data is a challenging task due to the many types of mutations in cancer and the fact that nearly all cancer sequencing is of a bulk tumor, measuring a superposition of somatic mutations present in different cells. We study the problem of reconstructing tumor phylogenies from copy-number aberrations (CNAs) measured in bulk-sequencing data. We introduce the Copy-Number Tree Mixture Deconvolution (CNTMD) problem, which aims to find the phylogenetic tree with the fewest number of CNAs that explain the copy-number data from multiple samples of a tumor. We design an algorithm for solving the CNTMD problem and apply the algorithm to both simulated and real data. On simulated data, we find that our algorithm outperforms existing approaches that either perform deconvolution/factorization of mixed tumor samples or build phylogenetic trees assuming homogeneous tumor samples. On real data, we analyze multiple samples from a prostate cancer patient, identifying clones within these samples and a phylogenetic tree that relates these clones and their differing proportions across samples. This phylogenetic tree provides a higher resolution view of copy-number evolution of this cancer than published analyses.

  15. Law of Large Numbers: the Theory, Applications and Technology-based Education

    PubMed Central

    Dinov, Ivo D.; Christou, Nicolas; Gould, Robert

    2011-01-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals – to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN). PMID:21603584
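    The applet's behaviour is easy to reproduce in code. The sketch below (a stand-in for the SOCR Java applet, using an arbitrary seed) tracks the running sample mean of fair-coin tosses, which settles near the true mean of 0.5 as n grows, exactly the convergence the LLN describes.

```python
import random

def running_means(n, rng):
    """Running sample mean after each of n fair-coin tosses (1 = heads)."""
    total, means = 0, []
    for i in range(1, n + 1):
        total += rng.randint(0, 1)
        means.append(total / i)
    return means

rng = random.Random(0)   # arbitrary seed for reproducibility
means = running_means(10_000, rng)
# Early means fluctuate widely; late means hug the true mean 0.5.
```

Plotting `means` against the toss index reproduces the classic LLN picture that the applet animates: a noisy curve whose excursions shrink as the sample grows.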

  16. Infants Use Different Mechanisms to Make Small and Large Number Ordinal Judgments

    ERIC Educational Resources Information Center

    vanMarle, Kristy

    2013-01-01

    Previous research has shown indirectly that infants may use two different mechanisms-an object tracking system and an analog magnitude mechanism--to represent small (less than 4) and large (greater than or equal to 4) numbers of objects, respectively. The current study directly tested this hypothesis in an ordinal choice task by presenting 10- to…

  17. Rapid identification of high particle number emitting on-road vehicles and its application to a large fleet of diesel buses.

    PubMed

    Jayaratne, E R; Morawska, L; Ristovski, Z D; He, C

    2007-07-15

    Pollutant concentrations measured in the exhaust plume of a vehicle may be related to the pollutant emission factor using the CO2 concentration as a measure of the dilution factor. We have used this method for the rapid identification of high particle number (PN) emitting on-road vehicles. The method was validated for PN using a medium-duty vehicle and successfully applied to measurements of PN emissions from a large fleet of on-road diesel buses. The ratio of PN concentration to CO2 concentration, Z, in the exhaust plume was estimated for individual buses. On average, a bus emitted about 1.5 × 10⁹ particles per mg of CO2 emitted. A histogram of the number of buses as a function of Z showed, for the first time, that the PN emissions from diesel buses followed a gamma distribution, with most of the values within a narrow range and a few buses exhibiting relatively large values. It was estimated that roughly 10% and 50% of the PN emissions came from just 2% and 25% of the buses, respectively. A regression analysis showed that there was a positive correlation between Z and age of buses, with the slope of the best line being significantly different from zero. The mean Z value for the pre-Euro buses was significantly greater than each of the values for the Euro I and II buses.
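    The plume method reduces to a simple ratio of above-background concentrations, with CO2 serving as the dilution tracer. The sketch below computes Z for one hypothetical plume reading and the cumulative share of emissions contributed by the highest emitters; all numeric values are invented for illustration, not taken from the study.

```python
def emission_ratio(pn_plume, pn_bg, co2_plume, co2_bg):
    """Z: particles emitted per unit CO2, from above-background excesses.
    Dividing by the CO2 excess cancels the unknown plume dilution."""
    return (pn_plume - pn_bg) / (co2_plume - co2_bg)

def top_share(zs, frac):
    """Fraction of total emissions contributed by the top `frac` emitters."""
    zs = sorted(zs, reverse=True)
    k = max(1, int(len(zs) * frac))
    return sum(zs[:k]) / sum(zs)

# Hypothetical readings: particle counts (cm^-3) and CO2 (mg m^-3).
z = emission_ratio(5.0e6, 1.0e4, 200.0, 80.0)
# A skewed toy fleet: one gross emitter among ten buses.
share = top_share([10.0, 1, 1, 1, 1, 1, 1, 1, 1, 1], 0.10)
```

With a heavy-tailed (e.g. gamma-shaped) distribution of Z across a fleet, `top_share` makes the paper's point concrete: a small fraction of vehicles accounts for a disproportionate fraction of total emissions.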

  18. Saturation of the Magnetorotational Instability at Large Elsasser Number

    NASA Astrophysics Data System (ADS)

    Julien, Keith; Jamroz, Benjamin; Knobloch, Edgar

    2009-11-01

    The MRI is believed to play an important role in accretion disk physics in extracting angular momentum from the disk and allowing accretion to take place. The instability is investigated within the shearing box approximation under conditions of fundamental importance to astrophysical accretion disk theory. The shear is taken to be the dominant source of energy, but the instability itself requires the presence of a weaker vertical magnetic field. Dissipative effects are sufficiently weak that the Elsasser number is large. Thus dissipative forces do not play a role in the leading order linear instability mechanism. However, they are sufficiently large to permit a nonlinear feedback mechanism whereby the turbulent stresses generated by the MRI act on and modify the local background shear in the angular velocity profile. To date this response has been omitted in shearing box simulations and is captured by a reduced pde model derived from the global MHD fluid equations using multiscale asymptotic perturbation theory. Results from simulations of the model indicate a linear phase of exponential growth followed by a nonlinear adjustment to algebraic growth and decay in the fluctuating quantities. Remarkably, the velocity and magnetic field correlations associated with these growth and decay laws conspire to achieve saturation of angular momentum transport.

  19. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…

  20. A modified large number theory with constant G

    NASA Astrophysics Data System (ADS)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the typical cosmos length R̄ to the typical hadron length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  1. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association

  2. Saturation of the magnetorotational instability at large Elsasser number

    NASA Astrophysics Data System (ADS)

    Jamroz, B.; Julien, K.; Knobloch, E.

    2008-09-01

    The magnetorotational instability is investigated within the shearing box approximation in the large Elsasser number regime. In this regime, which is of fundamental importance to astrophysical accretion disk theory, shear is the dominant source of energy, but the instability itself requires the presence of a weaker vertical magnetic field. Dissipative effects are weaker still but not negligible. The regime explored retains the condition that (viscous and ohmic) dissipative forces do not play a role in the leading order linear instability mechanism. However, they are sufficiently large to permit a nonlinear feedback mechanism whereby the turbulent stresses generated by the MRI act on and modify the local background shear in the angular velocity profile. To date this response has been omitted in shearing box simulations and is captured by a reduced pde model derived here from the global MHD fluid equations using multiscale asymptotic perturbation theory. Results from numerical simulations of the reduced pde model indicate a linear phase of exponential growth followed by a nonlinear adjustment to algebraic growth and decay in the fluctuating quantities. Remarkably, the velocity and magnetic field correlations associated with these algebraic growth and decay laws conspire to achieve saturation of the angular momentum transport. The inclusion of subdominant ohmic dissipation arrests the algebraic growth of the fluctuations on a longer, dissipative time scale.

  3. Effects of Shell-Buckling Knockdown Factors in Large Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.

    2012-01-01

    Shell-buckling knockdown factors (SBKF) have been used in large cylindrical shell structures to account for uncertainty in buckling loads. As the diameter of the cylinder increases, achieving the manufacturing tolerances becomes increasingly difficult. Knockdown factors account for manufacturing imperfections in the shell geometry by decreasing the allowable buckling load of the cylinder. In this paper, large-diameter (33 ft) cylinders are investigated by using various SBKFs. An investigation that is based on finite-element analysis (FEA) is used to develop design sensitivity relationships. Different manufacturing imperfections are modeled into a perfect cylinder to investigate the effects of these imperfections on buckling. The analysis results may be applicable to large-diameter rockets, cylindrical tower structures, bulk storage tanks, and silos.

  4. Automated 3D trajectory measuring of large numbers of moving particles.

    PubMed

    Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu

    2011-04-11

    The complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. Here we present an automatic and efficient algorithm to track 3D motion trajectories of large numbers of moving particles using two video cameras. Our method solves this problem by formulating it as three linear assignment problems (LAP). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second one matches the visually similar targets across two views via a novel technique named maximum epipolar co-motion length (MECL), which is not only able to effectively reduce matching ambiguity but also further diminish the influence of frequent occlusions; the last one links 3D track segments into complete trajectories via computing a globally optimal assignment based on temporal and kinematic cues. Experiment results on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
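    The core step in each stage, a linear assignment problem, can be illustrated with a toy frame-to-frame link. The coordinates are invented, and a brute-force search over permutations stands in for the Hungarian-type solvers used at real scale (e.g. scipy.optimize.linear_sum_assignment).

    ```python
    import numpy as np
    from itertools import permutations

    # Invented detections in two consecutive frames (x, y positions).
    prev_pts = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
    next_pts = np.array([[5.2, 4.9], [0.1, 0.2], [9.8, 0.3]])

    # Cost matrix: squared Euclidean distance between every pair of detections.
    cost = ((prev_pts[:, None, :] - next_pts[None, :, :]) ** 2).sum(axis=2)

    # Brute-force linear assignment (fine at toy scale; real trackers use a
    # Hungarian-type solver such as scipy.optimize.linear_sum_assignment).
    best = min(permutations(range(len(next_pts))),
               key=lambda p: sum(cost[i, j] for i, j in enumerate(p)))
    print(best)  # (1, 0, 2): track 0 links to detection 1, etc.
    ```

    The same machinery applies whether the cost encodes displacement (2D tracking), epipolar co-motion (cross-view matching), or temporal/kinematic continuity (track linking); only the cost matrix changes.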

  5. Number of Coronary Heart Disease Risk Factors and Mortality in Patients With First Myocardial Infarction

    PubMed Central

    Canto, John G.; Kiefe, Catarina I.; Rogers, William J.; Peterson, Eric D.; Frederick, Paul D.; French, William J.; Gibson, C. Michael; Pollack, Charles V.; Ornato, Joseph P.; Zalenski, Robert J.; Penney, Jan; Tiefenbrunn, Alan J.; Greenland, Philip

    2013-01-01

    Context Few studies have examined the association between the number of coronary heart disease risk factors and outcomes of acute myocardial infarction in community practice. Objective To determine the association between the number of coronary heart disease risk factors in patients with first myocardial infarction and hospital mortality. Design Observational study from the National Registry of Myocardial Infarction, 1994-2006. Patients We examined the presence and absence of 5 major traditional coronary heart disease risk factors (hypertension, smoking, dyslipidemia, diabetes, and family history of coronary heart disease) and hospital mortality among 542 008 patients with first myocardial infarction and without prior cardiovascular disease. Main Outcome Measure All-cause in-hospital mortality. Results A majority (85.6%) of patients who presented with initial myocardial infarction had at least 1 of the 5 coronary heart disease risk factors, and 14.4% had none of the 5 risk factors. Age varied inversely with the number of coronary heart disease risk factors, from a mean age of 71.5 years with 0 risk factors to 56.7 years with 5 risk factors (P for trend <.001). The total number of in-hospital deaths for all causes was 50 788. Unadjusted in-hospital mortality rates were 14.9%, 10.9%, 7.9%, 5.3%, 4.2%, and 3.6% for patients with 0, 1, 2, 3, 4, and 5 risk factors, respectively. After adjusting for age and other clinical factors, there was an inverse association between the number of coronary heart disease risk factors and hospital mortality (adjusted odds ratio, 1.54; 95% CI, 1.23-1.94, for individuals with 0 vs 5 risk factors). This association was consistent among several age strata and important patient subgroups. Conclusion Among patients with incident acute myocardial infarction without prior cardiovascular disease, in-hospital mortality was inversely related to the number of coronary heart disease risk factors. PMID:22089719
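    As a check on the reported figures, the unadjusted odds ratio implied by the 14.9% vs 3.6% mortality rates can be computed directly; the paper's 1.54 is the age- and covariate-adjusted value, so the raw ratio is much larger.

    ```python
    # Odds of in-hospital death implied by the reported unadjusted mortality
    # rates for 0 vs 5 risk factors (14.9% vs 3.6%).
    def odds(p):
        return p / (1 - p)

    or_0_vs_5 = odds(0.149) / odds(0.036)
    print(f"unadjusted OR, 0 vs 5 risk factors: {or_0_vs_5:.2f}")
    ```

    The gap between the unadjusted (~4.7) and adjusted (1.54) ratios reflects the strong confounding by age noted in the abstract: patients with fewer risk factors were markedly older.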

  6. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    NASA Astrophysics Data System (ADS)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-03-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.

  7. Number of negative lymph nodes as a prognostic factor in esophageal squamous cell carcinoma.

    PubMed

    Ma, Mingquan; Tang, Peng; Jiang, Hongjing; Gong, Lei; Duan, Xiaofeng; Shang, Xiaobin; Yu, Zhentao

    2017-10-01

    The aim of this study is to investigate the number of negative lymph nodes (NLNs) as a prognostic factor for survival in patients with resected esophageal squamous cell carcinoma. A total of 381 esophageal squamous cell carcinoma patients who had undergone surgical resection as the primary treatment were enrolled into this retrospective study. The impact of the number of NLNs on patients' overall survival was assessed and compared with the factors among the current tumor-node-metastasis (TNM) staging system. The number of NLNs was closely related to the overall survival, and the 5-year survival rate was 45.4% for number of NLNs of >20 (142 cases) and 26.4% for NLNs ≤ 20 (239 cases) (P = 0.001). In multivariate survival analysis, the number of NLNs remained an independent prognostic factor (P = 0.002) as did the other current TNM factors. For subgroup analysis, the predictive value of the number of NLNs was significant in patients with T3 or T4 disease (P = 0.001) and patients with N1 and N2-3 disease (P = 0.025, 0.043), but not in patients with T1 or T2 disease or patients with N0 disease. The number of NLNs, which represents the extent of lymphadenectomy for esophageal squamous cell carcinoma, could impact the overall survival of patients with resected esophageal squamous cell carcinoma, especially among those with nodal-positive disease and advanced T-stage tumor. © 2016 John Wiley & Sons Australia, Ltd.

  8. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
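    The small-Reynolds-number, large-Péclet-number regime the paper studies can be made concrete with order-of-magnitude numbers for a ciliated swimmer; all values below are illustrative assumptions, not taken from the paper.

    ```python
    # Order-of-magnitude estimate for a Paramecium-sized swimmer in water
    # (all values are illustrative assumptions, not from the paper).
    U = 1e-3     # swimming speed, m/s (~1 mm/s)
    a = 1e-4     # cell radius, m (~100 um)
    D = 1e-9     # small-molecule diffusivity in water, m^2/s
    nu = 1e-6    # kinematic viscosity of water, m^2/s

    Re = U * a / nu   # Reynolds number: viscous forces dominate when Re << 1
    Pe = U * a / D    # Peclet number: advection beats diffusion when Pe >> 1
    print(f"Re = {Re:.1f}, Pe = {Pe:.0f}")
    ```

    The separation Re << 1 << Pe arises because momentum diffuses (nu) about a thousand times faster than mass (D) in water, which is why Stokes flow can coexist with a thin advective concentration boundary layer.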

  9. Reynolds number dependence of large-scale friction control in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp

    2016-12-01

    The present work investigates the effectiveness of the control strategy introduced by Schoppa and Hussain [Phys. Fluids 10, 1049 (1998), 10.1063/1.869789] as a function of Reynolds number (Re). The skin-friction drag reduction method proposed by these authors, consisting of streamwise-invariant, counter-rotating vortices, was analyzed by Canton et al. [Flow, Turbul. Combust. 97, 811 (2016), 10.1007/s10494-016-9723-8] in turbulent channel flows for friction Reynolds numbers (Reτ) corresponding to the value of the original study (i.e., 104) and 180. For these Re, a slightly modified version of the method proved to be successful and was capable of providing a drag reduction of up to 18%. The present study analyzes the Reynolds number dependence of this drag-reducing strategy by performing two sets of direct numerical simulations (DNS) for Reτ=360 and 550. A detailed analysis of the method as a function of the control parameters (amplitude and wavelength) and Re confirms, on the one hand, the effectiveness of the large-scale vortices at low Re and, on the other hand, the decreasing and finally vanishing effectiveness of this method for higher Re. In particular, no drag reduction can be achieved for Reτ=550 for any combination of the parameters controlling the vortices. For low Reynolds numbers, the large-scale vortices are able to affect the near-wall cycle and alter the wall-shear-stress distribution to cause an overall drag reduction effect, in accordance with most control strategies. For higher Re, instead, the present method fails to penetrate the near-wall region and cannot induce the spanwise velocity variation observed in other more established control strategies, which focus on the near-wall cycle. Despite the negative outcome, the present results demonstrate the shortcomings of the control strategy and show that future focus should be on methods that directly target the near-wall region or other suitable alternatives.

  10. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  11. Automated flow cytometric analysis across large numbers of samples and cell types.

    PubMed

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
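    The GMM-plus-BIC model-selection step can be sketched as follows, using scikit-learn on synthetic well-separated populations; this is a minimal stand-in, not FlowGM's own implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic stand-in for cytometry data: three well-separated 2D populations.
    X = np.vstack([
        rng.normal(loc=[0, 0], scale=0.5, size=(300, 2)),
        rng.normal(loc=[5, 0], scale=0.5, size=(300, 2)),
        rng.normal(loc=[0, 5], scale=0.5, size=(300, 2)),
    ])

    # Fit GMMs with increasing cluster counts; keep the BIC-minimizing model.
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 7)}
    best_k = min(bics, key=bics.get)
    print("clusters chosen by BIC:", best_k)
    ```

    BIC penalizes the extra parameters of larger mixtures, so it stops adding components once additional clusters no longer improve the likelihood enough; on real cytometry data the chosen k then feeds the meta-clustering step described above.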

  12. Journal impact factors and the influence of age and number of citations

    USDA-ARS?s Scientific Manuscript database

    The impact factor (IF) of a scientific journal is considered a measure of how important a journal is within its discipline, and it is based on a simple relationship between the number of citations of the journal’s articles divided by the number of articles in the scientific journal (http://en.wikipe...
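    The relationship described can be written out as a toy two-year impact-factor computation; all counts are made up.

    ```python
    # Toy two-year impact factor: citations received this year to articles
    # published in the previous two years, divided by the articles published
    # in those years (all counts are hypothetical).
    citations_to = {2021: 1200, 2022: 1500}   # citations to each cohort
    articles_in = {2021: 400, 2022: 500}      # citable articles per year

    impact_factor = sum(citations_to.values()) / sum(articles_in.values())
    print(impact_factor)  # 3.0
    ```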

  13. Porous medium convection at large Rayleigh number: Studies of coherent structure, transport, and reduced dynamics

    NASA Astrophysics Data System (ADS)

    Wen, Baole

    statistically-steady porous medium convection results from an interplay between the competing effects of these two types of instability. Upper bound analysis is then employed to investigate the dependence of the heat transport enhancement factor, i.e. the Nusselt number Nu, on Ra and L. To solve the optimization problems arising from the "background field" upper-bound variational analysis, a novel two-step algorithm in which time is introduced into the formulation is developed. The new algorithm obviates the need for numerical continuation, thereby enabling the best available bounds to be computed up to Ra ≈ 2.65 × 10^4. A mathematical proof is given to demonstrate that the only steady state to which this numerical algorithm can converge is the required global optimum of the variational problem. Using this algorithm, the dependence of the bounds on L(Ra) is explored, and a "minimal flow unit" is identified. Finally, the upper bound variational methodology is also shown to yield quantitatively useful predictions of Nu and to furnish a functional basis that is naturally adapted to the boundary layer dynamics at large Ra.

  14. Impact of the number of aspiration risk factors on mortality and recurrence in community-onset pneumonia.

    PubMed

    Noguchi, Shingo; Yatera, Kazuhiro; Kato, Tatsuji; Chojin, Yasuo; Fujino, Yoshihisa; Akata, Kentaro; Kawanami, Toshinori; Sakamoto, Noriho; Mukae, Hiroshi

    2017-01-01

    The clinical significance of the number of aspiration risk factors in patients with pneumonia is unknown as yet. In the present study, we clarify the significance of the number of aspiration risk factors for mortality and recurrence in pneumonia patients. This study included 322 patients hospitalized with pneumonia between December 2014 and June 2016. We investigated associations between the number of aspiration risk factors present (orientation disturbance, bedridden, chronic cerebrovascular disease, dementia, sleeping medications and gastroesophageal disease) and 30-day and 6-month mortality, and pneumonia recurrence within 30 days. Patients were categorized by number of risk factors present into groups of 0-1, 2, 3, and 4 or more. Of a total of 322 patients, 93 (28.9%) had 0-1 risk factors, 112 (34.8%) had 2, 88 (27.3%) had 3, and 29 (9.0%) had 4 or more risk factors. The percentages of patients with recurrence of pneumonia were 13.0%, 33.0%, 43.2%, and 54.2% in the 0-1, 2, 3, and 4 or more risk factor groups, respectively. The percentages of patients with 30-day mortality were 2.2%, 5.4%, 11.4%, and 24.1%, and those of patients with 6-month mortality were 6.6%, 24.5%, 30.7%, and 50.0%, in the 0-1, 2, 3, and 4 or more risk factor groups, respectively. The number of aspiration risk factors was associated with increases in both mortality and recurrence in pneumonia patients. Therefore, in clinical practice, physicians should consider not only the presence of aspiration risks but also the number of aspiration risk factors in these patients.

  15. Large Eddy Simulation of High Reynolds Number Complex Flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman

    Marine configurations are subject to a variety of complex hydrodynamic phenomena affecting the overall performance of the vessel. The turbulent flow affects the hydrodynamic drag, propulsor performance and structural integrity, control-surface effectiveness, and acoustic signature of the marine vessel. Due to advances in massively parallel computers and numerical techniques, an unsteady numerical simulation methodology such as Large Eddy Simulation (LES) is well suited to study such complex turbulent flows whose Reynolds numbers (Re) are typically on the order of 10^6. LES also promises increased accuracy over RANS-based methods in predicting unsteady phenomena such as cavitation and noise production. This dissertation develops the capability to enable LES of high Re flows in complex geometries (e.g. a marine vessel) on unstructured grids and provide physical insight into the turbulent flow. LES is performed to investigate the geometry induced separated flow past a marine propeller attached to a hull, in an off-design condition called crashback. LES shows good quantitative agreement with experiments and provides a physical mechanism to explain the increase in side-force on the propeller blades below an advance ratio of J=-0.7. Fundamental developments in the dynamic subgrid-scale model for LES are pursued to improve the LES predictions, especially for complex flows on unstructured grids. A dynamic procedure is proposed to estimate a Lagrangian time scale based on a surrogate correlation without any adjustable parameter. The proposed model is applied to turbulent channel, cylinder and marine propeller flows and predicts improved results over other model variants due to a physically consistent Lagrangian time scale. A wall model is proposed for application to LES of high Reynolds number wall-bounded flows. The wall model is formulated as the minimization of a generalized constraint in the dynamic model for LES and applied to LES of turbulent channel flow at various

  16. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    PubMed

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggest eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). 
We demonstrate for the first time that there are regional differences in the requirement of

  17. Bayesian analysis of biogeography when the number of areas is large.

    PubMed

    Landis, Michael J; Matzke, Nicholas J; Moore, Brian R; Huelsenbeck, John P

    2013-11-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a "data-augmentation" approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea.
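    The "mechanistic interpretation" above, in which exponential waiting times separate dispersal and extinction events on a set of discrete areas, can be sketched as a tiny Gillespie-style continuous-time Markov chain simulation. The rates, area count, and the rule forbidding total range extinction are all illustrative assumptions, not details of BayArea.

    ```python
    import random

    random.seed(1)

    # Hypothetical per-area rates: gain an area (dispersal), lose one (extinction).
    DISPERSAL, EXTINCTION = 0.2, 0.1

    def simulate(range_vec, t_max):
        """Simulate dispersal/extinction events over discrete areas (Gillespie-style)."""
        t, history = 0.0, []
        while True:
            rates = [EXTINCTION if occ else DISPERSAL for occ in range_vec]
            if sum(range_vec) == 1:                 # forbid total range extinction
                rates[range_vec.index(1)] = 0.0
            total = sum(rates)
            t += random.expovariate(total)          # exponential waiting time
            if t >= t_max:
                return history
            r, acc = random.uniform(0, total), 0.0  # pick event ∝ its rate
            for i, rate in enumerate(rates):
                acc += rate
                if rate > 0 and r <= acc:
                    range_vec[i] ^= 1               # flip occupancy of area i
                    history.append((round(t, 3), i, tuple(range_vec)))
                    break

    events = simulate([1, 0, 0], t_max=20.0)
    print(len(events), "biogeographic events")
    ```

    In the paper's data-augmentation scheme, histories like this are proposed along the branches of a phylogeny and accepted or rejected by MCMC, rather than computed by matrix exponentiation over all 2^N range states.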

  18. Photon number amplification/duplication through parametric conversion

    NASA Technical Reports Server (NTRS)

    Dariano, G. M.; Macchiavello, C.; Paris, M.

    1993-01-01

    The performance of parametric conversion in achieving number amplification and duplication is analyzed. It is shown that the effective maximum gains G* remain well below their integer ideal values, even for large signals. Correspondingly, one has output Fano factors F* which are increasing functions of the input photon number. On the other hand, in the inverse (deamplifier/recombiner) operating mode quasi-ideal gains G* and small Fano factors F* approximately equal to 10 percent are obtained. Output noise and non-ideal gains are ascribed to spontaneous parametric emission.

  19. Variation of Froude number with discharge for large-gradient streams

    USGS Publications Warehouse

    Wahl, Kenneth L.; ,

    1993-01-01

    Under channel-control conditions, the Froude number (F) for a cross-section can be approximated as a function of the ratio R^(2/3)/d^(1/2), where R is the hydraulic radius and d is the average depth. For cross sections where the ratio increases with increasing depth, F can also increase with depth. Current-meter measurement data for 433 streamflow gaging stations in Colorado were reviewed, and 62 stations were identified at which F increases with depth of flow. Data for four streamflow gaging stations are presented. In some cases, F approaches 1 as the discharge approaches the magnitude of the median annual peak discharge. The data also indicate that few actual current-meter measurements have been made at the large discharges where velocities can be supercritical.
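
The quantities involved are straightforward to compute. A minimal sketch (the channel values below are hypothetical, not the study's data): the Froude number is F = V/√(gd), and if the mean velocity V is taken from Manning's equation V = (1/n) R^(2/3) S^(1/2), F becomes proportional to the ratio R^(2/3)/d^(1/2) discussed in the abstract.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude(v, d):
    """Froude number for mean velocity v (m/s) and mean depth d (m)."""
    return v / math.sqrt(G * d)

def froude_manning(R, d, S, n):
    """F under channel control, taking V from Manning's equation
    V = (1/n) R^(2/3) S^(1/2); F is then proportional to R^(2/3)/d^(1/2)."""
    v = (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(S)
    return froude(v, d)

# Illustrative (invented) values for a steep cobble-bed channel:
F = froude_manning(R=0.8, d=0.9, S=0.02, n=0.045)
```

With these values F comes out just below 1, i.e. near-critical flow; F > 1 would indicate the supercritical velocities mentioned at the end of the abstract.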

  20. Students' Understanding of Large Numbers as a Key Factor in Their Understanding of Geologic Time

    ERIC Educational Resources Information Center

    Cheek, Kim A.

    2012-01-01

    An understanding of geologic time comprises two facets. Events in Earth's history can be placed in relative and absolute temporal succession on a vast timescale. Rates of geologic processes vary widely, and some occur over time periods well outside human experience. Several factors likely contribute to an understanding of geologic time, one of…

  1. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    PubMed

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
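
The amplitude-modulation diagnostic behind points (i) and (ii) is commonly computed as a correlation between the large-scale signal and the envelope of the small-scale signal (exact filtering choices vary between studies). A minimal synthetic sketch, using entirely made-up signals rather than hot-wire data:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 20.0, 20000)

# Synthetic surrogate: a slow large-scale signal that modulates the
# amplitude of a fast small-scale carrier (a crude stand-in for the
# near-wall footprint of log-region large scales).
u_large = np.sin(2 * np.pi * 0.2 * t)
u_small = (1.0 + 0.5 * u_large) * np.sin(2 * np.pi * 30.0 * t)

# Small-scale envelope via the Hilbert transform, mean removed, then
# correlated with the large-scale signal: an amplitude-modulation
# coefficient of the kind used in wall-turbulence studies.
envelope = np.abs(hilbert(u_small))
envelope -= envelope.mean()
R_am = np.corrcoef(u_large, envelope)[0, 1]
```

For this constructed signal R_am is close to 1 by design; in real boundary-layer data the coefficient varies with wall distance, which is what the rake measurements resolve.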

  2. Innate or Acquired? - Disentangling Number Sense and Early Number Competencies.

    PubMed

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD.

  3. Children's Mappings of Large Number Words to Numerosities

    ERIC Educational Resources Information Center

    Barth, Hilary; Starr, Ariel; Sullivan, Jessica

    2009-01-01

    Previous studies have suggested that children's learning of the relation between number words and approximate numerosities depends on their verbal counting ability, and that children exhibit no knowledge of mappings between number words and approximate numerical magnitudes for number words outside their productive verbal counting range. In the…

  4. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
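
The abstract's 13% and 70% figures can be reproduced under standard assumptions (a normal-theory two-sided test with Bonferroni correction; the paper's exact model may differ). Since required sample size scales as ((z_α + z_β)/effect)², the effect size cancels in the ratio:

```python
from statistics import NormalDist

def sample_size_factor(m_tests, alpha=0.05, power=0.80):
    """Relative sample size needed to keep `power` for a two-sided test
    at family-wise level `alpha`, Bonferroni-corrected over m tests,
    versus a single test (normal-theory approximation)."""
    nd = NormalDist()
    z_beta = nd.inv_cdf(power)
    def z_alpha(m):
        return nd.inv_cdf(1.0 - alpha / (2.0 * m))
    # n is proportional to ((z_alpha + z_beta) / effect)^2, so the
    # unknown effect size cancels when taking the ratio.
    return ((z_alpha(m_tests) + z_beta) / (z_alpha(1) + z_beta)) ** 2

ten_vs_one = sample_size_factor(10)                     # ~1.70
tenM_vs_oneM = (sample_size_factor(10_000_000)
                / sample_size_factor(1_000_000))        # ~1.13
```

Ten tests versus one requires roughly a 70% larger sample, while ten million versus one million requires only about 13% more, matching the abstract's point that the marginal cost of extra tests shrinks once the number of tests is already large.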

  5. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khare, Avinash; Saxena, Avadh

    2014-03-15

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
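
Claims of this kind are straightforward to check numerically. As a small illustration (a standard Jacobi-elliptic identity, not code from the paper), one can verify by finite differences that u(x) = dn(x, m) satisfies the stationary cubic equation u'' = (2 − m)u − 2u³, the type of equation for which such superposition results are stated:

```python
import numpy as np
from scipy.special import ellipj

m = 0.7        # elliptic modulus parameter (arbitrary choice)
h = 1e-3       # finite-difference step
x = np.linspace(0.5, 2.5, 201)

def dn(u):
    """Jacobi dn(u, m); ellipj returns (sn, cn, dn, ph)."""
    return ellipj(u, m)[2]

u, u_plus, u_minus = dn(x), dn(x + h), dn(x - h)
u_xx = (u_plus - 2.0 * u + u_minus) / h**2        # central difference
residual = u_xx - ((2.0 - m) * u - 2.0 * u**3)    # should be ~0
```

The residual is at the level of the finite-difference truncation error, confirming the identity; the same machinery can be used to test the superposed dn² ± √m cn dn solutions against a candidate equation.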

  6. Bayesian Analysis of Biogeography when the Number of Areas is Large

    PubMed Central

    Landis, Michael J.; Matzke, Nicholas J.; Moore, Brian R.; Huelsenbeck, John P.

    2013-01-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a “data-augmentation” approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea. [ancestral area analysis; Bayesian biogeographic inference; data augmentation; historical biogeography; Markov chain Monte Carlo.] PMID:23736102

  7. Factors associated with number of duodenal samples obtained in suspected celiac disease.

    PubMed

    Shamban, Leonid; Sorser, Serge; Naydin, Stan; Lebwohl, Benjamin; Shukr, Mousa; Wiemann, Charlotte; Yevsyukov, Daniel; Piper, Michael H; Warren, Bradley; Green, Peter H R

    2017-12-01

    Many people with celiac disease are undiagnosed and there is evidence that insufficient duodenal samples may contribute to underdiagnosis. The aims of this study were to investigate whether more samples lead to a greater likelihood of a diagnosis of celiac disease and to elucidate factors that influence the number of samples collected. We identified patients from two community hospitals who were undergoing duodenal biopsy for indications (as identified by International Classification of Diseases code) compatible with possible celiac disease. Three cohorts were evaluated: no celiac disease (NCD, normal villi), celiac disease (villous atrophy, Marsh score 3), and possible celiac disease (PCD, Marsh score < 3). Endoscopic features, indication, setting, trainee presence, and patient demographic details were evaluated for their role in sample collection. 5997 patients met the inclusion criteria. Patients with a final diagnosis of celiac disease had a median of 4 specimens collected. The percentage of patients diagnosed with celiac disease with one sample was 0.3% compared with 12.8% of those with six samples (P = 0.001). Patient factors that positively correlated with the number of samples collected were endoscopic features, demographic details, and indication (P = 0.001). Endoscopist factors that positively correlated with the number of samples collected were absence of a trainee, pediatric gastroenterologist, and outpatient setting (P < 0.001). Histological diagnosis of celiac disease significantly increased with six samples. Multiple factors influenced whether adequate biopsies were taken. Adherence to guidelines may increase the diagnosis rate of celiac disease.

  8. Free tarsomarginal graft for large congenital coloboma repair in patients with Tessier number 10 clefts.

    PubMed

    Fu, Yao; Shao, Chunyi; Lu, Wenjuan; Li, Jin; Fan, Xianqun

    2016-08-01

    The aim of this study was to evaluate the long-term outcome when a free tarsomarginal graft is used to repair a large congenital coloboma in patients with a Tessier number 10 cleft. This was a retrospective, interventional case series. The medical records were reviewed for five children (six eyes) diagnosed as having Tessier number 10 cleft with large upper eyelid defects and symblepharon. These children were referred to the Department of Ophthalmology of Shanghai Ninth People's Hospital, between May 2007 and December 2012. Reconstructive techniques included repair of the upper eyelid defect with a free tarsomarginal graft taken from the lower eyelid, and reconstruction of the conjunctival fornix by using a conjunctival autograft after symblepharon lysis. All the children were followed up for more than 2 years. Postoperative upper eyelid contour, viability and function for corneal protection, and recurrence of symblepharon were assessed. A one-stage reconstruction procedure was used in all children. All reconstructed eyelids achieved a surgical goal of providing corneal protection and improved cosmesis, with marked improvement of exposure keratopathy and no associated lagophthalmos. Adequate reconstruction of the upper fornix was obtained, and there was no obvious recurrence of symblepharon. A free tarsomarginal graft is beneficial and seems to be an adequate method for reconstruction of large eyelid defects in children with a Tessier number 10 cleft. Symblepharon lysis with a conjunctival autograft for reconstruction of the ocular surface can be performed at the same time as eyelid repair as a one-stage procedure. Copyright © 2016. Published by Elsevier Ltd.

  9. Causal inference between bioavailability of heavy metals and environmental factors in a large-scale region.

    PubMed

    Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing

    2017-07-01

    Causation between the bioavailability of heavy metals and environmental factors is generally established from field experiments at local scales, and evidence from large scales is lacking. Inferring causation between the bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessment across large-scale regions can, at the expense of actual causation, yield spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors at field experiments is consistent with that on a large scale. The IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management. Copyright © 2017 Elsevier Ltd. All

  10. Investigation of Rossby-number similarity in the neutral boundary layer using large-eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohmstede, W.D.; Cederwall, R.T.; Meyers, R.E.

    One special case of particular interest, especially to theoreticians, is the steady-state, horizontally homogeneous, autobarotropic planetary boundary layer (PBL), hereafter referred to as the neutral boundary layer (NBL). The NBL is in fact a 'rare' atmospheric phenomenon, generally associated with high-wind situations. Nevertheless, there is a disproportionate interest in this problem because Rossby-number similarity theory provides a sound approach for addressing this issue. Rossby-number similarity theory has rather wide acceptance, but because of the rarity of the 'true' NBL state, there remains an inadequate experimental database for quantifying constants associated with the Rossby-number similarity concept. Although it remains a controversial issue, it has been proposed that large-eddy simulation (LES) is an alternative to physical experimentation for obtaining basic atmospheric 'data'. The objective of the study reported here is to investigate Rossby-number similarity in the NBL using LES. Previous studies have not addressed Rossby-number similarity explicitly, although they made use of it in the interpretation of their results. The intent is to calculate several sets of NBL solutions that are ambiguous relative to their respective Rossby numbers and compare the results for similarity, or the lack of it. 14 refs., 1 fig.

  11. Number-Theory in Nuclear-Physics in Number-Theory: Non-Primality Factorization As Fission VS. Primality As Fusion; Composites' Islands of INstability: Feshbach-Resonances?

    NASA Astrophysics Data System (ADS)

    Siegel, Edward

    2011-04-01

    Numbers: primality/indivisibility/non-factorization versus compositeness/divisibility/factorization, often in tandem but not always, in provocatively close analogy to nuclear physics: (2+1)=(fusion)=3; (3+1)=(fission)=4[=2 x 2]; (4+1)=(fusion)=5; (5+1)=(fission)=6[=2 x 3]; (6+1)=(fusion)=7; (7+1)=(fission)=8[=2 x 4 = 2 x 2 x 2]; (8+1)=(non: fission nor fusion)=9[=3 x 3]; then ONLY composites' Islands of fusion-INstability: 8, 9, 10; then 14, 15, 16,... Could inter-digit Feshbach resonances exist? Applications abound: to quantum information and computing (non-Shor factorization), a physics proof of the millennium-problem Riemann hypothesis as numbers/digits Goodkin Bose-Einstein condensation, intersection with the graph-theory ``short-cut'' method (Rayleigh(1870)-Polya(1922)-``Anderson''(1958)-localization), the Goldbach conjecture, and financial auditing/accounting as quantum-statistical physics.

  12. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    DTIC Science & Technology

    2017-03-01

    partitioned sub-swarms. The work covered in this thesis is to build a model of the NPS swarm's communication network in ns-3 simulation software and use... (Naval Postgraduate School thesis, Monterey, California, by Alexis...)

  13. Modelling high Reynolds number wall–turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    PubMed Central

    Dogan, Eda; Hearst, R. Jason

    2017-01-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to ‘simulate’ high Reynolds number wall–turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167584

  14. Phage diabody repertoires for selection of large numbers of bispecific antibody fragments.

    PubMed

    McGuinness, B T; Walter, G; FitzGerald, K; Schuler, P; Mahoney, W; Duncan, A R; Hoogenboom, H R

    1996-09-01

    Methods for the generation of large numbers of different bispecific antibodies are presented. Cloning strategies are detailed to create repertoires of bispecific diabody molecules with variability at one or both of the antigen binding sites. This diabody format, when combined with the power of phage display technology, allows the generation and analysis of thousands of different bispecific molecules. Selection for binding presumably also selects for more stable diabodies. Phage diabody libraries enable screening or selection of the best combination bispecific molecule with regard to binding affinity, epitope recognition and pairing before manufacture of the best candidate.

  15. Vehicle, driver and atmospheric factors in light-duty vehicle particle number emissions.

    DOT National Transportation Integrated Search

    2014-06-01

    Made possible by the collection of on-board tailpipe emissions data, this research identifies road and driver factors that are associated with a relatively understudied tailpipe pollutant from light-duty vehicles: ultrafine particle number emission...

  16. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    PubMed Central

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD. PMID:29725316

  17. Crash risk factors for interstate large trucks in North Carolina.

    PubMed

    Teoh, Eric R; Carter, Daniel L; Smith, Sarah; McCartt, Anne T

    2017-09-01

    Provide an updated examination of risk factors for large truck involvements in crashes resulting in injury or death. A matched case-control study was conducted in North Carolina of large trucks operated by interstate carriers. Cases were defined as trucks involved in crashes resulting in fatal or non-fatal injury, and one control truck was matched on the basis of location, weekday, time of day, and truck type. The matched-pair odds ratio provided an estimate of the effect of various driver, vehicle, or carrier factors. Out-of-service (OOS) brake violations tripled the risk of crashing; any OOS vehicle defect increased crash risk by 362%. Higher historical crash rates (fatal, injury, or all crashes) of the carrier were associated with increased risk of crashing. Operating on a short-haul exemption increased crash risk by 383%. Antilock braking systems reduced crash risk by 65%. All of these results were statistically significant at the 95% confidence level. Other safety technologies also showed estimated benefits, although not statistically significant. With the exception of the finding that short-haul exemption is associated with increased crash risk, results largely bolster what is currently known about large truck crash risk and reinforce current enforcement practices. Results also suggest vehicle safety technologies can be important in lowering crash risk. This means that as safety technology continues to penetrate the fleet, whether from voluntary usage or government mandates, reductions in large truck crashes may be achieved. Practical application: Results imply that increased enforcement and use of crash avoidance technologies can improve the large truck crash problem. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
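
In a 1:1 matched case-control design like this, the matched-pair odds ratio is estimated from the discordant pairs only. A minimal sketch with hypothetical counts (the study's actual pair counts are not given in the abstract):

```python
def matched_pair_odds_ratio(exposed_case_only, exposed_control_only):
    """Conditional ML odds-ratio estimate for 1:1 matched case-control
    data: the ratio of discordant pairs in which only the case carries
    the risk factor to those in which only the control does."""
    return exposed_case_only / exposed_control_only

# Hypothetical counts (not from the North Carolina study): pairs where
# only the crash-involved truck had an out-of-service defect, vs. pairs
# where only the matched control truck did.
OR = matched_pair_odds_ratio(36, 12)   # 3.0, i.e. "tripled the risk"
```

Concordant pairs (both or neither truck exposed) carry no information about the exposure under this matching, which is why the estimate depends only on the two discordant counts.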

  18. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...

  19. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  20. The large lungs of elite swimmers: an increased alveolar number?

    PubMed

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    chest depth, but by developing physically wider chests, containing an increased number of alveoli, rather than alveoli of increased size. However, in this cross-sectional study, hereditary factors cannot be ruled out, although we believe them to be less likely.

  1. Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations

    PubMed Central

    2013-01-01

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
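
The flavour of the law-of-large-numbers result can be seen in a deliberately tiny caricature (one population, invented rates, not the paper's model): a Markov jump process of N two-state neurons whose active fraction approaches the solution of the corresponding Wilson–Cowan-type rate equation as N grows.

```python
import math, random

def f(x):
    """Sigmoidal gain function (an arbitrary choice for illustration)."""
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))

def gillespie_fraction(N, T, rng):
    """Active fraction at time T for N neurons: each inactive neuron
    switches on at rate f(k/N); each active neuron switches off at rate 1."""
    k, t = 0, 0.0
    while True:
        up = (N - k) * f(k / N)       # total activation rate
        down = float(k)               # total deactivation rate
        total = up + down
        t += rng.expovariate(total)
        if t > T:
            return k / N
        k += 1 if rng.random() < up / total else -1

def mean_field_fraction(T, dt=1e-3):
    """Euler solution of the limiting rate equation da/dt = (1-a)f(a) - a."""
    a = 0.0
    for _ in range(int(T / dt)):
        a += dt * ((1.0 - a) * f(a) - a)
    return a

rng = random.Random(7)
a_small = gillespie_fraction(50, 10.0, rng)     # noisy, O(1/sqrt(N)) away
a_large = gillespie_fraction(20000, 10.0, rng)  # close to the ODE limit
a_ode = mean_field_fraction(10.0)
```

The fluctuations of the active fraction scale like 1/√N, which is the same scaling the paper's central limit theorem captures for the martingale part in the infinite-dimensional setting.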

  2. Bayesian inference of the number of factors in gene-expression analysis: application to human virus challenge studies

    PubMed Central

    2010-01-01

    Background Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Results Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Conclusions Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data. PMID:21062443

  3. Bayesian inference of the number of factors in gene-expression analysis: application to human virus challenge studies.

    PubMed

    Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence

    2010-11-09

    Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
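    The IBP machinery the abstract relies on can be illustrated by sampling from the Indian Buffet Process itself: each draw is a binary data-by-factor matrix whose number of columns, i.e. the number of active factors, is random rather than fixed in advance. A minimal sketch; the concentration value α = 3 and the seed are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng):
    """Draw one binary feature-assignment matrix from the Indian Buffet Process.

    Rows are data points ("customers"), columns are latent factors ("dishes");
    the number of columns -- the inferred number of factors -- is random.
    """
    dish_counts = []   # how many earlier customers took each dish
    rows = []
    for n in range(1, num_customers + 1):
        # take an existing dish with probability (count so far) / n
        row = [1 if rng.random() < c / n else 0 for c in dish_counts]
        for i, took in enumerate(row):
            dish_counts[i] += took
        new = rng.poisson(alpha / n)       # brand-new dishes for this customer
        row.extend([1] * new)
        dish_counts.extend([1] * new)
        rows.append(row)
    K = len(dish_counts)
    Z = np.array([r + [0] * (K - len(r)) for r in rows])
    return Z, K

rng = np.random.default_rng(0)
Z, K = sample_ibp(100, alpha=3.0, rng=rng)
# E[K] = alpha * H_100 ~ 3 * 5.19 ~ 15.6 active factors on average
```

In a full sparse factor model, this matrix masks the factor-loading matrix, and Gibbs or variational updates resample it jointly with the loadings and scores.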

  4. Large atom number Bose-Einstein condensate machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streed, Erik W.; Chikkatur, Ananth P.; Gustavson, Todd L.

    2006-02-15

    We describe experimental setups for producing large Bose-Einstein condensates of ²³Na and ⁸⁷Rb. In both, a high-flux thermal atomic beam is decelerated by a Zeeman slower and is then captured and cooled in a magneto-optical trap. The atoms are then transferred into a cloverleaf-style Ioffe-Pritchard magnetic trap and cooled to quantum degeneracy with radio-frequency-induced forced evaporation. Typical condensates contain 20×10⁶ atoms. We discuss the similarities and differences between the techniques used for producing large ⁸⁷Rb and ²³Na condensates in the context of nearly identical setups.

  5. Bayesian pedigree inference with small numbers of single nucleotide polymorphisms via a factor-graph representation.

    PubMed

    Anderson, Eric C; Ng, Thomas C

    2016-02-01

    We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
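    The Sum-Product computation the method depends on can be sketched on a toy chain-shaped factor graph. The factor tables below are invented for illustration; a real pedigree factor graph would instead encode Mendelian transmission probabilities and genotyping error.

```python
import numpy as np

# Three binary variables on a chain: x1 -- f12 -- x2 -- f23 -- x3.
prior = np.array([0.6, 0.4])                 # unary factor on x1
f12 = np.array([[0.9, 0.1], [0.2, 0.8]])     # pairwise factor over (x1, x2)
f23 = np.array([[0.7, 0.3], [0.4, 0.6]])     # pairwise factor over (x2, x3)

# Sum-Product on a chain: each rightward message marginalizes the product of
# the incoming message and the factor over the upstream variable.
m1_to_2 = prior @ f12            # message arriving at x2
m2_to_3 = m1_to_2 @ f23          # message arriving at x3
marginal_x3 = m2_to_3 / m2_to_3.sum()

# Brute-force check: sum the full joint over x1 and x2.
joint = prior[:, None, None] * f12[:, :, None] * f23[None, :, :]
brute = joint.sum(axis=(0, 1))
brute /= brute.sum()
assert np.allclose(marginal_x3, brute)
```

The point of the stored messages is exactly what the abstract describes: after a local rearrangement of the graph, only the messages along the changed path need recomputing, so the joint probability of the data can be re-evaluated cheaply inside an MCMC sweep.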

  6. Number-Theory in Nuclear-Physics in Number-Theory: Non-Primality Factorization As Fission VS. Primality As Fusion; Composites' Islands of INstability: Feshbach-Resonances?

    NASA Astrophysics Data System (ADS)

    Smith, A.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Numbers: primality/indivisibility/non-factorization versus compositeness/divisibility/factorization, often in tandem but not always, in provocatively close analogy to nuclear physics: (2+1)=(fusion)=3; (3+1)=(fission)=4[=2 x 2]; (4+1)=(fusion)=5; (5+1)=(fission)=6[=2 x 3]; (6+1)=(fusion)=7; (7+1)=(fission)=8[=2 x 4 = 2 x 2 x 2]; (8+1)=(neither fission nor fusion)=9[=3 x 3]; then ONLY composites' islands of fusion-INstability: 8, 9, 10; then 14, 15, 16, ... Could inter-digit Feshbach resonances exist? Possible applications to: quantum-information/computing non-Shor factorization, millennium-problem Riemann-hypothesis proof as Goodkin BEC intersection with graph-theory "short-cut" method: Rayleigh(1870)-Polya(1922)-"Anderson"(1958)-localization, Goldbach conjecture, financial auditing/accounting as quantum-statistical physics; ...abound! Watkins [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/] "Number-Theory in Physics" lists many interconnections from "pure"-maths number theory to physics, including Siegel [AMS Joint Mtg.(2002)-Abs.# 973-60-124] inversion of statistics on-average digits' Newcomb(1881)-Weyl(1914-16)-Benford(1938)-law to reveal both the quantum and BEQS (digits = bosons = digits: "spinEless-boZos"): 1881 < 1885 < 1901 < 1905 < 1925 < 1927, altering quantum-theory history!
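    The prime/composite alternation the abstract enumerates is easy to reproduce with plain trial division; a small sketch, with the "fusion"/"fission" labels taken from the abstract's analogy:

```python
def prime_factors(n):
    """Return the prime factorization of n by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for n in range(3, 10):
    f = prime_factors(n)
    kind = "prime ('fusion')" if len(f) == 1 else "composite ('fission')"
    print(n, kind, f)
```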

  7. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupavskii, A B; Raigorodskii, A M

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  8. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    PubMed

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
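    The detection criterion the abstract mentions first, mass accuracy against a suspect database, usually reduces to a parts-per-million tolerance check on the measured ion. A minimal sketch; the 5 ppm tolerance and the example m/z values are illustrative assumptions, not numbers taken from the study.

```python
def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million, as used in HRMS screening criteria."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(measured_mz, theoretical_mz, tol_ppm=5.0):
    """True if the measured ion matches the database mass within tol_ppm."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

# Illustrative protonated-pesticide m/z pair: ~3.7 ppm off, so it matches.
print(within_tolerance(216.1018, 216.1010))
```

In a full screening workflow this check is only the first filter; fragment ions and (predicted) retention times then confirm or reject each candidate, as described above.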

  9. Stochastic theory of large-scale enzyme-reaction networks: Finite copy number corrections to rate equation models

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Straube, Arthur V.; Grima, Ramon

    2010-11-01

    Chemical reactions inside cells occur in compartment volumes in the range of atto- to femtoliters. Physiological concentrations realized in such small volumes imply low copy numbers of interacting molecules with the consequence of considerable fluctuations in the concentrations. In contrast, rate equation models are based on the implicit assumption of infinitely large numbers of interacting molecules, or equivalently, that reactions occur in infinite volumes at constant macroscopic concentrations. In this article we compute the finite-volume corrections (or equivalently the finite copy number corrections) to the solutions of the rate equations for chemical reaction networks composed of arbitrarily large numbers of enzyme-catalyzed reactions which are confined inside a small subcellular compartment. This is achieved by applying a mesoscopic version of the quasisteady-state assumption to the exact Fokker-Planck equation associated with the Poisson representation of the chemical master equation. The procedure yields impressively simple and compact expressions for the finite-volume corrections. We prove that the predictions of the rate equations will always underestimate the actual steady-state substrate concentrations for an enzyme-reaction network confined in a small volume. In particular we show that the finite-volume corrections increase with decreasing subcellular volume, decreasing Michaelis-Menten constants, and increasing enzyme saturation. The magnitude of the corrections depends sensitively on the topology of the network. The predictions of the theory are shown to be in excellent agreement with stochastic simulations for two types of networks typically associated with protein methylation and metabolism.
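    The finite-copy-number effects described above can be seen directly by simulating an open Michaelis-Menten cycle with the Gillespie algorithm and comparing against the rate-equation steady state. All rate constants, the compartment volume, and the enzyme copy number below are illustrative choices; the qualitative comparison, not the specific numbers, is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
k_in, k1, k2, k3 = 6.0, 1.0, 1.0, 2.0   # substrate input, binding, unbinding, catalysis
nE_total, omega = 5, 1.0                 # enzyme copies, compartment volume

def simulate(t_end=1000.0, t_burn=100.0):
    """Gillespie SSA for 0 -> S, E + S -> C, C -> E + S, C -> E + P."""
    nS, nC = 0, 0
    t, acc, wsum = 0.0, 0.0, 0.0
    while t < t_end:
        nE = nE_total - nC
        a = np.array([k_in * omega,            # substrate input
                      k1 / omega * nE * nS,    # binding
                      k2 * nC,                 # unbinding
                      k3 * nC])                # catalysis (product leaves)
        a0 = a.sum()
        dt = rng.exponential(1.0 / a0)
        if t > t_burn:                         # time-average nS after burn-in
            acc += nS * dt
            wsum += dt
        t += dt
        r = rng.choice(4, p=a / a0)
        if r == 0:
            nS += 1
        elif r == 1:
            nS -= 1
            nC += 1
        elif r == 2:
            nS += 1
            nC -= 1
        else:
            nC -= 1
    return acc / wsum

# Rate-equation steady state: k_in = k3*E_T*s/(K_M + s), with K_M = (k2 + k3)/k1
K_M = (k2 + k3) / k1
s_det = K_M * k_in / (k3 * nE_total - k_in)
mean_nS = simulate()
# The paper's result predicts the stochastic mean concentration mean_nS/omega
# will exceed the rate-equation value s_det in such a small compartment.
print(mean_nS / omega, s_det)
```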

  10. Identification methods of key contributing factors in crashes with high numbers of fatalities and injuries in China.

    PubMed

    Chen, Yikai; Li, Yiming; King, Mark; Shi, Qin; Wang, Changjun; Li, Pingfan

    2016-11-16

    In China, serious road traffic crashes (SRTCs) are those in which there are 10-30 fatalities, 50-100 serious injuries, or a total cost of 50-100 million RMB (U.S.$8-16 M), and particularly serious road traffic crashes (PSRTCs) are those that are more severe or costly. Due to the large number of fatalities and injuries as well as the negative public reaction they elicit, SRTCs and PSRTCs have become of great concern to China during recent years. The aim of this study is to identify the main factors contributing to these road traffic crashes and to propose preventive measures to reduce their number. 49 contributing factors of the SRTCs and PSRTCs that occurred from 2007 to 2013 were collected from the database "In-depth investigation and analysis system for major road traffic crashes" (IIASMRTC) and were analyzed through the integrated use of principal component analysis and hierarchical clustering to determine the primary and secondary groups of contributing factors. Speeding and overloading of passengers were the primary contributing factors, featuring in up to 66.3 and 32.6% of accidents, respectively. Two secondary contributing factors were road related: lack of or nonstandard roadside safety infrastructure and slippery roads due to rain, snow, or ice. The current approach to SRTCs and PSRTCs is focused on the attribution of responsibility and the enforcement of regulations considered relevant to particular SRTCs and PSRTCs. It would be more effective to investigate contributing factors and characteristics of SRTCs and PSRTCs as a whole to provide adequate information for safety interventions in regions where SRTCs and PSRTCs are more common. In addition to mandating a driver training program and publicization of the hazards associated with traffic violations, implementation of speed cameras, speed signs, markings, and vehicle-mounted Global Positioning Systems (GPS) are suggested to reduce speeding of passenger vehicles, while increasing regular checks by
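    The two-stage analysis described, principal component analysis followed by hierarchical clustering to split factors into primary and secondary groups, can be sketched as follows. The crash-by-factor matrix is synthetic, and the choice of three components and Ward linkage are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows are crashes, columns are candidate contributing factors (1 = present).
rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(200, 10)).astype(float)
X[:, 0] = (rng.random(200) < 0.66)   # a dominant factor, e.g. speeding
X[:, 1] = (rng.random(200) < 0.33)   # a second frequent factor

# Principal component analysis via SVD of the centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)      # variance share per component

# Hierarchical clustering of the factors, described by their PC loadings.
loadings = Vt[:3].T                  # one 3-vector of loadings per factor
groups = fcluster(linkage(loadings, method="ward"), t=2, criterion="maxclust")
print(explained[:3], groups)
```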

  11. Capuchin monkeys (Cebus apella) treat small and large numbers of items similarly during a relative quantity judgment task.

    PubMed

    Beran, Michael J; Parrish, Audrey E

    2016-08-01

    A key issue in understanding the evolutionary and developmental emergence of numerical cognition is to learn what mechanism(s) support perception and representation of quantitative information. Two such systems have been proposed, one for dealing with approximate representation of sets of items across an extended numerical range and another for highly precise representation of only small numbers of items. Evidence for the first system is abundant across species and in many tests with human adults and children, whereas the second system is primarily evident in research with children and in some tests with non-human animals. A recent paper (Choo & Franconeri, Psychonomic Bulletin & Review, 21, 93-99, 2014) with adult humans also reported "superprecise" representation of small sets of items in comparison to large sets of items, which would provide more support for the presence of a second system in human adults. We first presented capuchin monkeys with a test similar to that of Choo and Franconeri in which small or large sets with the same ratios had to be discriminated. We then presented the same monkeys with an expanded range of comparisons in the small number range (all comparisons of 1-9 items) and the large number range (all comparisons of 10-90 items in 10-item increments). Capuchin monkeys showed no increased precision for small over large sets in making these discriminations in either experiment. These data indicate a difference in the performance of monkeys to that of adult humans, and specifically that monkeys do not show improved discrimination performance for small sets relative to large sets when the relative numerical differences are held constant.

  12. Deformation of leaky-dielectric fluid globules under strong electric fields: Boundary layers and jets at large Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Schnitzer, Ory; Frankel, Itzchak; Yariv, Ehud

    2013-11-01

    In Taylor's theory of electrohydrodynamic drop deformation (Proc. R. Soc. Lond. A, vol. 291, 1966, pp. 159-166), inertia is neglected at the outset, resulting in fluid velocity that scales as the square of the applied-field magnitude. For large drops, with increasing field strength the Reynolds number predicted by this scaling may actually become large, suggesting the need for a complementary large-Reynolds-number investigation. Balancing viscous stresses and electrical shear forces in this limit reveals a different velocity scaling, with the 4/3-power of the applied-field magnitude. We focus here on the flow over a gas bubble. It is essentially confined to two boundary layers propagating from the poles to the equator, where they collide to form a radial jet. At leading order in the Capillary number, the bubble deforms due to (i) Maxwell stresses; (ii) the hydrodynamic boundary-layer pressure associated with centripetal acceleration; and (iii) the intense pressure distribution acting over the narrow equatorial deflection zone, appearing as a concentrated load. Remarkably, the unique flow topology and associated scalings allow a closed-form expression for this deformation to be obtained through application of integral mass and momentum balances. On the bubble scale, the concentrated pressure load is manifested in the appearance of a non-smooth equatorial dimple.

  13. Unique risk and protective factors for partner aggression in a large scale air force survey.

    PubMed

    Slep, Amy M Smith; Foran, Heather M; Heyman, Richard E; Snarr, Jeffery D

    2010-08-01

    The objective of this study is to examine risk factors of physical aggression against a partner in a large representative Active Duty Air Force sample. A stratified sample of 128,950 United States Active Duty members were invited to participate in an Air Force-wide anonymous online survey across 82 bases. The final sample (N = 52,780) was weighted to be representative of the United States Air Force. Backward stepwise regression analyses were conducted to identify unique predictors of partner physical aggression perpetration within and across different ecological levels (individual, family, organization, and community levels). Relationship satisfaction, alcohol problems, financial stress, and number of years in the military were identified as unique predictors of men's and women's perpetration of violence against their partner across ecological levels. Parental status, support from neighbors, personal coping, and support from formal agencies also uniquely predicted men's but not women's perpetration of violence across ecological levels. This study identified specific risk factors of partner violence that may be targeted by prevention and intervention efforts aimed at different levels of impact (e.g., family interventions, community-wide programs).

  14. Investigating risk factors and possible infectious aetiologies of mummified fetuses on a large piggery in Australia.

    PubMed

    Dron, N; Hernández-Jover, M; Doyle, R E; Holyoake, P K

    2014-12-01

    To investigate risk factors and potential infectious aetiologies of an increased mummification rate (>2%) identified over time on a 1200-sow farrow-to-finish farm in Australia. Association of potential non-infectious risk factors and the mummification rate was investigated using 15 years of breeding herd data (40,940 litters) and logistic regression analysis. Samples from a limited number of mummified fetuses were taken to identify potential infectious aetiologies (porcine parvovirus, Leptospira pomona, porcine circovirus type 2, Bungowannah virus and enterovirus). Logistic regression analysis suggested that the mummification rate was significantly associated with sow breed and parity, year and total born and stillborn piglets per litter. The mummification rate was lower (P < 0.001) in Landrace (3.4%) and Large White (2.6%) sows than in Duroc sows (4.9%). Gilts (2.9%) had a lower (P < 0.001) mummification rate than older sows. The mummification rate increased with total born litter size and decreased with the number of stillborn piglets (P < 0.001). A clustering effect within individual sows was identified, indicating that some sows with mummified fetuses in a litter were more likely to have repeated mummifications in subsequent litters. No infectious agents were identified in the samples taken. Results from this study suggest that the increased mummification rate identified over time on this farm is likely to be a non-infectious multifactorial problem predisposing the occurrence of mummification. Further research is required to better understand the pathophysiology of mummification and the role that different non-infectious factors play in the occurrence of mummified fetuses. © 2014 Australian Veterinary Association.

  15. Factors associated with numbers of remaining teeth among type 2 diabetes: a cross-sectional study.

    PubMed

    Huang, Jui-Chu; Peng, Yun-Shing; Fan, Jun-Yu; Jane, Sui-Whi; Tu, Liang-Tse; Chang, Chang-Cheng; Chen, Mei-Yen

    2013-07-01

    To explore the factors associated with the numbers of remaining teeth among type 2 diabetes community residents. Promoting oral health is an important nursing role for patients with diabetes, especially in disadvantaged areas. However, limited research has been carried out on the relationship between numbers of remaining teeth, diabetes-related biomarkers and personal oral hygiene among diabetic rural residents. A cross-sectional, descriptive design with a simple random sample was used. This study was part of a longitudinal cohort study of health promotion for preventing diabetic foot among rural community diabetic residents. It was carried out in 18 western coastal and inland districts of Chiayi County in central Taiwan. In total, 703 participants were enrolled in this study. The findings indicated that a high percentage of the participants (26%) had no remaining natural teeth. Nearly three quarters (74%) had fewer than 20 natural teeth. After controlling for the potential confounding factors, multivariate analysis demonstrated that the factors determining numbers of remaining teeth were age (p < 0.001), education (p < 0.001), using dental floss (p = 0.003), ankle brachial pressure index (p = 0.028), waist circumference (p = 0.024) and HbA1C (p = 0.033). Except for some unmodifiable factors, the factors most significantly associated with numbers of remaining teeth were less tooth-brushing with dental floss, abnormal ankle brachial pressure and poor glycemic control. This study highlights the importance of nursing intervention in oral hygiene for patients with type 2 diabetes. It is necessary to initiate oral health promotion activities when diabetes is first diagnosed, especially for older diabetic residents of rural or coastal areas who are poorly educated. © 2013 John Wiley & Sons Ltd.

  16. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile", we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability of solving extremely large eigenproblems, N = 216,000, for example, and finding a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10³), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
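    The outer subspace-iteration loop with Rayleigh-Ritz extraction that the abstract describes can be sketched in a small dense setting. The matrix-free inner conjugate-gradient solves that make the real method scale to N ≈ 216,000 are replaced here by a direct solve, purely for clarity; the test matrices are toy choices.

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(A, B, k, iters=100, seed=0):
    """Basic subspace iteration with Rayleigh-Ritz for A x = lambda B x.

    A, B symmetric with B positive definite.  Inverse iteration drives the
    subspace toward the k leftmost (smallest-eigenvalue) eigenpairs.
    """
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((A.shape[0], k))
    for _ in range(iters):
        V = np.linalg.solve(A, B @ V)   # inverse-iteration step (dense stand-in)
        V, _ = np.linalg.qr(V)          # keep the basis well conditioned
    # Rayleigh-Ritz: small projected generalized eigenproblem on the subspace
    evals, S = eigh(V.T @ A @ V, V.T @ B @ V)
    return evals, V @ S

# Toy problem with known eigenvalues 1..8 and B = I; leftmost three recovered.
A = np.diag(np.arange(1.0, 9.0))
evals, V = subspace_iteration(A, np.eye(8), k=3)
print(evals)
```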

  17. How large B-factors can be in protein crystal structures.

    PubMed

    Carugo, Oliviero

    2018-02-23

    Protein crystal structures are potentially over-interpreted since they are routinely refined without any restraint on the upper limit of atomic B-factors. Consequently, some of their atoms, undetected in the electron density maps, are allowed to reach extremely large B-factors, even above 100 square Angstroms, and their final positions are purely speculative and not based on any experimental evidence. A strategy to define B-factor upper limits is described here, based on the analysis of protein crystal structures deposited in the Protein Data Bank prior to 2008, when the tendency to allow B-factors to inflate arbitrarily was limited. This B-factor upper limit (B_max) is determined by extrapolating the relationship between the crystal structure average B-factor and the percentage of crystal volume occupied by solvent (pcVol) to pcVol = 100%, when, ab absurdo, the crystal contains only liquid solvent, the structure of which is, by definition, undetectable in electron density maps. It is thus possible to highlight structures with average B-factors larger than B_max, which should be considered with caution by users of the information deposited in the Protein Data Bank, in order to avoid scientifically deleterious over-interpretations.
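    The extrapolation defining B_max is just a straight-line fit pushed to pcVol = 100%. A sketch with invented data points; the real analysis fits structures deposited before 2008, and the numbers below only illustrate the mechanics.

```python
import numpy as np

# Invented (pcVol, mean B-factor) pairs standing in for pre-2008 PDB statistics.
pcVol = np.array([30.0, 40.0, 50.0, 60.0, 70.0])   # % crystal volume that is solvent
mean_B = np.array([18.0, 25.0, 33.0, 42.0, 52.0])  # average B-factor (A^2)

slope, intercept = np.polyfit(pcVol, mean_B, 1)
B_max = slope * 100.0 + intercept    # extrapolate to an all-solvent "crystal"
print(round(B_max, 1))               # -> 76.5 for these invented points

def flag_structure(avg_B, limit=B_max):
    """True if a structure's average B-factor exceeds the extrapolated limit."""
    return avg_B > limit
```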

  18. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, along with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
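    A serial additive lagged Fibonacci generator of the kind the study parallelizes can be sketched in a few lines. The lags (55, 24), the 2³² modulus, and the naive LCG-based seeding below are standard illustrative choices, not the paper's FPGA seeding scheme (a hardware version would keep the lag table in block RAM and use leapfrogged streams per parallel unit).

```python
from collections import deque

class ALFG:
    """Additive lagged Fibonacci generator: x[n] = (x[n-24] + x[n-55]) mod 2**32."""

    def __init__(self, seed=1, r=55, s=24, m=32):
        self.r, self.s, self.mask = r, s, (1 << m) - 1
        state, x = [], seed & self.mask
        for _ in range(r):                    # crude LCG fill of the lag table
            x = (1103515245 * x + 12345) & self.mask
            state.append(x | 1)               # force odd entries (an ALFG needs
                                              # at least one odd lag-table value)
        self.state = deque(state, maxlen=r)

    def next(self):
        out = (self.state[-self.s] + self.state[-self.r]) & self.mask
        self.state.append(out)                # maxlen drops the oldest entry
        return out

g = ALFG(seed=42)
draws = [g.next() for _ in range(5)]
```

The recurrence maps naturally onto hardware: one adder, one mask, and a shift-register lag table per generator instance.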

  19. Large deletions play a minor but essential role in congenital coagulation factor VII and X deficiencies.

    PubMed

    Rath, M; Najm, J; Sirb, H; Kentouche, K; Dufke, A; Pauli, S; Hackmann, K; Liehr, T; Hübner, C A; Felbor, U

    2015-01-01

    Congenital factor VII (FVII) and factor X (FX) deficiencies belong to the group of rare bleeding disorders which may occur in separate or combined forms since both the F7 and F10 genes are located in close proximity on the distal long arm of chromosome 13 (13q34). We here present data of 192 consecutive index cases with FVII and/or FX deficiency. 10 novel and 53 recurrent sequence alterations were identified in the F7 gene and 5 novel as well as 11 recurrent in the F10 gene including one homozygous 4.35 kb deletion within F7 (c.64+430_131-6delinsTCGTAA) and three large heterozygous deletions involving both the F7 and F10 genes. One of the latter proved to be cytogenetically visible as a chromosome 13q34 deletion and associated with agenesis of the corpus callosum and psychomotor retardation. Large deletions play a minor but essential role in the mutational spectrum of the F7 and F10 genes. Copy number analyses (e.g. MLPA) should be considered if sequencing cannot clarify the underlying reason of an observed coagulopathy. Of note, in cases of combined FVII/FX deficiency, a deletion of the two contiguous genes might be part of a larger chromosomal rearrangement.

  20. [Rating and ranking of medical journals: a randomised controlled evaluation of impact factor and number of listed journals].

    PubMed

    Göbel, U; Niem, V

    2012-01-01

    The impact factor is a purely bibliometric parameter built on the number of publications and their citations within clearly defined periods. Appropriate interpretation of the impact factor is important as it is also used worldwide for the evaluation of research performance. It is assumed that the number of medical journals reflects the extent of the diseases and patient populations involved and that this number is correlated with the level of the impact factor. 174 category lists (Subject Categories) are included in the area Health Sciences of the ISI Web of Knowledge of Thomson Reuters, 71 of which belong to the field of medicine and 50 of which have a clinical and/or application-oriented focus. These alphabetically arranged 50 category lists were consecutively numbered, randomized by odd and even numbers into 2 equal-sized groups, and then grouped according to organ specialities, sub-specialities and cross-disciplinary fields. By tossing a coin it was decided which group should be evaluated first. Only then were the category lists downloaded, and the number of journals, the impact factors of the journals ranking number 1 and 2, and the impact factors of the journals at the end of the first third and at the end of the first half of each category list were compared. The number of journals per category list varies considerably, between 5 and 252. The lists of organ specialities and cross-disciplinary fields include more than three times as many journals as those of the sub-specialities; the highest numbers of journals are listed for the cross-disciplinary fields. The impact factor of the journals ranked number 1 in the lists varies considerably, ranging from 3.058 to 94.333; a similar variability exists for the journals at rank 2. On the other hand, the impact factor of journals at the end of the first third of the lists varies between 1.214 and 3.953, and for those journals at the end of the first half of a respective category

  1. What are the low-Q and large-x boundaries of collinear QCD factorization theorems?

    DOE PAGES

    Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...

    2017-05-26

    Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, Q ~ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable x_bj is large, x_bj ≳ 0.5, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-Q and large-x_bj physics.

  2. Design and test of a natural laminar flow/large Reynolds number airfoil with a high design cruise lift coefficient

    NASA Technical Reports Server (NTRS)

    Kolesar, C. E.

    1987-01-01

    Research activity on an airfoil designed for a large airplane capable of very long endurance times at a low Mach number of 0.22 is examined. Airplane mission objectives and design optimization resulted in requirements for a very high design lift coefficient and a large amount of laminar flow at high Reynolds number to increase the lift/drag ratio and reduce the loiter lift coefficient. Natural laminar flow was selected instead of distributed mechanical suction for the measurement technique. A design lift coefficient of 1.5 was identified as the highest which could be achieved with a large extent of laminar flow. A single element airfoil was designed using an inverse boundary layer solution and inverse airfoil design computer codes to create an airfoil section that would achieve performance goals. The design process and results, including airfoil shape, pressure distributions, and aerodynamic characteristics are presented. A two dimensional wind tunnel model was constructed and tested in a NASA Low Turbulence Pressure Tunnel which enabled testing at full scale design Reynolds number. A comparison is made between theoretical and measured results to establish accuracy and quality of the airfoil design technique.

  3. A Large Scale Wind Tunnel for the Study of High Reynolds Number Turbulent Boundary Layer Physics

    NASA Astrophysics Data System (ADS)

    Priyadarshana, Paththage; Klewicki, Joseph; Wosnik, Martin; White, Chris

    2008-11-01

    Progress and the basic features of the University of New Hampshire's very large multi-disciplinary wind tunnel are reported. The refinement of the overall design has been greatly aided through consultations with an external advisory group. The facility test section is 73 m long, 6 m wide, and 2.5 m nominally high, and the maximum free stream velocity is 30 m/s. A very large tunnel with relatively low velocities makes the small scale turbulent motions resolvable by existing measurement systems. The maximum Reynolds number is estimated at δ+ = δuτ/ν ≈ 50000, where δ is the boundary layer thickness and uτ is the friction velocity. The effects of scale separation on the generation of the Reynolds stress gradient appearing in the mean momentum equation are briefly discussed to justify the need to attain δ+ in excess of about 40000. Lastly, plans for future utilization of the facility as a community-wide resource are outlined. This project is supported through the NSF-EPSCoR RII Program, grant number EPS0701730.

  4. Some types of parent number talk count more than others: relations between parents' input and children's cardinal-number knowledge.

    PubMed

    Gunderson, Elizabeth A; Levine, Susan C

    2011-09-01

    Before they enter preschool, children vary greatly in their numerical and mathematical knowledge, and this knowledge predicts their achievement throughout elementary school (e.g. Duncan et al., 2007; Ginsburg & Russell, 1981). Therefore, it is critical that we look to the home environment for parental inputs that may lead to these early variations. Recent work has shown that the amount of number talk that parents engage in with their children is robustly related to a critical aspect of mathematical development - cardinal-number knowledge (e.g. knowing that the word 'three' refers to sets of three entities; Levine, Suriyakham, Rowe, Huttenlocher & Gunderson, 2010). The present study characterizes the different types of number talk that parents produce and investigates which types are most predictive of children's later cardinal-number knowledge. We find that parents' number talk involving counting or labeling sets of present, visible objects is related to children's later cardinal-number knowledge, whereas other types of parent number talk are not. In addition, number talk that refers to large sets of present objects (i.e. sets of size 4 to 10 that fall outside children's ability to track individual objects) is more robustly predictive of children's later cardinal-number knowledge than talk about smaller sets. The relation between parents' number talk about large sets of present objects and children's cardinal-number knowledge remains significant even when controlling for factors such as parents' socioeconomic status and other measures of parents' number and non-number talk. © 2011 Blackwell Publishing Ltd.

  5. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ye; Thornber, Ben

    2016-04-12

    Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.

  6. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
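The quality metrics named above can be computed per pixel from an ensemble of reconstructions of a known reference image (e.g. a simulated phantom). A minimal sketch follows; the function name and data layout are assumptions, not the authors' code.

```python
def image_metrics(replicates, truth):
    """replicates: list of reconstructed images (flat lists of floats);
    truth: the known reference image. Returns per-pixel bias, variance, MSE."""
    n = len(replicates)
    npix = len(truth)
    mean = [sum(r[i] for r in replicates) / n for i in range(npix)]
    bias = [mean[i] - truth[i] for i in range(npix)]
    var = [sum((r[i] - mean[i]) ** 2 for r in replicates) / n for i in range(npix)]
    mse = [bias[i] ** 2 + var[i] for i in range(npix)]  # MSE = bias^2 + variance
    return bias, var, mse

# Two toy "reconstructions" of a 2-pixel reference image:
bias, var, mse = image_metrics([[1.0, 2.0], [3.0, 4.0]], [2.0, 2.0])
```

Averaging `mse` over all pixels gives a single average-MSE figure of the kind used in the comparison.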

  7. Development of High-Fill-Factor Large-Aperture Micromirrors for Agile Optical Phased Arrays

    DTIC Science & Technology

    2010-02-28

    Final Project Report. Contract/Grant Title: Development of High-Fill-Factor Large-Aperture Micromirrors for Agile Optical Phased Arrays. A high-fill-factor (HFF) micromirror array (MMA) has been proposed, fabricated and tested. Optical-phased-array (OPA) beam steering based on the HFF MMA has also been ... electrically tuned to multiple ... High-fill-factor (HFF) micromirror arrays (MMAs) can form optical phased arrays (OPAs) for laser beam steering.

  8. Procedures and equipment for staining large numbers of plant root samples for endomycorrhizal assay.

    PubMed

    Kormanik, P P; Bryan, W C; Schultz, R C

    1980-04-01

    A simplified method of clearing and staining large numbers of plant roots for vesicular-arbuscular (VA) mycorrhizal assay is presented. Equipment needed for handling multiple samples is described, and two formulations for the different chemical solutions are presented. Because one formulation contains phenol, its use should be limited to basic studies for which adequate laboratory exhaust hoods are available and great clarity of fungal structures is required. The second staining formulation, utilizing lactic acid instead of phenol, is less toxic, requires less elaborate laboratory facilities, and has proven to be completely satisfactory for VA assays.

  9. Phases of a stack of membranes in a large number of dimensions of configuration space

    NASA Astrophysics Data System (ADS)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  10. Environmental Factors Affecting Large-Bodied Coral Reef Fish Assemblages in the Mariana Archipelago

    PubMed Central

    Richards, Benjamin L.; Williams, Ivor D.; Vetter, Oliver J.; Williams, Gareth J.

    2012-01-01

    Large-bodied reef fishes represent an economically and ecologically important segment of the coral reef fish assemblage. Many of these individuals supply the bulk of the reproductive output for their population and have a disproportionate effect on their environment (e.g. as apex predators or bioeroding herbivores). Large-bodied reef fishes also tend to be at greatest risk of overfishing, and their loss can result in a myriad of either cascading (direct) or indirect trophic and other effects. While many studies have investigated habitat characteristics affecting populations of small-bodied reef fishes, few have explored the relationship between large-bodied species and their environment. Here, we describe the distribution of the large-bodied reef fishes in the Mariana Archipelago with an emphasis on the environmental factors associated with their distribution. Of the factors considered in this study, a negative association with human population density showed the highest relative influence on the distribution of large-bodied reef fishes; however, depth, water temperature, and distance to deep water also were important. These findings provide new information on the ecology of large-bodied reef fishes, can inform discussions concerning essential fish habitat and ecosystem-based management for these species, and highlight important knowledge gaps worthy of additional research. PMID:22384014

  11. Environmental factors affecting large-bodied coral reef fish assemblages in the Mariana Archipelago.

    PubMed

    Richards, Benjamin L; Williams, Ivor D; Vetter, Oliver J; Williams, Gareth J

    2012-01-01

    Large-bodied reef fishes represent an economically and ecologically important segment of the coral reef fish assemblage. Many of these individuals supply the bulk of the reproductive output for their population and have a disproportionate effect on their environment (e.g. as apex predators or bioeroding herbivores). Large-bodied reef fishes also tend to be at greatest risk of overfishing, and their loss can result in a myriad of either cascading (direct) or indirect trophic and other effects. While many studies have investigated habitat characteristics affecting populations of small-bodied reef fishes, few have explored the relationship between large-bodied species and their environment. Here, we describe the distribution of the large-bodied reef fishes in the Mariana Archipelago with an emphasis on the environmental factors associated with their distribution. Of the factors considered in this study, a negative association with human population density showed the highest relative influence on the distribution of large-bodied reef fishes; however, depth, water temperature, and distance to deep water also were important. These findings provide new information on the ecology of large-bodied reef fishes, can inform discussions concerning essential fish habitat and ecosystem-based management for these species, and highlight important knowledge gaps worthy of additional research.

  12. Agri-Environmental Resource Management by Large-Scale Collective Action: Determining KEY Success Factors

    ERIC Educational Resources Information Center

    Uetake, Tetsuya

    2015-01-01

    Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors to the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…

  13. Borrowed Capital as Risk Factor for Large Construction Companies in Russia

    NASA Astrophysics Data System (ADS)

    Guzikova, L.; Plotnikova, E.; Zubareva, M.

    2017-11-01

    The paper investigates the features of the formation of the capital structure of large construction companies from the standpoint of the financial risks and opportunities for companies’ development. The authors compare the opportunities and risks linked with the use of own and borrowed capital, analyze the capital structure of large Russian construction companies, and identify factors affecting the capital structure and determining the ratio of own and borrowed sources of financing. The paper considers the hypothesis that companies use larger volumes of borrowed capital as their assets increase.

  14. Evolutionary expansion and divergence in a large family of primate-specific zinc finger transcription factor genes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, A T; Huntley, S; Tran-Gyamfi, M

    Although most genes are conserved as one-to-one orthologs in different mammalian orders, certain gene families have evolved to comprise different numbers and types of protein-coding genes through independent series of gene duplications, divergence and gene loss in each evolutionary lineage. One such family encodes KRAB-zinc finger (KRAB-ZNF) genes, which are likely to function as transcriptional repressors. One KRAB-ZNF subfamily, the ZNF91 clade, has expanded specifically in primates to comprise more than 110 loci in the human genome, yielding large gene clusters in human chromosomes 19 and 7 and smaller clusters or isolated copies at other chromosomal locations. Although phylogenetic analysis indicates that many of these genes arose before the split between old world monkeys and new world monkeys, the ZNF91 subfamily has continued to expand and diversify throughout the evolution of apes and humans. The paralogous loci are distinguished by sequence divergence within their zinc finger arrays, indicating a selection for proteins with different DNA binding specificities. RT-PCR and in situ hybridization data show that some of these ZNF genes can have tissue-specific expression patterns; however, many KRAB-ZNFs that are near-ubiquitous could also be playing very specific roles in halting target pathways in all tissues except for a few, where the target is released by the absence of its repressor. The number of variant KRAB-ZNF proteins is increased not only because of the large number of loci, but also because many loci can produce multiple splice variants, which because of the modular structure of these genes may have separate and perhaps even conflicting regulatory roles. The lineage-specific duplication and rapid divergence of this family of transcription factor genes suggests a role in determining species-specific biological differences and the evolution of novel primate traits.

  15. High Reynolds Number Investigation of a Flush-Mounted, S-Duct Inlet With Large Amounts of Boundary Layer Ingestion

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Carter, Melissa B.; Allan, Brian G.

    2005-01-01

    An experimental investigation of a flush-mounted, S-duct inlet with large amounts of boundary layer ingestion has been conducted at Reynolds numbers up to full scale. The study was conducted in the NASA Langley Research Center 0.3-Meter Transonic Cryogenic Tunnel. In addition, a supplemental computational study on one of the inlet configurations was conducted using the Navier-Stokes flow solver, OVERFLOW. Tests were conducted at Mach numbers from 0.25 to 0.83, Reynolds numbers (based on aerodynamic interface plane diameter) from 5.1 million to 13.9 million (full-scale value), and inlet mass-flow ratios from 0.29 to 1.22, depending on Mach number. Results of the study indicated that increasing Mach number, increasing boundary layer thickness (relative to inlet height) or ingesting a boundary layer with a distorted profile decreased inlet performance. At Mach numbers above 0.4, increasing inlet airflow increased inlet pressure recovery but also increased distortion. Finally, inlet distortion was found to be relatively insensitive to Reynolds number, but pressure recovery increased slightly with increasing Reynolds number.

  16. Large scale motions of multiple limit-cycle high Reynolds number annular and toroidal rotor/stator cavities

    NASA Astrophysics Data System (ADS)

    Bridel-Bertomeu, Thibault; Gicquel, L. Y. M.; Staffelbach, G.

    2017-06-01

    Rotating cavity flows are essential components of industrial applications but their dynamics are still not fully understood when it comes to the relation between the fluid organization and monitored pressure fluctuations. From computer hard-drives to turbo-pumps of space launchers, designed devices often produce flow oscillations that can either destroy the component prematurely or produce too much noise. In such a context, large scale dynamics of high Reynolds number rotor/stator cavities need better understanding, especially at the flow limit-cycle or associated statistically stationary state. In particular, the influence of curvature as well as cavity aspect ratio on the large scale organization and flow stability at a fixed rotating disc Reynolds number is fundamental. To probe such flows, wall-resolved large eddy simulation is applied to two different rotor/stator cylindrical cavities and one annular cavity. Validation of the predictions shows the method to be well suited, capturing the disc boundary layer patterns reported in the literature. It is then shown that, in complement to these disc boundary layer analyses, at the limit-cycle the rotating flows exhibit characteristic patterns at mid-height in the homogeneous core, pointing to the importance of large scale features. Indeed, dynamic modal decomposition reveals that the entire flow dynamics are driven by only a handful of atomic modes whose combination links the oscillatory patterns observed in the boundary layers as well as in the core of the cavity. These fluctuations form macro-structures, born in the unstable stator boundary layer and extending through the homogeneous inviscid core to the rotating disc boundary layer, causing its instability under some conditions. More importantly, the macro-structures significantly differ depending on the configuration, pointing to the need for deeper understanding of the influence of geometrical parameters as well as operating conditions.

  17. Antenatal risk factors for postnatal depression: a large prospective study.

    PubMed

    Milgrom, Jeannette; Gemmill, Alan W; Bilszta, Justin L; Hayes, Barbara; Barnett, Bryanne; Brooks, Janette; Ericksen, Jennifer; Ellwood, David; Buist, Anne

    2008-05-01

    This study measured antenatal risk factors for postnatal depression in the Australian population, both singly and in combination. Risk factor data were gathered antenatally and depressive symptoms measured via the beyondblue National Postnatal Depression Program, a large prospective cohort study into perinatal mental health, conducted in all six states of Australia, and in the Australian Capital Territory, between 2002 and 2005. Pregnant women were screened for symptoms of postnatal depression at antenatal clinics in maternity services around Australia using the Edinburgh Postnatal Depression Scale (EPDS) and a psychosocial risk factor questionnaire that covered key demographic and psychosocial information. From a total of 40,333 participants, we collected antenatal EPDS data from 35,374 women and 3144 of these had a score >12 (8.9%). Subsequently, efforts were made to follow-up 22,968 women with a postnatal EPDS. Of 12,361 women who completed postnatal EPDS forms, 925 (7.5%) had an EPDS score >12. Antenatal depression together with a prior history of depression and a low level of partner support were the strongest independent antenatal predictors of a postnatal EPDS score >12. The two main limitations of the study were the use of the EPDS (a self-report screening tool) as the measure of depressive symptoms rather than a clinical diagnosis, and the rate of attrition between antenatal screening and the collection of postnatal follow-up data. Antenatal depressive symptoms appear to be as common as postnatal depressive symptoms. Previous depression, current depression/anxiety, and low partner support are found to be key antenatal risk factors for postnatal depression in this large prospective cohort, consistent with existing meta-analytic surveys. Current depression/anxiety (and to some extent social support) may be amenable to change and can therefore be targeted for intervention.

  18. Risk factors for drug dependence among out-patients on opioid therapy in a large US health-care system.

    PubMed

    Boscarino, Joseph A; Rukstalis, Margaret; Hoffman, Stuart N; Han, John J; Erlich, Porat M; Gerhard, Glenn S; Stewart, Walter F

    2010-10-01

    Our study sought to assess the prevalence of and risk factors for opioid drug dependence among out-patients on long-term opioid therapy in a large health-care system. Using electronic health records, we identified out-patients receiving 4+ physician orders for opioid therapy in the past 12 months for non-cancer pain within a large US health-care system. We completed diagnostic interviews with 705 of these patients to identify opioid use disorders and assess risk factors. Preliminary analyses suggested that current opioid dependence might be as high as 26% [95% confidence interval (CI) = 22.0-29.9] among the patients studied. Logistic regressions indicated that current dependence was associated with variables often in the medical record, including age <65 [odds ratio (OR) = 2.33, P = 0.001], opioid abuse history (OR = 3.81, P < 0.001), high dependence severity (OR = 1.85, P = 0.001), major depression (OR = 1.29, P = 0.022) and psychotropic medication use (OR = 1.73, P = 0.006). Four variables combined (age, depression, psychotropic medications and pain impairment) predicted increased risk for current dependence, compared to those without these factors (OR = 8.01, P < 0.001). Knowing that the patient also had a history of severe dependence and opioid abuse increased this risk substantially (OR = 56.36, P < 0.001). Opioid misuse and dependence among prescription opioid patients in the United States may be higher than expected. A small number of factors, many documented in the medical record, predicted opioid dependence among the out-patients studied. These preliminary findings should be useful in future research efforts. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.

  19. Comparison of jet Mach number decay data with a correlation and jet spreading contours for a large variety of nozzles

    NASA Technical Reports Server (NTRS)

    Groesbeck, D. E.; Huff, R. G.; Vonglahn, U. H.

    1977-01-01

    Small-scale circular, noncircular, single- and multi-element nozzles with flow areas as large as 122 sq cm were tested with cold airflow at exit Mach numbers from 0.28 to 1.15. The effects of multi-element nozzle shape and element spacing on jet Mach number decay were studied in an effort to reduce the noise caused by jet impingement on externally blown flap (EBF) STOL aircraft. The jet Mach number decay data are well represented by empirical relations. Jet spreading and Mach number decay contours are presented for all configurations tested.

  20. Heritability and factors associated with number of harness race starts in the Spanish Trotter horse population.

    PubMed

    Solé, M; Valera, M; Gómez, M D; Sölkner, J; Molina, A; Mészáros, G

    2017-05-01

    Longevity/durability is a relevant trait in racehorses. Genetic analysis and knowledge of factors that influence number of harness race starts would be advantageous for both horse welfare and the equine industry. To perform a genetic analysis on harness racing using number of races as a measure of longevity/durability and to identify factors associated with career length in Spanish Trotter Horses (STH). Longitudinal study. Performance data (n = 331,970) on the STH population for harness racing at national level between 1990 and 2014 were used. A grouped data model was fitted to assess factors influencing the risk of ending harness racing career and to estimate the heritability and breeding values for total number of harness races starts as an indicator of horses' longevity and durability. The model included sex, age at first race and first start earnings as time-independent effects, and the calendar year, driver, trainer, racetrack category and season of competition as time-dependent effects. Across the whole dataset, the average number of harness races horses achieved in Spain was 54.7 races, and this was associated with the horses' sex, age at first race and first start earnings, calendar year, driver, racetrack category, and season. The heritability estimated (0.17 ± 0.01) for number of harness race starts indicates that a beneficial response to direct genetic selection can be expected. Data on horses' health status were not available. Horses' total number of harness race starts is a promising tool for genetic analysis and the evaluation of racing longevity and durability. The estimated heritability provides evidence to support the application of genetic selection of total career number of races to improve longevity/durability of STH. © 2016 EVJ Ltd.

  1. Factors influencing the efficiency of generating genetically engineered pigs by nuclear transfer: multi-factorial analysis of a large data set.

    PubMed

    Kurome, Mayuko; Geistlinger, Ludwig; Kessler, Barbara; Zakhartchenko, Valeri; Klymiuk, Nikolai; Wuensch, Annegret; Richter, Anne; Baehr, Andrea; Kraehe, Katrin; Burkhardt, Katinka; Flisikowski, Krzysztof; Flisikowska, Tatiana; Merkl, Claudia; Landmann, Martina; Durkovic, Marina; Tschukes, Alexander; Kraner, Simone; Schindelhauer, Dirk; Petri, Tobias; Kind, Alexander; Nagashima, Hiroshi; Schnieke, Angelika; Zimmer, Ralf; Wolf, Eckhard

    2013-05-20

    Somatic cell nuclear transfer (SCNT) using genetically engineered donor cells is currently the most widely used strategy to generate tailored pig models for biomedical research. Although this approach facilitates a similar spectrum of genetic modifications as in rodent models, the outcome in terms of live cloned piglets is quite variable. In this study, we aimed at a comprehensive analysis of environmental and experimental factors that substantially influence the efficiency of generating genetically engineered pigs. Based on a large data set from 274 SCNT experiments (in total 18,649 reconstructed embryos transferred into 193 recipients), performed over a period of three years, we assessed the relative contribution of season, type of genetic modification, donor cell source, number of cloning rounds, and pre-selection of cloned embryos for early development to the cloning efficiency. 109 (56%) recipients became pregnant and 85 (78%) of them gave birth to offspring. Out of 318 cloned piglets, 243 (76%) were alive, but only 97 (40%) were clinically healthy and showed normal development. The proportion of stillborn piglets was 24% (75/318), and another 31% (100/318) of the cloned piglets died soon after birth. The overall cloning efficiency, defined as the number of offspring born per SCNT embryos transferred, including only recipients that delivered, was 3.95%. SCNT experiments performed during winter using fetal fibroblasts or kidney cells after additive gene transfer resulted in the highest number of live and healthy offspring, while two or more rounds of cloning and nuclear transfer experiments performed during summer decreased the number of healthy offspring. Although the effects of individual factors may differ between laboratories, our results and analysis strategy will help to identify and optimize the factors most critical to cloning success in programs aiming at the generation of genetically engineered pig models.

  2. On polynomial selection for the general number field sieve

    NASA Astrophysics Data System (ADS)

    Kleinjung, Thorsten

    2006-12-01

    The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.
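For context, the simplest polynomial selection method that Montgomery–Murphy-style selection improves upon is the classical base-m expansion: pick m near N^(1/d) and read the base-m digits of N off as polynomial coefficients, so that f(m) = N. A toy sketch follows (illustrative only; real GNFS selection also optimizes the size and root properties of the polynomial pair, which this ignores):

```python
def base_m_polynomial(n, d):
    """Pick m near n**(1/d) and expand n in base m, giving a degree-d
    polynomial f with f(m) == n (the paired linear polynomial is x - m)."""
    m = int(round(n ** (1.0 / d)))
    while m ** d > n:           # correct any floating-point rounding error
        m -= 1
    while (m + 1) ** d <= n:
        m += 1
    coeffs, rest = [], n
    for _ in range(d + 1):      # digits c_0 ... c_d with 0 <= c_i < m
        rest, c = divmod(rest, m)
        coeffs.append(c)
    return m, coeffs            # f(x) = sum(coeffs[i] * x**i)

# Classic textbook example: n = 45113, degree 3.
m, coeffs = base_m_polynomial(45113, 3)
assert sum(c * m ** i for i, c in enumerate(coeffs)) == 45113
```

For large n, the floating-point root estimate should be replaced by an exact integer d-th root, but the digit-expansion step is unchanged.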

  3. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    PubMed Central

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519

  4. Large copy-number variations in patients with statin-associated myopathy affecting statin myopathy-related loci.

    PubMed

    Stránecký, V; Neřoldová, M; Hodaňová, K; Hartmannová, H; Piherová, L; Zemánková, P; Přistoupilová, A; Vrablík, M; Adámková, V; Kmoch, S; Jirsa, M

    2016-12-13

    Some patients are susceptible to statin-associated myopathy (SAM) either because of genetic variations affecting statin uptake and metabolism, or because of variants that predispose their carriers to muscular diseases. Among the frequent variants examined using the genome-wide association study approach, SLCO1B1 c.521T>C represents the only validated predictor of SAM in patients treated with high-dose simvastatin. Our aim was to ascertain the overall contribution of large copy-number variations (CNVs) to SAM diagnosed in 86 patients. CNVs were detected by whole genome genotyping using Illumina HumanOmni2.5 Exome BeadChips. Exome sequence data were used for validation of CNVs in SAM-related loci. In addition, we performed a specific search for CNVs in the SLCO1B region detected recently in Rotor syndrome subjects. Rare deletions possibly contributing to genetic predisposition to SAM were found in two patients: one removed EYS, previously associated with SAM; the other was present in LARGE, associated with congenital muscular dystrophy. Another two patients carried deletions in CYP2C19, which may predispose to clopidogrel-statin interactions. We found no common large CNVs potentially associated with SAM and no CNVs in the SLCO1B locus. Our findings suggest that large CNVs do not play a substantial role in the etiology of SAM.

  5. Number-Theory in Nuclear-Physics in Number-Theory: Non-Primality Factorization As Fission VS. Primality As Fusion; Composites' Islands of INstability: Feshbach-Resonances?

    NASA Astrophysics Data System (ADS)

    Siegel, Edward

    2011-10-01

    Numbers: primality/indivisibility/non-factorization versus compositeness/divisibility/factorization, often in tandem but not always, in provocatively close analogy to nuclear physics: (2+1) = (fusion) = 3; (3+1) = (fission) = 4 [= 2 × 2]; (4+1) = (fusion) = 5; (5+1) = (fission) = 6 [= 2 × 3]; (6+1) = (fusion) = 7; (7+1) = (fission) = 8 [= 2 × 4 = 2 × 2 × 2]; (8+1) = (neither fission nor fusion) = 9 [= 3 × 3]; then ONLY composites' islands of instability: 8, 9, 10; then 14, 15, 16, ... Could inter-digit Feshbach resonances exist? Applications abound: quantum-information/computing non-Shor factorization, the millennium-problem Riemann-hypothesis proof as Goodkin BEC intersection with the graph-theory "short-cut" method (Rayleigh (1870)-Polya (1922)-"Anderson" (1958) localization), the Goldbach conjecture, financial auditing/accounting as quantum-statistical physics, ...
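    The prime/composite alternation the abstract enumerates can be reproduced with a minimal sketch (my own illustration, not the author's method; the labels `fusion`/`fission` simply follow the abstract's analogy, and the abstract treats (8+1) = 9 as neither):

    ```python
    def is_prime(n: int) -> bool:
        """Trial-division primality test, sufficient for small n."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def label(n: int) -> str:
        """Classify the step n -> n+1 in the abstract's nuclear analogy."""
        return "fusion (prime)" if is_prime(n + 1) else "fission (composite)"

    # Reproduce the abstract's sequence from (2+1)=3 through (7+1)=8.
    for n in range(2, 8):
        print(f"({n} + 1) = {label(n)} = {n + 1}")
    ```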

  6. Interconnect patterns for printed organic thermoelectric devices with large fill factors

    NASA Astrophysics Data System (ADS)

    Gordiz, Kiarash; Menon, Akanksha K.; Yee, Shannon K.

    2017-09-01

    Organic materials can be printed into thermoelectric (TE) devices for low temperature energy harvesting applications. The output voltage of printed devices is often limited by (i) small temperature differences across the active materials attributed to small leg lengths and (ii) the lower Seebeck coefficient of organic materials compared to their inorganic counterparts. To increase the voltage, a large number of p- and n-type leg pairs is required for organic TEs; this, however, results in an increased interconnect resistance, which then limits the device output power. In this work, we discuss practical concepts to address this problem by positioning TE legs in a hexagonal close-packed layout. This helps achieve higher fill factors (˜91%) than conventional inorganic devices (˜25%), which ultimately results in higher voltages and power densities due to lower interconnect resistances. In addition, wiring the legs following a Hilbert space-filling pattern allows for facile load matching to each application. This is made possible by leveraging the fractal nature of the Hilbert interconnect pattern, which results in identical sub-modules. Using the Hilbert design, sub-modules can better accommodate non-uniform temperature distributions because they naturally self-localize. These device design concepts open new avenues for roll-to-roll printing and custom TE module shapes, thereby enabling organic TE modules for self-powered sensors and wearable electronic applications.
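    As a back-of-the-envelope check (my addition, not from the paper), the ˜91% fill factor quoted for the hexagonal close-packed layout matches the ideal packing density of circles on a hexagonal lattice, π/(2√3); a square lattice of circles gives π/4 ≈ 79% by comparison:

    ```python
    import math

    # Ideal packing density of equal circles on a hexagonal lattice.
    hex_fill = math.pi / (2 * math.sqrt(3))   # ~0.9069, i.e. ~91%

    # Ideal packing density of equal circles on a square lattice.
    square_fill = math.pi / 4                 # ~0.7854, i.e. ~79%

    print(f"hexagonal: {hex_fill:.1%}, square: {square_fill:.1%}")
    ```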

  7. Nonnegative Matrix Factorization for identification of unknown number of sources emitting delayed signals

    PubMed Central

    Iliev, Filip L.; Stanev, Valentin G.; Vesselinov, Velimir V.

    2018-01-01

    Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found. PMID:29518126
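    The factorization V ≈ W·H at the heart of the method can be sketched with the classic Lee-Seung multiplicative updates (a generic NMF illustration only; the paper's actual algorithm additionally estimates delays, propagation speeds, and the number of sources, which is not reproduced here):

    ```python
    import random

    def matmul(A, B):
        """Dense matrix product of lists-of-lists."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def nmf(V, k, iters=200, eps=1e-9, seed=0):
        """Factor nonnegative V (m x n) into W (m x k) and H (k x n)."""
        rng = random.Random(seed)
        m, n = len(V), len(V[0])
        W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
        H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
        for _ in range(iters):
            # H <- H * (W^T V) / (W^T W H), elementwise.
            WT = transpose(W)
            num, den = matmul(WT, V), matmul(matmul(WT, W), H)
            H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
                 for i in range(k)]
            # W <- W * (V H^T) / (W H H^T), elementwise.
            HT = transpose(H)
            num, den = matmul(V, HT), matmul(W, matmul(H, HT))
            W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
                 for i in range(m)]
        return W, H

    # Mix two nonnegative sources into four "sensor" records, then recover
    # a rank-2 nonnegative factorization of the mixtures.
    S = [[1, 0, 2, 0, 1], [0, 1, 0, 2, 0]]      # hidden sources
    A = [[1, 2], [2, 1], [1, 1], [3, 1]]        # mixing matrix
    V = matmul(A, S)                            # recorded mixtures
    W, H = nmf(V, k=2)
    R = matmul(W, H)
    err = sum((V[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(5))
    print(f"reconstruction error: {err:.2e}")
    ```

    The multiplicative form keeps W and H nonnegative throughout, which is what makes the recovered factors interpretable as sources and mixing weights.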

  8. Nonnegative Matrix Factorization for identification of unknown number of sources emitting delayed signals.

    PubMed

    Iliev, Filip L; Stanev, Valentin G; Vesselinov, Velimir V; Alexandrov, Boian S

    2018-01-01

    Factor analysis is broadly used as a powerful unsupervised machine learning tool for reconstruction of hidden features in recorded mixtures of signals. In the case of a linear approximation, the mixtures can be decomposed by a variety of model-free Blind Source Separation (BSS) algorithms. Most of the available BSS algorithms consider an instantaneous mixing of signals, while the case when the mixtures are linear combinations of signals with delays is less explored. Especially difficult is the case when the number of sources of the signals with delays is unknown and has to be determined from the data as well. To address this problem, in this paper, we present a new method based on Nonnegative Matrix Factorization (NMF) that is capable of identifying: (a) the unknown number of the sources, (b) the delays and speed of propagation of the signals, and (c) the locations of the sources. Our method can be used to decompose records of mixtures of signals with delays emitted by an unknown number of sources in a nondispersive medium, based only on recorded data. This is the case, for example, when electromagnetic signals from multiple antennas are received asynchronously; or mixtures of acoustic or seismic signals recorded by sensors located at different positions; or when a shift in frequency is induced by the Doppler effect. By applying our method to synthetic datasets, we demonstrate its ability to identify the unknown number of sources as well as the waveforms, the delays, and the strengths of the signals. Using Bayesian analysis, we also evaluate estimation uncertainties and identify the region of likelihood where the positions of the sources can be found.

  9. Cosmonumerology, Cosmophysics, and the Large Numbers Hypothesis: British Cosmology in the 1930s

    NASA Astrophysics Data System (ADS)

    Durham, Ian

    2001-04-01

    A number of unorthodox cosmological models were developed in the 1930s, many by British theoreticians. Three of the most notable of these theories included Eddington's cosmonumerology, Milne's cosmophysics, and Dirac's large numbers hypothesis (LNH). Dirac's LNH was based partly on the other two and it has been argued that modern steady-state theories are based partly on Milne's cosmophysics. But what influenced Eddington and Milne? Both were products of the late Victorian education system in Britain and could conceivably have been influenced by Victorian thought which, in addition to its strict (though technically unofficial) social caste system, had a flair for the unusual. Victorianism was filled with a fascination for the occult and the supernatural, and science was not insulated from this trend (witness the Henry Slade trial in 1877). It is conceivable that the normally strict mentality of the scientific process in the minds of Eddington and Milne was affected, indirectly, by this trend for the unusual, possibly pushing them into thinking "outside the box" as it were. In addition, cosmonumerology and the LNH exhibit signs of Pythagorean and Aristotelian thought. It is the aim of this ongoing project at St. Andrews to determine the influences and characterize the relations within and between these and related theories.

  10. Exploring the feasibility of using copy number variants as genetic markers through large-scale whole genome sequencing experiments

    USDA-ARS?s Scientific Manuscript database

    Copy number variants (CNV) are large scale duplications or deletions of genomic sequence that are caused by a diverse set of molecular phenomena that are distinct from single nucleotide polymorphism (SNP) formation. Due to their different mechanisms of formation, CNVs are often difficult to track us...

  11. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization

    NASA Astrophysics Data System (ADS)

    MacArt, Jonathan F.; Mueller, Michael E.

    2016-12-01

    Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
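    The first- versus second-order behavior of Lie and Strang splitting can be seen in a toy example (my own sketch, not the paper's reacting-flow scheme): for du/dt = (A + B)u with non-commuting 2×2 operators A = [[0,1],[0,0]] and B = [[0,0],[1,0]], both sub-flows are known exactly, and halving the time step should cut the Lie error by ~2 and the Strang error by ~4:

    ```python
    import math

    def mat_vec(M, v):
        return [M[0][0] * v[0] + M[0][1] * v[1],
                M[1][0] * v[0] + M[1][1] * v[1]]

    # Exact sub-flows: exp(A*t) and exp(B*t) for the nilpotent A and B above.
    def expA(t): return [[1.0, t], [0.0, 1.0]]
    def expB(t): return [[1.0, 0.0], [t, 1.0]]

    def integrate(u0, dt, steps, strang):
        u = u0
        for _ in range(steps):
            if strang:  # Strang: e^{A dt/2} e^{B dt} e^{A dt/2}, formally 2nd order
                u = mat_vec(expA(dt / 2),
                            mat_vec(expB(dt), mat_vec(expA(dt / 2), u)))
            else:       # Lie: e^{A dt} e^{B dt}, formally 1st order
                u = mat_vec(expA(dt), mat_vec(expB(dt), u))
        return u

    # Exact flow: exp((A+B)T) u0 with A+B = [[0,1],[1,0]] and u0 = (1, 0).
    T, u0 = 1.0, [1.0, 0.0]
    exact = [math.cosh(T), math.sinh(T)]

    def err(dt, strang):
        u = integrate(u0, dt, round(T / dt), strang)
        return max(abs(u[0] - exact[0]), abs(u[1] - exact[1]))

    for name, strang in (("Lie", False), ("Strang", True)):
        ratio = err(0.1, strang) / err(0.05, strang)
        print(f"{name}: error ratio on halving dt = {ratio:.2f}")
    ```

    Because A and B do not commute, the splitting error does not vanish, and the observed convergence ratios reflect each scheme's formal order.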

  12. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    NASA Astrophysics Data System (ADS)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  13. High Reynolds Number Investigation of a Flush Mounted, S-Duct Inlet With Large Amounts of Boundary Layer Ingestion

    NASA Technical Reports Server (NTRS)

    Berrier, Bobby L.; Carter, Melissa B.; Allan, Brian G.

    2005-01-01

    An experimental investigation of a flush-mounted, S-duct inlet with large amounts of boundary layer ingestion has been conducted at Reynolds numbers up to full scale. The study was conducted in the NASA Langley Research Center 0.3-Meter Transonic Cryogenic Tunnel. In addition, a supplemental computational study on one of the inlet configurations was conducted using the Navier-Stokes flow solver, OVERFLOW. Tests were conducted at Mach numbers from 0.25 to 0.83, Reynolds numbers (based on aerodynamic interface plane diameter) from 5.1 million to 13.9 million (full-scale value), and inlet mass-flow ratios from 0.29 to 1.22, depending on Mach number. Results of the study indicated that increasing Mach number, increasing boundary layer thickness (relative to inlet height) or ingesting a boundary layer with a distorted profile decreased inlet performance. At Mach numbers above 0.4, increasing inlet airflow increased inlet pressure recovery but also increased distortion. Finally, inlet distortion was found to be relatively insensitive to Reynolds number, but pressure recovery increased slightly with increasing Reynolds number. This CD-ROM supplement contains inlet data including: boundary layer data, duct static pressure data, performance/AIP (fan face) data, photos, tunnel wall P-PTO data, and definitions.

  14. Multiplex titration RT-PCR: rapid determination of gene expression patterns for a large number of genes

    NASA Technical Reports Server (NTRS)

    Nebenfuhr, A.; Lomax, T. L.

    1998-01-01

    We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.

  15. Building Numbers from Primes

    ERIC Educational Resources Information Center

    Burkhart, Jerry

    2009-01-01

    Prime numbers are often described as the "building blocks" of natural numbers. This article shows how the author and his students took this idea literally by using prime factorizations to build numbers with blocks. In this activity, students explore many concepts of number theory, including the relationship between greatest common factors and…
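    The classroom idea in the abstract, building numbers from prime "blocks" and reading the greatest common factor off the shared blocks, can be sketched as follows (an illustrative implementation, not the article's classroom activity):

    ```python
    import math
    from collections import Counter

    def prime_factors(n: int) -> Counter:
        """Prime factorization by trial division, e.g. 12 -> {2: 2, 3: 1}."""
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def gcf(a: int, b: int) -> int:
        """Greatest common factor = product of the prime blocks a and b share."""
        shared = prime_factors(a) & prime_factors(b)  # Counter & takes min counts
        return math.prod(p ** e for p, e in shared.items())

    print(prime_factors(360))   # 360 = 2^3 * 3^2 * 5
    print(gcf(360, 84))         # shared blocks 2^2 * 3 = 12
    ```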

  16. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    PubMed Central

    Jardim-Messeder, Débora; Lambert, Kelly; Noctor, Stephen; Pestana, Fernanda M.; de Castro Leal, Maria E.; Bertelsen, Mads F.; Alagaili, Abdulaziz N.; Mohammad, Osama B.; Manger, Paul R.; Herculano-Houzel, Suzana

    2017-01-01

    Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal composition of carnivoran

  17. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species.

    PubMed

    Jardim-Messeder, Débora; Lambert, Kelly; Noctor, Stephen; Pestana, Fernanda M; de Castro Leal, Maria E; Bertelsen, Mads F; Alagaili, Abdulaziz N; Mohammad, Osama B; Manger, Paul R; Herculano-Houzel, Suzana

    2017-01-01

    Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal composition of carnivoran

  18. Location, number and factors associated with cerebral microbleeds in an Italian-British cohort of CADASIL patients.

    PubMed

    Nannucci, Serena; Rinnoci, Valentina; Pracucci, Giovanni; MacKinnon, Andrew D; Pescini, Francesca; Adib-Samii, Poneh; Bianchi, Silvia; Dotti, Maria Teresa; Federico, Antonio; Inzitari, Domenico; Markus, Hugh S; Pantoni, Leonardo

    2018-01-01

    The frequency, clinical correlates, and risk factors of cerebral microbleeds (CMB) in Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) are still poorly known. We aimed at determining the location and number of CMB and their relationship with clinical manifestations, vascular risk factors, drugs, and other neuroimaging features in CADASIL patients. We collected clinical data by means of a structured proforma and centrally evaluated CMB on magnetic resonance gradient echo sequences applying the Microbleed Anatomical Rating Scale in CADASIL patients seen in 2 referral centers in Italy and United Kingdom. We evaluated 125 patients. CMB were present in 34% of patients and their presence was strongly influenced by age. Twenty-nine percent of the patients had CMB in deep subcortical location, 22% in a lobar location, and 18% in infratentorial regions. After adjustment for age, factors significantly associated with a higher total number of CMB were hemorrhagic stroke, dementia, urge incontinence, and statin use (this latter not confirmed by multivariate analysis). Infratentorial and deep CMB were associated with dementia and urge incontinence, lobar CMB with hemorrhagic stroke, dementia, and statin use. Unexpectedly, patients with migraine, with or without aura, had a lower total, deep, and lobar number of CMB than patients without migraine. CMB formation in CADASIL seems to increase with age. History of hemorrhagic stroke, dementia, urge incontinence, and statin use are associated with a higher number of CMB. However, these findings need to be confirmed by longitudinal studies.

  19. Modelling of Field-Reversed Configuration Experiment with Large Safety Factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinhauer, L; Guo, H; Hoffman, A

    2005-11-28

    The Translation-Confinement-Sustainment facility has been operated in the 'translation-formation' mode in which a plasma is ejected at high speed from a θ-pinch-like source into a confinement chamber where it settles into a field-reversed-configuration state. Measurements of the poloidal and toroidal field have been the basis of modeling to infer the safety factor. It is found that the edge safety factor exceeds two, and that there is strong forward magnetic shear. The high q arises because the large elongation compensates for the modest ratio of toroidal-to-poloidal field in the plasma. This is the first known instance of a very high-β plasma with a safety factor greater than unity. Two-fluid modeling of the measurements also indicates several other significant features: a broad 'transition layer' at the plasma boundary with probable line-tying effects, complex high-speed flows, and the appearance of a two-fluid minimum-energy state in the plasma core. All these features may contribute to both the stability and good confinement of the plasma.

  20. Decreased numbers of chemotactic factor receptors in chronic neutropenia with defective chemotaxis: spontaneous recovery from the neutrophil abnormalities during early childhood

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasui, K.; Yamazaki, M.; Miyagawa, Y.

    Childhood chronic neutropenia with decreased numbers of chemotactic factor receptors as well as defective chemotaxis was first demonstrated in an 8-month-old girl. Chemotactic factor receptors on neutrophils were assayed using tritiated N-formyl-methionyl-leucyl-phenylalanine (³H-FMLP). The patient's neutrophils had decreased numbers of the receptors: 20,000 (more than 3 SD below the control mean), as compared with 52,000 ± 6,000 (mean ± SD) for control cells (n = 10). The neutropenia disappeared spontaneously by 28 months of age, in parallel with the improvement of chemotaxis and an increase in the number of chemotactic factor receptors. These results demonstrate a transient decrease of neutrophil chemotactic factor receptors as one of the pathophysiological bases of a transient defect of neutrophil chemotaxis in this disorder.

  1. Railway noise annoyance and the importance of number of trains, ground vibration, and building situational factors.

    PubMed

    Gidlöf-Gunnarsson, Anita; Ögren, Mikael; Jerson, Tomas; Öhrström, Evy

    2012-01-01

    Internationally accepted exposure-response relationships show that railway noise causes less annoyance than road traffic and aircraft noise. Railway transport, both passenger and freight, is increasing, and new railway lines are planned for environmental reasons. The combination of more frequent railway traffic and faster and heavier trains will most probably lead to more disturbances from railway traffic in the near future. To effectively plan for mitigation of noise and vibration from railway traffic, new studies are needed to obtain a better knowledge base. The main objectives of the present study were to investigate how the relationship between noise levels from railway traffic and general annoyance is influenced by (i) the number of trains, (ii) the presence of ground-borne vibrations, and (iii) building situational factors, such as orientation of the balcony/patio and bedroom window. Socio-acoustic field studies were executed in residential areas (1) with relatively intense railway traffic, (2) with strong vibrations, and (3) with the most intense railway traffic in the country. Data were obtained for 1695 respondents exposed to sound levels ranging from L(Aeq,24h) 45 to 65 dB. Both the number of trains and the presence of ground-borne vibrations, and not just the noise level per se, are relevant to how annoying railway noise is perceived. The results imply that, for the proportion annoyed to be equal, a 5-7 dB lower noise level is needed in areas where the railway traffic causes strong ground-borne vibrations and in areas with a very large number of trains. General noise annoyance was twice as high among residents in dwellings with a balcony/patio oriented towards the railway and about 1.5 times higher among residents with bedroom windows facing the railway.

  2. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations using a molecular computer. To achieve this, we propose three DNA-based algorithms for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.
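    The hardness assumption behind RSA can be illustrated classically (my own sketch, unrelated to the paper's DNA procedure): trial division recovers the two primes behind a semiprime modulus, but its cost grows like √n, which is why factoring cryptographically large semiprimes is considered infeasible:

    ```python
    import math

    def factor_semiprime(n: int) -> tuple[int, int]:
        """Return (p, q) with p * q == n, by trial division up to sqrt(n)."""
        for p in range(2, math.isqrt(n) + 1):
            if n % p == 0:
                return p, n // p
        raise ValueError(f"{n} has no nontrivial factors")

    p, q = 101, 113              # toy stand-ins for "large" primes
    n = p * q                    # the public modulus
    print(factor_semiprime(n))   # recovers (101, 113)
    ```

    Doubling the bit length of n roughly squares the work for this approach, so real RSA moduli (2048+ bits) are far out of reach of trial division.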

  3. Factors influencing the number of applications submitted per applicant to orthopedic residency programs

    PubMed Central

    Finkler, Elissa S.; Fogel, Harold A.; Kroin, Ellen; Kliethermes, Stephanie; Wu, Karen; Nystrom, Lukas M.; Schiff, Adam P.

    2016-01-01

    Background From 2002 to 2014, the orthopedic surgery residency applicant pool increased by 25% while the number of applications submitted per applicant rose by 69%, resulting in an increase of 109% in the number of applications received per program. Objective This study aimed to identify applicant factors associated with an increased number of applications to orthopedic surgery residency programs. Design An anonymous survey was sent to all applicants applying to the orthopedic surgery residency program at Loyola University. Questions were designed to define the number of applications submitted per respondent as well as the strength of their application. Of 733 surveys sent, 140 (19.1%) responses were received. Setting An academic institution in Maywood, IL. Participants Fourth-year medical students applying to the orthopedic surgery residency program at Loyola University. Results An applicant's perception of how competitive he or she was (applicants who rated themselves as ‘average’ submitted more applications than those who rated themselves as either ‘good’ or ‘outstanding’, p=0.001) and the number of away rotations (those who completed >2 away rotations submitted more applications, p=0.03) were significantly associated with an increased number of applications submitted. No other responses were found to be associated with an increased number of applications submitted. Conclusion Less qualified candidates are not applying to significantly more programs than their more qualified counterparts. The increasing number of applications represents a financial strain on the applicant, given the costs required to apply to more programs, and a time burden on individual programs to screen increasing numbers of applicants. In order to stabilize or reverse this alarming trend, orthopedic surgery residency programs should openly disclose admission criteria to prospective candidates, and medical schools should provide additional guidance for candidates in this process

  4. Association of market, organizational and financial factors with the number, and types of capital expenditures.

    PubMed

    McCue, Michael J

    2011-01-01

    Prior literature provides only a descriptive view of the types and numbers of capital expenditures made by hospitals. This study conducted an empirical analysis to assess simultaneously what market, organizational, and financial factors relate to the number of capital projects as well as the specific types: medical equipment, expansion, and maintenance projects. Sampling California hospital capital expenditure data from 2002 to 2007, this study aggregated the number of capital projects by each type of capital investment decision: medical equipment, expansion, and maintenance/renovation per hospital. Using ordinary least squares regression, this study evaluated the association of these factors with these types of capital investment projects. This study found that hospitals capturing a greater share of the market, maintaining high levels of liquidity, and operating with more than 350 beds invested in a greater number of capital projects per hospital as well as medical equipment and expansionary projects per hospital. Within the state of California, the demand for health care services within a hospital market as well as cash and investment reserves were key drivers in the hospital CEOs and boards' decision to increase their capital purchases. The types of purchases included capital outlays related to medical equipment, such as CT scanners, MRIs, and surgical systems, and revenue-generating expansionary projects, such as new bed towers, hospitals wings, operating and emergency rooms, and replacement hospitals from 2002 to 2007.

  5. Integration of human factors principles in LARG organizations--a conceptual model.

    PubMed

    Figueira, Sara; Machado, V Cruz; Nunes, Isabel L

    2012-01-01

    Nowadays many companies are undergoing organizational transformations in order to meet the changing market demands. Thus, in order to become more competitive, supply chains (SC) are adopting new management paradigms to improve SC performance: lean, agile, resilient and green (LARG paradigms). The implementation of new production paradigms demands particular care with issues related to Human Factors, in order to avoid health and safety problems for workers and losses for companies. Thus, the successful introduction of these new production paradigms depends, among other factors, on a Human Factors oriented approach. This work presents a conceptual framework that allows integrating ergonomic and safety design principles during the different implementation phases of lean, agile, resilient and green practices.

  6. Increased numbers of total nucleated and CD34+ cells in blood group O cord blood: an analysis of neonatal innate factors in the Korean population.

    PubMed

    Lee, Hye Ryun; Park, Jeong Su; Shin, Sue; Roh, Eun Youn; Yoon, Jong Hyun; Han, Kyou Sup; Kim, Byung Jae; Storms, Robert W; Chao, Nelson J

    2012-01-01

    We analyzed neonatal factors that could affect hematopoietic variables of cord blood (CB) donated from Korean neonates. The numbers of total nucleated cells (TNCs), CD34+ cells, and CD34+ cells/TNCs of CB in neonates were compared according to sex, gestational age, birth weight, birth weight centile for gestational age, and ABO blood group. With 11,098 CB units analyzed, blood group O CB showed an increased number of TNCs, CD34+ cells, and CD34+ cells/TNCs compared with other blood groups. Although TNC counts were lower in males, no difference in the number of CD34+ cells was demonstrated because the number of CD34+ cells/TNCs was higher in males. An increase in the gestational age resulted in an increase in the number of TNCs and decreases in the number of CD34+ cells and CD34+ cells/TNCs. The numbers of TNCs, CD34+ cells, and CD34+ cells/TNCs increased according to increased birth weight centile as well as birth weight. CB with blood group O has unique hematologic variables in this large-scale analysis of Korean neonates, although the impact on the storage policies of CB banks or the clinical outcome of transplantation remains to be determined. © 2011 American Association of Blood Banks.

  7. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 < Pr < 35 and Rayleigh numbers in the range 10⁸ < Ra < 4×10¹¹. From these measurements we deduce changes of the flow structure. We learn about the stability and dynamics of the large-scale circulation (LSC), as well as about its breakdown and the onset of vortex formation close to the top and bottom plates. We shall examine correlations between these measurements and changes in the heat transport. This work was supported by NSF grant DMR11-58514. SW acknowledges support by the Deutsche Forschungsgemeinschaft.

  8. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices.

    PubMed

    Park, KeeHyun; Lim, SeungHyeon

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.
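
    The kind of message loss measured in such a stress test can be illustrated with a toy threaded experiment. This is a minimal sketch, not the authors' ISO/IEEE 11073 system: the agent count, queue capacity, and message counts below are invented, and loss here simply means overflow of a bounded buffer with no consumer:

```python
import queue
import threading

N_AGENTS = 200          # toy stand-in for the paper's 1,200 PHD agents
MSGS_PER_AGENT = 50
server_inbox = queue.Queue(maxsize=500)  # bounded buffer: overflow = loss

lost = 0
lock = threading.Lock()

def agent(agent_id: int) -> None:
    """Each thread plays one PHD agent pushing measurements at the server."""
    global lost
    for i in range(MSGS_PER_AGENT):
        try:
            server_inbox.put_nowait((agent_id, i))
        except queue.Full:
            with lock:
                lost += 1

threads = [threading.Thread(target=agent, args=(a,)) for a in range(N_AGENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

delivered = server_inbox.qsize()
loss_ratio = lost / (N_AGENTS * MSGS_PER_AGENT)
print(delivered, round(loss_ratio, 3))
```

    With no consumer draining the queue, exactly the buffer capacity is delivered and everything else counts as lost; a multilayered design interposes relay and buffering layers precisely to keep this ratio near zero under heavy traffic.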

  9. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    PubMed Central

    Lim, SeungHyeon

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic. PMID:26247034

  10. The large Maf factor Traffic Jam controls gonad morphogenesis in Drosophila.

    PubMed

    Li, Michelle A; Alls, Jeffrey D; Avancini, Rita M; Koo, Karen; Godt, Dorothea

    2003-11-01

    Interactions between somatic and germline cells are critical for the normal development of egg and sperm. Here we show that the gene traffic jam (tj) produces a soma-specific factor that controls gonad morphogenesis and is required for female and male fertility. tj encodes the only large Maf factor in Drosophila melanogaster, an orthologue of the atypical basic leucine zipper transcription factors c-Maf and MafB/Kreisler in vertebrates. Expression of tj occurs in somatic gonadal cells that are in direct contact with germline cells throughout development. In tj mutant gonads, somatic cells fail to intermingle and properly envelop germline cells, causing an early block in germ cell differentiation. In addition, tj mutant somatic cells show an increase in the level of expression for several adhesion molecules. We propose that tj is a critical modulator of the adhesive properties of somatic cells, facilitating germline-soma interactions that are essential for germ cell differentiation.

  11. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia.

    PubMed

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA-DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA-DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place.

  12. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia

    PubMed Central

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA–DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA–DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place. PMID:23073392

  13. Factors Influencing Pharmacy Students' Attendance Decisions in Large Lectures

    PubMed Central

    Helms, Kristen L.; McDonough, Sharon K.; Breland, Michelle L.

    2009-01-01

    Objectives To identify reasons for pharmacy student attendance and absenteeism in large lectures and to determine whether certain student characteristics affect student absenteeism. Methods Pharmacy students' reasons to attend and not attend 3 large lecture courses were identified. Using a Web-based survey instrument, second-year pharmacy students were asked to rate to what degree various reasons affected their decision to attend or not attend classes for 3 courses. Bivariate analyses were used to assess the relationships between student characteristics and degree of absenteeism. Results Ninety-eight students (75%) completed the survey instrument. The degree of student absenteeism differed among the 3 courses. Most student demographic characteristics examined were not related to the degree of absenteeism. Different reasons to attend and not to attend class were identified for each of the 3 courses, suggesting that attendance decisions were complex. Conclusions The top 2 common reasons for pharmacy students to attend classes were that respondents wanted to take their own notes and that the instructor highlighted what was important to know. Better understanding of factors influencing student absenteeism may help pharmacy educators design effective interventions to facilitate student attendance. PMID:19777098

  14. Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.

    PubMed

    Janiga, Gábor

    2014-04-01

    This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). It was shown in particular that proper modeling of the transitional flow features is challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion. The resulting flow cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional and to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV) and an excellent agreement can be observed considering the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis. Copyright © 2014 Elsevier Ltd. All rights reserved.
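
    For orientation, the quoted Reynolds number follows from Re = ρUd/μ. The fluid properties and throat diameter below are nominal values often used for the FDA benchmark's blood-analog fluid and are assumptions here, not figures taken from this study:

```python
rho = 1056.0   # kg/m^3, assumed blood-analog density
mu = 0.0035    # Pa*s, assumed dynamic viscosity
d = 0.004      # m, nominal throat diameter of the FDA nozzle

Re = 6500.0
U = Re * mu / (rho * d)  # throat velocity implied by Re = rho*U*d/mu
print(round(U, 2))       # throat velocity in m/s, of order a few m/s
```

    A throat velocity of a few metres per second with a millimetre-scale diameter places the throat flow in the transitional-to-turbulent range, which is exactly the regime RANS models struggle with.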

  15. Do family history of CHD, education, paternal social class, number of siblings and height explain the association between psychosocial factors at work and coronary heart disease? The Whitehall II study

    PubMed Central

    Hintsa, T; Shipley, M; Gimeno, D; Elovainio, M; Chandola, T; Jokela, M; Keltikangas-Järvinen, L; Vahtera, J; Marmot, MG; Kivimäki, M

    2011-01-01

    Objectives To examine whether the association between psychosocial factors at work and incident coronary heart disease (CHD) is explained by pre-employment factors such as family history of CHD, education, paternal social class, number of siblings and height. Methods A prospective cohort study of 6435 British men aged 35–55 years at phase 1 (1985–1988) and free from prevalent CHD at phase 2 (1989–1990) was conducted. Psychosocial factors at work were assessed at phases 1 and 2 and mean scores across the two phases were used to determine long-term exposure. Selected pre-employment factors were assessed at phase 1. Follow-up for coronary death, first non-fatal myocardial infarction or definite angina between phase 2 and 1999 was based on clinical records (250 events, follow-up 8.7 years). Results Pre-employment factors were associated with risk of CHD: hazard ratios, HRs (95% CI), were 1.33 (1.03–1.73) for family history of CHD, 1.18 (1.05–1.32) for each quartile decrease in height, and marginally 1.16 (0.99–1.35) for each category increase in number of siblings. Psychosocial work factors predicted CHD: 1.72 (1.08–2.74) for low job control and 1.72 (1.10–2.67) for low organisational justice. Adjustment for pre-employment factors changed these associations by 4.1% or less. Conclusions In this well-characterised occupational cohort of British men, the association between psychosocial factors at work and CHD was largely independent of family history of CHD, education, paternal education and social class, number of siblings and height. PMID:19819857

  16. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    PubMed

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg 2C⁻¹ in A. parviflora to 1·275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  17. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    PubMed Central

    Krahulcová, Anna; Trávníček, Pavel; Rejmánek, Marcel

    2017-01-01

    Background and Aims Aesculus L. (horse chestnut, buckeye) is a genus of 12–19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Methods Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. Key Results The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg 2C–1 in A. parviflora to 1·275 pg 2C–1 in A. glabra var. glabra. Conclusions The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide an extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is consistent positive association between larger genome size and larger seed mass within individual lineages. PMID:28065925

  18. Changes in numbers of large ovarian follicles, plasma luteinizing hormone and estradiol-17beta concentrations and egg production figures in farmed ostriches throughout the year.

    PubMed

    Bronneberg, R G G; Stegeman, J A; Vernooij, J C M; Dieleman, S J; Decuypere, E; Bruggeman, V; Taverne, M A M

    2007-06-01

    In this study we described and analysed changes in the numbers of large ovarian follicles (diameter 6.1-9.0 cm) and in the plasma concentrations of luteinizing hormone (LH) and estradiol-17beta (E(2)beta) in relation to individual egg production figures of farmed ostriches (Struthio camelus spp.) throughout one year. Ultrasound scanning and blood sampling for plasma hormone analysis were performed in 9 hens on a monthly basis during the breeding season and in two periods of the non-breeding season. Our data demonstrated that: (1) large follicles were detected and LH concentrations were elevated already 1 month before the first ovipositions of the egg production season took place; (2) E(2)beta concentrations increased as soon as the egg production season started; (3) numbers of large follicles and LH and E(2)beta concentrations were elevated during the entire egg production season; and (4) numbers of large follicles and LH and E(2)beta concentrations decreased simultaneously with or following the last ovipositions of the egg production season. By comparing these parameters during the egg production season with their pre- and post-seasonal values, significant differences were found in the numbers of large follicles and E(2)beta concentrations between the pre-seasonal, seasonal and post-seasonal periods, while LH concentrations were significantly different between the seasonal and post-seasonal periods. In conclusion, our data demonstrate that changes in numbers of large follicles and in concentrations of LH and E(2)beta closely parallel individual egg production figures and provide some new cues that egg production in ostriches is confined to a marked reproductive season. Moreover, our data provide indications that the mechanisms initiating, maintaining and terminating the egg production season in farmed breeding ostriches are quite similar to those already known for other seasonal breeding bird species.

  19. Factors affecting the performance of large-aperture microphone arrays.

    PubMed

    Silverman, Harvey F; Patterson, William R; Sachar, Joshua

    2002-05-01

    Large arrays of microphones have been proposed and studied as a possible means of acquiring data in offices, conference rooms, and auditoria without requiring close-talking microphones. When such an array essentially surrounds all possible sources, it is said to have a large aperture. Large-aperture arrays have attractive properties of spatial resolution and signal-to-noise enhancement. This paper presents a careful comparison of theoretical and measured performance for an array of 256 microphones using simple delay-and-sum beamforming. This is the largest currently functional, all digital-signal-processing array that we know of. The array is wall-mounted in the moderately adverse environment of a general-purpose laboratory (8 m x 8 m x 3 m). The room has a T60 reverberation time of 550 ms. Reverberation effects in this room severely impact the array's performance. However, the width of the main lobe remains comparable to that of a simplified prediction. Broadband spatial resolution shows a single central peak with 10 dB gain about 0.4 m in diameter at the -3 dB level. Away from that peak, the response is approximately flat over most of the room. Optimal weighting for signal-to-noise enhancement degrades the spatial resolution minimally. Experimentally, we verify that signal-to-noise gain is less than proportional to the square root of the number of microphones probably due to the partial correlation of the noise between channels, to variation of signal intensity with polar angle about the source, and to imperfect correlation of the signal over the array caused by reverberations. We show measurements of the relative importance of each effect in our environment.
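
    The delay-and-sum beamforming used above, and the √N noise-averaging bound the authors test against, can be sketched with synthetic one-dimensional signals and known integer-sample delays. Nothing here reproduces the 256-channel hardware; it only illustrates why uncorrelated channel noise averages down while the aligned signal adds coherently:

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_samples = 16, 4096
t = np.arange(n_samples)
source = np.sin(2 * np.pi * t / 64.0)  # clean source signal

# Each microphone hears a delayed copy of the source plus independent noise
delays = rng.integers(0, 100, n_mics)
mics = np.stack([np.roll(source, d) + rng.normal(0, 1.0, n_samples)
                 for d in delays])

# Delay-and-sum: undo each (known) delay, then average across channels
aligned = np.stack([np.roll(mics[i], -delays[i]) for i in range(n_mics)])
beamformed = aligned.mean(axis=0)

# For perfectly uncorrelated noise, residual noise power drops by ~N
single_err = np.var(np.roll(mics[0], -delays[0]) - source)
beam_err = np.var(beamformed - source)
print(round(single_err / beam_err, 1))  # close to n_mics
```

    In the real array the measured gain falls short of this ideal, for the reasons the abstract lists: partially correlated noise between channels, polar variation of the signal, and reverberation-induced decorrelation of the signal itself.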

  20. Factors affecting the performance of large-aperture microphone arrays

    NASA Astrophysics Data System (ADS)

    Silverman, Harvey F.; Patterson, William R.; Sachar, Joshua

    2002-05-01

    Large arrays of microphones have been proposed and studied as a possible means of acquiring data in offices, conference rooms, and auditoria without requiring close-talking microphones. When such an array essentially surrounds all possible sources, it is said to have a large aperture. Large-aperture arrays have attractive properties of spatial resolution and signal-to-noise enhancement. This paper presents a careful comparison of theoretical and measured performance for an array of 256 microphones using simple delay-and-sum beamforming. This is the largest currently functional, all digital-signal-processing array that we know of. The array is wall-mounted in the moderately adverse environment of a general-purpose laboratory (8 m×8 m×3 m). The room has a T60 reverberation time of 550 ms. Reverberation effects in this room severely impact the array's performance. However, the width of the main lobe remains comparable to that of a simplified prediction. Broadband spatial resolution shows a single central peak with 10 dB gain about 0.4 m in diameter at the -3 dB level. Away from that peak, the response is approximately flat over most of the room. Optimal weighting for signal-to-noise enhancement degrades the spatial resolution minimally. Experimentally, we verify that signal-to-noise gain is less than proportional to the square root of the number of microphones probably due to the partial correlation of the noise between channels, to variation of signal intensity with polar angle about the source, and to imperfect correlation of the signal over the array caused by reverberations. We show measurements of the relative importance of each effect in our environment.

  1. Attack risk for butterflies changes with eyespot number and size

    PubMed Central

    Ho, Sebastian; Schachat, Sandra R.; Piel, William H.; Monteiro, Antónia

    2016-01-01

    Butterfly eyespots are known to function in predator deflection and predator intimidation, but it is still unclear what factors cause eyespots to serve one function over the other. Both functions have been demonstrated in different species that varied in eyespot size, eyespot number and wing size, leaving the contribution of each of these factors to butterfly survival unclear. Here, we study how each of these factors contributes to eyespot function by using paper butterfly models, where each factor is varied in turn, and exposing these models to predation in the field. We find that the presence of multiple, small eyespots results in high predation, whereas single large eyespots (larger than 6 mm in diameter) result in low predation. These data indicate that single large eyespots intimidate predators, whereas multiple small eyespots produce a conspicuous, but non-intimidating signal to predators. We propose that eyespots may gain an intimidation function by increasing in size. Our measurements of eyespot size in 255 nymphalid butterfly species show that large eyespots are relatively rare and occur predominantly on ventral wing surfaces. By mapping eyespot size on the phylogeny of the family Nymphalidae, we show that these large eyespots, with a potential intimidation function, are dispersed throughout multiple nymphalid lineages, indicating that phylogeny is not a strong predictor of eyespot size. PMID:26909190

  2. High-accuracy absolute rotation rate measurements with a large ring laser gyro: establishing the scale factor.

    PubMed

    Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R

    2017-02-01

    Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for the refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m² Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number, and measurement of the dispersion. Ultimately we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 10⁸.
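
    Before any of the corrections derived in the paper, the zeroth-order Sagnac frequency of such an instrument follows from f = 4AΩsin(φ)/(λP). The numbers below (HeNe wavelength, square geometry, Wettzell latitude) are nominal assumptions for illustration, not the paper's corrected values:

```python
import math

A = 16.0                   # m^2, enclosed area of the square ring
P = 16.0                   # m, perimeter (4 m sides)
wavelength = 632.8e-9      # m, assumed HeNe laser line
omega_E = 7.2921e-5        # rad/s, Earth rotation rate
lat = math.radians(49.14)  # approximate latitude of Wettzell (assumed)

f_sagnac = 4 * A * omega_E * math.sin(lat) / (wavelength * P)
print(round(f_sagnac, 1))  # Hz; a few hundred Hz for a ring of this size
```

    A 1-part-in-10⁸ scale-factor goal thus corresponds to resolving this beat frequency to a few μHz, which is why the dispersion, Goos-Hänchen and refractive-index corrections matter.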

  3. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, J; Ma, L

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying from 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume.

  4. Non-dietary risk factors for gastric dilatation-volvulus in large and giant breed dogs.

    PubMed

    Glickman, L T; Glickman, N W; Schellenberg, D B; Raghavan, M; Lee, T

    2000-11-15

    To identify non-dietary risk factors for gastric dilatation-volvulus (GDV) in large breed and giant breed dogs. Prospective cohort study. 1,637 dogs ≥6 months old of the following breeds: Akita, Bloodhound, Collie, Great Dane, Irish Setter, Irish Wolfhound, Newfoundland, Rottweiler, Saint Bernard, Standard Poodle, and Weimaraner. Owners of dogs that did not have a history of GDV were recruited at dog shows, and the dog's length and height and the depth and width of its thorax and abdomen were measured. Information concerning the dog's medical history, genetic background, personality, and diet was obtained from the owners, and owners were contacted by mail and telephone at approximately 1-year intervals to determine whether dogs had developed GDV or died. Incidence of GDV, calculated on the basis of dog-years at risk for dogs that were or were not exposed to potential risk factors, was used to calculate the relative risk of GDV. Cumulative incidence of GDV during the study was 6% for large breed and giant breed dogs. Factors significantly associated with an increased risk of GDV were increasing age, having a first-degree relative with a history of GDV, having a faster speed of eating, and having a raised feeding bowl. Approximately 20 and 52% of cases of GDV among the large breed and giant breed dogs, respectively, were attributed to having a raised feeding bowl.

  5. Interacting Factors Driving a Major Loss of Large Trees with Cavities in a Forest Ecosystem

    PubMed Central

    Lindenmayer, David B.; Blanchard, Wade; McBurney, Lachlan; Blair, David; Banks, Sam; Likens, Gene E.; Franklin, Jerry F.; Laurance, William F.; Stein, John A. R.; Gibbons, Philip

    2012-01-01

    Large trees with cavities provide critical ecological functions in forests worldwide, including vital nesting and denning resources for many species. However, many ecosystems are experiencing increasingly rapid loss of large trees or a failure to recruit new large trees, or both. We quantify this problem in a globally iconic ecosystem in southeastern Australia – forests dominated by the world's tallest angiosperms, Mountain Ash (Eucalyptus regnans). Tree, stand and landscape-level factors influencing the death and collapse of large living cavity trees and the decay and collapse of dead trees with cavities are documented using a suite of long-term datasets gathered between 1983 and 2011. The historical rate of tree mortality on unburned sites between 1997 and 2011 was >14%, with a mortality spike in the driest period (2006–2009). Following a major wildfire in 2009, 79% of large living trees with cavities died and 57–100% of large dead trees were destroyed on burned sites. Repeated measurements between 1997 and 2011 revealed no recruitment of any new large trees with cavities on any of our unburned or burned sites. Transition probability matrices of large trees with cavities through increasingly decayed condition states project a severe shortage of large trees with cavities by 2039 that will continue until at least 2067. This large cavity tree crisis in Mountain Ash forests is a product of: (1) the prolonged time required (>120 years) for initiation of cavities; and (2) repeated past wildfires and widespread logging operations. These latter factors have resulted in all landscapes being dominated by stands ≤72 years and just 1.16% of forest being unburned and unlogged. We discuss how the features that make Mountain Ash forests vulnerable to a decline in large tree abundance are shared with many forest types worldwide. PMID:23071486

  6. Interacting factors driving a major loss of large trees with cavities in a forest ecosystem.

    PubMed

    Lindenmayer, David B; Blanchard, Wade; McBurney, Lachlan; Blair, David; Banks, Sam; Likens, Gene E; Franklin, Jerry F; Laurance, William F; Stein, John A R; Gibbons, Philip

    2012-01-01

    Large trees with cavities provide critical ecological functions in forests worldwide, including vital nesting and denning resources for many species. However, many ecosystems are experiencing increasingly rapid loss of large trees or a failure to recruit new large trees, or both. We quantify this problem in a globally iconic ecosystem in southeastern Australia – forests dominated by the world's tallest angiosperms, Mountain Ash (Eucalyptus regnans). Tree, stand and landscape-level factors influencing the death and collapse of large living cavity trees and the decay and collapse of dead trees with cavities are documented using a suite of long-term datasets gathered between 1983 and 2011. The historical rate of tree mortality on unburned sites between 1997 and 2011 was >14%, with a mortality spike in the driest period (2006-2009). Following a major wildfire in 2009, 79% of large living trees with cavities died and 57-100% of large dead trees were destroyed on burned sites. Repeated measurements between 1997 and 2011 revealed no recruitment of any new large trees with cavities on any of our unburned or burned sites. Transition probability matrices of large trees with cavities through increasingly decayed condition states project a severe shortage of large trees with cavities by 2039 that will continue until at least 2067. This large cavity tree crisis in Mountain Ash forests is a product of: (1) the prolonged time required (>120 years) for initiation of cavities; and (2) repeated past wildfires and widespread logging operations. These latter factors have resulted in all landscapes being dominated by stands ≤72 years and just 1.16% of forest being unburned and unlogged. We discuss how the features that make Mountain Ash forests vulnerable to a decline in large tree abundance are shared with many forest types worldwide.
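    The transition probability matrix projection described above amounts to repeatedly applying per-interval transition probabilities to the current counts of trees in each condition state. A minimal sketch, with made-up states and probabilities rather than the study's estimates:

```python
# Sketch of a condition-state projection with a transition probability
# matrix. The states and probabilities are made up for illustration; they
# are not the values estimated from the study's long-term datasets.
import numpy as np

# Rows: current state; columns: state one census interval later.
# States: large living cavity tree -> large dead cavity tree -> collapsed.
P = np.array([
    [0.90, 0.08, 0.02],   # living cavity tree
    [0.00, 0.70, 0.30],   # dead cavity tree
    [0.00, 0.00, 1.00],   # collapsed (absorbing state)
])

x0 = np.array([100.0, 20.0, 0.0])          # initial tree counts per state
x10 = x0 @ np.linalg.matrix_power(P, 10)   # projected counts 10 intervals on
print(x10.round(1))
```

    With no recruitment into the living state (as the abstract reports), the living-tree count can only decay, which is what drives the projected shortage.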

  7. Large-scale circulation patterns, instability factors and global precipitation modeling as influenced by external forcing

    NASA Astrophysics Data System (ADS)

    Bundel, A.; Kulikova, I.; Kruglova, E.; Muravev, A.

    2003-04-01

    The scope of the study is to estimate the relationship between large-scale circulation regimes, various instability indices and global precipitation under different boundary conditions, considered as external forcing. The experiments were carried out in the ensemble-prediction framework of the dynamic-statistical monthly forecast scheme run in the Hydrometeorological Research Center of Russia every ten days. The extension to seasonal intervals makes it necessary to investigate the role of slowly changing boundary conditions, among which the sea surface temperature (SST) may be regarded as the most effective factor. Continuous integrations of the global spectral T41L15 model for the whole year 2000 (starting from January 1) were performed with the climatic SST and the Reynolds Archive SSTs. Monthly values of the SST were projected onto the days of the year using a spline interpolation technique. First, the global precipitation values in the experiments were compared to the GPCP (Global Precipitation Climate Program) daily observation data. Although the global mean precipitation is underestimated by the model, some large-scale regional amounts correspond fairly well to the real ones (e.g. for Europe). On the whole, however, anomaly phases failed to be reproduced. The precipitation averaged over the whole land revealed a greater sensitivity to the SSTs than that over the oceans. Wavelet analysis was applied to separate the low- and high-frequency signal of the SST influence on the large-scale circulation and precipitation. A derivative of the Wallace-Gutzler teleconnection index for the East Atlantic oscillation was taken as the circulation characteristic. The daily oscillation index values and precipitation amounts averaged over Europe were decomposed using a wavelet approach with different “mother wavelets” up to approximation level 3. It was demonstrated that an increase in the precipitation amount over Europe was associated with the zonal flow intensification over the

  8. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    NASA Astrophysics Data System (ADS)

    Monty, J. P.; Allen, J. J.; Lien, K.; Chong, M. S.

    2011-12-01

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of `superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements.

  9. Heat and fluid flow characteristics of an oval fin-and-tube heat exchanger with large diameters for textile machine dryer

    NASA Astrophysics Data System (ADS)

    Bae, Kyung Jin; Cha, Dong An; Kwon, Oh Kyung

    2016-11-01

    The objectives of this paper are to develop heat transfer and pressure drop correlations for an oval fin-and-tube heat exchanger with large tube diameters (larger than 20 mm) used in a textile machine dryer. Numerical tests using ANSYS CFX are performed for four different parameters: tube size, fin pitch, transverse tube pitch and longitudinal tube pitch. The numerical results for the Nusselt number and the friction factor agreed with experimental results to within -16.2 ~ +3.1 % and -7.7 ~ +3.9 %, respectively. It was found that the Nusselt number increased linearly with increasing Reynolds number, while the friction factor decreased slightly with increasing Reynolds number. It was also found that the variation of longitudinal tube pitch has less effect on the Nusselt number and friction factor than the other parameters (below 2.0 and 2.5 %, respectively). This study proposes new Nusselt number and friction factor correlations for the oval fin-and-tube heat exchanger with large diameters for a textile machine dryer.

  10. Large ν - ν̄ oscillations from high-dimensional lepton number violating operator

    NASA Astrophysics Data System (ADS)

    Geng, Chao-Qiang; Huang, Da

    2017-03-01

    It is usually believed that the observation of neutrino-antineutrino (ν - ν̄) oscillations is almost impossible, since the oscillation probabilities are expected to be greatly suppressed by the square of the tiny ratio of neutrino masses to energies. Such an argument is applicable to most models of neutrino mass generation based on the Weinberg operator, including the seesaw models. However, in the present paper we give a counterexample to this argument and show that large ν - ν̄ oscillation probabilities can be obtained in a class of models in which both neutrino masses and neutrinoless double beta (0νββ) decays are induced by the high-dimensional lepton number violating operator O_7 = ū_R l_R^c L̄_L H* d_R + H.c., with u and d representing the first two generations of quarks. In particular, we find that the predicted 0νββ decay rates already place interesting constraints on the ν_e ↔ ν̄_e oscillation. Moreover, we provide a UV-complete model to realize this scenario, in which a dark matter candidate naturally appears due to the new U(1)_d symmetry.

  11. Large transcription units unify copy number variants and common fragile sites arising under replication stress.

    PubMed

    Wilson, Thomas E; Arlt, Martin F; Park, So Hae; Rajendran, Sountharia; Paulsen, Michelle; Ljungman, Mats; Glover, Thomas W

    2015-02-01

    Copy number variants (CNVs) resulting from genomic deletions and duplications and common fragile sites (CFSs) seen as breaks on metaphase chromosomes are distinct forms of structural chromosome instability precipitated by replication inhibition. Although they share a common induction mechanism, it is not known how CNVs and CFSs are related or why some genomic loci are much more prone to their occurrence. Here we compare large sets of de novo CNVs and CFSs in several experimental cell systems to each other and to overlapping genomic features. We first show that CNV hotspots and CFSs occurred at the same human loci within a given cultured cell line. Bru-seq nascent RNA sequencing further demonstrated that although genomic regions with low CNV frequencies were enriched in transcribed genes, the CNV hotspots that matched CFSs specifically corresponded to the largest active transcription units in both human and mouse cells. Consistently, active transcription units >1 Mb were robust cell-type-specific predictors of induced CNV hotspots and CFS loci. Unlike most transcribed genes, these very large transcription units replicated late and organized deletion and duplication CNVs into their transcribed and flanking regions, respectively, supporting a role for transcription in replication-dependent lesion formation. These results indicate that active large transcription units drive extreme locus- and cell-type-specific genomic instability under replication stress, resulting in both CNVs and CFSs as different manifestations of perturbed replication dynamics. © 2015 Wilson et al.; Published by Cold Spring Harbor Laboratory Press.

  12. Large transcription units unify copy number variants and common fragile sites arising under replication stress

    PubMed Central

    Park, So Hae; Rajendran, Sountharia; Paulsen, Michelle; Ljungman, Mats; Glover, Thomas W.

    2015-01-01

    Copy number variants (CNVs) resulting from genomic deletions and duplications and common fragile sites (CFSs) seen as breaks on metaphase chromosomes are distinct forms of structural chromosome instability precipitated by replication inhibition. Although they share a common induction mechanism, it is not known how CNVs and CFSs are related or why some genomic loci are much more prone to their occurrence. Here we compare large sets of de novo CNVs and CFSs in several experimental cell systems to each other and to overlapping genomic features. We first show that CNV hotspots and CFSs occurred at the same human loci within a given cultured cell line. Bru-seq nascent RNA sequencing further demonstrated that although genomic regions with low CNV frequencies were enriched in transcribed genes, the CNV hotspots that matched CFSs specifically corresponded to the largest active transcription units in both human and mouse cells. Consistently, active transcription units >1 Mb were robust cell-type-specific predictors of induced CNV hotspots and CFS loci. Unlike most transcribed genes, these very large transcription units replicated late and organized deletion and duplication CNVs into their transcribed and flanking regions, respectively, supporting a role for transcription in replication-dependent lesion formation. These results indicate that active large transcription units drive extreme locus- and cell-type-specific genomic instability under replication stress, resulting in both CNVs and CFSs as different manifestations of perturbed replication dynamics. PMID:25373142

  13. Sensitivity of a Riparian Large Woody Debris Recruitment Model to the Number of Contributing Banks and Tree Fall Pattern

    Treesearch

    Don C. Bragg; Jeffrey L. Kershner

    2004-01-01

    Riparian large woody debris (LWD) recruitment simulations have traditionally applied a random angle of tree fall from two well-forested stream banks. We used a riparian LWD recruitment model (CWD, version 1.4) to test the validity of these assumptions. Both the number of contributing forest banks and the predominant tree fall direction significantly influenced simulated...

  14. Numbers in Action.

    PubMed

    Rugani, Rosa; Sartori, Luisa

    2016-01-01

    Humans show a remarkable tendency to describe and think of numbers as being placed on a mental number line (MNL), with smaller numbers located on the left and larger ones on the right. Faster responses to small numbers are indeed performed on the left side of space, while responses to large numbers are facilitated on the right side of space (spatial-numerical association of response codes, SNARC effect). This phenomenon is considered the experimental demonstration of the MNL and has been extensively replicated across a variety of paradigms. Nevertheless, most previous studies have investigated this effect by means of response times and accuracy, whereas studies considering more subtle and automatic measures such as kinematic parameters are rare (e.g., in a reaching-to-grasp movement, the grip aperture is enlarged when responding to larger numbers relative to small numbers). In this brief review we suggest that numerical magnitude can also affect the what and how of action execution (i.e., the temporal and spatial components of movement). This evidence could have large implications for the strongly debated issue concerning the effect of experience and culture on the orientation of the MNL.

  15. Five Describing Factors of Dyslexia.

    PubMed

    Tamboer, Peter; Vorst, Harrie C M; Oort, Frans J

    2016-09-01

    Two subtypes of dyslexia (phonological, visual) have been debated in various studies. However, the number of symptoms of dyslexia described in the literature exceeds the number of subtypes, and the underlying relations remain unclear. We investigated underlying cognitive features of dyslexia with exploratory and confirmatory factor analyses. A sample of 446 students (63 with dyslexia) completed a large test battery and a large questionnaire. Five factors were found in each of the test battery and the questionnaire. These 10 factors loaded on 5 latent factors (spelling, phonology, short-term memory, rhyme/confusion, and whole-word processing/complexity), which explained 60% of total variance. Three analyses supported the validity of these factors. A confirmatory factor analysis showed good fit for the five-factor solution (RMSEA = .03). Those with dyslexia differed from those without dyslexia on all factors. A combination of the five factors provided reliable predictions of dyslexia and nondyslexia (accuracy >90%). We also looked for factorial deficits at the individual level to construct subtypes of dyslexia, but found varying profiles. We concluded that a multiple cognitive deficit model of dyslexia is supported, whereas the existence of subtypes remains unclear. We discussed the results in relation to advanced compensation strategies of students, measures of intelligence, and various correlations within groups of those with and without dyslexia. © Hammill Institute on Disabilities 2014.

  16. Optimum Guidance Law and Information Management for a Large Number of Formation Flying Spacecrafts

    NASA Astrophysics Data System (ADS)

    Tsuda, Yuichi; Nakasuka, Shinichi

    In recent years, formation flying has been recognized as one of the most important technologies for deep space and orbital missions that involve multiple spacecraft operations. A formation flying mission improves simultaneous observability over a wide area and offers redundancy and reconfigurability of the system with relatively small, low-cost spacecraft compared with a conventional single-spacecraft mission. From the viewpoint of guidance and control, realizing a formation flying mission usually requires tight maintenance and control of the relative distances, speeds and orientations between the member satellites. This paper studies a practical architecture for formation flight missions, focusing mainly on guidance and control, and describes a new guidance algorithm for changing and keeping the relative positions and speeds of the satellites in formation. The resulting algorithm is suitable for onboard processing and gives the optimum impulsive trajectory for satellites flying closely around a reference orbit that can be elliptic, parabolic or hyperbolic. Based on this guidance algorithm, the study introduces an information management methodology between the member spacecraft which is suitable for a large formation flight architecture. Routing and multicast communication based on wireless local area network technology are introduced. Mathematical analyses and computer simulations will be shown in the presentation to demonstrate the feasibility of the proposed formation flight architecture, especially when a very large number of satellites join the formation.

  17. The Shock and Vibration Digest. Volume 14, Number 12

    DTIC Science & Technology

    1982-12-01

    to evaluate the uses of statistical energy analysis for determining sound transmission performance. Coupling loss factors were measured and compared... measurements for the artificial cracks in mild-steel test pieces (Also see No. 2623). 82-2676 Improvement of the Method of Statistical Energy Analysis for ...eters, using a large number of free-response time histories simultaneously in one analysis, in the application of the statistical energy analysis theory

  18. Large-scale integrative network-based analysis identifies common pathways disrupted by copy number alterations across cancers

    PubMed Central

    2013-01-01

    Background Many large-scale studies analyzed high-throughput genomic data to identify altered pathways essential to the development and progression of specific types of cancer. However, no previous study has been extended to provide a comprehensive analysis of pathways disrupted by copy number alterations across different human cancers. Towards this goal, we propose a network-based method to integrate copy number alteration data with human protein-protein interaction networks and pathway databases to identify pathways that are commonly disrupted in many different types of cancer. Results We applied our approach to a data set of 2,172 cancer patients across 16 different types of cancers, and discovered a set of commonly disrupted pathways, which are likely essential for tumor formation in the majority of cancers. We also identified pathways that are disrupted only in specific cancer types, providing molecular markers for different human cancers. Analysis with independent microarray gene expression datasets confirms that the commonly disrupted pathways can be used to identify patient subgroups with significantly different survival outcomes. We also provide a network view of disrupted pathways to explain how copy number alterations affect pathways that regulate cell growth, cell cycle, and differentiation for tumorigenesis. Conclusions In this work, we demonstrated that the network-based integrative analysis can help to identify pathways disrupted by copy number alterations across 16 types of human cancers, which are not readily identifiable by conventional overrepresentation-based and other pathway-based methods. All the results and source code are available at http://compbio.cs.umn.edu/NetPathID/. PMID:23822816

  19. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.

    2011-04-10

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
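    The grouping and monthly averaging described above can be sketched as follows; the daily records are invented for illustration:

```python
# Sketch of the grouping step: split daily sunspot-group counts by modified
# Zurich class into "small" (A, B, C, H, J) and "large" (D, E, F, G) groups,
# then average over a month. The daily records below are invented.

SMALL = set("ABCHJ")
LARGE = set("DEFG")

def monthly_means(daily_records):
    """daily_records: one dict per day mapping Zurich class -> SG count.
    Returns (mean small SG number, mean large SG number) for the month."""
    small = [sum(n for cls, n in day.items() if cls in SMALL) for day in daily_records]
    large = [sum(n for cls, n in day.items() if cls in LARGE) for day in daily_records]
    days = len(daily_records)
    return sum(small) / days, sum(large) / days

month = [{"A": 2, "D": 1}, {"B": 1, "C": 1, "E": 2}, {"H": 3}]
print(monthly_means(month))
```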

  20. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    DOE R&D Accomplishments Database

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with P⊥, eventually leveling off proportional to A^1.1.
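    An exponent like the quoted A^1.1 is the slope of log(yield) versus log(A) across the three targets. A minimal sketch with synthetic yields (constructed to lie exactly on a power law; only the Be, Ti, W mass numbers are real):

```python
# Sketch of extracting the power-law exponent alpha in yield ~ A**alpha by
# a least-squares line in log-log space. The yields here are synthetic,
# constructed to lie exactly on a power law with exponent 1.1.
import math

def fit_exponent(A, y):
    """Least-squares slope of log(y) vs. log(A)."""
    lx = [math.log(a) for a in A]
    ly = [math.log(v) for v in y]
    n = len(A)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((x - mx) * (v - my) for x, v in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

A = [9, 48, 184]             # mass numbers of Be, Ti, W
y = [a ** 1.1 for a in A]    # synthetic yields ~ A**1.1
print(round(fit_exponent(A, y), 3))  # 1.1
```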

  1. Genetic factors affecting EBV copy number in lymphoblastoid cell lines derived from the 1000 Genome Project samples.

    PubMed

    Mandage, Rajendra; Telford, Marco; Rodríguez, Juan Antonio; Farré, Xavier; Layouni, Hafid; Marigorta, Urko M; Cundiff, Caitlin; Heredia-Genestar, Jose Maria; Navarro, Arcadi; Santpere, Gabriel

    2017-01-01

    Epstein-Barr virus (EBV), human herpesvirus 4, has been classically associated with infectious mononucleosis, multiple sclerosis and several types of cancer. Many of these diseases show marked geographical differences in prevalence, which points to underlying genetic and/or environmental factors. Those factors may include a different susceptibility to EBV infection and viral copy number among human populations. Since EBV is commonly used to transform B-cells into lymphoblastoid cell lines (LCLs), we hypothesize that differences in EBV copy number among individual LCLs may reflect differential susceptibility to EBV infection. To test this hypothesis, we retrieved whole-genome-sequenced EBV-mapping reads from 1,753 LCL samples derived from 19 populations worldwide that were sequenced within the context of the 1000 Genomes Project. An in silico methodology was developed to estimate the EBV copy number in LCLs, and these estimations were validated by real-time PCR. After experimentally confirming that EBV relative copy number remains stable over cell passages, we performed a genome-wide association study (GWAS) to detect genetic variants of the host that may be associated with EBV copy number. Our GWAS yielded several genomic regions suggestively associated with the number of EBV genomes per cell in LCLs, revealing promising candidate genes such as CAND1, a known inhibitor of EBV replication. While this GWAS does not unequivocally establish the degree to which the genetic makeup of individuals determines viral levels within their derived LCLs, for which a larger sample size will be needed, it highlights human genes affecting EBV-related processes, which constitute interesting candidates to follow up in the context of EBV-related pathologies.
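    One plausible way to turn EBV-mapping read counts into a per-cell copy number, in the spirit of the in silico estimation above, is to compare EBV coverage with autosomal coverage in the same sample. This is a hypothetical sketch, not the authors' published pipeline; read counts are invented and genome lengths approximate:

```python
# Hypothetical sketch (not the authors' pipeline): estimate EBV genomes per
# cell by comparing EBV read coverage with autosomal read coverage in the
# same sequencing run. Read counts are invented; genome lengths approximate.

EBV_GENOME_BP = 172_000            # approximate EBV genome length
HUMAN_GENOME_BP = 3_100_000_000    # approximate haploid human genome length

def ebv_copies_per_cell(ebv_reads, human_reads, read_len=100):
    """EBV genomes per diploid cell = 2 x (EBV coverage / human coverage)."""
    ebv_cov = ebv_reads * read_len / EBV_GENOME_BP
    human_cov = human_reads * read_len / HUMAN_GENOME_BP
    return 2 * ebv_cov / human_cov

print(round(ebv_copies_per_cell(ebv_reads=50_000, human_reads=300_000_000), 1))
```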

  2. Factors affecting large peakflows on Appalachian watersheds: lessons from the Fernow Experimental Forest

    Treesearch

    James N. Kochenderfer; Mary Beth Adams; Gary W. Miller; David J. Helvey

    2007-01-01

    Data collected since 1951 on the Fernow Experimental Forest near Parsons, West Virginia, and at a gaging station on the nearby Cheat River since 1913 were used to evaluate factors affecting large peakflows on forested watersheds. Treatments ranged from periodic partial cuts to complete deforestation using herbicides. Total storm precipitation and average storm...

  3. Size-resolved particle number emission patterns under real-world driving conditions using positive matrix factorization.

    PubMed

    Domínguez-Sáez, Aida; Viana, Mar; Barrios, Carmen C; Rubio, Jose R; Amato, Fulvio; Pujadas, Manuel; Querol, Xavier

    2012-10-16

    A novel on-board system was tested to characterize size-resolved particle number emission patterns under real-world driving conditions, running in a EURO4 diesel vehicle on a typical urban circuit in Madrid (Spain). Emission profiles were determined as a function of driving conditions. Source apportionment by positive matrix factorization (PMF) was carried out to interpret the real-world driving conditions. Three emission patterns were identified: (F1) cruise conditions, with medium-high speeds, contributing 60% of the total particle number in this circuit, with a particle size distribution dominated by particles >52 nm and around 60 nm; (F2) transient conditions, i.e., stop-and-go conditions at medium-high speed, contributing 25% of the particle number and mainly emitting particles in the nucleation mode; and (F3) creep-idle conditions, representing traffic congestion and frequent idling periods, contributing 14% of the total particle number, with particles in the nucleation mode (<29.4 nm) and around 98 nm. We suggest potential approaches to reduce particle number emissions depending on particle size and driving conditions. Differences between real-world emission patterns and regulatory cycles (NEDC) are also presented, which show that detecting particle number emissions <40 nm is only possible under real-world driving conditions.
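    PMF factorizes the matrix of size-resolved number concentrations into non-negative factor contributions and factor profiles. As a simplified stand-in (plain NMF by multiplicative updates, without the per-entry uncertainty weighting that distinguishes true PMF), on toy data:

```python
# Simplified stand-in for PMF: plain non-negative matrix factorization by
# multiplicative updates (real PMF additionally weights each entry by its
# measurement uncertainty). X holds toy size-resolved concentrations
# (samples x size bins); G holds factor contributions, F factor profiles.
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.random((X.shape[0], k))
    F = rng.random((k, X.shape[1]))
    eps = 1e-12
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)   # update factor profiles
        G *= (X @ F.T) / (G @ F @ F.T + eps)   # update factor contributions
    return G, F

# Toy data generated from two known non-negative patterns.
rng = np.random.default_rng(1)
X = rng.random((40, 2)) @ rng.random((2, 8))

G, F = nmf(X, k=2)
print(float(np.abs(X - G @ F).max()))  # max reconstruction error
```

    Non-negativity is what makes the recovered factors interpretable as emission patterns; dedicated PMF software additionally scales the residuals by measurement uncertainties.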

  4. Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Larsson, Johan

    2013-01-01

    A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.

  5. Developing Young Children's Multidigit Number Sense.

    ERIC Educational Resources Information Center

    Diezmann, Carmel M.; English, Lyn D.

    2001-01-01

    This article describes a series of enrichment experiences designed to develop young (ages 5 to 8) gifted children's understanding of large numbers, central to their investigation of space travel. It describes activities designed to teach reading of large numbers and exploring numbers to a thousand and then a million. (Contains ten references.) (DB)

  6. Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.

    PubMed

    Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik

    2015-10-01

    The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
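    The library-lookup step described above boils down to multiplying a scan's CTDIvol by pre-tabulated CTDIvol-normalized organ dose coefficients. A sketch with placeholder coefficients, not values from the published dose libraries:

```python
# Sketch of the lookup-and-scale idea behind the automated method:
# organ dose = (CTDIvol-normalized organ dose coefficient) x (scan CTDIvol).
# The coefficients below are placeholders, not library values.

ORGAN_COEFF_HEAD = {   # mGy per mGy of CTDIvol, hypothetical head-scan values
    "brain": 1.2,
    "eye_lens": 1.1,
    "thyroid": 0.05,
}

def organ_doses(ctdi_vol_mgy, coeffs=ORGAN_COEFF_HEAD):
    """Organ doses in mGy for one examination."""
    return {organ: c * ctdi_vol_mgy for organ, c in coeffs.items()}

doses = organ_doses(ctdi_vol_mgy=20.0)
print(doses["brain"])  # 24.0
```

    Because the per-scan work is a dictionary lookup and a multiplication, the method scales trivially to thousands of examinations, which is the point the abstract makes.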

  7. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    NASA Astrophysics Data System (ADS)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is often used to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were done using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratios of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  8. Estimating Divergence Parameters With Small Samples From a Large Number of Loci

    PubMed Central

    Wang, Yong; Hey, Jody

    2010-01-01

    Most methods for studying divergence with gene flow rely upon data from many individuals at few loci. Such data can be useful for inferring recent population history but they are unlikely to contain sufficient information about older events. However, the growing availability of genome sequences suggests a different kind of sampling scheme, one that may be more suited to studying relatively ancient divergence. Data sets extracted from whole-genome alignments may represent very few individuals but contain a very large number of loci. To take advantage of such data we developed a new maximum-likelihood method for genomic data under the isolation-with-migration model. Unlike many coalescent-based likelihood methods, our method does not rely on Monte Carlo sampling of genealogies, but rather provides a precise calculation of the likelihood by numerical integration over all genealogies. We demonstrate that the method works well on simulated data sets. We also consider two models for accommodating mutation rate variation among loci and find that the model that treats mutation rates as random variables leads to better estimates. We applied the method to the divergence of Drosophila melanogaster and D. simulans and detected a low, but statistically significant, signal of gene flow from D. simulans to D. melanogaster. PMID:19917765

  9. A Large-Scale Analysis of Impact Factor Biased Journal Self-Citations.

    PubMed

    Chorus, Caspar; Waltman, Ludo

    2016-01-01

    Based on three decades of citation data from across all fields of science, we study trends in impact factor biased self-citations of scholarly journals, using a purpose-built and easy-to-use citation-based measure. Our measure is given by the ratio between (i) the relative share of journal self-citations to papers published in the last two years and (ii) the relative share of journal self-citations to papers published in preceding years. A ratio higher than one suggests that a journal's impact factor is disproportionately affected (inflated) by self-citations. Using recently reported survey data, we show that there is a relation between high values of our proposed measure and coercive journal self-citation malpractices. We use our measure to perform a large-scale analysis of impact factor biased journal self-citations. Our main empirical result is that the share of journals for which our measure has a (very) high value remained stable between the 1980s and the early 2000s but has since risen strongly in all fields of science. This time span corresponds well with the growing obsession with the impact factor as a journal evaluation measure over the last decade. Taken together, this suggests a trend of increasingly pervasive journal self-citation malpractices, with all due unwanted consequences such as inflated perceived importance of journals and biased journal rankings.
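
    The measure described in this abstract reduces to a ratio of two self-citation shares, computable from four citation counts. A small illustrative sketch (the function name and inputs are assumptions for illustration, not from the paper):

```python
def if_bias_ratio(self_recent: int, total_recent: int,
                  self_older: int, total_older: int) -> float:
    """Ratio of the journal self-citation share among citations to
    papers from the last two years (the impact factor window) to the
    self-citation share among citations to older papers. Values well
    above 1 suggest an impact factor inflated by self-citations."""
    share_recent = self_recent / total_recent
    share_older = self_older / total_older
    return share_recent / share_older

# e.g. 30 of 200 recent-window citations are self-citations (share 0.15)
# vs 10 of 400 citations to older papers (share 0.025)
ratio = if_bias_ratio(30, 200, 10, 400)
```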

  10. Long term outcome and prognostic factors for large hepatocellular carcinoma (10 cm or more) after surgical resection.

    PubMed

    Pandey, Durgatosh; Lee, Kang-Hoe; Wai, Chun-Tao; Wagholikar, Gajanan; Tan, Kai-Chah

    2007-10-01

    Surgical resection is the standard treatment for hepatocellular carcinoma (HCC). However, the role of surgery in treatment of large tumors (10 cm or more) is controversial. We have analyzed, in a single centre, the long-term outcome associated with surgical resection in patients with such large tumors. We retrospectively investigated 166 patients who had undergone surgical resection between July 1995 and December 2006 because of large (10 cm or more) HCC. Survival analysis was done using the Kaplan-Meier method. Prognostic factors were evaluated using univariate and multivariate analyses. Of the 166 patients evaluated, 80% were associated with viral hepatitis and 48.2% had cirrhosis. The majority of patients underwent a major hepatectomy (48.2% had four or more segments resected and 9% had additional organ resection). The postoperative mortality was 3%. The median survival in our study was 20 months, with an actuarial 5-year and 10-year overall survival of 28.6% and 25.6%, respectively. Of these patients, 60% had additional treatment in the form of transarterial chemoembolization, radiofrequency ablation or both. On multivariate analysis, vascular invasion (P < 0.001), cirrhosis (P = 0.028), and satellite lesions/multicentricity (P = 0.006) were significant prognostic factors influencing survival. The patients who had none of these three risk factors had 5-year and 10-year overall survivals of 57.7% each, compared with 22.5% and 19.3%, respectively, for those with at least one risk factor (P < 0.001). Surgical resection for those with large HCC can be safely performed with a reasonable long-term survival. For tumors with poor prognostic factors, there is a pressing need for effective adjuvant therapy.

  11. Conflict of Interest Policies for Organizations Producing a Large Number of Clinical Practice Guidelines

    PubMed Central

    Norris, Susan L.; Holmer, Haley K.; Burda, Brittany U.; Ogden, Lauren A.; Fu, Rongwei

    2012-01-01

    Background Conflict of interest (COI) of clinical practice guideline (CPG) sponsors and authors is an important potential source of bias in CPG development. The objectives of this study were to describe the COI policies for organizations currently producing a significant number of CPGs, and to determine if these policies meet 2011 Institute of Medicine (IOM) standards. Methodology/Principal Findings We identified organizations with five or more guidelines listed in the National Guideline Clearinghouse between January 1, 2009 and November 5, 2010. We obtained the COI policy for each organization from publicly accessible sources, most often the organization's website, and compared those policies to IOM standards related to COI. 37 organizations fulfilled our inclusion criteria, of which 17 (46%) had a COI policy directly related to CPGs. These COI policies varied widely with respect to types of COI addressed, from whom disclosures were collected, monetary thresholds for disclosure, approaches to management, and updating requirements. Not one organization's policy adhered to all seven of the IOM standards that were examined, and nine organizations did not meet a single one of the standards. Conclusions/Significance COI policies among organizations producing a large number of CPGs currently do not measure up to IOM standards related to COI disclosure and management. CPG developers need to make significant improvements in these policies and their implementation in order to optimize the quality and credibility of their guidelines. PMID:22629391

  12. Factors controlling particle number concentration and size at metro stations

    NASA Astrophysics Data System (ADS)

    Reche, C.; Moreno, T.; Martins, V.; Minguillón, M. C.; Jones, T.; de Miguel, E.; Capdevila, M.; Centelles, S.; Querol, X.

    2017-05-01

    An extensive air quality campaign was performed at differently designed station platforms in the Barcelona metro system, aiming to investigate the factors governing airborne particle number (N) concentrations and their size distributions. The study of the daily trends of N concentrations by different size ranges shows that concentrations of N0.3-10 are closely related to the schedule of the metro service. Conversely, the hourly variation of N0.007-10 (mainly composed of ultrafine particles) could be partly governed by the entrance of particles from outdoor emissions through mechanical ventilation. Measurements under different ventilation settings at three metro platforms reveal that the effect on air quality of changes in the tunnel ventilation depends on the station design. Night-time maintenance works in tunnels are frequent activities in the metro system, and after intense, prolonged works these can result in higher N concentrations at platforms during the following metro operating hours (by up to 30%), this being especially evident for N1-10. Due to the complex mixture of factors controlling N, together with the differences in trends recorded for particles within different size ranges, developing an air quality strategy for metro systems is a great challenge. When compared to street-level urban particle concentrations, the priority in metro air quality should be dealing with particles coarser than 0.3 μm. In fact, the results suggest that at narrow platforms served by single-track tunnels, the current forced tunnel ventilation during operating hours is less efficient in reducing coarse particles than fine ones.

  13. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    PubMed

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification (RFID) and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real time using RFID, barcodes, and hand-held point-of-care devices. MAE alert logs recorded during the full year of 2012 were analyzed. We evaluated risk factors for MAE alerts, including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices, by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard times [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency orders (OR 1.527, 95% CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95% CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Gametocidal Factor Transferred from Aegilops geniculata Roth Can Be Adapted for Large-Scale Chromosome Manipulations in Cereals

    PubMed Central

    Kwiatek, Michał T.; Wiśniewska, Halina; Ślusarkiewicz-Jarzina, Aurelia; Majka, Joanna; Majka, Maciej; Belter, Jolanta; Pudelska, Hanna

    2017-01-01

    Segregation distorters are curious, evolutionarily selfish genetic elements, which distort Mendelian segregation in their favor at the expense of others. Those agents include gametocidal factors (Gc), which ensure their preferential transmission by triggering damages in cells lacking them via chromosome break induction. Hence, we hypothesized that the gametocidal system can be adapted for chromosome manipulations between Triticum and Secale chromosomes in hexaploid triticale (×Triticosecale Wittmack). In this work we studied the little-known gametocidal action of a Gc factor located on Aegilops geniculata Roth chromosome 4Mg. Our results indicate that the initiation of the gametocidal action takes place at anaphase II of meiosis of pollen mother cells. Hence, we induced androgenesis at postmeiotic pollen divisions (via anther cultures) in monosomic 4Mg addition plants of hexaploid triticale (AABBRR) followed by production of doubled haploids, to maintain the chromosome aberrations caused by the gametocidal action. This approach enabled us to obtain a large number of plants with two copies of particular chromosome translocations, which were identified by the use of cytomolecular methods. We obtained 41 doubled haploid triticale lines and 17 of them carried chromosome aberrations that included plants with the following chromosome sets: 40T+Dt2RS+Dt2RL (5 lines), 40T+N2R (1), 38T+D4RS.4BL (3), 38T+D5BS-5BL.5RL (5), and 38T+D7RS.3AL (3). The results show that the application of the Gc mechanism in combination with production of doubled haploid lines provides a sufficiently large population of homozygous doubled haploid individuals with two identical copies of translocation chromosomes. In our opinion, this approach will be a valuable tool for the production of novel plant material, which could be used for gene tracking studies, genetic mapping, and finally to enhance the diversity of cereals. PMID:28396677

  15. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Axelrod, T. S.

    2006-07-01

    The Large Synoptic Survey Telescope (LSST) is an 8.4 meter telescope with a 10 square degree field of view and a 3 Gigapixel imager, planned to be on-sky in 2012. It is a dedicated all-sky survey instrument with several complementary science missions. These include understanding dark energy through weak lensing and supernovae; exploring transients and variable objects; creating and maintaining a solar system map, with particular emphasis on potentially hazardous objects; and increasing the precision with which we understand the structure of the Milky Way. The instrument operates continuously at a rapid cadence, repetitively scanning the visible sky every few nights. The data flow rates from LSST are larger than those from current surveys by roughly a factor of 1000: a few GB/night are typical today; LSST will deliver a few TB/night. From a computing hardware perspective, this factor of 1000 can be dealt with easily in 2012. The major issues in designing the LSST data management system arise from the fact that the number of people available to critically examine the data will not grow from current levels. This has a number of implications. For example, every large imaging survey today is resigned to the fact that its image reduction pipelines fail at some significant rate. Many of these failures are dealt with by rerunning the reduction pipeline under human supervision, with carefully "tweaked" parameters to deal with the original problem. For LSST, this will no longer be feasible. The problem is compounded by the fact that the processing must of necessity occur on clusters with large numbers of CPUs and disk drives, and with some components connected by long-haul networks. This inevitably results in a significant rate of hardware component failures, which can easily lead to further software failures. Both hardware and software failures must be seen as a routine fact of life rather than rare exceptions to normality.

  16. Incidence of infants born small- and large-for-gestational-age in an Italian cohort over a 20-year period and associated risk factors.

    PubMed

    Chiavaroli, Valentina; Castorani, Valeria; Guidone, Paola; Derraik, José G B; Liberati, Marco; Chiarelli, Francesco; Mohn, Angelika

    2016-04-26

    We assessed the incidence of infants born small-for-gestational-age (SGA) and large-for-gestational-age (LGA) in an Italian cohort over 20 years (1993-2013). Furthermore, we investigated maternal factors associated with SGA and LGA births. A retrospective review of obstetric records was performed on infants born in Chieti (Italy) covering every 5th year over a 20-year period, specifically examining data for 1993, 1998, 2003, 2008, and 2013. Infants with birthweight <10th percentile were defined as SGA, and those with birthweight >90th percentile as LGA. Data collected included newborn anthropometry, birth (multiple vs singleton), maternal anthropometry, previous miscarriage, gestational diabetes, hypertension, and smoking during pregnancy. There were a pooled total of 5896 live births recorded across the 5 selected years. The number of SGA (+60.6 %) and LGA (+90.2 %) births increased considerably between 1993 and 2013. However, there were no marked changes in the incidence of SGA or LGA births (8.3 % and 10.8 % in 1993 versus 7.6 % and 11.7 % in 2013, respectively). Maternal factors associated with increased risk of SGA infants included hypertension, smoking, and previous miscarriage (all p < 0.05), while greater pre-pregnancy BMI and gestational diabetes were risk factors for LGA births (all p < 0.05). There was an increase in the number of SGA and LGA births in Chieti over the last two decades, but there was little change in incidence over time. Most maternal factors associated with increased odds of SGA and LGA births were modifiable, thus incidence could be reduced by targeted interventions.
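
    The SGA/LGA definitions used in this abstract reduce to a simple percentile rule. A minimal illustrative sketch (the function name and the "AGA" label for the middle band are assumptions, not from the paper):

```python
def classify_birthweight(percentile: float) -> str:
    """Classify a newborn by birthweight percentile, per the
    definitions above: SGA if <10th percentile, LGA if >90th,
    otherwise AGA (appropriate-for-gestational-age)."""
    if percentile < 10:
        return "SGA"
    if percentile > 90:
        return "LGA"
    return "AGA"

# e.g. a newborn at the 95th birthweight percentile is classified LGA
label = classify_birthweight(95)
```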

  17. Susceptibility of contrail ice crystal numbers to aircraft soot particle emissions

    NASA Astrophysics Data System (ADS)

    Kärcher, B.; Voigt, C.

    2017-08-01

    We develop an idealized, physically based model describing combined effects of ice nucleation and sublimation on ice crystal number during persistent contrail formation. Our study represents the first effort to predict ice numbers at the point where contrails transition into contrail cirrus—several minutes past formation—by connecting them to aircraft soot particle emissions and atmospheric supersaturation with respect to ice. Results averaged over an observed exponential distribution of ice supersaturation (mean value 15%) indicate that large reductions in soot particle numbers are needed to lower contrail ice crystal numbers significantly for soot emission indices around 10^15 (kg fuel)^-1, because reductions in nucleated ice number are partially compensated by sublimation losses. Variations in soot particle (-50%) and water vapor (+10%) emission indices at threefold lower soot emissions resulting from biofuel blending cause ice crystal numbers to change by -35% and <5%, respectively. The efficiency of reduction depends on the size distribution of nucleated ice crystals in jet exhaust plumes and on atmospheric ice supersaturation, making the latter another key factor in contrail mitigation. We expect our study to have important repercussions for planning airborne measurements targeting contrail formation, designing parameterization schemes for use in large-scale models, reducing uncertainties in predicting contrail cirrus, and mitigating the climate impact of aviation.

  18. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    NASA Technical Reports Server (NTRS)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  20. Sum-Difference Numbers

    ERIC Educational Resources Information Center

    Shi, Yixun

    2010-01-01

    Starting with an interesting number game sometimes used by school teachers to demonstrate the factorization of integers, "sum-difference numbers" are defined. A positive integer n is a "sum-difference number" if there exist positive integers "x, y, w, z" such that n = xy = wz and x − y = w + z. This paper characterizes all sum-difference numbers…
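
    The definition can be tested directly by brute force over factor pairs; a small sketch (this is an illustrative check, not the paper's characterization):

```python
def is_sum_difference(n: int) -> bool:
    """True if n = x*y = w*z for positive integers with x - y = w + z:
    some factor pair's difference equals another pair's sum."""
    # all factorizations n = larger * smaller, enumerated via divisors
    factor_pairs = [(n // d, d) for d in range(1, int(n**0.5) + 1) if n % d == 0]
    diffs = {x - y for x, y in factor_pairs}  # x >= y, so differences are >= 0
    sums = {w + z for w, z in factor_pairs}
    return bool(diffs & sums)
```

    For example, 24 = 12·2 = 6·4 with 12 − 2 = 6 + 4 = 10, so 24 is a sum-difference number, whereas no such pairing exists for 10.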

  1. Can vascular risk factors influence number and size of cerebral metastases? A 3D-MRI study in patients with different tumor entities.

    PubMed

    Nagel, Sandra; Berk, Benjamin-Andreas; Kortmann, Rolf-Dieter; Hoffmann, Karl-Titus; Seidel, Clemens

    2018-02-01

    There is increasing evidence that cerebral microangiopathy reduces the number of brain metastases. The aim of this study was to analyse whether vascular risk factors (arterial hypertension, diabetes mellitus, smoking, and hypercholesterolemia) or the presence of peripheral arterial occlusive disease (PAOD) have an impact on the number or size of brain metastases. 200 patients with pre-therapeutic 3D-brain MRI and available clinical data were analyzed retrospectively. The mean number of metastases (NoM) and mean diameter of metastases (mDM) were compared between patients with and without vascular risk factors (vasRF). No general correlation of vascular risk factors with brain metastases was found in this monocentric analysis of a patient cohort with several tumor types. Arterial hypertension, diabetes mellitus, hypercholesterolemia, and smoking did not show an effect in uni- or multivariate analysis. In patients with PAOD, the number of BM was lower than in those without PAOD. This was the case independent of cerebral microangiopathy but did not persist in multivariate analysis. From this first screening approach, vascular risk factors do not appear to strongly influence brain metastasis formation. However, larger prospective multi-centric studies with better-characterized severity of vascular risk are needed to more accurately detect effects of individual factors. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
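
    The classical post-processing step that turns a period found by the quantum subroutine into factors can be sketched in a few lines; the quantum period-finding itself, and the many-run retry loop the abstract describes, are omitted:

```python
from math import gcd

def factors_from_period(a: int, r: int, n: int):
    """Classical post-processing of Shor's algorithm: given the period r
    of a^x mod n (found by the quantum subroutine), an even r with
    a^(r/2) != -1 (mod n) yields nontrivial factors of n via gcd."""
    if r % 2 != 0 or pow(a, r // 2, n) == n - 1:
        return None  # a failed run: pick a new random base a and retry
    half = pow(a, r // 2)
    return gcd(half - 1, n), gcd(half + 1, n)

# e.g. n = 15, a = 7: 7^1=7, 7^2=4, 7^3=13, 7^4=1 (mod 15), so r = 4,
# and gcd(7^2 - 1, 15), gcd(7^2 + 1, 15) recover the factors 3 and 5.
factors = factors_from_period(7, 4, 15)
```

    The failure branch is why several runs are generally required: a randomly chosen base a may give an odd period, or one for which the gcds are trivial.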

  3. Factor Structure and Correlates of the Dissociative Experiences Scale in a Large Offender Sample

    ERIC Educational Resources Information Center

    Ruiz, Mark A.; Poythress, Norman G.; Lilienfeld, Scott O.; Douglas, Kevin S.

    2008-01-01

    The authors examined the psychometric properties, factor structure, and construct validity of the Dissociative Experiences Scale (DES) in a large offender sample (N = 1,515). Although the DES is widely used with community and clinical samples, minimal work has examined offender samples. Participants were administered self-report and interview…

  4. The Main Suppressing Factors of Dry Forage Intake in Large-type Goats

    PubMed Central

    Van Thang, Tran; Sunagawa, Katsunori; Nagamine, Itsuki; Kishi, Tetsuya; Ogura, Go

    2012-01-01

    In large-type goats that were fed on dry forage twice daily, dry forage intake was markedly suppressed after 40 min of feeding had elapsed. The objective of this study was to determine whether or not marked decreases in dry forage intake after 40 min of feeding are mainly caused by the two factors, that is, ruminal distension and increased plasma osmolality induced thirst produced by dry forage feeding. Six large-type male esophageal- and ruminal-fistulated goats (crossbred Japanese Saanen/Nubian, aged 2 to 6 years, weighing 85.1±4.89 kg) were used in two experiments. The animals were fed ad libitum a diet of roughly crushed alfalfa hay cubes for 2 h from 10:00 to 12:00 am during two experiments. Water was withheld during feeding in both experiments but was available for a period of 30 min after completion of the 2 h feeding period. In experiment 1, saliva lost via the esophageal fistula was replenished by an intraruminal infusion of artificial parotid saliva (RIAPS) in sham feeding conditions (SFC) control, and the treatment was maintained under normal feeding conditions (NFC). In experiment 2, a RIAPS and non-insertion of a balloon (RIAPS-NB) control was conducted in the same manner as the SFC control of experiment 1. The intraruminal infusion of hypertonic solution and insertion of a balloon (RIHS-IB) treatment was carried out simultaneously to reproduce the effects of changing salt content and ruminal distension due to feed entering the rumen. The results of experiment 1 showed that due to the effects of multiple dry forage suppressing factors when feed boluses entered the rumen, eating rates in the NFC treatment decreased (p<0.05) after 40 min of feeding and cumulative dry forage intake for the 2 h feeding period reduced to 43.8% of the SFC control (p<0.01). 
The results of experiment 2 indicated that due to the two suppressing factors of ruminal distension and increased plasma osmolality induced thirst, eating rates in the RIHS-IB treatment were, as observed

  5. The leukemia-associated Rho guanine nucleotide exchange factor LARG is required for efficient replication stress signaling

    PubMed Central

    Beveridge, Ryan D; Staples, Christopher J; Patil, Abhijit A; Myers, Katie N; Maslen, Sarah; Skehel, J Mark; Boulton, Simon J; Collis, Spencer J

    2014-01-01

    We previously identified and characterized TELO2 as a human protein that facilitates efficient DNA damage response (DDR) signaling. A subsequent yeast 2-hybrid screen identified LARG (Leukemia-Associated Rho Guanine Nucleotide Exchange Factor, also known as Arhgef12) as a potential novel TELO2 interactor. LARG was previously shown to interact with Pericentrin (PCNT), which, like TELO2, is required for efficient replication stress signaling. Here we confirm interactions between LARG, TELO2, and PCNT and show that a subset of LARG co-localizes with PCNT at the centrosome. LARG-deficient cells exhibit replication stress signaling defects, as evidenced by supernumerary centrosomes, reduced replication stress-induced γH2AX and RPA nuclear foci formation, and reduced activation of the replication stress signaling effector kinase Chk1 in response to hydroxyurea. As such, LARG-deficient cells are sensitive to replication stress-inducing agents such as hydroxyurea and mitomycin C. Conversely, we also show that depletion of TELO2 and the replication stress signaling kinase ATR leads to RhoA signaling defects. These data therefore reveal a level of crosstalk between the RhoA and DDR signaling pathways. Given that mutations in both ATR and PCNT can give rise to the related primordial dwarfism disorders Seckel syndrome and microcephalic osteodysplastic primordial dwarfism type II (MOPDII), respectively, both of which exhibit defects in ATR-dependent checkpoint signaling, these data also raise the possibility that mutations in LARG or disruption of RhoA signaling may be contributory factors in the etiology of a subset of primordial dwarfism disorders. PMID:25485589

  6. Growth of equilibrium structures built from a large number of distinct component types.

    PubMed

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  7. Copy number variation at the 7q11.23 segmental duplications is a susceptibility factor for the Williams-Beuren syndrome deletion

    PubMed Central

    Cuscó, Ivon; Corominas, Roser; Bayés, Mònica; Flores, Raquel; Rivera-Brugués, Núria; Campuzano, Victoria; Pérez-Jurado, Luis A.

    2008-01-01

    Large copy number variants (CNVs) have been recently found as structural polymorphisms of the human genome of still unknown biological significance. CNVs are significantly enriched in regions with segmental duplications or low-copy repeats (LCRs). Williams-Beuren syndrome (WBS) is a neurodevelopmental disorder caused by a heterozygous deletion of contiguous genes at 7q11.23 mediated by nonallelic homologous recombination (NAHR) between large flanking LCRs and facilitated by a structural variant of the region, a ∼2-Mb paracentric inversion present in 20%–25% of WBS-transmitting progenitors. We now report that eight out of 180 (4.44%) WBS-transmitting progenitors are carriers of a CNV, displaying a chromosome with large deletion of LCRs. The prevalence of this CNV among control individuals and non-transmitting progenitors is much lower (1%, n = 600), thus indicating that it is a predisposing factor for the WBS deletion (odds ratio 4.6-fold, P = 0.002). LCR duplications were found in 2.22% of WBS-transmitting progenitors but also in 1.16% of controls, which implies a non–statistically significant increase in WBS-transmitting progenitors. We have characterized the organization and breakpoints of these CNVs, encompassing ∼100–300 kb of genomic DNA and containing several pseudogenes but no functional genes. Additional structural variants of the region have also been defined, all generated by NAHR between different blocks of segmental duplications. Our data further illustrate the highly dynamic structure of regions rich in segmental duplications, such as the WBS locus, and indicate that large CNVs can act as susceptibility alleles for disease-associated genomic rearrangements in the progeny. PMID:18292220

  8. Controller certification: The generalized stability margin inference for a large number of MIMO controllers

    NASA Astrophysics Data System (ADS)

    Park, Jisang

    In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting a radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the
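
    For SISO frequency responses, the nu-gap metric used above can be approached through the pointwise chordal distance between two plants; a minimal sketch follows (the two first-order plants and the frequency grid are illustrative assumptions, and the winding-number condition required for the supremum to equal the true nu-gap is not checked here):

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two SISO frequency responses.
    Its supremum over frequency gives the nu-gap metric when Vinnicombe's
    winding-number condition holds (not verified in this sketch)."""
    p1 = np.asarray(p1, dtype=complex)
    p2 = np.asarray(p2, dtype=complex)
    return np.abs(p1 - p2) / np.sqrt((1 + np.abs(p1) ** 2) * (1 + np.abs(p2) ** 2))

# two hypothetical first-order plants evaluated on a log-spaced frequency grid
w = np.logspace(-2, 2, 400)
P1 = 1.0 / (1j * w + 1.0)
P2 = 1.0 / (1j * w + 1.2)
gap = chordal_distance(P1, P2).max()
print(gap)  # small gap => the plants are close in the nu-gap sense
```

    The inference exploited in the dissertation rests on the standard result that a controller whose generalized stability margin with one plant exceeds the nu-gap to a neighboring plant also stabilizes that neighbor.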

  9. How Math Anxiety Relates to Number-Space Associations.

    PubMed

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2016-01-01

    Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioral evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.
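
    The SNARC effect described above is commonly quantified with a per-participant regression of the right-minus-left response-time difference on digit magnitude, a negative slope indicating the effect. A minimal sketch with made-up response times (not the authors' analysis code):

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Per-participant SNARC slope: regress dRT = RT(right) - RT(left) on
    digit magnitude. A negative slope means the right-hand advantage grows
    with magnitude, i.e. the classic SNARC pattern."""
    drt = np.asarray(rt_right, dtype=float) - np.asarray(rt_left, dtype=float)
    slope, _intercept = np.polyfit(np.asarray(digits, dtype=float), drt, 1)
    return slope

# made-up mean response times (ms) per digit for one participant
digits = [1, 2, 3, 4, 6, 7, 8, 9]
rt_left = [500, 505, 510, 515, 525, 530, 535, 540]
rt_right = [540, 533, 526, 519, 505, 498, 491, 484]
print(snarc_slope(digits, rt_left, rt_right))  # negative slope => SNARC effect
```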

  10. ABFs, a family of ABA-responsive element binding factors.

    PubMed

    Choi, H; Hong, J; Ha, J; Kang, J; Kim, S Y

    2000-01-21

    Abscisic acid (ABA) plays an important role in environmental stress responses of higher plants during vegetative growth. One of the ABA-mediated responses is the induced expression of a large number of genes, which is mediated by cis-regulatory elements known as abscisic acid-responsive elements (ABREs). Although a number of ABRE binding transcription factors have been known, they are not specifically from vegetative tissues under induced conditions. Considering the tissue specificity of ABA signaling pathways, factors mediating ABA-dependent stress responses during vegetative growth phase may thus have been unidentified so far. Here, we report a family of ABRE binding factors isolated from young Arabidopsis plants under stress conditions. The factors, isolated by a yeast one-hybrid system using a prototypical ABRE and named as ABFs (ABRE binding factors) belong to a distinct subfamily of bZIP proteins. Binding site selection assay performed with one ABF showed that its preferred binding site is the strong ABRE, CACGTGGC. ABFs can transactivate an ABRE-containing reporter gene in yeast. Expression of ABFs is induced by ABA and various stress treatments, whereas their induction patterns are different from one another. Thus, a new family of ABRE binding factors indeed exists that have the potential to activate a large number of ABA/stress-responsive genes in Arabidopsis.

  11. Computer Use and Factors Related to Computer Use in Large Independent Secondary School Libraries.

    ERIC Educational Resources Information Center

    Currier, Heidi F.

    Survey results about the use of computers in independent secondary school libraries are reported, and factors related to the presence of computers are identified. Data are from 104 librarians responding to a questionnaire sent to a sample of 136 large (over 400 students) independent secondary schools. Data are analyzed descriptively to show the…

  12. SEM-PLS Analysis of Inhibiting Factors of Cost Performance for Large Construction Projects in Malaysia: Perspective of Clients and Consultants

    PubMed Central

    Memon, Aftab Hameed; Rahman, Ismail Abdul

    2014-01-01

    This study uncovered inhibiting factors to cost performance in large construction projects of Malaysia. A questionnaire survey was conducted among clients and consultants involved in large construction projects. In the questionnaire, a total of 35 inhibiting factors grouped in 7 categories were presented to the respondents for rating the significance level of each factor. A total of 300 questionnaire forms were distributed. Only 144 completed sets were received and analysed using advanced multivariate statistical software of Structural Equation Modelling (SmartPLS v2). The analysis involved three iteration processes where several of the factors were deleted in order to make the model acceptable. The result of the analysis found that the R² value of the model is 0.422, which indicates that the developed model has a substantial impact on cost performance. Based on the final form of the model, the contractor's site management category is the most prominent in exhibiting effect on cost performance of large construction projects. This finding is validated using advanced techniques of power analysis. This vigorous multivariate analysis has explicitly found the significant category which consists of several causative factors to poor cost performance in large construction projects. This will benefit all parties involved in construction projects for controlling cost overrun. PMID:24693227

  13. SEM-PLS analysis of inhibiting factors of cost performance for large construction projects in Malaysia: perspective of clients and consultants.

    PubMed

    Memon, Aftab Hameed; Rahman, Ismail Abdul

    2014-01-01

    This study uncovered inhibiting factors to cost performance in large construction projects of Malaysia. A questionnaire survey was conducted among clients and consultants involved in large construction projects. In the questionnaire, a total of 35 inhibiting factors grouped in 7 categories were presented to the respondents for rating the significance level of each factor. A total of 300 questionnaire forms were distributed. Only 144 completed sets were received and analysed using advanced multivariate statistical software of Structural Equation Modelling (SmartPLS v2). The analysis involved three iteration processes where several of the factors were deleted in order to make the model acceptable. The result of the analysis found that the R² value of the model is 0.422, which indicates that the developed model has a substantial impact on cost performance. Based on the final form of the model, the contractor's site management category is the most prominent in exhibiting effect on cost performance of large construction projects. This finding is validated using advanced techniques of power analysis. This vigorous multivariate analysis has explicitly found the significant category which consists of several causative factors to poor cost performance in large construction projects. This will benefit all parties involved in construction projects for controlling cost overrun.
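
    The reported R² of 0.422 is the usual coefficient of determination for the endogenous construct (cost performance). As a plain illustration of that statistic with toy numbers (this does not reproduce SmartPLS's partial-least-squares estimation):

```python
import numpy as np

def r_squared(y, y_hat):
    """Share of variance in an outcome (e.g. cost performance) explained by
    model predictions: R^2 = 1 - SS_res / SS_tot."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)   # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# toy scores: predictions track outcomes closely, so R^2 is near 1
print(r_squared([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))
```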

  14. Historical Increase in the Number of Factors Measured by Commercial Tests of Cognitive Ability: Are We Overfactoring?

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.

    2007-01-01

    A historical increase in the number of factors purportedly measured by commercial tests of cognitive ability may result from four distinct pressures including: increasingly complex models of intelligence, test publishers' desires to provide clinically useful assessment instruments with greater interpretive value, test publishers' desires to…

  15. Managing the risks of risk management on large fires

    Treesearch

    Donald G. MacGregor; Armando González-Cabán

    2013-01-01

    Large fires pose risks to a number of important values, including the ecology, property and the lives of incident responders. A relatively unstudied aspect of fire management is the risks to which incident managers are exposed due to organizational and sociopolitical factors that put them in a position of, for example, potential liability or degradation of their image...

  16. Identifiability of conservative linear mechanical systems. [applied to large flexible spacecraft structures

    NASA Technical Reports Server (NTRS)

    Sirlin, S. W.; Longman, R. W.; Juang, J. N.

    1985-01-01

    With a sufficiently great number of sensors and actuators, any finite dimensional dynamic system is identifiable on the basis of input-output data. It is presently indicated that, for conservative nongyroscopic linear mechanical systems, the number of sensors and actuators required for identifiability is very large, where 'identifiability' is understood as a unique determination of the mass and stiffness matrices. The required number of sensors and actuators drops by a factor of two, given a relaxation of the identifiability criterion so that identification can fail only if the system parameters being identified lie in a set of measure zero. When the mass matrix is known a priori, this additional information does not significantly affect the requirements for guaranteed identifiability, though the number of parameters to be determined is reduced by a factor of two.
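
    The factor-of-two figures above follow from a simple parameter count: a symmetric n × n matrix has n(n+1)/2 independent entries, so identifying both the mass and stiffness matrices involves twice as many unknowns as identifying the stiffness matrix alone. A sketch of that count:

```python
def unknown_parameters(n, mass_known=False):
    """Number of unknowns when identifying a conservative linear mechanical
    system of dimension n: one symmetric stiffness matrix, plus a symmetric
    mass matrix when it is not known a priori."""
    sym = n * (n + 1) // 2  # independent entries of one symmetric n x n matrix
    return sym if mass_known else 2 * sym

print(unknown_parameters(10))              # M and K both unknown
print(unknown_parameters(10, mass_known=True))  # knowing M halves the count
```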

  17. The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations.

    PubMed

    Eyre-Walker, Adam; Stoletzki, Nina

    2013-10-01

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
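
    Correlations between assessor scores and citation counts of this kind are typically rank-based. A minimal Spearman correlation sketch with toy data (ties are ignored; this is not the authors' analysis code):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (ties would require midranks)."""
    def ranks(a):
        order = np.argsort(np.asarray(a, dtype=float))
        r = np.empty(len(order))
        r[order] = np.arange(len(order))
        return r
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

# toy data: citation counts rise monotonically with assessor score
print(spearman([2, 5, 1, 4, 3], [10, 200, 3, 90, 40]))  # perfectly monotone => 1.0
```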

  18. The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations

    PubMed Central

    Eyre-Walker, Adam; Stoletzki, Nina

    2013-01-01

    The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative. PMID:24115908

  19. Distribution of Disease-Associated Copy Number Variants across Distinct Disorders of Cognitive Development

    ERIC Educational Resources Information Center

    Pescosolido, Matthew F.; Gamsiz, Ece D.; Nagpal, Shailender; Morrow, Eric M.

    2013-01-01

    Objective: The purpose of the present study was to discover the extent to which distinct "DSM" disorders share large, highly recurrent copy number variants (CNVs) as susceptibility factors. We also sought to identify gene mechanisms common to groups of diagnoses and/or specific to a given diagnosis based on associations with CNVs. Method:…

  20. Observer variability in estimating numbers: An experiment

    USGS Publications Warehouse

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  1. Different brains process numbers differently: structural bases of individual differences in spatial and nonspatial number representations.

    PubMed

    Krause, Florian; Lindemann, Oliver; Toni, Ivan; Bekkering, Harold

    2014-04-01

    A dominant hypothesis on how the brain processes numerical size proposes a spatial representation of numbers as positions on a "mental number line." An alternative hypothesis considers numbers as elements of a generalized representation of sensorimotor-related magnitude, which is not obligatorily spatial. Here we show that individuals' relative use of spatial and nonspatial representations has a cerebral counterpart in the structural organization of the posterior parietal cortex. Interindividual variability in the linkage between numbers and spatial responses (faster left responses to small numbers and right responses to large numbers; spatial-numerical association of response codes effect) correlated with variations in gray matter volume around the right precuneus. Conversely, differences in the disposition to link numbers to force production (faster soft responses to small numbers and hard responses to large numbers) were related to gray matter volume in the left angular gyrus. This finding suggests that numerical cognition relies on multiple mental representations of analogue magnitude using different neural implementations that are linked to individual traits.

  2. Assessment of large copy number variants in patients with apparently isolated congenital left-sided cardiac lesions reveals clinically relevant genomic events.

    PubMed

    Hanchard, Neil A; Umana, Luis A; D'Alessandro, Lisa; Azamian, Mahshid; Poopola, Mojisola; Morris, Shaine A; Fernbach, Susan; Lalani, Seema R; Towbin, Jeffrey A; Zender, Gloria A; Fitzgerald-Butt, Sara; Garg, Vidu; Bowman, Jessica; Zapata, Gladys; Hernandez, Patricia; Arrington, Cammon B; Furthner, Dieter; Prakash, Siddharth K; Bowles, Neil E; McBride, Kim L; Belmont, John W

    2017-08-01

    Congenital left-sided cardiac lesions (LSLs) are a significant contributor to the mortality and morbidity of congenital heart disease (CHD). Structural copy number variants (CNVs) have been implicated in LSL without extra-cardiac features; however, non-penetrance and variable expressivity have created uncertainty over the use of CNV analyses in such patients. High-density SNP microarray genotyping data were used to infer large, likely-pathogenic, autosomal CNVs in a cohort of 1,139 probands with LSL and their families. CNVs were molecularly confirmed and the medical records of individual carriers reviewed. The gene content of novel CNVs was then compared with public CNV data from CHD patients. Large CNVs (>1 MB) were observed in 33 probands (∼3%). Six of these were de novo and 14 were not observed in the only available parent sample. Associated cardiac phenotypes spanned a broad spectrum without clear predilection. Candidate CNVs were largely non-recurrent, associated with heterozygous loss of copy number, and overlapped known CHD genomic regions. Novel CNV regions were enriched for cardiac development genes, including seven that have not been previously associated with human CHD. CNV analysis can be a clinically useful and molecularly informative tool in LSLs without obvious extra-cardiac defects, and may identify a clinically relevant genomic disorder in a small but important proportion of these individuals. © 2017 Wiley Periodicals, Inc.

  3. Factors associated with delayed bleeding after resection of large nonpedunculated colorectal polyps.

    PubMed

    Elliott, Timothy R; Tsiamoulos, Zacharias P; Thomas-Gibson, Siwan; Suzuki, Noriko; Bourikas, Leonidas A; Hart, Ailsa; Bassett, Paul; Saunders, Brian P

    2018-04-06

    Delayed bleeding is the most common significant complication after piecemeal endoscopic mucosal resection (p-EMR) of large nonpedunculated colorectal polyps (NPCPs). Risk factors for delayed bleeding are incompletely defined. We aimed to determine risk factors for delayed bleeding following p-EMR. Data were analyzed from a prospective tertiary center audit of patients with NPCPs ≥ 20 mm who underwent p-EMR between 2010 and 2012. Patient, polyp, and procedure-related data were collected. Four post p-EMR defect factors were evaluated for interobserver agreement and included in analysis. Delayed bleeding severity was reported in accordance with guidelines. Predictors of bleeding were identified. Delayed bleeding requiring hospitalization occurred after 22 of 330 procedures (6.7%). A total of 11 patients required blood transfusion; of these, 4 underwent urgent colonoscopy, 1 underwent radiological embolization, and 1 required surgery. Interobserver agreement for identification of the four post p-EMR defect factors was moderate (kappa range 0.52-0.57). Factors associated with delayed bleeding were visible muscle fibers (P = 0.03) and the presence of a "cherry red spot" (P = 0.05) in the post p-EMR defect. Factors not associated with delayed bleeding were American Association of Anesthesiologists class, aspirin use, polyp size, site, and use of argon plasma coagulation. Visible muscle fibers and the presence of a "cherry red spot" in the resection defect were associated with delayed bleeding after p-EMR. These findings suggest evaluation and photodocumentation of the post p-EMR defect is important and, when considered alongside other patient and procedural factors, may help to reduce the incidence and severity of delayed bleeding. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Number sense in the transition from natural to rational numbers.

    PubMed

    Van Hoof, Jo; Verschaffel, Lieven; Van Dooren, Wim

    2017-03-01

    Rational numbers are of critical importance both in mathematics and in other fields of science. However, they form a stumbling block for learners. One widely known source of the difficulty learners have with rational numbers is the natural number bias, that is, the tendency to (inappropriately) apply natural number properties in rational number tasks. Still, it has been shown that a good understanding of natural numbers is highly predictive for mathematics achievement in general, and for performance on rational number tasks in particular. In this study, we further investigated the relation between learners' natural and rational number knowledge, specifically in cases where a natural number bias may lead to errors. Participants were 140 sixth graders from six different primary schools. Participants completed a symbolic and a non-symbolic natural number comparison task, a number line estimation task, and a rational number sense test. Learners' natural number knowledge was found to be a good predictor of their rational number knowledge. However, after first controlling for learners' general mathematics achievement, their natural number knowledge only predicted the subaspect of operations with rational numbers. The results of this study suggest that the relation between learners' natural and rational number knowledge can largely be explained by their relation with learners' general mathematics achievement. © 2016 The British Psychological Society.
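
    "Controlling for" general mathematics achievement, as in the analysis above, amounts to hierarchical regression: compare the variance explained with and without the predictor of interest. A minimal OLS sketch with toy data (illustrative only, not the authors' code):

```python
import numpy as np

def r2(y, predictors):
    """OLS R^2 of y on an intercept plus the given predictor columns."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y))] +
                        [np.asarray(p, dtype=float) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def incremental_r2(y, covariate, predictor):
    """R^2 gained by adding `predictor` after controlling for `covariate`."""
    return r2(y, [covariate, predictor]) - r2(y, [covariate])

# toy: once the covariate explains y perfectly, the predictor adds nothing
y = [1.0, 2.0, 3.0, 4.0]
print(incremental_r2(y, covariate=[1, 2, 3, 4], predictor=[4, 1, 3, 2]))
```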

  5. Number and placement of control system components considering possible failures. [for large space structures

    NASA Technical Reports Server (NTRS)

    Vander Velde, W. E.; Carignan, C. R.

    1984-01-01

    One of the first questions facing the designer of the control system for a large space structure is how many components - actuators and sensors - to specify and where to place them on the structure. This paper presents a methodology which is intended to assist the designer in making these choices. A measure of controllability is defined which is a quantitative indication of how well the system can be controlled with a given set of actuators. Similarly, a measure of observability is defined which is a quantitative indication of how well the system can be observed with a given set of sensors. Then the effect of component unreliability is introduced by computing the average expected degree of controllability (observability) over the operating lifetime of the system, accounting for the likelihood of various combinations of component failures. The problem of component location is resolved by optimizing this performance measure over the admissible set of locations. The variation of this optimized performance measure with the number of actuators (sensors) is helpful in deciding how many components to use.
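
    The averaging over failure combinations described above can be sketched as an expectation of a performance measure over all working subsets of components, assuming independent failures (a toy illustration; the paper's actual controllability measure is not reproduced here):

```python
from itertools import combinations

def expected_measure(n, p_fail, measure):
    """Expected value of measure(working_set) over all 2^n failure
    combinations of n components, assuming independent failures with
    probabilities p_fail[i]."""
    total = 0.0
    for k in range(n + 1):
        for working in combinations(range(n), k):
            w = set(working)
            p = 1.0
            for i in range(n):
                p *= (1 - p_fail[i]) if i in w else p_fail[i]
            total += p * measure(w)
    return total

# toy measure: number of surviving actuators
print(expected_measure(3, [0.1, 0.2, 0.3], len))  # equals sum of survival probabilities
```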

  6. Genetic Basis for Developmental Homeostasis of Germline Stem Cell Niche Number: A Network of Tramtrack-Group Nuclear BTB Factors

    PubMed Central

    Chalvet, Fabienne; Netter, Sophie; Dos Santos, Nicolas; Poisot, Emilie; Paces-Fessy, Mélanie; Cumenal, Delphine; Peronnet, Frédérique; Pret, Anne-Marie; Théodore, Laurent

    2012-01-01

    The potential to produce new cells during adult life depends on the number of stem cell niches and the capacity of stem cells to divide, and is therefore under the control of programs ensuring developmental homeostasis. However, it remains generally unknown how the number of stem cell niches is controlled. In the insect ovary, each germline stem cell (GSC) niche is embedded in a functional unit called an ovariole. The number of ovarioles, and thus the number of GSC niches, varies widely among species. In Drosophila, morphogenesis of ovarioles starts in larvae with the formation of terminal filaments (TFs), each made of 8–10 cells that pile up and sort in stacks. TFs constitute organizers of individual germline stem cell niches during larval and early pupal development. In the Drosophila melanogaster subgroup, the number of ovarioles varies interspecifically from 8 to 20. Here we show that pipsqueak, Trithorax-like, batman and the bric-à-brac (bab) locus, all encoding nuclear BTB/POZ factors of the Tramtrack Group, are involved in limiting the number of ovarioles in D. melanogaster. At least two different processes are differentially perturbed by reducing the function of these genes. We found that when the bab dose is reduced, sorting of TF cells into TFs was affected such that each TF contains fewer cells and more TFs are formed. In contrast, psq mutants exhibited a greater number of TF cells per ovary, with a normal number of cells per TF, thereby leading to formation of more TFs per ovary than in the wild type. Our results indicate that two parallel genetic pathways under the control of a network of nuclear BTB factors are combined in order to negatively control the number of germline stem cell niches. PMID:23185495

  7. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    NASA Astrophysics Data System (ADS)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders as well as the development of intelligent machines is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave, however new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have enabled neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting edge optogenetic molecular sensors which is ultrasensitive for imaging neuronal activity with custom wide field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and better spatial resolution approaching the Abbe diffraction limit of fluorescent microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the Hippocampus, and tracked hundreds of neurons over time while mouse was performing a memory task to investigate how those individual neurons related to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice show a consistent change in neural network. A small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activities. 
Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to

  8. Factor Structure of the Psychopathic Personality Inventory (PPI): Findings from a Large Incarcerated Sample

    PubMed Central

    Neumann, Craig S.; Malterer, Melanie B.; Newman, Joseph P.

    2010-01-01

    Recent exploratory factor analysis (EFA) of the Psychopathic Personality Inventory (PPI; Lilienfeld, 1990) with a community sample suggested that the PPI subscales may comprise two higher-order factors (Benning et al., 2003). However, little research has examined the PPI structure in offenders. The current study attempted to replicate the Benning et al. two-factor solution using a large (N=1224) incarcerated male sample. Confirmatory factor analysis (CFA) of this model with the full sample resulted in poor model fit. Next, to identify a factor solution that would summarize the offender data, EFA was conducted using a split-half of the total sample, followed by an attempt to replicate the EFA solution via CFA with the other split-half sample. Using the recommendations of van Prooijen and van der Kloot (2001) for recovering EFA solutions, model fit results provided some evidence that the EFA solution could be recovered via CFA. However, this model involved extensive cross-loadings of the subscales across three factors, suggesting item overlap across PPI subscales. In sum, the two-factor solution reported by Benning et al. (2003) was not a viable model for the current sample of offenders, and additional research is needed to elucidate the latent structure of the PPI. PMID:18557694

  9. Numbers and space: associations and dissociations.

    PubMed

    Nathan, Merav Ben; Shaki, Samuel; Salti, Moti; Algom, Daniel

    2009-06-01

    A cornerstone of contemporary research in numerical cognition is the surprising link found between numbers and space. In particular, people react faster and more accurately to small numbers with a left-hand key and to large numbers with a right-hand key. Because this contingency is found in a variety of tasks, it has been taken to support the automatic activation of magnitude as well as the notion of a mental number line arranged from left to right. The present study challenges the presence of a link between left-right location, on the one hand, and small-large number, on the other hand. We show that a link exists between space and relative magnitude, a relationship that might or might not be unique to numbers.

  10. [Projection of prisoner numbers].

    PubMed

    Metz, Rainer; Sohn, Werner

    2015-01-01

    The past and future development of occupancy rates in prisons is of crucial importance for the judicial administration of every country. Basic factors for planning the required penal facilities are seasonal fluctuations; minimum, maximum, and average occupancy; and the present situation and potential development of certain imprisonment categories. As the prisoner numbers of a country are determined by a complex set of interdependent conditions, it has proved difficult to provide any theoretical explanation. The idea, long accepted in criminology, that prisoner numbers and criminal policy are interdependent must be regarded as having failed. Statistical and time series analyses may help, however, to identify the factors that have influenced the development of prisoner numbers in the past. The analyses presented here first describe such influencing factors from a criminological perspective and then deal with their statistical identification and modelling. Using the development of prisoner numbers in Hesse as an example, it was found that modelling methods in which the independent variables predict the dependent variable with a time lag are particularly helpful. A potential complication, however, is that the differing dynamics of German and foreign prisoner numbers require the development of further models for prediction.

  11. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    PubMed

    Sapin, Emilie; Bérod, Anne; Léger, Lucienne; Herman, Paul A; Luppi, Pierre-Hervé; Peyron, Christelle

    2010-07-26

    We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin-concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, by combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double staining, we found that around 80% of the large number of Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this hypothesis.

  12. A Proposed Solution to the Problem with Using Completely Random Data to Assess the Number of Factors with Parallel Analysis

    ERIC Educational Resources Information Center

    Green, Samuel B.; Levy, Roy; Thompson, Marilyn S.; Lu, Min; Lo, Wen-Juo

    2012-01-01

    A number of psychometricians have argued for the use of parallel analysis to determine the number of factors. However, parallel analysis must be viewed at best as a heuristic approach rather than a mathematically rigorous one. The authors suggest a revision to parallel analysis that could improve its accuracy. A Monte Carlo study is conducted to…
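    Classical parallel analysis, the procedure the authors propose to revise, retains factors whose observed eigenvalues exceed those obtained from completely random data. A minimal sketch of the classical procedure (function names and simulation settings are illustrative, not from the article):

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: keep components whose observed
    correlation-matrix eigenvalues exceed the chosen quantile of
    eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    thresh = np.quantile(sims, quantile, axis=0)
    return int(np.sum(obs > thresh))

# Two correlated blocks of variables, so two factors are expected.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
X = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
               f[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
print(parallel_analysis(X))
```

    The article's point is that the completely-random baseline above can misjudge the number of factors; the proposed revision changes the comparison data, not this overall eigenvalue-comparison logic.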

  13. Types and numbers of sensilla on antennae and maxillary palps of small and large houseflies, Musca domestica (Diptera, Muscidae).

    PubMed

    Smallegange, Renate C; Kelling, Frits J; Den Otter, Cornelis J

    2008-12-01

    Houseflies, Musca domestica, obtained from a high-larval-density culture were significantly (ca. 1.5 times) smaller than those from a low-larval-density culture. The same held true for their antennae and maxillary palps. The structure, number, and distribution of sensilla on the antennae and palps of small and large flies were investigated using scanning and transmission electron microscopy. In each funiculus three pits were present, two (Type I) consisting of several compartments and one (Type II) of one compartment. Four types of olfactory sensilla were detected: trichoid sensilla on the funiculi, basiconic sensilla on funiculi and palps, grooved sensilla on funiculi and in Type I pits, and clavate sensilla on funiculi and in Type II pits. Type I pits also contained striated sensilla (presumably hygroreceptors). Mechanosensory bristles were present on scapes, pedicels, and palps. Noninnervated microtrichia were found on the palps and all antennal segments. The large houseflies possessed nearly twice as many sensilla as the small flies. So far, we have not observed differences in behavior between small and large flies. We assume that small flies, being less well equipped olfactorily than large flies, may be able to compensate for this by, e.g., visual cues or by their olfactory sensilla being more sensitive than those of large flies. To answer these questions, careful studies of the behavioral responses of small and large flies to environmental stimuli are needed. In addition, electrophysiological studies should be performed to reveal whether the responses of individual sensilla of flies reared under different conditions have changed. © 2008 Wiley-Liss, Inc.

  14. Longitudinal Aerodynamic Characteristics to Large Angles of Attack of a Cruciform Missile Configuration at a Mach Number of 2

    NASA Technical Reports Server (NTRS)

    Spahr, J. R.

    1954-01-01

    The lift, pitching-moment, and drag characteristics of a missile configuration having a body of fineness ratio 9.33 and a cruciform triangular wing and tail of aspect ratio 4 were measured at a Mach number of 1.99 and a Reynolds number of 6.0 million, based on the body length. The tests were performed through an angle-of-attack range of -5 deg to 28 deg to investigate the effects on the aerodynamic characteristics of roll angle, wing-tail interdigitation, wing deflection, and interference among the components (body, wing, and tail). Theoretical lift and moment characteristics of the configuration and its components were calculated by the use of existing theoretical methods which have been modified for application to high angles of attack, and these characteristics are compared with experiment. The lift and drag characteristics of all combinations of the body, wing, and tail were independent of roll angle throughout the angle-of-attack range. The pitching-moment characteristics of the body-wing and body-wing-tail combinations, however, were influenced significantly by the roll angle at large angles of attack (greater than 10 deg). A roll from 0 deg (one pair of wing panels horizontal) to 45 deg caused a forward shift in the center of pressure which was of the same magnitude for both of these combinations, indicating that this shift originated from body-wing interference effects. A favorable lift-interference effect (lift of the combination greater than the sum of the lifts of the components) and a rearward shift in the center of pressure from a position corresponding to that for the components occurred at small angles of attack when the body was combined with either the exposed wing or tail surfaces. These lift and center-of-pressure interference effects were gradually reduced to zero as the angle of attack was increased to large values. 
The effect of wing-tail interference, which influenced primarily the pitching-moment characteristics, is dependent on the distance

  15. Large-D gravity and low-D strings.

    PubMed

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  16. Exploring factors influencing the strength of the safety-in-numbers effect.

    PubMed

    Elvik, Rune

    2017-03-01

    Several studies have found a so-called safety-in-numbers effect for vulnerable road users. This means that when the number of pedestrians or cyclists increases, the number of accidents involving these road users and motor vehicles increases less than in proportion to the number of pedestrians or cyclists. In other words, travel becomes safer for each pedestrian or cyclist the more pedestrians or cyclists there are. This finding is highly consistent, but estimates of the strength of the safety-in-numbers effect vary considerably. This paper shows that the strength of the safety-in-numbers effect is inversely related to the number of pedestrians and cyclists. A stronger safety-in-numbers effect is found when there are few pedestrians or cyclists than when there are many. This finding is counterintuitive; one would expect the opposite relationship. The relationship between the ratio of the number of motor vehicles to the number of pedestrians or cyclists and the strength of the safety-in-numbers effect is ambiguous. Possible explanations of these tendencies are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
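    In this literature, the effect is conventionally estimated as the exponent b in a power model A = a·Q^b relating accident counts A to pedestrian or cyclist volume Q, with b < 1 indicating safety in numbers. A hedged sketch fitting b by log-log least squares on synthetic data (the volumes, coefficients, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.uniform(50, 5000, size=200)              # cyclist volumes per site (invented)
beta_true = 0.4                                  # exponent < 1: safety in numbers
A = 0.05 * Q**beta_true * rng.lognormal(0, 0.2, size=200)  # synthetic accident counts

# Fit log A = log a + b * log Q by ordinary least squares.
Xd = np.column_stack([np.ones_like(Q), np.log(Q)])
coef, *_ = np.linalg.lstsq(Xd, np.log(A), rcond=None)
beta_hat = coef[1]
print(round(beta_hat, 2))
```

    An estimated b well below 1 means accidents grow much more slowly than volume, so per-cyclist risk falls; the paper's finding concerns how this estimated exponent itself varies with volume.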

  17. Rising breakeven load factors threaten airline finances

    DOT National Transportation Integrated Search

    2003-10-01

    Since 2000, most large passenger airlines have suffered a sharp increase in their breakeven load factor, the percentage of seats they have to sell to cover operating expenses. Some carriers could not cover operating expenses even if they sold 100% of their...

  18. Social and Individual Frame Factors in L2 Learning: Comparative Aspects.

    ERIC Educational Resources Information Center

    Ekstrand, Lars H.

    A large number of factors are considered for their role in second language learning. Individual factors include language aptitude, personality, attitudes and motivation, and the role of the speaker's native language. Teacher factors involve the method of instruction, the sex of the teacher, and a teacher's training and competence, while…

  19. Association of insulin-related serum factors with colorectal polyp number and type in adult males

    PubMed Central

    Comstock, Sarah S.; Xu, Diana; Hortos, Kari; Kovan, Bruce; McCaskey, Sarah; Pathak, Dorothy R.; Fenton, Jenifer I.

    2014-01-01

    Background: Dysregulated insulin signaling is thought to contribute to cancer risk. Methods: To determine whether insulin-related serum factors are associated with colon polyps, 126 asymptomatic men (48–65 yr) were recruited at colonoscopy. Blood was collected. Odds ratios were determined using polytomous logistic regression for polyp number and type. Results: Males with a serum C-peptide concentration >3.3 ng/ml were 3.8 times more likely to have an adenoma relative to no polyp than those with C-peptide ≤1.8 ng/ml. As C-peptide tertile increased, an individual was 2 times more likely to have an adenoma (p=0.01) than no polyp. There were no associations between insulin-like growth factor or its binding proteins and polyp number or type. Males with a soluble receptor for advanced glycation end products (sRAGE) concentration >120.4 pg/ml were 0.25 times as likely to have ≥3 polyps relative to no polyps compared to males with sRAGE ≤94.5 pg/ml. For each increase in sRAGE tertile, a man was 0.5 times as likely to have ≥3 polyps than no polyps (p=0.03). Compared to males with a serum vascular endothelial growth factor (VEGF) concentration ≤104.7 pg/ml, males with a serum VEGF concentration >184.2 pg/ml were 3.4 times more likely to have ≥3 polyps relative to no polyps. As the VEGF tertile increased, a man was 1.9 times more likely to have ≥3 polyps than no polyps (p=0.049). Conclusions: Serum concentrations of C-peptide, sRAGE, and VEGF may indicate which men could benefit most from colonoscopy. Impact: Identification of biomarkers could reduce medical costs through the elimination of colonoscopies in low-risk individuals. PMID:24962837
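    The odds ratios reported in this record come from logistic regression; the basic quantity can be illustrated directly from a 2×2 exposure-by-outcome table. A minimal sketch with invented counts (not the study's data), including a Woolf 95% confidence interval:

```python
import math

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table laid out as:
    [[exposed cases a, exposed controls b],
     [unexposed cases c, unexposed controls d]]."""
    return (a * d) / (b * c)

def or_ci95(a, b, c, d):
    """Woolf (log-scale) 95% confidence interval for the OR."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

# Invented example: adenoma vs. no polyp by high vs. low C-peptide.
print(round(odds_ratio(20, 15, 10, 30), 2))  # → 4.0
print(or_ci95(20, 15, 10, 30))
```

    The study's polytomous (multi-category outcome) regression generalizes this to polyp number and type while adjusting for covariates, but each reported OR has the same interpretation as this two-group ratio.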

  20. Processing of Intentional and Automatic Number Magnitudes in Children Born Prematurely: Evidence From fMRI

    PubMed Central

    Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane

    2014-01-01

    This study examined the neural correlates of intentional and automatic number processing (indexed by number comparison and physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal to parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes—indicating commonalities but also specificities of intentional and automatic number processing. PMID:25090014

  1. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
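    For intuition, a weak law for weighted sums S_n = Σ_i a_ni X_i says that P(|S_n − μ| > ε) → 0 as n grows. The Monte Carlo sketch below estimates this exceedance probability for the simplest case of uniform weights a_ni = 1/n and i.i.d. Exponential(1) variables; this is purely illustrative and ignores the dependent-array settings treated in the paper:

```python
import numpy as np

def exceedance_prob(n, eps=0.1, trials=2000, seed=0):
    """Monte Carlo estimate of P(|S_n - mu| > eps) for the weighted sum
    S_n = sum_i a_ni * X_i with uniform weights a_ni = 1/n and
    X_i i.i.d. Exponential(1), so mu = E[X] = 1."""
    rng = np.random.default_rng(seed)
    S = rng.exponential(1.0, size=(trials, n)).mean(axis=1)
    return float(np.mean(np.abs(S - 1.0) > eps))

# The exceedance probability shrinks toward 0 as n increases.
for n in (10, 100, 1000):
    print(n, exceedance_prob(n))
```

    The theorems in the paper establish this same convergence-in-probability conclusion under much weaker assumptions (h-integrability plus dependence conditions) than the i.i.d. setup simulated here.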

  2. Porous capsules with a large number of active sites: nucleation/growth under confined conditions.

    PubMed

    Garai, Somenath; Rubčić, Mirta; Bögge, Hartmut; Gouzerh, Pierre; Müller, Achim

    2015-03-09

    This work deals with the generation of large numbers of active sites and with the ensuing nucleation/growth processes on the inside wall of the cavity of porous nanocapsules of the type (pentagon)12(linker)30≡{(Mo(VI))Mo(VI)5}12{Mo(V)2(ligand)}30. A first example refers to sulfur dioxide capture through displacement of acetate ligands, while the grafted sulfite ligands are able to trap {MoO3H}(+) units, thereby forming unusual {(O2SO)3MoO3H}(5-) assemblies. A second example relates to the generation of open coordination sites through release of carbon dioxide upon mild acidification of a carbonate-type capsule. When the reaction is performed in the presence of heptamolybdate ions, MoO4(2-) ions enter the cavity, where they bind to the inside wall while forming new types of polyoxomolybdate architectures, thereby extending the molybdenum oxide skeleton of the capsule. Parallels can be drawn with Mo-storage proteins and supported MoO3 catalysts, making the results relevant to molybdenum biochemistry and to catalysis. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Risk factors for lung function decline in a large cohort of young cystic fibrosis patients.

    PubMed

    Cogen, Jonathan; Emerson, Julia; Sanders, Don B; Ren, Clement; Schechter, Michael S; Gibson, Ronald L; Morgan, Wayne; Rosenfeld, Margaret

    2015-08-01

    To identify novel risk factors and corroborate previously identified risk factors for mean annual decline in FEV1% predicted in a large, contemporary, United States cohort of young cystic fibrosis (CF) patients. Retrospective observational study of participants in the EPIC Observational Study who were Pseudomonas-negative and ≤12 years of age at enrollment in 2004-2006. The associations between potential demographic, clinical, and environmental risk factors evaluated during the baseline year and subsequent mean annual decline in FEV1 percent predicted were evaluated using generalized estimating equations. The 946 participants in the current analysis were followed for a mean of 6.2 (SD 1.3) years. Mean annual decline in FEV1% predicted was 1.01% (95%CI 0.85-1.17%). Children with one or no F508del mutations had a significantly smaller annual decline in FEV1 compared to F508del homozygotes. In a multivariable model, risk factors during the baseline year associated with a larger subsequent mean annual lung function decline included female gender, frequent or productive cough, low BMI (<66th percentile, the median in the cohort), ≥1 pulmonary exacerbation, high FEV1 (≥115% predicted, in the top quartile), and a respiratory culture positive for methicillin-sensitive Staphylococcus aureus, methicillin-resistant S. aureus, or Stenotrophomonas maltophilia. We have identified a range of risk factors for FEV1 decline in a large cohort of young CF patients who were Pseudomonas-negative at enrollment, including novel as well as previously identified characteristics. These results could inform the design of a clinical trial in which the rate of FEV1 decline is the primary endpoint, and identify high-risk groups that may benefit from closer monitoring. © 2015 Wiley Periodicals, Inc.

  4. Cascaded lattice Boltzmann method with improved forcing scheme for large-density-ratio multiphase flow at high Reynolds and Weber numbers.

    PubMed

    Lycett-Brown, Daniel; Luo, Kai H

    2016-11-01

    A recently developed forcing scheme has allowed the pseudopotential multiphase lattice Boltzmann method to correctly reproduce coexistence curves, while expanding its range to lower surface tensions and arbitrarily high density ratios [Lycett-Brown and Luo, Phys. Rev. E 91, 023305 (2015)PLEEE81539-375510.1103/PhysRevE.91.023305]. Here, a third-order Chapman-Enskog analysis is used to extend this result from the single-relaxation-time collision operator, to a multiple-relaxation-time cascaded collision operator, whose additional relaxation rates allow a significant increase in stability. Numerical results confirm that the proposed scheme enables almost independent control of density ratio, surface tension, interface width, viscosity, and the additional relaxation rates of the cascaded collision operator. This allows simulation of large density ratio flows at simultaneously high Reynolds and Weber numbers, which is demonstrated through binary collisions of water droplets in air (with density ratio up to 1000, Reynolds number 6200 and Weber number 440). This model represents a significant improvement in multiphase flow simulation by the pseudopotential lattice Boltzmann method in which real-world parameters are finally achievable.

  5. Bioactive factors for tissue regeneration: state of the art.

    PubMed

    Ohba, Shinsuke; Hojo, Hironori; Chung, Ung-Il

    2012-07-01

    There are three components for the creation of new tissues: cell sources, scaffolds, and bioactive factors. Unlike conventional medical strategies, regenerative medicine requires not only analytical approaches but also integrative ones. Basic research has identified a number of bioactive factors that are necessary, but not sufficient, for organogenesis. In skeletal development, these factors include bone morphogenetic proteins (BMPs), transforming growth factor β (TGF-β), Wnts, hedgehogs (Hh), fibroblast growth factors (FGFs), insulin-like growth factors (IGFs), SRY box-containing gene (Sox) 9, Sp7, and runt-related transcription factors (Runx). Clinical and preclinical studies have been extensively performed to apply this knowledge to bone and cartilage regeneration. Given the large number of findings obtained so far, it would be a good time for a multi-disciplinary, collaborative effort to optimize these known factors and develop appropriate drug delivery systems for delivering them.

  6. Low Reynolds number numerical solutions of chaotic flow

    NASA Technical Reports Server (NTRS)

    Pulliam, Thomas H.

    1989-01-01

    Numerical computations of two-dimensional flow past an airfoil at low Mach number, large angle of attack, and low Reynolds number are reported which show a sequence of flow states leading from single-period vortex shedding to chaos via the period-doubling mechanism. Analysis of the flow in terms of phase diagrams, Poincare sections, and flowfield variables are used to substantiate these results. The critical Reynolds number for the period-doubling bifurcations is shown to be sensitive to mesh refinement and the influence of large amounts of numerical dissipation. In extreme cases, large amounts of added dissipation can delay or completely eliminate the chaotic response. The effect of artificial dissipation at these low Reynolds numbers is to produce a new effective Reynolds number for the computations.
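    The period-doubling route to chaos reported for this airfoil flow is the same mechanism exhibited by the logistic map x_{n+1} = r·x_n(1 − x_n). The sketch below (a toy map, not a flow solver) detects the attractor's period doubling as the control parameter r increases, analogous to the Reynolds number in the computations above:

```python
def attractor_period(r, x0=0.2, transient=2000, max_period=64):
    """Iterate the logistic map past its transient, then return the
    smallest period at which the orbit repeats (within a tolerance).
    Returns None if no short period is found (chaotic regime)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(max_period):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, max_period // 2 + 1):
        if all(abs(orbit[i] - orbit[i + p]) < 1e-6 for i in range(max_period - p)):
            return p
    return None

# Period doubles (1 -> 2 -> 4 -> ...) on the way to chaos as r increases.
for r in (2.8, 3.2, 3.5):
    print(r, attractor_period(r))
```

    In the flow computations the doubling shows up as a halving of the dominant frequency in the shedding signal; here it shows up directly as the repeat length of the orbit.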

  7. Organizational factors affecting safety implementation in food companies in Thailand.

    PubMed

    Chinda, Thanwadee

    2014-01-01

    The Thai food industry employs a massive number of skilled and unskilled workers, which may result in high incident and accident rates. To improve safety and reduce accident figures, this paper investigates factors influencing safety implementation in small, medium, and large food companies in Thailand. Five factors, i.e., management commitment, stakeholders' role, safety information and communication, supportive environment, and risk, are found to be important in helping to improve safety implementation. The statistical analyses also reveal that small, medium, and large food companies hold similar opinions on the risk factor, but different perceptions of the other four factors. It is also found that, to improve safety implementation, the perceptions of safety goals, communication, feedback, safety resources, and supervision should be aligned in small, medium, and large companies.

  8. Multi-scalar influence of weather and climate on very large-fires in the Eastern United States

    Treesearch

    John T. Abatzoglou; Renaud Barbero; Crystal A. Kolden; Katherine C. Hegewisch; Narasimhan K. Larkin; Harry Podschwit

    2014-01-01

    A majority of area burned in the Eastern United States (EUS) results from a limited number of exceptionally large wildfires. Relationships between climatic conditions and the occurrence of very large-fires (VLF) in the EUS were examined using composite and climate-niche analyses that consider atmospheric factors across inter-annual, sub-seasonal and synoptic temporal...

  9. Latest COBE results, large-scale data, and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    One of the predictions of the inflationary scenario of cosmology is that the initial spectrum of primordial density fluctuations (PDFs) must have the Harrison-Zeldovich (HZ) form. Here, in order to test the inflationary scenario, predictions of the microwave background radiation (MBR) anisotropies measured by COBE are computed based on large-scale data for the universe, assuming Omega = 1 and the HZ spectrum on large scales. The minimal scale at which the spectrum can first enter the HZ regime is found, constraining the power spectrum of the mass distribution to within the bias factor b. This factor is determined and used to predict parameters of the MBR anisotropy field. For the spectrum of PDFs that reaches the HZ regime immediately after the scale accessible to the APM catalog, the predicted MBR anisotropies are consistent with the COBE detections, and thus standard inflation can indeed be considered a viable theory for the origin of the large-scale structure in the universe.

  10. Copy number variation of functional RBMY1 is associated with sperm motility: an azoospermia factor-linked candidate for asthenozoospermia.

    PubMed

    Yan, Yuanlong; Yang, Xiling; Liu, Yunqiang; Shen, Ying; Tu, Wenling; Dong, Qiang; Yang, Dong; Ma, Yongyi; Yang, Yuan

    2017-07-01

    population. A difference in the distribution of RBMY1 copy number was observed between the group with normal sperm motility and the group with asthenozoospermia. A positive correlation between the RBMY1 copy dosage and sperm motility was identified, and the males with fewer than six copies of RBMY1 showed an elevated risk for asthenozoospermia relative to those with six RBMY1 copies, the most common dosage in the population. The RBMY1 copy dosage was positively correlated with its mRNA and protein level in the testis. Sperm with high motility were found to carry more RBMY1 protein than those with relatively low motility. The RBMY1 protein was confirmed to predominantly localize in the neck and mid-piece region of sperm as well as the principal piece of the sperm tail. Our population study completes a chain of evidence suggesting that RBMY1 influences the susceptibility of males to asthenozoospermia by modulating sperm motility. High sequence similarity between the RBMY1 functional copies and a large number of pseudogenes potentially reduces the accuracy of the copy number detection. The mechanism underlying the CNV in RBMY1 is still unclear, and the effect of the structural variations in the RBMY1 copy cluster on the copy dosage of other protein-coding genes located in the region cannot be excluded, which may potentially bias our observations. Asthenozoospermia is a multi-factor complex disease with a limited number of proven susceptibility genes. This study identified a novel genomic candidate independently contributing to the condition, enriching our understanding of the role of AZF-linked genes in male reproduction. Our finding provides insight into the physiological and pathological characteristics of RBMY1 in terms of sperm motility, and supplies persuasive evidence of the significance of RBMY1 copy number analysis in the clinical counselling of male infertility resulting from asthenozoospermia. 
This work was funded by the National Natural Science Foundation of China (Nos

  11. Factors associated with lack of prenatal care in a large municipality

    PubMed Central

    da Rosa, Cristiane Quadrado; da Silveira, Denise Silva; da Costa, Juvenal Soares Dias

    2014-01-01

    Objective: To analyze the factors associated with a lack of prenatal care in a large municipality in southern Brazil. Methods: In this age-matched case-control study, 716 women were evaluated; of these, 179 did not receive prenatal care (cases) and 537 received prenatal care (controls). These women were identified using the Sistema Nacional de Informação sobre Nascidos Vivos (Live Birth Information System) of Pelotas, RS, Southern Brazil, between 2009 and 2010. Multivariate analysis was performed using conditional logistic regression to estimate odds ratios (OR). Results: In the final model, the variables associated with a lack of prenatal care were level of education, particularly when it was less than four years [OR 4.46; 95% confidence interval (CI) 1.92;10.36], being single (OR 3.61; 95%CI 1.85;7.04), and multiparity (OR 2.89; 95%CI 1.72;4.85). The prevalence of a lack of prenatal care among administrative regions varied between 0.7% and 3.9%. Conclusions: The risk factors identified must be considered when planning actions for the inclusion of women in prenatal care, by both central management and healthcare teams. These indicate the municipal areas with the greatest deficits in prenatal care. The reorganization of actions to identify women with risk factors in the community can be considered a starting point of this process. In addition, the integration of the activities of local programs that target mother and child is essential to the continuous identification of pregnant women without prenatal care. PMID:26039401

  12. Large thermoelectric power factor from crystal symmetry-protected non-bonding orbital in half-Heuslers.

    PubMed

    Zhou, Jiawei; Zhu, Hangtian; Liu, Te-Huan; Song, Qichen; He, Ran; Mao, Jun; Liu, Zihang; Ren, Wuyang; Liao, Bolin; Singh, David J; Ren, Zhifeng; Chen, Gang

    2018-04-30

    Modern society relies on high charge mobility for efficient energy production and fast information technologies. The power factor of a material (the combination of its electrical conductivity and Seebeck coefficient) measures its ability to extract electrical power from temperature differences. Recent advancements in thermoelectric materials have achieved an enhanced Seebeck coefficient by manipulating the electronic band structure. However, this approach generally applies at relatively low conductivities, preventing the realization of exceptionally high power factors. In contrast, half-Heusler semiconductors have been shown to break through that barrier in a way that could not be explained. Here, we show that symmetry-protected orbital interactions can steer electron-acoustic phonon interactions towards high mobility. This high-mobility regime enables large power factors in half-Heuslers, well above the maximum measured values. We anticipate that our understanding will spark new routes to search for better thermoelectric materials, and to discover high electron mobility semiconductors for electronic and photonic applications.
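For reference, the power factor named in this abstract is conventionally defined as PF = S²σ (Seebeck coefficient squared times electrical conductivity). The numbers in this sketch are illustrative order-of-magnitude values, not taken from the paper:

```python
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    """Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2."""
    return seebeck_v_per_k ** 2 * conductivity_s_per_m

S = 250e-6      # Seebeck coefficient: 250 uV/K (illustrative)
sigma = 1.0e5   # electrical conductivity in S/m (illustrative)
print(power_factor(S, sigma))  # ≈ 6.25e-3 W m^-1 K^-2
```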

  13. Individual differences in non-verbal number acuity correlate with maths achievement.

    PubMed

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.

  14. Risk Factors for Infection After Shoulder Arthroscopy in a Large Medicare Population.

    PubMed

    Cancienne, Jourdan M; Brockmeier, Stephen F; Carson, Eric W; Werner, Brian C

    2018-03-01

    Shoulder arthroscopy is well established as a highly effective and safe procedure for the treatment of several shoulder disorders and is associated with an exceedingly low risk of infectious complications. Few data exist regarding risk factors for infection after shoulder arthroscopy, as previous studies were not adequately powered to evaluate for infection. To determine patient-related risk factors for infection after shoulder arthroscopy by using a large insurance database. Case-control study; Level of evidence, 3. The PearlDiver patient records database was used to query the 100% Medicare Standard Analytic Files from 2005 to 2014 for patients undergoing shoulder arthroscopy. Patients undergoing shoulder arthroscopy for a diagnosis of infection or with a history of prior infection were excluded. Infection within 90 days postoperatively was then assessed with International Classification of Diseases, Ninth Revision codes for a diagnosis of postoperative infection or septic shoulder arthritis or a procedure for these indications. A multivariate binomial logistic regression analysis was then utilized to evaluate the use of an intraoperative steroid injection, as well as numerous patient-related risk factors for postoperative infection. Adjusted odds ratios (ORs) and 95% CIs were calculated for each risk factor, with P < .05 considered statistically significant. A total of 530,754 patients met all inclusion and exclusion criteria. There were 1409 infections within 90 days postoperatively (0.26%). Revision shoulder arthroscopy was the most significant risk factor for infection (OR, 3.25; 95% CI, 2.7-4.0; P < .0001). Intraoperative steroid injection was also an independent risk factor for postoperative infection (OR, 1.46; 95% CI, 1.2-1.9; P = .002). There were also numerous independent patient-related risk factors for infection, the most significant of which were chronic anemia (OR, 1.58; 95% CI, 1.4-1.8; P < .0001), malnutrition (OR, 1.42; 95% CI, 1

  15. Low frequency of broadly neutralizing HIV antibodies during chronic infection even in quaternary epitope targeting antibodies containing large numbers of somatic mutations.

    PubMed

    Hicar, Mark D; Chen, Xuemin; Kalams, Spyros A; Sojar, Hakimuddin; Landucci, Gary; Forthal, Donald N; Spearman, Paul; Crowe, James E

    2016-02-01

    Neutralizing antibodies (Abs) are thought to be a critical component of an appropriate HIV vaccine response. It has been proposed that Abs recognizing conformationally dependent quaternary epitopes on the HIV envelope (Env) trimer may be necessary to neutralize diverse HIV strains. A number of recently described broadly neutralizing monoclonal Abs (mAbs) recognize complex and quaternary epitopes. Generally, many such Abs exhibit extensive numbers of somatic mutations and unique structural characteristics. We sought to characterize the native antibody (Ab) response against circulating HIV focusing on such conformational responses, without a prior selection based on neutralization. Using a capture system based on VLPs incorporating cleaved envelope protein, we identified a selection of B cells that produce quaternary epitope targeting Abs (QtAbs). Similar to a number of broadly neutralizing Abs, the Ab genes encoding these QtAbs showed extensive numbers of somatic mutations. However, when expressed as recombinant molecules, these Abs failed to neutralize virus or mediate ADCVI activity. Molecular analysis showed unusually high numbers of mutations in the Ab heavy chain framework 3 region of the variable genes. The analysis suggests that large numbers of somatic mutations occur in Ab genes encoding HIV Abs in chronically infected individuals in a non-directed, stochastic manner. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Trapped atom number in millimeter-scale magneto-optical traps

    NASA Astrophysics Data System (ADS)

    Hoth, Gregory W.; Donley, Elizabeth A.; Kitching, John

    2012-06-01

    For compact cold-atom instruments, it is desirable to trap a large number of atoms in a small volume to maximize the signal-to-noise ratio. In MOTs with beam diameters of a centimeter or larger, the slowing force is roughly constant versus velocity and the trapped atom number scales as d^4. For millimeter-scale MOTs formed from pyramidal reflectors, a d^6 dependence has been observed [Pollack et al., Opt. Express 17, 14109 (2009)]. A d^6 scaling is expected for small MOTs, where the slowing force is proportional to the atom velocity. For a 1 mm diameter MOT, a d^6 scaling results in 10 atoms, and the difference between a d^4 and a d^6 dependence corresponds to a factor of 1000 in atom number and a factor of 30 in the signal-to-noise ratio. We have observed >10^4 atoms in 1 mm diameter MOTs, consistent with a d^4 dependence. We are currently performing measurements for sub-mm MOTs to determine where the d^4 to d^6 crossover occurs in our system. We are also exploring MOTs based on linear polarization, which can potentially produce stronger slowing forces due to stimulated emission [Emile et al., Europhys. Lett. 20, 687 (1992)]. It may be possible to trap more atoms in small volumes with this method, since high intensities can be easily achieved.
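The d^4 versus d^6 scaling discussed above is easy to sketch numerically. The reference atom number (1e8 atoms at a 10 mm beam diameter) is an assumption chosen for illustration, not a value from the abstract:

```python
def atom_number(d_mm, exponent, n_ref=1e8, d_ref_mm=10.0):
    """Trapped atom number assuming the power law N = N_ref * (d / d_ref)^exponent."""
    return n_ref * (d_mm / d_ref_mm) ** exponent

n4 = atom_number(1.0, 4)  # d^4 law: ~1e4 atoms at d = 1 mm
n6 = atom_number(1.0, 6)  # d^6 law: ~1e2 atoms at d = 1 mm
print(n4, n6, n4 / n6)    # the two laws differ by (d_ref/d)^2 = 100 here
```

With these assumed reference values the two power laws differ by a factor of 100 at d = 1 mm; the factor of 1000 quoted in the abstract corresponds to a larger reference diameter.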

  17. ηc Hadroproduction at Large Hadron Collider Challenges NRQCD Factorization

    NASA Astrophysics Data System (ADS)

    Butenschoen, Mathias; He, Zhi-Guo; Kniehl, Bernd A.

    2017-03-01

    We report on our analysis [1] of prompt ηc meson production, measured by the LHCb Collaboration at the Large Hadron Collider, within the framework of non-relativistic QCD (NRQCD) factorization up to the sub-leading order in both the QCD coupling constant αs and the relative velocity v of the bound heavy quarks. We thereby convert various sets of J/ψ and χc,J long-distance matrix elements (LDMEs), determined by different groups in J/ψ and χc,J yield and polarization fits, to ηc and hc production LDMEs making use of the NRQCD heavy quark spin symmetry. The resulting predictions for ηc hadroproduction in all cases greatly overshoot the LHCb data, while the color-singlet model contributions alone would indeed be sufficient. We investigate the consequences for the universality of the LDMEs, and show how the observed tensions remain in follow-up works by other groups.

  18. Peptide arrays on cellulose support: SPOT synthesis, a time and cost efficient method for synthesis of large numbers of peptides in a parallel and addressable fashion.

    PubMed

    Hilpert, Kai; Winkler, Dirk F H; Hancock, Robert E W

    2007-01-01

    Peptide synthesis on cellulose using SPOT technology allows the parallel synthesis of large numbers of addressable peptides in small amounts. In addition, the cost per peptide is less than 1% of that for peptides synthesized conventionally on resin. The SPOT method follows standard fluorenyl-methoxy-carbonyl chemistry on conventional cellulose sheets, and can utilize more than 600 different building blocks. The procedure involves three phases: preparation of the cellulose membrane, stepwise coupling of the amino acids and cleavage of the side-chain protection groups. If necessary, peptides can be cleaved from the membrane for assays performed using soluble peptides. These features make this method an excellent tool for screening large numbers of peptides for many different purposes. Potential applications range from simple binding assays, to more sophisticated enzyme assays and studies with living microbes or cells. The time required to complete the protocol depends on the number and length of the peptides. For example, 400 9-mer peptides can be synthesized within 6 days.

  19. Low Reynolds number airfoil survey, volume 1

    NASA Technical Reports Server (NTRS)

    Carmichael, B. H.

    1981-01-01

    The differences in the flow behavior of two-dimensional airfoils in the critical chord-length Reynolds number range, compared with lower and higher Reynolds numbers, are discussed. The large laminar separation bubble is discussed in view of its important influence on critical Reynolds number airfoil behavior. The shortcomings of applying theoretical boundary layer computations, which are successful at higher Reynolds numbers, to the critical regime are discussed. The large variation in experimental aerodynamic characteristic measurements due to small changes in ambient turbulence, vibration, and sound level is illustrated. The difficulties in obtaining accurate detailed measurements in free flight, and the dramatic performance improvements achieved at critical Reynolds numbers with various types of boundary layer tripping devices, are discussed.

  20. Factors Inhibiting Hispanic Parents' School Involvement

    ERIC Educational Resources Information Center

    Smith, Jay; Stern, Kenneth; Shatrova, Zhanna

    2008-01-01

    Factors inhibiting Hispanic parental involvement in non-metropolitan area schools were studied. With the mandates of No Child Left Behind intensifying the need to improve the academic achievement of all at-risk groups of students in American schools, and with the relatively new phenomenon of large numbers of Hispanics settling in non-metropolitan…

  1. The association between the number of office visits and the control of cardiovascular risk factors in Iranian patients with type 2 diabetes.

    PubMed

    Moradi, Sedighe; Sahebi, Zeinab; Ebrahim Valojerdi, Ameneh; Rohani, Farzaneh; Ebrahimi, Hooman

    2017-01-01

    Patients with type 2 diabetes should receive regular medical care. We aimed at investigating the association between the number of office visits and improvement of their cardiovascular risk factors. Four hundred and ninety patients with type 2 diabetes mellitus who were followed in a tertiary center were enrolled in this longitudinal study. The minimum follow-up period was 3 years. Patient data were extracted from manual or electronic records. Sixty-four percent of cases were females, the mean age was 61 ± 12.45 years, and the mean disease duration was 6.5 ± 7.9 years. The mean number of office visits was 2.69 ± 0.91 per year. Comparing the means of each of the cardiovascular risk factors showed a significant decrease in all cardiovascular risk factors, while there was a significant weight gain over the same period. The associations between changes in these parameters and the number of patients' office visits per year were not statistically significant. In patients with a disease duration of less than 5 years, each additional office visit per year was associated with a decrease in serum total cholesterol of 6.94 mg/dl. The mean number of office visits per year in patients older than 60 years was higher than in younger patients (p = 0.001). The decrease in the mean values of the investigated parameters was statistically significant between the first year of follow-up and the following years. Yet, these changes were not related to the mean number of patients' office visits per year, which may reflect the poor compliance of patients to treatment regardless of the number of their office visits.

  2. Long-term Effects of Large-volume Liposuction on Metabolic Risk Factors for Coronary Heart Disease

    PubMed Central

    Mohammed, B. Selma; Cohen, Samuel; Reeds, Dominic; Young, V. Leroy; Klein, Samuel

    2009-01-01

    Abdominal obesity is associated with metabolic risk factors for coronary heart disease (CHD). Although we previously found that using liposuction surgery to remove abdominal subcutaneous adipose tissue (SAT) did not result in metabolic benefits, it is possible that postoperative inflammation masked the beneficial effects. Therefore, this study provides a long-term evaluation of a cohort of subjects from our original study. Body composition and metabolic risk factors for CHD, including oral glucose tolerance, insulin resistance, plasma lipid profile, and blood pressure were evaluated in seven obese (39 ± 2 kg/m2) women before and at 10, 27, and 84–208 weeks after large-volume liposuction. Liposuction surgery removed 9.4 ± 1.8 kg of body fat (16 ± 2% of total fat mass; 6.1 ± 1.4 kg decrease in body weight), primarily from abdominal SAT; body composition and weight remained the same from 10 through 84–208 weeks. Metabolic endpoints (oral glucose tolerance, homeostasis model assessment of insulin resistance, blood pressure and plasma triglyceride (TG), high-density lipoprotein (HDL)-cholesterol, and low-density lipoprotein (LDL)-cholesterol concentrations) obtained at 10 through 208 weeks were not different from baseline and did not change over time. These data demonstrate that removal of a large amount of abdominal SAT by using liposuction does not improve CHD metabolic risk factors associated with abdominal obesity, despite a long-term reduction in body fat. PMID:18820648

  3. Experimental Surface Pressure Data Obtained on 65 deg Delta Wing Across Reynolds Number and Mach Number Ranges. Vol. 4: Large-radius leading edge

    NASA Technical Reports Server (NTRS)

    Chu, Julio; Luckring, James M.

    1996-01-01

    An experimental wind tunnel test of a 65 deg delta wing model with interchangeable leading edges was conducted in the Langley National Transonic Facility (NTF). The objective was to investigate the effects of Reynolds and Mach numbers on slender-wing leading-edge vortex flows with four values of wing leading-edge bluntness. Experimentally obtained pressure data are presented without analysis in tabulated and graphical formats across a Reynolds number range of 6 x 10^6 to 120 x 10^6 at a Mach number of 0.85 and across a Mach number range of 0.4 to 0.9 at Reynolds numbers of 6 x 10^6 and 60 x 10^6. Normal-force and pitching-moment coefficient plots for these Reynolds number and Mach number ranges are also presented.

  4. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
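A minimal sketch of the pipeline described above (extract factors from a high-dimensional predictor panel by principal components, then forecast the target from the estimated factors) on synthetic data. It is illustrative only and omits the paper's sufficient dimension reduction and projected-PCA refinements; all variable names and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, K = 200, 50, 3                      # time points, predictors, factors
F = rng.normal(size=(T, K))               # latent factors
loadings = rng.normal(size=(p, K))
X = F @ loadings.T + 0.1 * rng.normal(size=(T, p))             # predictor panel
y = F @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.normal(size=T)  # target series

# Extract K factors by principal components (SVD of the centered panel),
# then regress the target on the estimated factors.
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :K] * s[:K]                  # factor estimates, up to rotation
beta, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
y_hat = F_hat @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))                       # close to 1: factors carry the signal
```

The estimated factors are only identified up to a rotation, which is why the regression is run on the factor space rather than on individual factor estimates.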

  5. Why do we differ in number sense? Evidence from a genetically sensitive investigation

    PubMed Central

    Tosto, M.G.; Petrill, S.A.; Halberda, J.; Trzaskowski, M.; Tikhomirova, T.N.; Bogdanova, O.Y.; Ly, R.; Wilmer, J.B.; Naiman, D.Q.; Germine, L.; Plomin, R.; Kovas, Y.

    2014-01-01

    Basic intellectual abilities of quantity and numerosity estimation have been detected across animal species. Such abilities are referred to as ‘number sense’. In humans, individual differences in number sense are detectable early in life, persist in later development, and relate to general intelligence. The origins of these individual differences are unknown. To address this question, we conducted the first large-scale genetically sensitive investigation of number sense, assessing numerosity discrimination abilities in 837 monozygotic and 1422 dizygotic pairs of 16-year-old twins. Univariate genetic analysis of the twin data revealed that number sense is modestly heritable (32%), with individual differences being largely explained by non-shared environmental influences (68%) and no contribution from shared environmental factors. Sex-limitation model fitting revealed no differences between males and females in the etiology of individual differences in number sense abilities. We also carried out Genome-wide Complex Trait Analysis (GCTA), which estimates the population variance explained by additive effects of DNA differences among unrelated individuals. For 1118 unrelated individuals in our sample with genotyping information on 1.7 million DNA markers, GCTA estimated zero heritability for number sense, unlike other cognitive abilities in the same twin study, where the GCTA heritability estimates were about 25%. The low heritability of number sense observed in this study is consistent with the directional selection explanation, whereby additive genetic variance for evolutionarily important traits is reduced. PMID:24696527
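The variance decomposition reported in this abstract can be illustrated with Falconer's classic twin formulas, a simpler method than the structural model fitting the authors used. The twin correlations below are chosen so the formulas reproduce the reported estimates; they are not values from the paper:

```python
def falconer(r_mz, r_dz):
    """ACE variance decomposition from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)  # A: additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz    # C: shared environment
    e2 = 1 - r_mz           # E: non-shared environment (plus measurement error)
    return h2, c2, e2

h2, c2, e2 = falconer(r_mz=0.32, r_dz=0.16)
print(h2, c2, e2)  # heritability ≈ 0.32, shared ≈ 0, non-shared ≈ 0.68
```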

  6. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    NASA Technical Reports Server (NTRS)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12 x 10^5 to 2.12 x 10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25% to 0.4% for the low Tu tests and 8% to 15% for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7% axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  7. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest, and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in the distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  8. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures in dependence on Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  9. Copy number variations of E2F1: a new genetic risk factor for testicular cancer.

    PubMed

    Rocca, Maria Santa; Di Nisio, Andrea; Marchiori, Arianna; Ghezzi, Marco; Opocher, Giuseppe; Foresta, Carlo; Ferlin, Alberto

    2017-03-01

    Testicular germ cell tumor (TGCT) is one of the most heritable forms of cancer. In recent years, much evidence has suggested that constitutional genetic factors, mainly single nucleotide polymorphisms, can increase its risk. However, the possible contribution of copy number variations (CNVs) to TGCT susceptibility has not been substantially addressed. Indeed, an increasing number of studies have focused on the effect of CNVs on gene expression and on the role of these structural genetic variations as risk factors for different forms of cancer. E2F1 is a transcription factor that plays an important role in regulating cell growth, differentiation, apoptosis and response to DNA damage. Therefore, deficiency or overexpression of this protein might significantly influence fundamental biological processes involved in cancer development and progression, including TGCT. We analyzed E2F1 CNVs in 261 cases with TGCT and 165 controls. We found no CNVs in controls, but 17/261 (6.5%) cases showed duplications in E2F1. Blot analysis demonstrated higher E2F1 expression in testicular samples of TGCT cases with three copies of the gene. Furthermore, we observed higher phosphorylation of Akt and mTOR in samples with E2F1 duplication. Interestingly, normal, non-tumoral testicular tissue in patients with E2F1 duplication showed lower expression of E2F1 and lower AKT/mTOR phosphorylation with respect to adjacent tumor tissue. Furthermore, increased expression of E2F1 obtained in vitro in the NTERA-2 testicular cell line induced increased AKT/mTOR phosphorylation. This study suggests for the first time an involvement of E2F1 CNVs in TGCT susceptibility and supports previous preliminary data on the importance of the AKT/mTOR signaling pathway in this cancer. © 2017 Society for Endocrinology.

  10. Key factors influencing allied health research capacity in a large Australian metropolitan health district

    PubMed Central

    Alison, Jennifer A; Zafiropoulos, Bill; Heard, Robert

    2017-01-01

    Objective The aim of this study was to identify key factors affecting research capacity and engagement of allied health professionals working in a large metropolitan health service. Identifying such factors will assist in determining strategies for building research capacity in allied health. Materials and methods A total of 276 allied health professionals working within the Sydney Local Health District (SLHD) completed the Research Capacity in Context Tool (RCCT) that measures research capacity and culture across three domains: organization, team, and individual. An exploratory factor analysis was undertaken to identify common themes within each of these domains. Correlations were performed between demographic variables and the identified factors to determine possible relationships. Results Research capacity and culture success/skill levels were reported to be higher within the organization and team domains compared to the individual domain (median [interquartile range, IQR] 6 [5–8], 6 [5–8], 5 [3–7], respectively; Friedman χ2(2)=42.04, p<0.001). Exploratory factor analyses were performed to identify factors that were perceived by allied health respondents to affect research capacity. Factors identified within the organization domain were infrastructure for research (eg, funds and equipment) and research culture (eg, senior manager’s support for research); within the team domain the factors were research orientation (eg, dissemination of results at research seminars) and research support (eg, providing staff research training). Within the individual domain, only one factor was identified which was the research skill of the individual (eg, literature evaluation, submitting ethics applications and data analysis, and writing for publication). Conclusion The reported skill/success levels in research were lower for the individual domain compared to the organization or team domains. Key factors were identified in each domain that impacted on allied health

  11. Interpretation of clinical relevance of X-chromosome copy number variations identified in a large cohort of individuals with cognitive disorders and/or congenital anomalies.

    PubMed

    Willemsen, Marjolein H; de Leeuw, Nicole; de Brouwer, Arjan P M; Pfundt, Rolph; Hehir-Kwa, Jayne Y; Yntema, Helger G; Nillesen, Willy M; de Vries, Bert B A; van Bokhoven, Hans; Kleefstra, Tjitske

    2012-11-01

    Genome-wide array studies are now routinely being used in the evaluation of patients with cognitive disorders (CD) and/or congenital anomalies (CA). Therefore, inevitably, each clinician is confronted with the challenging task of interpreting copy number variations detected by genome-wide array platforms in a diagnostic setting. Clinical interpretation of autosomal copy number variations is already challenging, but assessment of the clinical relevance of copy number variations of the X-chromosome is even more complex. This study provides an overview of the X-chromosome copy number variations that we have identified by genome-wide array analysis in a large cohort of 4407 male and female patients. We have made an interpretation of the clinical relevance of each of these copy number variations based on well-defined criteria and previous reports in literature and databases. The prevalence of X-chromosome copy number variations in this cohort was 57/4407 (∼1.3%), of which 15 (0.3%) were interpreted as (likely) pathogenic. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  12. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  13. Prevalence, incidence and risk factors of carpal tunnel syndrome in a large footwear factory.

    PubMed

    Roquelaure, Y; Mariel, J; Dano, C; Fanello, S; Penneau-Fontbonne, D

    2001-01-01

    The study was conducted to assess the prevalence and incidence of carpal tunnel syndrome (CTS) in a large modern footwear factory and to identify factors predictive of CTS. To this end, 199 workers were examined in 1996, and 162 of them were re-examined in 1997. Ergonomic and psychosocial risk factors of CTS were assessed by workpost analysis and a self-administered questionnaire. The prevalence of CTS at baseline in 1996 and in 1997 was 16.6% (95%CI: 11.4-21.7) and 11.7% (95%CI: 6.7-16.8), respectively. The incidence rate of CTS in 1997 was 11.7% (95%CI: 6.7-7.8). No specific type of job performance was associated with CTS. Obesity (OR = 4.4; 95%CI: 1.1-17.1) and psychological distress at baseline (OR = 4.3; 95%CI: 1.0-18.6) were strongly predictive of CTS. Rapid trigger movements of the fingers were also predictive of CTS (OR = 3.8; 95%CI: 1.0-17.2). A strict control of the work by superiors was negatively associated with CTS (OR = 0.5; 95%CI: 0.2-1.3). The prevalence and incidence of CTS in this workforce were substantially higher than in the general population and in numerous industries. The study highlights the role of psychological distress in workers exposed to a high level of physical exposure and psychological demand.
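
    The reported confidence intervals are consistent with the usual normal-approximation (Wald) interval for a proportion. A minimal sketch, assuming the 16.6% baseline prevalence corresponds to roughly 33 of the 199 workers (the exact case count is not given in the abstract):

```python
from math import sqrt

def wald_ci(cases: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = cases / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# 33/199 ~= 16.6%; the interval closely reproduces the reported 11.4-21.7 range
lo, hi = wald_ci(33, 199)
```

    With these assumed numbers the interval comes out at roughly 11.4%-21.8%, matching the paper up to rounding.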

  14. Large Groups in the Boundary Waters Canoe Area - Their Numbers, Characteristics, and Impact

    Treesearch

    David W. Lime

    1972-01-01

    The impact of "large" parties in the BWCA is discussed in terms of their effect on the resource and on the experience of other visitors. The amount of use by large groups and the visitors most likely to be affected by a reduction in party size limit are described.

  15. Complex architecture of primes and natural numbers.

    PubMed

    García-Pérez, Guillermo; Serrano, M Ángeles; Boguñá, Marián

    2014-08-01

    Natural numbers can be divided into two nonoverlapping infinite sets, primes and composites, with composites factorizing into primes. Despite their apparent simplicity, the elucidation of the architecture of natural numbers with primes as building blocks remains elusive. Here, we propose a new approach to decoding the architecture of natural numbers based on complex networks and stochastic processes theory. We introduce a parameter-free non-Markovian dynamical model that naturally generates random primes and their relation with composite numbers with remarkable accuracy. Our model satisfies the prime number theorem as an emerging property and a refined version of Cramér's conjecture about the statistics of gaps between consecutive primes that seems closer to reality than Cramér's original version. Regarding composites, the model helps us to derive the prime factors counting function, giving the probability of distinct prime factors for any integer. Probabilistic models like ours can help to get deeper insights about primes and the complex architecture of natural numbers.
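
    The "prime factors counting function" mentioned here gives the probability that an integer has k distinct prime factors. The empirical version of that distribution can be tabulated directly by trial division; a brute-force sketch, not the paper's stochastic model:

```python
from collections import Counter

def distinct_prime_factors(n: int) -> int:
    """omega(n): the number of distinct prime factors of n, by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:  # whatever remains after division is itself prime
        count += 1
    return count

# Empirical distribution of omega(n) for 2 <= n < 10**4
dist = Counter(distinct_prime_factors(n) for n in range(2, 10**4))
```

    For example, omega(60) = 3 because 60 = 2^2 * 3 * 5, and prime powers such as 1024 = 2^10 have omega = 1.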

  16. Increasing numbers of nonaneurysmal subarachnoid hemorrhage in the last 15 years: antithrombotic medication as reason and prognostic factor?

    PubMed

    Konczalla, Juergen; Kashefiolasl, Sepide; Brawanski, Nina; Senft, Christian; Seifert, Volker; Platz, Johannes

    2016-06-01

    OBJECT Subarachnoid hemorrhage (SAH) is usually caused by a ruptured intracranial aneurysm, but in some patients no source of hemorrhage can be detected. More recent data showed increasing numbers of cases of spontaneous nonaneurysmal SAH (NASAH). The aim of this study was to analyze factors influencing the increasing numbers of cases of NASAH and the clinical outcome, especially the use of antithrombotic medications such as systemic anticoagulation or antiplatelet agents (aCPs). METHODS Between 1999 and 2013, 214 patients who were admitted to the authors' institution suffered from NASAH, representing 14% of all patients with SAH. Outcome was assessed according to the modified Rankin Scale (mRS) at 6 months. Risk factors were identified based on the outcome. RESULTS The number of patients with NASAH increased significantly in the last 15 years of the study period. There was a statistically significant increase in the rate of nonperimesencephalic (NPM)-SAH occurrence and aCP use, while the proportion of elderly patients remained stable. Favorable outcome (mRS 0-2) was achieved in 85% of cases, but patients treated with aCPs had a significantly higher risk for an unfavorable outcome. Further analysis showed that elderly patients, and especially the subgroup with a Fisher Grade 3 bleeding pattern, had a high risk for an unfavorable outcome, whereas the subgroup of NPM-SAH without a Fisher Grade 3 bleeding pattern had a favorable outcome, similar to perimesencephalic (PM)-SAH. CONCLUSIONS Over the years, a significant increase in the number of patients with NASAH has been observed. Also, the rate of aCP use has increased significantly. Risk factors for an unfavorable outcome were age > 65 years, Fisher Grade 3 bleeding pattern, and aCP use. Both "PM-SAH" and "NPM-SAH without a Fisher Grade 3 bleeding pattern" had excellent outcomes. Patients with NASAH and a Fisher Grade 3 bleeding pattern had a significantly higher risk for an unfavorable outcome and death. Therefore, for further

  17. An Approach to Engaging Students in a Large-Enrollment, Introductory STEM College Course

    ERIC Educational Resources Information Center

    Swap, Robert J.; Walter, Jonathan A.

    2015-01-01

    While it is clear that engagement between students and instructors positively affects learning outcomes, a number of factors make such engagement difficult to achieve in large-enrollment introductory courses. This has led to pessimism among some education professionals regarding the degree of engagement possible in these courses. In this paper we…

  18. Correction factors for ionization chamber measurements with the ‘Valencia’ and ‘large field Valencia’ brachytherapy applicators

    NASA Astrophysics Data System (ADS)

    Gimenez-Alventosa, V.; Gimenez, V.; Ballester, F.; Vijande, J.; Andreo, P.

    2018-06-01

    Treatment of small skin lesions using HDR brachytherapy applicators is a widely used technique. The shielded applicators currently available in clinical practice are based on a tungsten-alloy cup that collimates the source-emitted radiation into a small region, hence protecting nearby tissues. The goal of this manuscript is to evaluate the correction factors required for dose measurements with a plane-parallel ionization chamber typically used in clinical brachytherapy for the ‘Valencia’ and ‘large field Valencia’ shielded applicators. Monte Carlo simulations have been performed using the PENELOPE-2014 system to determine the absorbed dose deposited in a water phantom and in the chamber active volume with a Type A uncertainty of the order of 0.1%. The average energies of the photon spectra arriving at the surface of the water phantom differ by approximately 10%, being 384 keV for the ‘Valencia’ and 343 keV for the ‘large field Valencia’. The ionization chamber correction factors have been obtained for both applicators using three methods, their values depending on the applicator being considered. Using a depth-independent global chamber perturbation correction factor and no shift of the effective point of measurement yields depth-dose differences of up to 1% for the ‘Valencia’ applicator. Calculations using a depth-dependent global perturbation factor, or a shift of the effective point of measurement combined with a constant partial perturbation factor, result in differences of about 0.1% for both applicators. The results emphasize the relevance of carrying out detailed Monte Carlo studies for each shielded brachytherapy applicator and ionization chamber.

  19. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  20. Factors affecting the number and type of student research products for chemistry and physics students at primarily undergraduate institutions: A case study.

    PubMed

    Mellis, Birgit; Soto, Patricia; Bruce, Chrystal D; Lacueva, Graciela; Wilson, Anne M; Jayasekare, Rasitha

    2018-01-01

    For undergraduate students, involvement in authentic research represents scholarship that is consistent with disciplinary quality standards and provides an integrative learning experience. In conjunction with performing research, the communication of the results via presentations or publications is a measure of the level of scientific engagement. The empirical study presented here uses generalized linear mixed models with hierarchical bootstrapping to examine the factors that impact the means of dissemination of undergraduate research results. Focusing on the research experiences in physics and chemistry of undergraduates at four Primarily Undergraduate Institutions (PUIs) from 2004-2013, statistical analysis indicates that the gender of the student does not impact the number and type of research products. However, in chemistry, the rank of the faculty advisor and the venue of the presentation do impact the number of research products per undergraduate student, whereas in physics, gender match between student and advisor has an effect on the number of undergraduate research products. This study provides a baseline for future studies of discipline-based bibliometrics and factors that affect the number of research products of undergraduate students.

  1. Factors affecting the number and type of student research products for chemistry and physics students at primarily undergraduate institutions: A case study

    PubMed Central

    Soto, Patricia; Bruce, Chrystal D.; Lacueva, Graciela; Wilson, Anne M.; Jayasekare, Rasitha

    2018-01-01

    For undergraduate students, involvement in authentic research represents scholarship that is consistent with disciplinary quality standards and provides an integrative learning experience. In conjunction with performing research, the communication of the results via presentations or publications is a measure of the level of scientific engagement. The empirical study presented here uses generalized linear mixed models with hierarchical bootstrapping to examine the factors that impact the means of dissemination of undergraduate research results. Focusing on the research experiences in physics and chemistry of undergraduates at four Primarily Undergraduate Institutions (PUIs) from 2004–2013, statistical analysis indicates that the gender of the student does not impact the number and type of research products. However, in chemistry, the rank of the faculty advisor and the venue of the presentation do impact the number of research products per undergraduate student, whereas in physics, gender match between student and advisor has an effect on the number of undergraduate research products. This study provides a baseline for future studies of discipline-based bibliometrics and factors that affect the number of research products of undergraduate students. PMID:29698502

  2. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    NASA Astrophysics Data System (ADS)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces, and other factors that a set of dimensionless criteria can account for. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the suitability of these codes for simulating the free-surface flows that may follow a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
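
    The dimensionless criteria referred to above have standard definitions. A sketch using common conventions (the Froude number has several variants; one is shown here) and illustrative water properties:

```python
from math import sqrt

def impact_numbers(diameter, velocity, density, viscosity, surface_tension, g=9.81):
    """Dimensionless groups governing droplet impact on a deep pool (SI units)."""
    Re = density * velocity * diameter / viscosity             # inertia vs. viscosity
    We = density * velocity ** 2 * diameter / surface_tension  # inertia vs. capillarity
    Fr = velocity / sqrt(g * diameter)                         # inertia vs. gravity
    return Re, We, Fr

# A 2 mm water droplet impacting at 3 m/s (illustrative values)
Re, We, Fr = impact_numbers(2e-3, 3.0, 998.0, 1.0e-3, 0.0728)
```

    A large Froude number, as in the title, means gravity is negligible relative to inertia, which is why the study keeps surface tension but drops gravity.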

  3. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators for portfolios with a large number of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
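
    The factor-based risk estimator being bounded here plugs an estimated factor structure into the portfolio variance w'Σw, with Σ = BFB' + diag(D) (loadings B, factor covariance F, idiosyncratic variances D). A minimal sketch with made-up numbers, not the H-CLUB construction itself:

```python
def factor_risk(w, B, F, D):
    """Portfolio variance under a factor model: Sigma = B F B^T + diag(D)."""
    n, k = len(B), len(B[0])
    var = 0.0
    for i in range(n):
        for j in range(n):
            # systematic part: (B F B^T)_ij
            sys = sum(B[i][a] * F[a][b] * B[j][b] for a in range(k) for b in range(k))
            idio = D[i] if i == j else 0.0  # idiosyncratic variance on the diagonal
            var += w[i] * (sys + idio) * w[j]
    return var

# Three assets, one common factor with variance 0.04 (illustrative numbers)
w = [0.5, 0.3, 0.2]
B = [[1.0], [0.8], [1.2]]
F = [[0.04]]
D = [0.01, 0.02, 0.015]
risk = factor_risk(w, B, F, D)  # portfolio variance
```

    With one factor this reduces to (w'B)^2 times the factor variance plus the weighted idiosyncratic terms, which is why the factor structure makes the estimate tractable when n is large.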

  4. Factors controlling large-wood transport in a mountain river

    NASA Astrophysics Data System (ADS)

    Ruiz-Villanueva, Virginia; Wyżga, Bartłomiej; Zawiejska, Joanna; Hajdukiewicz, Maciej; Stoffel, Markus

    2016-11-01

    As with bedload transport, wood transport in rivers is governed by several factors such as flow regime, geomorphic configuration of the channel and floodplain, or wood size and shape. Because large-wood tends to be transported during floods, safety and logistical constraints make field measurements difficult. As a result, direct observation and measurements of the conditions of wood transport are scarce. This lack of direct observations and the complexity of the processes involved in wood transport may result in an incomplete understanding of wood transport processes. Numerical modelling provides an alternative approach to addressing some of the unknowns in the dynamics of large-wood in rivers. The aim of this study is to improve the understanding of controls governing wood transport in mountain rivers, combining numerical modelling and direct field observations. By defining different scenarios, we illustrate relationships between the rate of wood transport and discharge, wood size, and river morphology. We test these relationships for a wide, multithread reach and a narrower, partially channelized single-thread reach of the Czarny Dunajec River in the Polish Carpathians. Results indicate that a wide range of quantitative information about wood transport can be obtained by combining numerical modelling with field observations and by documenting contrasting patterns of wood transport in single- and multithread river reaches. On the one hand, log diameter seems to have a greater importance for wood transport in the multithread channel because of shallower flow, lower flow velocity, and lower stream power. Hydrodynamic conditions in the single-thread channel allow transport of large-wood pieces, whereas in the multithread reach, logs with diameters similar to water depth are not being moved. On the other hand, log length also exerts strong control on wood transport, more so in the single-thread than in the multithread reach. In any case, wood transport strongly

  5. Determining the Number of Factors to Retain in EFA: Using the SPSS R-Menu v2.0 to Make More Judicious Estimations

    ERIC Educational Resources Information Center

    Courtney, Matthew Gordon Ray

    2013-01-01

    Exploratory factor analysis (EFA) is a common technique utilized in the development of assessment instruments. The key question when performing this procedure is how to best estimate the number of factors to retain. This is especially important as under- or over-extraction may lead to erroneous conclusions. Although recent advancements have been…

  6. Why are small and large numbers enumerated differently? A limited-capacity preattentive stage in vision.

    PubMed

    Trick, L M; Pylyshyn, Z W

    1994-01-01

    "Subitizing," the process of enumeration when there are fewer than 4 items, is rapid (40-100 ms/item), effortless, and accurate. "Counting," the process of enumeration when there are more than 4 items, is slow (250-350 ms/item), effortful, and error-prone. Why is there a difference in the way the small and large numbers of items are enumerated? A theory of enumeration is proposed that emerges from a general theory of vision, yet explains the numeric abilities of preverbal infants, children, and adults. We argue that subitizing exploits a limited-capacity parallel mechanism for item individuation, the FINST mechanism, associated with the multiple target tracking task (Pylyshyn, 1989; Pylyshyn & Storm, 1988). Two kinds of evidence support the claim that subitizing relies on preattentive information, whereas counting requires spatial attention. First, whenever spatial attention is needed to compute a spatial relation (cf. Ullman, 1984) or to perform feature integration (cf. Treisman & Gelade, 1980), subitizing does not occur (Trick & Pylyshyn, 1993a). Second, the position of the attentional focus, as manipulated by cue validity, has a greater effect on counting than subitizing latencies (Trick & Pylyshyn, 1993b).
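
    The two enumeration regimes described above can be summarized as a piecewise-linear latency model. The intercept and slopes below are illustrative values chosen from within the per-item ranges quoted in the abstract, not fitted parameters:

```python
def enumeration_rt(n_items, base=500, subitize_slope=70, count_slope=300):
    """Predicted enumeration latency in ms: a shallow slope for up to 4 items
    (subitizing, 40-100 ms/item) and a steep slope beyond 4 (counting,
    250-350 ms/item). The 500 ms intercept is an assumed baseline."""
    small = min(n_items, 4)        # items handled by the parallel FINST mechanism
    large = max(n_items - 4, 0)    # items requiring serial attentional counting
    return base + subitize_slope * small + count_slope * large

rts = [enumeration_rt(n) for n in range(1, 9)]
```

    The characteristic "elbow" in the latency curve at about 4 items is precisely the signature the authors attribute to the limited-capacity FINST mechanism.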

  7. CNVcaller: highly efficient and widely applicable software for detecting copy number variations in large populations

    PubMed Central

    Wang, Xihong; Zheng, Zhuqing; Cai, Yudong; Chen, Ting; Li, Chao; Fu, Weiwei

    2017-01-01

    Abstract Background The increasing amount of sequencing data available for a wide variety of species can be theoretically used for detecting copy number variations (CNVs) at the population level. However, the growing sample sizes and the divergent complexity of nonhuman genomes challenge the efficiency and robustness of current human-oriented CNV detection methods. Results Here, we present CNVcaller, a read-depth method for discovering CNVs in population sequencing data. The computational speed of CNVcaller was 1–2 orders of magnitude faster than CNVnator and Genome STRiP for complex genomes with thousands of unmapped scaffolds. CNV detection of 232 goats required only 1.4 days on a single compute node. Additionally, the Mendelian consistency of sheep trios indicated that CNVcaller mitigated the influence of high proportions of gaps and misassembled duplications in the nonhuman reference genome assembly. Furthermore, multiple evaluations using real sheep and human data indicated that CNVcaller achieved the best accuracy and sensitivity for detecting duplications. Conclusions The fast generalized detection algorithms included in CNVcaller overcome prior computational barriers for detecting CNVs in large-scale sequencing data with complex genomic structures. Therefore, CNVcaller promotes population genetic analyses of functional CNVs in more species. PMID:29220491
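
    CNVcaller belongs to the read-depth family of methods. The underlying principle can be sketched as follows (this is a generic read-depth sketch, not CNVcaller's actual pipeline, which is considerably more elaborate): bin the genome into windows, normalize each window's depth by the genome-wide median, and flag windows whose ratio departs strongly from 1:

```python
from statistics import median

def call_cnvs(depths, dup=1.5, dele=0.5):
    """Generic read-depth CNV sketch: normalize per-window depth by the
    median and flag windows suggesting a duplication (ratio >= dup) or a
    deletion (ratio <= dele). Thresholds are illustrative."""
    m = median(depths)
    calls = []
    for i, d in enumerate(depths):
        r = d / m
        if r >= dup:
            calls.append((i, "DUP", r))
        elif r <= dele:
            calls.append((i, "DEL", r))
    return calls

# Toy per-window depths: windows 3-4 look duplicated, window 7 deleted
depths = [30, 32, 29, 61, 58, 31, 30, 12, 33, 30]
calls = call_cnvs(depths)
```

    Population-scale tools differ mainly in how they correct the raw depths (GC content, mappability, misassembled duplications) before applying this kind of ratio test, which is where CNVcaller's handling of fragmented nonhuman assemblies comes in.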

  8. Interaction between numbers and size during visual search.

    PubMed

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2017-05-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.

  9. External human factors in incident management team decisionmaking and their effect on large fire suppression expenditures

    Treesearch

    Janie Canton-Tompson; Krista M. Gebert; Brooke Thompson; Greg Jones; David Calkin; Geoff Donovan

    2008-01-01

    Large wildland fires are complex, costly events influenced by a vast array of physical, climatic, and social factors. Changing climate, fuel buildup due to past suppression, and increasing populations in the wildland-urban interface have all been blamed for the extreme fire seasons and rising suppression expenditures of recent years. With each high-cost year comes a...

  10. Thymoma: first large Indian experience.

    PubMed

    Rathod, S; Munshi, A; Paul, S; Ganesh, B; Prabhash, K; Agarwal, J P

    2014-01-01

    Thymoma is the most common tumor of the anterior mediastinum. Surgery is the mainstay of treatment, with adjuvant radiation recommended for invasive thymoma. Because of its rarity, prospective randomized trials may not be feasible even in multicentric settings; hence the best available evidence comes from large series. To date, thymoma has not been studied in the Indian setting. All patients presenting to the thoracic disease management group at our centre during 2006-2011 were screened. Medical records of sixty-two patients with histopathological confirmation of thymoma could be retrieved and are presented in this study. Masaoka staging and the WHO classification were used. Clinical and therapeutic factors and follow-up parameters were recorded, and survival was calculated. Effects of prognostic factors were compared. Sixty-two patients were identified (36M, 26F; age 22-84, median 51.5 years), and the majority (57%) of thymomas were stage I-II. WHO pathological subtype B was the most common, seen in 30 patients (49%). Mean tumor size was smaller in patients with myasthenia (5.3 cm) than in the entire group (7.6 cm). Neoadjuvant therapy was offered to five patients with unresectable stage III or IVa disease, with a 40% resectability rate. Median overall survival was 60 months (interquartile range 3-44 months), with a three-year overall survival (OS) rate of 90%. Resectable tumors had better outcomes (94%) than nonresectable tumors (81%) at three years. Masaoka stage was the only significant (P = 0.03) prognostic factor on multivariate analysis. This is the first thymoma series from India with a large number of patients; it shows that staging is an important prognostic factor and that surgery is the mainstay of therapy. In the Indian context, aggressive multimodality treatment should be offered to advanced-stage patients, as it yields good and comparable survival rates.

  11. Small beetle, large-scale drivers: how regional and landscape factors affect outbreaks of the European spruce bark beetle

    PubMed Central

    Seidl, Rupert; Müller, Jörg; Hothorn, Torsten; Bässler, Claus; Heurich, Marco; Kautz, Markus

    2016-01-01

    Summary 1. Unprecedented bark beetle outbreaks have been observed for a variety of forest ecosystems recently, and damage is expected to further intensify as a consequence of climate change. In Central Europe, the response of ecosystem management to increasing infestation risk has hitherto focused largely on the stand level, while the contingency of outbreak dynamics on large-scale drivers remains poorly understood. 2. To investigate how factors beyond the local scale contribute to the infestation risk from Ips typographus (Col., Scol.), we analysed drivers across seven orders of magnitude in scale (from 10³ to 10¹⁰ m²) over a 23-year period, focusing on the Bavarian Forest National Park. Time-discrete hazard modelling was used to account for local factors and temporal dependencies. Subsequently, beta regression was applied to determine the influence of regional and landscape factors, the latter characterized by means of graph theory. 3. We found that in addition to stand variables, large-scale drivers also strongly influenced bark beetle infestation risk. Outbreak waves were closely related to landscape-scale connectedness of both host and beetle populations as well as to regional bark beetle infestation levels. Furthermore, regional summer drought was identified as an important trigger for infestation pulses. Large-scale synchrony and connectivity are thus key drivers of the recently observed bark beetle outbreak in the area. 4. Synthesis and applications. Our multiscale analysis provides evidence that the risk for biotic disturbances is highly dependent on drivers beyond the control of traditional stand-scale management. This finding highlights the importance of fostering the ability to cope with and recover from disturbance. It furthermore suggests that a stronger consideration of landscape and regional processes is needed to address changing disturbance regimes in ecosystem management. PMID:27041769

  12. The Activity of Differentiation Factors Induces Apoptosis in Polyomavirus Large T-Expressing Myoblasts

    PubMed Central

    Fimia, Gian Maria; Gottifredi, Vanesa; Bellei, Barbara; Ricciardi, Maria Rosaria; Tafuri, Agostino; Amati, Paolo; Maione, Rossella

    1998-01-01

    It is commonly accepted that pathways that regulate proliferation/differentiation processes, if altered in their normal interplay, can lead to the induction of programmed cell death. In a previous work we reported that Polyoma virus Large Tumor antigen (PyLT) interferes with in vitro terminal differentiation of skeletal myoblasts by binding and inactivating the retinoblastoma antioncogene product. This inhibition occurs after the activation of some early steps of the myogenic program. In the present work we report that myoblasts expressing wild-type PyLT, when subjected to differentiation stimuli, undergo cell death and that this cell death can be defined as apoptosis. Apoptosis in PyLT-expressing myoblasts starts after growth factors removal, is promoted by cell confluence, and is temporally correlated with the expression of early markers of myogenic differentiation. The block of the initial events of myogenesis by transforming growth factor β or basic fibroblast growth factor prevents PyLT-induced apoptosis, while the acceleration of this process by the overexpression of the muscle-regulatory factor MyoD further increases cell death in this system. MyoD can induce PyLT-expressing myoblasts to accumulate RB, p21, and muscle-specific genes but is unable to induce G0 arrest. Several markers of different phases of the cell cycle, such as cyclin A, cdk-2, and cdc-2, fail to be down-regulated, indicating the occurrence of cell cycle progression. It has been frequently suggested that apoptosis can result from an unbalanced cell cycle progression in the presence of a contrasting signal, such as growth factor deprivation. Our data involve differentiation pathways, as a further contrasting signal, in the generation of this conflict during myoblast cell apoptosis. PMID:9614186

  13. Number Frequency in L1 Differentially Affects Immediate Serial Recall of Numbers in L2 Between Beginning and Intermediate Learners.

    PubMed

    Sumioka, Norihiko; Williams, Atsuko; Yamada, Jun

    2016-12-01

    A list number recall test in English (L2) was administered to both Japanese (L1) students with beginning-level English proficiency who attended evening high school and Japanese college students with intermediate-level English proficiency. The major findings were that, only for the high school group, the small numbers 1 and 2 in middle positions of lists were recalled better than the large numbers 8 and 9 and there was a significant correlation between number frequency in Japanese and recall performance. Equally intriguing was that in both groups for adjacent transposition errors, smaller numbers tended to appear in the first position and large numbers in the second; also, omission errors were commonly seen for larger numbers. These phenomena are interpreted as reflecting frequency and/or frequency-related effects. Briefly discussed were the bilingual short-term memory system, effects of number value, generality and implications of the findings, and weaknesses of the study.

  14. Clinical trials in "emerging markets": regulatory considerations and other factors.

    PubMed

    Singh, Romi; Wang, Ouhong

    2013-11-01

    Clinical studies are being placed in emerging markets as part of global drug development programs to access a large pool of eligible patients and to benefit from a cost-effective structure. However, over the last few years, the definition of "emerging markets" has been revisited, especially from a regulatory perspective. For the purposes of this article, countries outside the US, the EU and the traditional "western countries" are discussed. Multiple factors are considered for placement of clinical studies, such as adherence to Good Clinical Practice (GCP), medical infrastructure & standard of care, number of eligible patients, etc. This article also discusses other quantitative factors such as a country's GDP, patent applications, healthcare expenditure, healthcare infrastructure, corruption, innovation, etc. These different factors and indexes are correlated with the number of clinical studies ongoing in the "emerging markets". R&D, healthcare expenditure, technology infrastructure, transparency, and level of innovation show a significant correlation with the number of clinical trials being conducted in these countries. This is the first analysis of its kind to evaluate and correlate these various factors with the number of clinical studies in a country. © 2013.

  15. Plasmodium copy number variation scan: gene copy numbers evaluation in haploid genomes.

    PubMed

    Beghain, Johann; Langlois, Anne-Claire; Legrand, Eric; Grange, Laura; Khim, Nimol; Witkowski, Benoit; Duru, Valentine; Ma, Laurence; Bouchier, Christiane; Ménard, Didier; Paul, Richard E; Ariey, Frédéric

    2016-04-12

    In eukaryotic genomes, deletions or amplifications have been estimated to be a thousand times more frequent than single nucleotide variation. In Plasmodium falciparum, relatively few transcription factors have been identified, and the regulation of transcription is seemingly largely influenced by gene amplification events. Thus copy number variation (CNV) is a major mechanism enabling parasite genomes to adapt to new environmental changes. Currently, the detection of CNVs is based on quantitative PCR (qPCR), which is significantly limited by the relatively small number of genes that can be analysed at any one time. Technological advances that facilitate whole-genome sequencing, such as next-generation sequencing (NGS), enable deeper analyses of genomic variation to be performed. Because the characteristics of Plasmodium CNVs need special consideration, for which classical CNV detection programs are not suited, a dedicated algorithm to detect CNVs across the entire exome of P. falciparum was developed. This algorithm is based on a custom read-depth strategy applied to NGS data and is called PlasmoCNVScan. The analysis of CNV identification on three genes known to have different levels of amplification, located in the nuclear, apicoplast or mitochondrial genomes, is presented. The results are correlated with the qPCR experiments usually used for identification of locus-specific amplification/deletion. This tool will facilitate the study of P. falciparum genomic adaptation in response to ecological changes: drug pressure, decreased transmission, reduction of the parasite population size (transition to pre-elimination endemic areas).
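The read-depth idea underlying such tools can be sketched in a few lines. This is a minimal illustration, not the actual PlasmoCNVScan implementation; the window size, thresholds, and simulated coverage are invented assumptions. Coverage is averaged per window, normalized by the genome-wide median (copy number 1 in a haploid genome), and windows whose normalized depth deviates strongly from 1 are flagged:

```python
from statistics import median

def read_depth_cnv(depths, window=5, gain=1.75, loss=0.25):
    """Flag candidate CNVs in a haploid genome from per-base read depths.
    Windows are averaged, normalized by the genome-wide median depth
    (copy number 1), and extreme ratios are reported."""
    wins = [sum(depths[i:i + window]) / window
            for i in range(0, len(depths) - window + 1, window)]
    base = median(wins)                        # baseline = copy number 1
    calls = []
    for idx, w in enumerate(wins):
        ratio = w / base                       # estimated copy number
        if ratio >= gain:
            calls.append((idx, round(ratio), "amplification"))
        elif ratio <= loss:
            calls.append((idx, round(ratio), "deletion"))
    return calls

# Simulated coverage: ~30x baseline, one duplicated segment (~60x) and
# one deleted segment (~0x); all values are invented for illustration
cov = [30] * 50 + [60] * 10 + [30] * 30 + [0] * 10 + [30] * 20
print(read_depth_cnv(cov))
```

Real callers additionally correct for GC bias and mappability before normalizing, which is what makes tools tuned to the AT-rich P. falciparum genome necessary.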

  16. In the Middle: Factors Affecting a Black Male's Decision to Join a Traditionally White Fraternity at a Large Diverse Institution

    ERIC Educational Resources Information Center

    Winkler, Matthew J.

    2014-01-01

    The purpose of this qualitative study is to examine the pre-college factors, attitudes, and experiences of black men who joined traditionally white fraternities (TWFs) at large public predominantly white institutions (PWIs) over approximately the past four decades. These factors, with special emphasis on issues of identity, self- and group-esteem,…

  17. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    NASA Astrophysics Data System (ADS)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud masks).
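The subsampling effect described above can be illustrated with a synthetic, unorganized point field standing in for a cloud population (an assumption for illustration only; the point count and grid sizes are arbitrary, not the LES data): counting points in ever-smaller subdomains of the unit square raises the relative spread of the counts, roughly as the inverse of the subdomain side for a Poisson-like field.

```python
import random

def relative_count_std(points, n_cells):
    """Std/mean of point counts over an n_cells x n_cells partition of
    the unit square (subdomain side length = 1 / n_cells)."""
    counts = [0] * (n_cells * n_cells)
    for x, y in points:
        ix = min(int(x * n_cells), n_cells - 1)
        iy = min(int(y * n_cells), n_cells - 1)
        counts[iy * n_cells + ix] += 1
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var ** 0.5 / mean

# Synthetic, unorganized "cloud field": uniformly random points
random.seed(1)
clouds = [(random.random(), random.random()) for _ in range(20000)]
# Relative count variability grows as the subdomain shrinks (about
# 1/side for a Poisson-like field; organization would enhance it)
for n in (2, 4, 8):
    print("side =", 1 / n, "rel. std =", relative_count_std(clouds, n))
```

An organized field (points clustered on "cold pools") would show excess variability over this baseline, which is the signature the study isolates.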

  18. Dynamic analysis of an SDOF helicopter model featuring skid landing gear and an MR damper by considering the rotor lift factor and a Bingham number

    NASA Astrophysics Data System (ADS)

    Saleh, Muftah; Sedaghati, Ramin; Bhat, Rama

    2018-06-01

    The present study addresses the performance of a skid landing gear (SLG) system of a rotorcraft impacting the ground at a vertical sink rate of up to 4.5 m/s. The impact attitude is assumed to be level, as per chapter 527 of the Airworthiness Manual of Transport Canada Civil Aviation and part 27 of the Federal Aviation Regulations of the US Federal Aviation Administration. A single degree of freedom helicopter model is investigated under different values of the rotor lift factor, L. In this study, three SLG versions are evaluated: (a) a standalone conventional SLG; (b) an SLG equipped with a passive viscous damper; and (c) an SLG incorporating a magnetorheological energy absorber (MREA). The non-dimensional solutions of the helicopter models show that the first two SLG systems suffer adaptability issues with variations in the impact velocity and the rotor lift factor. Therefore, the alternative successful choice is to employ the MREA. Two different optimum Bingham numbers, for the compression and rebound strokes, are defined. A new chart, the optimum Bingham number versus rotor lift factor ('Bi_o-L') chart, is introduced in this study to correlate the optimum Bingham numbers to the variation in the rotor lift factor and to provide more accessibility from the perspective of control design. The chart shows that the optimum Bingham number for the compression stroke decreases linearly as the rotor lift factor increases. This alleviates the impact force on the system and reduces the amount of magnetorheological yield force that would be generated. On the contrary, the optimum Bingham number for the rebound stroke is found to increase linearly with the rotor lift factor. This ensures controllable attenuation of the restoring force of the linear spring element. This idea can be exploited to generate charts for different landing attitudes and sink rates. In this article, the response of the helicopter equipped with the conventional undamped, damped

  19. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    NASA Astrophysics Data System (ADS)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  20. Modeling the number of car theft using Poisson regression

    NASA Astrophysics Data System (ADS)

    Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura

    2016-10-01

    Regression analysis is one of the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. This paper focuses on the number of car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged between 25 and 64, number of employed persons and number of unemployed persons are the most influential factors affecting car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.
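The model class used here can be sketched in a few lines. This is a hedged illustration, not the paper's data or estimation code: a Poisson regression with a log link, fitted by iteratively reweighted least squares on simulated counts with invented coefficients.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares; X is an (n, p) design matrix with intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # predicted mean counts
        z = X @ beta + (y - mu) / mu          # working response
        XtW = X.T * mu                        # X^T W with W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Hypothetical district-level data with one covariate; the true model
# log E[thefts] = 0.5 + 0.8 * x uses invented numbers for illustration
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 500)
y = rng.poisson(np.exp(0.5 + 0.8 * x))
X = np.column_stack([np.ones_like(x), x])
beta = poisson_irls(X, y)
print(beta)  # close to [0.5, 0.8]
```

A fitted coefficient b then means that a one-unit increase in that covariate multiplies the expected theft count by exp(b), which is how the influential factors above are interpreted.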

  1. Shor's quantum factoring algorithm on a photonic chip.

    PubMed

    Politi, Alberto; Matthews, Jonathan C F; O'Brien, Jeremy L

    2009-09-04

    Shor's quantum factoring algorithm finds the prime factors of a large number exponentially faster than any other known method, a task that lies at the heart of modern information security, particularly on the Internet. This algorithm requires a quantum computer, a device that harnesses the massive parallelism afforded by quantum superposition and entanglement of quantum bits (or qubits). We report the demonstration of a compiled version of Shor's algorithm on an integrated waveguide silica-on-silicon chip that guides four single-photon qubits through the computation to factor 15.
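The post-processing at the heart of Shor's algorithm can be demonstrated for the same compiled case, N = 15. In this sketch the period is found by classical brute force, which is exactly the step the quantum computer performs exponentially faster; only the number-theoretic reduction from period to factors is shown.

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r = 1 (mod n): the period-finding step
    that the quantum part of Shor's algorithm accelerates."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical reduction of Shor's algorithm for a chosen base a:
    an even period r gives factors gcd(a**(r/2) -/+ 1, n)."""
    assert gcd(a, n) == 1, "a must be coprime to n"
    r = order(a, n)
    if r % 2:
        return None                       # odd period: pick another base
    lo, hi = gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)
    return None if n in (lo, hi) else (lo, hi)

print(shor_classical(15, 7))  # period of 7 mod 15 is 4 -> (3, 5)
```

For the 15 = 3 × 5 demonstration on the chip, the quantum circuit computes the order of the chosen base modulo 15; the arithmetic above is what turns that order into the prime factors.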

  2. Coordination number constraint models for hydrogenated amorphous Si deposited by catalytic chemical vapour deposition

    NASA Astrophysics Data System (ADS)

    Kawahara, Toshio; Tabuchi, Norikazu; Arai, Takashi; Sato, Yoshikazu; Morimoto, Jun; Matsumura, Hideki

    2005-02-01

    We measured the structure factors of hydrogenated amorphous Si by x-ray diffraction and analysed the obtained structures using a reverse Monte Carlo (RMC) technique. A small shoulder in the measured structure factor S(Q) was observed on the larger-Q side of the first peak. The RMC results with an unconstrained model did not clearly show the small shoulder. When constraints for coordination numbers 2 and 3 were added, the small shoulder was reproduced and the agreement with the experimental data improved. The ratio of the constrained coordination numbers was consistent with the ratio of Si-H and Si-H2 bonds estimated from the Fourier-transformed infrared spectra of the same sample. This shoulder and the oscillation of the corresponding pair distribution function g(r) at large r seem to be related to the low randomness of cat-CVD-deposited a-Si:H.

  3. Factors associated with sustained remission in patients with rheumatoid arthritis.

    PubMed

    Martire, María Victoria; Marino Claverie, Lucila; Duarte, Vanesa; Secco, Anastasia; Mammani, Marta

    2015-01-01

    To identify the factors, present at the time of diagnosis of rheumatoid arthritis, that are associated with sustained remission as measured by DAS28 and the boolean ACR/EULAR 2011 criteria. Medical records of patients with rheumatoid arthritis in sustained remission according to DAS28 were reviewed. They were compared with patients who did not achieve values of DAS28<2.6 in any visit during the first 3 years after diagnosis. We also evaluated whether patients achieved the boolean ACR/EULAR criteria. Variables analyzed: sex, age, smoking, comorbidities, rheumatoid factor, anti-CCP, ESR, CRP, erosions, HAQ, DAS28, extra-articular manifestations, time to initiation of treatment, involvement of large joints, number of tender joints, number of swollen joints, and pharmacological treatment. Forty-five patients who achieved sustained remission were compared with 44 controls. The variables present at diagnosis that were significantly associated with remission by DAS28 were: lower values of DAS28, HAQ, ESR, NTJ and NSJ, negative CRP, absence of erosions, male sex and absence of involvement of large joints. Only 24.71% achieved the boolean criteria. The variables associated with sustained remission by these criteria were: lower values of DAS28, HAQ, ESR, number of tender joints and number of swollen joints, negative CRP and absence of erosions. The factors associated with sustained remission were lower baseline disease activity, a low degree of functional disability and lower joint involvement. We consider it important to recognize these factors in order to optimize treatment. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  4. Proton Form Factor Puzzle and the CEBAF Large Acceptance Spectrometer (CLAS) two-photon exchange experiment

    NASA Astrophysics Data System (ADS)

    Rimal, Dipak

    The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric (GE) and the magnetic (GM) form factors contain information about the spatial distribution of the charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the ratio of the positron-proton elastic scattering cross section to that of the electron-proton [R = sigma(e+p)/sigma(e-p)]. The ratio R was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of R on kinematic variables such as the squared four-momentum transfer (Q2) and the virtual photon polarization parameter (epsilon). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH2) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS). The elastic events were then identified by using elastic scattering kinematics. This work extracted the Q2 dependence of R at high epsilon (epsilon > 0.8) and the epsilon dependence of R at Q2 of approximately 0.85 GeV2. In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate state nucleon, largely

  5. The structure factor of primes

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Martelli, F.; Torquato, S.

    2018-03-01

    Although the prime numbers are deterministic, they can be viewed, by some measures, as pseudo-random numbers. In this article, we numerically study the pair statistics of the primes using statistical-mechanical methods, particularly the structure factor S(k) in an interval M ≤ p ≤ M + L with M large and L/M smaller than unity. We show that the structure factor of the prime-number configurations in such intervals exhibits well-defined Bragg-like peaks along with a small 'diffuse' contribution. This indicates that primes are appreciably more correlated and ordered than previously thought. Our numerical results definitively suggest an explicit formula for the locations and heights of the peaks. This formula predicts infinitely many peaks in any non-zero interval, similar to the behavior of quasicrystals. However, primes differ from quasicrystals in that the ratio between the locations of any two predicted peaks is rational. We also show numerically that the diffuse part decays slowly as M and L increase. This suggests that the diffuse part vanishes in an appropriate infinite-system-size limit.
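The quantity studied here is easy to reproduce numerically (a sketch with small illustrative values of M and L, not those of the paper): for a 1-d point configuration, S(k) = |Σ_j e^{ikp_j}|²/N, and because every prime in such an interval is odd, a Bragg-like peak of height N already appears at k = π.

```python
from cmath import exp
from math import pi

def primes_between(lo, hi):
    """Primes p with lo <= p <= hi, via a plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (hi + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, hi + 1, i)))
    return [p for p in range(lo, hi + 1) if sieve[p]]

def structure_factor(points, k):
    """S(k) = |sum_j exp(i*k*p_j)|**2 / N for a 1-d point configuration."""
    s = sum(exp(1j * k * p) for p in points)
    return abs(s) ** 2 / len(points)

# Small illustrative interval M <= p <= M + L (the paper uses larger M)
M, L = 10_000, 2_000
ps = primes_between(M, M + L)
# Every prime here is odd, so exp(i*pi*p) = -1 for all of them and
# S(pi) equals N, a Bragg-like peak; a generic k gives a much smaller S.
print(structure_factor(ps, pi), structure_factor(ps, 1.234))
```

Peaks at other rationally related wavenumbers arise the same way from the primes avoiding all multiples of 3, 5, 7, and so on.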

  6. An evidential approach to problem solving when a large number of knowledge systems is available

    NASA Technical Reports Server (NTRS)

    Dekorvin, Andre

    1989-01-01

    Some recent problems are no longer formulated in terms of imprecise facts, missing data, or inadequate measuring devices. Instead, questions pertaining to knowledge and information itself arise and can be phrased independently of any particular area of knowledge. The problem considered in the present work is how to model a problem solver that is trying to find the answer to some query. The problem solver has access to a large number of knowledge systems that specialize in diverse features. In this context, a feature means an indicator of what the possibilities for the answer are. The knowledge systems should not be accessed more than once, in order to keep the sources of information truly independent. Moreover, these systems are allowed to run in parallel. Since access might be expensive, it is necessary to construct a management policy for accessing these knowledge systems. To help in the access policy, some control knowledge systems are available. Control knowledge systems have knowledge about the performance parameters and status of the knowledge systems. In order to carry out the double goal of deciding which units to access and answering the given query, diverse pieces of evidence must be fused. The Dempster-Shafer Theory of Evidence is used to pool the knowledge bases.
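The pooling step can be sketched with Dempster's rule of combination, the core operation of the Dempster-Shafer theory mentioned above (a minimal illustration; the mass functions below are invented, not taken from the paper):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    dicts mapping frozenset focal elements to masses summing to 1."""
    raw, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb        # mass sent to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Renormalize by 1 - K, where K is the total conflict
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# Two independent knowledge systems report on the answer set {a, b, c}
# (the mass values are invented for illustration):
m1 = {frozenset("a"): 0.6, frozenset("abc"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("abc"): 0.5}
combined = dempster_combine(m1, m2)
print(combined)
```

The renormalization by 1 - K is why each knowledge system should contribute only once: combining a source with itself would spuriously sharpen the belief.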

  7. CNVcaller: highly efficient and widely applicable software for detecting copy number variations in large populations.

    PubMed

    Wang, Xihong; Zheng, Zhuqing; Cai, Yudong; Chen, Ting; Li, Chao; Fu, Weiwei; Jiang, Yu

    2017-12-01

    The increasing amount of sequencing data available for a wide variety of species can be theoretically used for detecting copy number variations (CNVs) at the population level. However, the growing sample sizes and the divergent complexity of nonhuman genomes challenge the efficiency and robustness of current human-oriented CNV detection methods. Here, we present CNVcaller, a read-depth method for discovering CNVs in population sequencing data. The computational speed of CNVcaller was 1-2 orders of magnitude faster than CNVnator and Genome STRiP for complex genomes with thousands of unmapped scaffolds. CNV detection of 232 goats required only 1.4 days on a single compute node. Additionally, the Mendelian consistency of sheep trios indicated that CNVcaller mitigated the influence of high proportions of gaps and misassembled duplications in the nonhuman reference genome assembly. Furthermore, multiple evaluations using real sheep and human data indicated that CNVcaller achieved the best accuracy and sensitivity for detecting duplications. The fast generalized detection algorithms included in CNVcaller overcome prior computational barriers for detecting CNVs in large-scale sequencing data with complex genomic structures. Therefore, CNVcaller promotes population genetic analyses of functional CNVs in more species. © The Authors 2017. Published by Oxford University Press.

  8. The Number of Neutrinos and the Z Line Shape

    NASA Astrophysics Data System (ADS)

    Blondel, Alain

    2016-10-01

    The Standard Theory can fit any number of fermion families, as long as the numbers of lepton and quark families are the same. At the time of the conception of LEP, the number of such families was unknown, and it was feared that the Z resonance would be washed out by decaying into so many families of neutrinos! It took only a few weeks in the fall of 1989 to determine that the number is three. The next six years (from 1990 to 1995) were largely devoted to the accurate determination of the Z line shape, with a precision that outperformed the most optimistic expectations by a factor of 10. The tale of these measurements is a bona fide mystery novel, the precession of electrons being strangely perturbed by natural phenomena, such as tides, rain, hydroelectric power, fast trains, not to mention vertical electrostatic separators. The number hidden in the loops of this treasure hunt was 179, the first estimate of the mass of the top quark; then, once that was found, where predicted, the next number was close to zero: the logarithm of the Higgs mass divided by that of the Z. Twenty years later, the quality of these measurements remains, but what they tell us is different: it is no longer about unknown parameters of the Standard Theory, it is about what lies beyond it. This is so acutely relevant that CERN has launched the design study of a powerful Z, W, H and top factory.

  9. Large-Eddy Simulation of Conductive Flows at Low Magnetic Reynolds Number

    NASA Technical Reports Server (NTRS)

    Knaepen, B.; Moin, P.

    2003-01-01

    In this paper we study the LES method with dynamic procedure in the context of conductive flows subject to an applied external magnetic field at low magnetic Reynolds number R(sub m). These kinds of flows are encountered in many industrial applications. For example, in the steel industry, applied magnetic fields can be used to damp turbulence in the casting process. In nuclear fusion devices (Tokamaks), liquid-lithium flows are used as coolant blankets and interact with the surrounding magnetic field that drives and confines the fusion plasma. Also, in experimental facilities investigating the dynamo effect, the flow consists of liquid sodium, for which the Prandtl number and, as a consequence, the magnetic Reynolds number are low. Our attention is focused here on the case of homogeneous (initially isotropic) decaying turbulence. The numerical simulations performed mimic the thought experiment described by Moffatt, in which an initially homogeneous isotropic conductive flow is suddenly subjected to an applied magnetic field and freely decays without any forcing. Note that this flow was first studied numerically by Schumann. It is well known that in this case extra damping of turbulence occurs due to the Joule effect and that the flow tends to become progressively independent of the coordinate along the direction of the magnetic field. Our comparison of filtered direct numerical simulation (DNS) predictions and LES predictions shows that the dynamic Smagorinsky model enables one to capture the flow successfully with LES, and that it automatically incorporates the effect of the magnetic field on the turbulence. Our paper is organized as follows. In the next section we summarize the LES approach in the case of MHD turbulence at low R(sub m) and recall the definition of the dynamic Smagorinsky model. In Sec. 3 we describe the parameters of the numerical experiments performed and the code used. Section 4 is devoted to the comparison of filtered DNS results and LES results.

  10. Static and dynamic pitching moment measurements on a family of elliptic cones at Mach number 11 in helium

    NASA Technical Reports Server (NTRS)

    Orlik-Rueckermann, K. J.; Laberge, J. G.

    1970-01-01

    Static and dynamic pitching moment measurements were made on a family of constant volume elliptic cones about two fixed axes of oscillation in the NAE helium hypersonic wind tunnel at a Mach number of 11 and at Reynolds numbers based on model length of up to 14 million. Viscous effects on the stability derivatives were investigated by varying the Reynolds number for certain models by a factor as large as 10. The models investigated comprised a 7.75 deg circular cone, elliptic cones of axis ratios 3 and 6, and an elliptic cone with conical protuberances.

  11. The Calculated Effect of Various Hydrodynamic and Aerodynamic Factors on the Take-Off of a Large Flying Boat

    NASA Technical Reports Server (NTRS)

    Olson, R.E.; Allison, J.M.

    1939-01-01

    Present designs for large flying boats are characterized by high wing loading, high aspect ratio, and low parasite drag. The high wing loading results in the universal use of flaps for reducing the takeoff and landing speeds. These factors have an effect on takeoff performance and influence to a certain extent the design of the hull. An investigation was made of the influence of various factors and design parameters on the takeoff performance of a hypothetical large flying boat by means of takeoff calculations. The parameters varied in the calculations were size of hull (load coefficient), wing setting, trim, deflection of flap, wing loading, aspect ratio, and parasite drag. The takeoff times and distances were calculated to the stalling speeds and the performance above these speeds was studied separately to determine piloting technique for optimum takeoff. The advantage of quick deflection of the flap at high water speeds is shown.

  12. A Confirmatory Approach to Examining the Factor Structure of the Strengths and Difficulties Questionnaire (SDQ): A Large Scale Cohort Study

    ERIC Educational Resources Information Center

    Niclasen, Janni; Skovgaard, Anne Mette; Andersen, Anne-Marie Nybo; Somhovd, Mikael Julius; Obel, Carsten

    2013-01-01

    The aim of this study was to examine the factor structure of the Strengths and Difficulties Questionnaire (SDQ) using a Structural Confirmatory Factor Analytic approach. The Danish translation of the SDQ was distributed to 71,840 parents and teachers of 5-7 and 10-12-year-old boys and girls from four large scale cohorts. Three theoretical models…

  13. [Numbers of lymph nodes in large intestinal resections for colorectal carcinoma].

    PubMed

    Motycka, V; Ferko, A; Tycová, V; Nikolov, Hadzi; Sotona, O; Cecka, F; Dusek, T; Chobola, M; Pospísil, I

    2010-03-01

    Precise evaluation of lymph nodes in the surgical specimen is crucial for staging and the subsequent decision about adjuvant therapy in colorectal cancer. The prognosis of patients can be assessed only when at least 12 lymph nodes in the surgical specimen are examined. The aim was to evaluate the radicality of resections for colorectal carcinoma after the introduction of the laparoscopic approach. We compared all resections for primary colon and rectal cancer (C18-C20) performed in the Department of Surgery of University Hospital Hradec Králové in 2005 and 2008, and we evaluated the numbers of examined lymph nodes in the surgical specimens. Patients with recurrent tumours and patients with a complete pathological response (negative histology) after neoadjuvant therapy were excluded from the study. In 2005, 117 patients were included in the study, 2 of whom were operated on laparoscopically. In 2008, 155 patients (an increase of 32.5%) were included, 53 of whom (34.2%) were operated on laparoscopically. In tumours of the right part of the colon (C180-C184) treated by right hemicolectomy, on average 7.9 (+/- 5.3) lymph nodes were examined in the specimens in 2005 and 15.3 (+/- 7.0) in 2008. In tumours of the left part of the colon (C185-C186) treated by left hemicolectomy, 6.5 (+/- 5.1) lymph nodes were examined in 2005 and 19.6 (+/- 15.6) in 2008. In tumours of the sigmoid colon (C187), 9.1 (+/- 6.9) lymph nodes were examined in 2005 and 15.4 (+/- 7.9) in 2008. In tumours of the rectosigmoid junction (C19), 8.0 (+/- 6.9) lymph nodes were examined in 2005 and 17.8 (+/- 11.2) in 2008. In rectal cancer (C20), 5.2 (+/- 4.5) lymph nodes were examined in 2005 and 13.6 (+/- 12.5) in 2008. There is a significant difference in the number of examined lymph nodes between patients without neoadjuvant treatment and those with neoadjuvant chemoradiotherapy or neoadjuvant radiotherapy. In 2005, on average 3.7 (+/- 3.3) lymph nodes were removed in

  14. Ion-kinetic simulations of D-3He gas-filled inertial confinement fusion target implosions with moderate to large Knudsen number

    DOE PAGES

    Larroche, O.; Rinderknecht, H. G.; Rosenberg, M. J.; ...

    2016-01-06

    Experiments designed to investigate the transition to non-collisional behavior in D-3He gas inertial confinement fusion target implosions display increasingly large discrepancies with respect to simulations by standard hydrodynamics codes as the expected ion mean free paths λc increase with respect to the target radius R (i.e., when the Knudsen number NK = λc/R grows). To take large NK's properly into account, multi-ion-species Vlasov-Fokker-Planck computations of the inner gas in the capsules have been performed for two different values of NK, one moderate and one large. The results, including nuclear yield, reactivity-weighted ion temperatures, nuclear emissivities, and surface brightness, have been compared with the experimental data and with the results of hydrodynamical simulations, some of which include an ad hoc modeling of kinetic effects. The experimental results are quite accurately rendered by the kinetic calculations in the smaller-NK case, much better than by the hydrodynamical calculations. The kinetic effects at play in this case are thus correctly understood. However, in the higher-NK case the agreement is much worse. Furthermore, the remaining discrepancies are shown to arise from kinetic phenomena (e.g., inter-species diffusion) occurring at the gas-pusher interface, which should be investigated in future work.

  15. Diffuse large B-cell lymphoma, not otherwise specified of the palate: A case report

    PubMed Central

    Pereira, Thaís SF.; Castro, Alexandre F.; Mesquita, Ricardo A.

    2013-01-01

    Diffuse large B-cell lymphoma (DLBCL) is the most frequent type of non-Hodgkin's lymphoma found in oral and maxillofacial regions. A large number of cases may be biologically heterogeneous; these are commonly defined as DLBCL, not otherwise specified (NOS) by the World Health Organization (WHO-2008). The present case reports on an ulcer with raised and irregular edges, found on the border between the hard and soft palate, as the first and only manifestation of an extranodal non-Hodgkin lymphoma in an 85-year-old patient. Incisional biopsy was carried out, and the specimen revealed a proliferation of large lymphoid cells suggestive of diffuse large cell lymphoma. An immunohistochemical analysis was performed. EBV-RNA was assessed by in situ hybridization and proved to be negative. Immunohistochemical and EBV analyses are important to avoid delays and inappropriate treatment strategies. Although advanced age is considered an adverse prognostic factor, early diagnosis proved to be a key contributory factor in the cure of non-Hodgkin lymphoma. Key words: Diffuse large B-cell lymphoma, elderly, EBV. PMID:24455096

  16. Factors predictive of adverse events following endoscopic papillary large balloon dilation: results from a multicenter series.

    PubMed

    Park, Soo Jung; Kim, Jin Hong; Hwang, Jae Chul; Kim, Ho Gak; Lee, Don Haeng; Jeong, Seok; Cha, Sang-Woo; Cho, Young Deok; Kim, Hong Ja; Kim, Jong Hyeok; Moon, Jong Ho; Park, Sang-Heum; Itoi, Takao; Isayama, Hiroyuki; Kogure, Hirofumi; Lee, Se Joon; Jung, Kyo Tae; Lee, Hye Sun; Baron, Todd H; Lee, Dong Ki

    2013-04-01

    Lack of established guidelines for endoscopic papillary large balloon dilation (EPLBD) may be a reason for aversion to its use in the removal of large common bile duct (CBD) stones. We sought to identify factors predictive of adverse events (AEs) following EPLBD. This multicenter retrospective study investigated 946 consecutive patients who underwent attempted removal of CBD stones ≥10 mm in size using EPLBD (balloon size 12-20 mm) with or without endoscopic sphincterotomy (EST) at 12 academic medical centers in Korea and Japan. Ninety-five (10.0 %) patients exhibited AEs, including bleeding in 56, pancreatitis in 24, perforation in nine, and cholangitis in six; 90 (94.7 %) of these were classified as mild or moderate in severity. There were four deaths, three as a result of perforation and one due to delayed massive bleeding. Causative factors identified in fatal cases were full-EST and continued balloon inflation despite a persistent waist seen fluoroscopically. Multivariate analyses showed that cirrhosis (OR 8.03, p = 0.003), length of EST (full-EST: OR 6.22, p < 0.001) and stone size (≥16 mm: OR 4.00, p < 0.001) were associated with increased bleeding, and distal CBD stricture (OR 17.08, p < 0.001) was an independent predictor for perforation. On the other hand, larger balloon size was associated with decreased pancreatitis (≥14 mm: OR 0.27, p = 0.015). EPLBD appears to be a safe and effective therapeutic approach for retrieval of large stones in patients without distal CBD strictures and when performed without full-EST.

  17. La-doped SrTiO3 films with large cryogenic thermoelectric power factors

    NASA Astrophysics Data System (ADS)

    Cain, Tyler A.; Kajdos, Adam P.; Stemmer, Susanne

    2013-05-01

    The thermoelectric properties at temperatures between 10 K and 300 K of La-doped SrTiO3 thin films grown by hybrid molecular beam epitaxy (MBE) on undoped SrTiO3 substrates are reported. Below 50 K, the Seebeck coefficients exhibit very large magnitudes due to the influence of phonon drag. Combined with high carrier mobilities, exceeding 50 000 cm2 V-1 s-1 at 2 K for the films with the lowest carrier densities, this leads to thermoelectric power factors as high as 470 μW cm-1 K-2. The results are compared with other promising low-temperature thermoelectric materials and discussed in the context of coupling with phonons in the undoped substrate.

  18. Operative factors associated with short-term outcome in horses with large colon volvulus: 47 cases from 2006 to 2013.

    PubMed

    Gonzalez, L M; Fogle, C A; Baker, W T; Hughes, F E; Law, J M; Motsinger-Reif, A A; Blikslager, A T

    2015-05-01

    There is an important need for objective parameters that accurately predict the outcome of horses with large colon volvulus. To evaluate the predictive value of a series of histomorphometric parameters on short-term outcome, as well as the impact of colonic resection on horses with large colon volvulus. Retrospective cohort study. Adult horses admitted to the Equine and Farm Animal Veterinary Center at North Carolina State University, Peterson and Smith and Chino Valley Equine Hospitals between 2006 and 2013 that underwent an exploratory coeliotomy, diagnosed with large colon volvulus of ≥360 degrees, where a pelvic flexure biopsy was obtained, and that recovered from general anaesthesia, were selected for inclusion in the study. Logistic regression was used to determine associations between signalment, histomorphometric measurements of interstitium-to-crypt ratio, degree of haemorrhage, percentage loss of luminal and glandular epithelium, as well as colonic resection with short-term outcome (discharge from the hospital). Pelvic flexure biopsies from 47 horses with large colon volvulus were evaluated. Factors that were significantly associated with short-term outcome on univariate logistic regression were Thoroughbred breed (P = 0.04), interstitium-to-crypt ratio >1 (P = 0.02) and haemorrhage score ≥3 (P = 0.005). Resection (P = 0.92) was not found to be associated significantly with short-term outcome. No combined factors increased the likelihood of death in forward stepwise logistic regression modelling. A digitally quantified measurement of haemorrhage area strengthened the association of haemorrhage with nonsurvival in cases of large colon volvulus. Histomorphometric measurements of interstitium-to-crypt ratio and degree of haemorrhage predict short-term outcome in cases of large colon volvulus. Resection was not associated with short-term outcome in horses selected for this study. Accurate quantification of mucosal haemorrhage at the time of surgery may

  19. Workplace Digital Health Is Associated with Improved Cardiovascular Risk Factors in a Frequency-Dependent Fashion: A Large Prospective Observational Cohort Study.

    PubMed

    Widmer, R Jay; Allison, Thomas G; Keane, Brendie; Dallas, Anthony; Bailey, Kent R; Lerman, Lilach O; Lerman, Amir

    2016-01-01

    Cardiovascular disease (CVD) is the leading cause of morbidity and mortality in the US. Emerging employer-sponsored work health programs (WHP) and Digital Health Intervention (DHI) provide monitoring and guidance based on participants' health risk assessments, but with uncertain success. DHI--mobile technology including online and smartphone interventions--has previously been found to be beneficial in reducing CVD outcomes and risk factors, however its use and efficacy in a large, multisite, primary prevention cohort has not been described to date. We analyzed usage of DHI and change in intermediate markers of CVD over the course of one year in 30,974 participants of a WHP across 81 organizations in 42 states between 2011 and 2014, stratified by participation log-ins categorized as no (n = 14,173), very low (<12/yr, n = 12,260), monthly (n = 3,360), weekly (n = 651), or semi-weekly (at least twice per week). We assessed changes in weight, waist circumference, body mass index (BMI), blood pressure, lipids, and glucose at one year, as a function of participation level. We utilized a Poisson regression model to analyze variables associated with increased participation. Those with the highest level of participation were slightly, but significantly (p<0.0001), older (48.3±11.2 yrs) than non-participants (47.7±12.2 yr) and more likely to be females (63.7% vs 37.3% p<0.0001). Significant improvements in weight loss were demonstrated with every increasing level of DHI usage with the largest being in the semi-weekly group (-3.39±1.06 lbs; p = 0.0013 for difference from weekly). Regression analyses demonstrated that greater participation in the DHI (measured by log-ins) was significantly associated with older age (p<0.001), female sex (p<0.001), and Hispanic ethnicity (p<0.001). The current study demonstrates the success of DHI in a large, community cohort to modestly reduce CVD risk factors in individuals with high participation rate. Furthermore, participants previously
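
    The participation strata described in the abstract above can be illustrated with a small binning helper. The numeric cut-offs below are assumptions inferred from the abstract ("very low" is <12/yr, "semi-weekly" is at least twice per week), not the authors' exact definitions:

```python
def participation_level(logins_per_year: int) -> str:
    """Bin annual DHI log-ins into the study's participation strata.

    Thresholds are illustrative assumptions: 'monthly' covers roughly
    12-51 log-ins/yr, 'weekly' 52-103/yr, and 'semi-weekly' >= 104/yr
    (i.e., at least twice per week).
    """
    if logins_per_year == 0:
        return "no"
    if logins_per_year < 12:
        return "very low"
    if logins_per_year < 52:
        return "monthly"
    if logins_per_year < 104:
        return "weekly"
    return "semi-weekly"
```

    A stratification like this is what allows the dose-response analysis the study reports, where each additional level of engagement is associated with larger improvements in risk factors.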

  20. Study of 3-D Dynamic Roughness Effects on Flow Over a NACA 0012 Airfoil Using Large Eddy Simulations at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Guda, Venkata Subba Sai Satish

    There have been several advancements in the aerospace industry in areas such as aerodynamics, design, controls, and propulsion, all aimed at one common goal, i.e., increasing efficiency -- greater range and scope of operation with lower fuel consumption. Several methods of flow control have been tried; some were successful, some failed, and many were deemed impractical. The low Reynolds number regime of 10^4 - 10^5 is a very interesting range: flow physics in this range are quite different from those at higher Reynolds numbers. Mid- and high-altitude UAVs, MAVs, sailplanes, jet engine fan blades, inboard helicopter rotor blades, and wind turbine rotors are some of the aerodynamic applications that fall in this range. The current study deals with using dynamic roughness (DR) as a means of flow control over a NACA 0012 airfoil at low Reynolds numbers. Dynamic 3-D surface roughness elements placed near the leading edge of an airfoil aim at increasing efficiency by suppressing the effects of leading edge separation, such as leading edge stall, by delaying or totally eliminating flow separation. A numerical study of the above method has been carried out by means of large eddy simulation (LES), a mathematical model for turbulence in computational fluid dynamics, owing to the highly unsteady nature of the flow. A user-defined function has been developed for the 3-D dynamic roughness element motion. Results from simulations have been compared to those from experimental PIV data. The large eddy simulations captured the leading edge stall relatively well. For the clean cases, i.e., with the DR not actuated, the LES was able to reproduce experimental results in a reasonable fashion. However, the DR simulation results show that, in contrast to experiments, the DR fails to reattach the flow and suppress flow separation. Several novel techniques of grid design and hump creation are introduced through this study.

  1. [Formula: see text]-convergence, complete convergence, and weak laws of large numbers for asymptotically negatively associated random vectors with values in [Formula: see text].

    PubMed

    Ko, Mi-Hwa

    2018-01-01

    In this paper, based on the Rosenthal-type inequality for asymptotically negatively associated random vectors with values in [Formula: see text], we establish results on [Formula: see text]-convergence and complete convergence of the maximums of partial sums. We also obtain weak laws of large numbers for coordinatewise asymptotically negatively associated random vectors with values in [Formula: see text].
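
    The classical weak law of large numbers behind results like the one above can be demonstrated numerically in the simplest possible setting, real-valued i.i.d. draws (the paper itself treats the far more general Banach-space-valued, asymptotically negatively associated case; this sketch only illustrates the basic convergence of sample means):

```python
import random

def sample_mean(n: int, seed: int = 0) -> float:
    """Mean of n i.i.d. Uniform(0, 1) draws; the weak law of large
    numbers says this converges in probability to the true mean 0.5."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# Deviations from the true mean 0.5 shrink as the sample size grows.
dev_small = abs(sample_mean(100) - 0.5)
dev_large = abs(sample_mean(100_000) - 0.5)
```

    With a fixed seed the experiment is reproducible; for Uniform(0, 1) the standard error of the mean at n = 100,000 is about 0.0009, so `dev_large` is essentially always well under 0.01.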

  2. Factors Affecting Mothers' Healthcare-Seeking Behaviour for Childhood Illnesses in a Rural Nigerian Setting

    ERIC Educational Resources Information Center

    Abdulraheem, I. S.; Parakoyi, D. B.

    2009-01-01

    Appropriate healthcare-seeking behaviour could prevent a significant number of child deaths and complications due to ill health. Improving mothers' care-seeking behaviour could also contribute to reducing the large burden of child morbidity and mortality in developing countries. This article aims to determine factors affecting healthcare-seeking…

  3. Relationships between number and space processing in adults with and without dyscalculia.

    PubMed

    Mussolin, Christophe; Martin, Romain; Schiltz, Christine

    2011-09-01

    A large body of evidence indicates clear relationships between number and space processing in healthy and brain-damaged adults, as well as in children. The present paper addressed this issue regarding atypical math development. Adults with a diagnosis of dyscalculia (DYS) during childhood were compared to adults with average or high abilities in mathematics across two bisection tasks. Participants were presented with Arabic number triplets and had to judge either the number magnitude or the spatial location of the middle number relative to the two outer numbers. For the numerical judgment, adults with DYS were slower than both groups of control peers. They were also more strongly affected by the factors related to number magnitude such as the range of the triplets or the distance between the middle number and the real arithmetical mean. By contrast, adults with DYS were as accurate and fast as adults who never experienced math disability when they had to make a spatial judgment. Moreover, number-space congruency affected performance similarly in the three experimental groups. These findings support the hypothesis of a deficit of number magnitude representation in DYS with a relative preservation of some spatial mechanisms in DYS. Results are discussed in terms of direct and indirect number-space interactions. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. The calculated effect of various hydrodynamic and aerodynamic factors on the take-off of a large flying boat

    NASA Technical Reports Server (NTRS)

    Olson, R E; Allison, J M

    1940-01-01

    Report presents the results of an investigation made to determine the influence of various factors on the take-off performance of a hypothetical large flying boat by means of take-off calculations. The factors varied in the calculations were size of hull (load coefficient), wing setting, trim, deflection of flap, wing loading, aspect ratio, and parasite drag. The take-off times and distances were calculated to the stalling speeds and the performance above these speeds was separately studied to determine piloting technique for optimum take-off.

  5. Symbolic Number Skills Predict Growth in Nonsymbolic Number Skills in Kindergarteners

    ERIC Educational Resources Information Center

    Lyons, Ian M.; Bugden, Stephanie; Zheng, Samuel; De Jesus, Stefanie; Ansari, Daniel

    2018-01-01

    There is currently considerable discussion about the relative influences of evolutionary and cultural factors in the development of early numerical skills. In particular, there has been substantial debate and study of the relationship between approximate, nonverbal (approximate magnitude system [AMS]) and exact, symbolic (symbolic number system…

  6. Algorithm to calculate proportional area transformation factors for digital geographic databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.

    1983-01-01

    A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
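
    The proportionate-area idea described above can be sketched in miniature with axis-aligned rectangles standing in for the polygonal zones (a real implementation would need true polygon intersection over chains and segments; all names and geometries here are illustrative):

```python
from typing import Dict, Tuple

Rect = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

def overlap_area(a: Rect, b: Rect) -> float:
    """Area of the intersection of two axis-aligned rectangles."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def transform_factors(src: Dict[str, Rect],
                      dst: Dict[str, Rect]) -> Dict[Tuple[str, str], float]:
    """Fraction of each source zone's area falling in each destination zone.

    These are the proportionate-area factors used to reallocate thematic
    data from one areal database (e.g., watersheds) to another (e.g., counties).
    """
    factors: Dict[Tuple[str, str], float] = {}
    for s_id, s in src.items():
        s_area = (s[2] - s[0]) * (s[3] - s[1])
        for d_id, d in dst.items():
            a = overlap_area(s, d)
            if a > 0.0:
                factors[(s_id, d_id)] = a / s_area
    return factors

# A watershed split evenly between two counties:
watersheds = {"W1": (0.0, 0.0, 2.0, 1.0)}
counties = {"C1": (0.0, 0.0, 1.0, 1.0), "C2": (1.0, 0.0, 2.0, 1.0)}
f = transform_factors(watersheds, counties)
```

    Thematic totals defined on the watersheds can then be reallocated to counties by multiplying each source value by its factor and summing per destination zone.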

  7. Production of large numbers of hybridomas producing monoclonal antibodies against rat IgE using mast cell-deficient w/wv and sl/sld strains of mice.

    PubMed

    Rup, B J

    1989-08-15

    A number of different mouse strains and immunization protocols were used to attempt to make monoclonal antibodies against rat IgE for use in studies of the structure, biological activities and regulation of this class of antibody. Successful production of large numbers of monoclonal antibodies was achieved when mast cell deficient (w/wv and sl/sld) but not conventional (BALB/c, CAF1 or SJL) mice were used. These results suggest that the poor response of conventional strains of mice to rat IgE may be due to the presence of mast cells bearing high affinity receptors for IgE in these mice.

  8. Factors modifying the response of large animals to low-intensity radiation exposure

    NASA Technical Reports Server (NTRS)

    Page, N. P.; Still, E. T.

    1972-01-01

    In assessing the biological response to space radiation, two of the most important modifying factors are dose protraction and dose distribution to the body. Studies are reported in which sheep and swine were used to compare the hematology and lethality response resulting from radiation exposure encountered in a variety of forms, including acute (high dose-rate), chronic (low dose-rate), combinations of acute and chronic, and whether received as a continuous or as fractionated exposure. While sheep and swine are basically similar in response to acute radiation, their sensitivity to chronic irradiation is markedly different. Sheep remain relatively sensitive as the radiation exposure is protracted while swine are more resistant and capable of surviving extremely large doses of chronic irradiation. This response to chronic irradiation correlated well with changes in radiosensitivity and recovery following an acute, sublethal exposure.

  9. In silico identification of conserved microRNAs in large number of diverse plant species

    PubMed Central

    Sunkar, Ramanjulu; Jagadeeswaran, Guru

    2008-01-01

    Background MicroRNAs (miRNAs) are recently discovered small non-coding RNAs that play pivotal roles in gene expression, specifically at the post-transcriptional level in plants and animals. Identification of miRNAs in a large number of diverse plant species is important to understand the evolution of miRNAs and miRNA-targeted gene regulation. Nowadays, publicly available databases play a central role in in silico biology. Because at least ~21 miRNA families are conserved in higher plants, a homology-based search using these databases can help identify orthologs or paralogs in plants. Results We searched all publicly available nucleotide databases of genome survey sequences (GSS), high-throughput genomic sequences (HTGS), expressed sequence tags (ESTs) and nonredundant (NR) nucleotides and identified 682 miRNAs in 155 diverse plant species. We found more than 15 conserved miRNA families in 11 plant species, 10 to 14 families in 10 plant species and 5 to 9 families in 29 plant species. Nineteen conserved miRNA families were identified in important model legumes such as Medicago, Lotus and soybean. Five miRNA families – miR319, miR156/157, miR169, miR165/166 and miR394 – were found in 51, 45, 41, 40 and 40 diverse plant species, respectively. miR403 homologs were found in 16 dicots, whereas miR437 and miR444 homologs, as well as the miR396d/e variant of the miR396 family, were found only in monocots, thus providing large-scale authenticity for the dicot- and monocot-specific miRNAs. Furthermore, we provide computational and/or experimental evidence for the conservation of 6 newly found Arabidopsis miRNA homologs (miR158, miR391, miR824, miR825, miR827 and miR840) and 2 small RNAs (small-85 and small-87) in Brassica spp. Conclusion Using all publicly available nucleotide databases, 682 miRNAs were identified in 155 diverse plant species. By combining the expression analysis with the computational approach, we found that 6 miRNAs and 2 small RNAs that have

  10. Effect of Repeat Copy Number on Variable-Number Tandem Repeat Mutations in Escherichia coli O157:H7

    PubMed Central

    Vogler, Amy J.; Keys, Christine; Nemoto, Yoshimi; Colman, Rebecca E.; Jay, Zack; Keim, Paul

    2006-01-01

    Variable-number tandem repeat (VNTR) loci have shown a remarkable ability to discriminate among isolates of the recently emerged clonal pathogen Escherichia coli O157:H7, making them a very useful molecular epidemiological tool. However, little is known about the rates at which these sequences mutate, the factors that affect mutation rates, or the mechanisms by which mutations occur at these loci. Here, we measure mutation rates for 28 VNTR loci and investigate the effects of repeat copy number and mismatch repair on mutation rate using in vitro-generated populations for 10 E. coli O157:H7 strains. We find single-locus rates as high as 7.0 × 10−4 mutations/generation and a combined 28-locus rate of 6.4 × 10−4 mutations/generation. We observed single- and multirepeat mutations that were consistent with a slipped-strand mispairing mutation model, as well as a smaller number of large repeat copy number mutations that were consistent with recombination-mediated events. Repeat copy number within an array was strongly correlated with mutation rate both at the most mutable locus, O157-10 (r2 = 0.565, P = 0.0196), and across all mutating loci. The combined locus model was significant whether locus O157-10 was included (r2 = 0.833, P < 0.0001) or excluded (r2 = 0.452, P < 0.0001) from the analysis. Deficient mismatch repair did not affect mutation rate at any of the 28 VNTRs with repeat unit sizes of >5 bp, although a poly(G) homomeric tract was destabilized in the mutS strain. Finally, we describe a general model for VNTR mutations that encompasses insertions and deletions, single- and multiple-repeat mutations, and their relative frequencies based upon our empirical mutation rate data. PMID:16740932

  11. Effect of repeat copy number on variable-number tandem repeat mutations in Escherichia coli O157:H7.

    PubMed

    Vogler, Amy J; Keys, Christine; Nemoto, Yoshimi; Colman, Rebecca E; Jay, Zack; Keim, Paul

    2006-06-01

    Variable-number tandem repeat (VNTR) loci have shown a remarkable ability to discriminate among isolates of the recently emerged clonal pathogen Escherichia coli O157:H7, making them a very useful molecular epidemiological tool. However, little is known about the rates at which these sequences mutate, the factors that affect mutation rates, or the mechanisms by which mutations occur at these loci. Here, we measure mutation rates for 28 VNTR loci and investigate the effects of repeat copy number and mismatch repair on mutation rate using in vitro-generated populations for 10 E. coli O157:H7 strains. We find single-locus rates as high as 7.0 x 10(-4) mutations/generation and a combined 28-locus rate of 6.4 x 10(-4) mutations/generation. We observed single- and multirepeat mutations that were consistent with a slipped-strand mispairing mutation model, as well as a smaller number of large repeat copy number mutations that were consistent with recombination-mediated events. Repeat copy number within an array was strongly correlated with mutation rate both at the most mutable locus, O157-10 (r2= 0.565, P = 0.0196), and across all mutating loci. The combined locus model was significant whether locus O157-10 was included (r2= 0.833, P < 0.0001) or excluded (r2= 0.452, P < 0.0001) from the analysis. Deficient mismatch repair did not affect mutation rate at any of the 28 VNTRs with repeat unit sizes of >5 bp, although a poly(G) homomeric tract was destabilized in the mutS strain. Finally, we describe a general model for VNTR mutations that encompasses insertions and deletions, single- and multiple-repeat mutations, and their relative frequencies based upon our empirical mutation rate data.
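
    The two VNTR entries above report the copy-number/mutation-rate association as r2 values from regression models. As a reminder of what those figures measure, the coefficient of determination for a simple least-squares line can be computed directly; the data below are toy values, not the study's measurements:

```python
from typing import Sequence

def r_squared(xs: Sequence[float], ys: Sequence[float]) -> float:
    """Coefficient of determination (r^2) of the least-squares line y = a + b*x.

    For simple linear regression, r^2 = sxy^2 / (sxx * syy), where sxx, syy,
    and sxy are the centered sums of squares and cross-products.
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (sxy * sxy) / (sxx * syy)

# Toy data: mutation rate rising roughly linearly with repeat copy number.
copies = [5.0, 10.0, 15.0, 20.0]
rates = [1.0e-5, 2.1e-5, 2.9e-5, 4.2e-5]
fit = r_squared(copies, rates)  # close to 1 for near-linear data
```

    An r2 near 1 means repeat copy number explains almost all the variation in mutation rate across loci, which is the sense in which the study calls the correlation "strong".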

  12. An investigation into the factors that encourage learner participation in a large group medical classroom.

    PubMed

    Moffett, Jennifer; Berezowski, John; Spencer, Dustine; Lanning, Shari

    2014-01-01

    Effective lectures often incorporate activities that encourage learner participation. A challenge for educators is how to facilitate this in the large group lecture setting. This study investigates the individual student characteristics involved in encouraging (or dissuading) learners to interact, ask questions, and make comments in class. Students enrolled in a Doctor of Veterinary Medicine program at Ross University School of Veterinary Medicine, St Kitts, were invited to complete a questionnaire canvassing their participation in the large group classroom. Data from the questionnaire were analyzed using Excel (Microsoft, Redmond, WA, USA) and the R software environment (http://www.r-project.org/). One hundred and ninety-two students completed the questionnaire (response rate, 85.7%). The results showed statistically significant differences between male and female students when asked to self-report their level of participation (P=0.011) and their confidence to participate (P<0.001) in class. No statistically significant difference was identified between different age groups of students (P=0.594). Student responses reflected that an "aversion to public speaking" acted as the main deterrent to participating during a lecture. Female participants were 3.56 times more likely to report a fear of public speaking than male participants (odds ratio 3.56, 95% confidence interval 1.28-12.33, P=0.01). Students also reported "smaller sizes of class and small group activities" and "other students participating" as factors that made it easier for them to participate during a lecture. In this study, sex likely played a role in learner participation in the large group veterinary classroom. Male students were more likely to participate in class and reported feeling more confident to participate than female students. Female students in this study commonly identified aversion to public speaking as a factor which held them back from participating in the large group lecture setting

  13. [Central nervous system relapse in diffuse large B cell lymphoma: Risk factors].

    PubMed

    Sancho, Juan-Manuel; Ribera, Josep-Maria

    2016-01-15

    Central nervous system (CNS) involvement by lymphoma is a complication associated, almost invariably, with a poor prognosis. Knowledge of the risk factors for CNS relapse is important to determine which patients could benefit from prophylaxis. Thus, patients with very aggressive lymphomas (such as lymphoblastic lymphoma or Burkitt's lymphoma) must systematically receive CNS prophylaxis due to a high CNS relapse rate (25-30%), while in patients with indolent lymphoma (such as follicular lymphoma or marginal lymphoma) prophylaxis is unnecessary. However, the question of CNS prophylaxis in patients with diffuse large B-cell lymphoma (DLBCL), the most common type of lymphoma, remains controversial. The information available is extensive, but mainly based on retrospective and heterogeneous studies. Rituximab-based immunochemotherapy seems to reduce the CNS relapse rate. On the other hand, patients with increased serum lactate dehydrogenase plus more than one site of extranodal involvement seem to have a higher risk of CNS relapse, but a prophylaxis strategy based only on the presence of these 2 factors does not prevent all CNS relapses. Patients with involvement of the testes or breast have a high risk of CNS relapse and prophylaxis is mandatory. Finally, CNS prophylaxis could be considered in patients with DLBCL and renal or epidural space involvement, as well as in those cases with MYC rearrangements, although additional studies are necessary. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.

  14. Drug testing and flow cytometry analysis on a large number of uniform sized tumor spheroids using a microfluidic device

    NASA Astrophysics Data System (ADS)

    Patra, Bishnubrata; Peng, Chien-Chung; Liao, Wei-Hao; Lee, Chau-Hwang; Tung, Yi-Chung

    2016-02-01

    Three-dimensional (3D) tumor spheroid possesses great potential as an in vitro model to improve predictive capacity for pre-clinical drug testing. In this paper, we combine advantages of flow cytometry and microfluidics to perform drug testing and analysis on a large number (5000) of uniform sized tumor spheroids. The spheroids are formed, cultured, and treated with drugs inside a microfluidic device. The spheroids can then be harvested from the device without tedious operation. Due to the ample cell numbers, the spheroids can be dissociated into single cells for flow cytometry analysis. Flow cytometry provides statistical information in single cell resolution that makes it feasible to better investigate drug functions on the cells in more in vivo-like 3D formation. In the experiments, human hepatocellular carcinoma cells (HepG2) are exploited to form tumor spheroids within the microfluidic device, and three anti-cancer drugs: Cisplatin, Resveratrol, and Tirapazamine (TPZ), and their combinations are tested on the tumor spheroids with two different sizes. The experimental results suggest the cell culture format (2D monolayer vs. 3D spheroid) and spheroid size play critical roles in drug responses, and also demonstrate the advantages of bridging the two techniques in pharmaceutical drug screening applications.

  15. Application of HB17, an Arabidopsis class II homeodomain-leucine zipper transcription factor, to regulate chloroplast number and photosynthetic capacity.

    PubMed

    Hymus, Graham J; Cai, Suqin; Kohl, Elizabeth A; Holtan, Hans E; Marion, Colleen M; Tiwari, Shiv; Maszle, Don R; Lundgren, Marjorie R; Hong, Melissa C; Channa, Namitha; Loida, Paul; Thompson, Rebecca; Taylor, J Philip; Rice, Elena; Repetti, Peter P; Ratcliffe, Oliver J; Reuber, T Lynne; Creelman, Robert A

    2013-11-01

    Transcription factors are proposed as suitable targets for the control of traits such as yield or food quality in plants. This study reports the results of a functional genomics research effort that identified ATHB17, a transcription factor from the homeodomain-leucine zipper class II family, as a novel target for the enhancement of photosynthetic capacity. It was shown that ATHB17 is expressed natively in the root quiescent centre (QC) from Arabidopsis embryos and seedlings. Analysis of the functional composition of genes differentially expressed in the QC from a knockout mutant (athb17-1) compared with its wild-type sibling revealed the over-representation of genes involved in auxin stimulus, embryo development, axis polarity specification, and plastid-related processes. While no other phenotypes were observed in athb17-1 plants, overexpression of ATHB17 produced a number of phenotypes in Arabidopsis including enhanced chlorophyll content. Image analysis of isolated mesophyll cells of 35S::ATHB17 lines revealed an increase in the number of chloroplasts per unit cell size, which is probably due to an increase in the number of proplastids per meristematic cell. Leaf physiological measurements provided evidence of improved photosynthetic capacity in 35S::ATHB17 lines on a per unit leaf area basis. Estimates of the capacity for ribulose-1,5-bisphosphate-saturated and -limited photosynthesis were significantly higher in 35S::ATHB17 lines.

  16. Application of HB17, an Arabidopsis class II homeodomain-leucine zipper transcription factor, to regulate chloroplast number and photosynthetic capacity

    PubMed Central

    Kohl, Elizabeth A.; Tiwari, Shiv; Lundgren, Marjorie R.; Channa, Namitha; Creelman, Robert A.

    2013-01-01

    Transcription factors are proposed as suitable targets for the control of traits such as yield or food quality in plants. This study reports the results of a functional genomics research effort that identified ATHB17, a transcription factor from the homeodomain-leucine zipper class II family, as a novel target for the enhancement of photosynthetic capacity. It was shown that ATHB17 is expressed natively in the root quiescent centre (QC) from Arabidopsis embryos and seedlings. Analysis of the functional composition of genes differentially expressed in the QC from a knockout mutant (athb17-1) compared with its wild-type sibling revealed the over-representation of genes involved in auxin stimulus, embryo development, axis polarity specification, and plastid-related processes. While no other phenotypes were observed in athb17-1 plants, overexpression of ATHB17 produced a number of phenotypes in Arabidopsis including enhanced chlorophyll content. Image analysis of isolated mesophyll cells of 35S::ATHB17 lines revealed an increase in the number of chloroplasts per unit cell size, which is probably due to an increase in the number of proplastids per meristematic cell. Leaf physiological measurements provided evidence of improved photosynthetic capacity in 35S::ATHB17 lines on a per unit leaf area basis. Estimates of the capacity for ribulose-1,5-bisphosphate-saturated and -limited photosynthesis were significantly higher in 35S::ATHB17 lines. PMID:24006420

  17. Computational strategies for alternative single-step Bayesian regression models with large numbers of genotyped and non-genotyped animals.

    PubMed

    Fernando, Rohan L; Cheng, Hao; Golden, Bruce L; Garrick, Dorian J

    2016-12-08

Two types of models have been used for single-step genomic prediction and genome-wide association studies that include phenotypes from both genotyped animals and their non-genotyped relatives. The two types are breeding value models (BVM), which fit breeding values explicitly, and marker effects models (MEM), which express the breeding values in terms of the effects of observed or imputed genotypes. MEM can accommodate a wider class of analyses, including variable selection or mixture model analyses. The order of the equations that need to be solved and the inverses required in their construction vary widely, and thus the computational effort required depends upon the size of the pedigree, the number of genotyped animals and the number of loci. We present computational strategies to avoid storing large, dense blocks of the MME that involve imputed genotypes. Furthermore, we present a hybrid model that fits a MEM for animals with observed genotypes and a BVM for those without genotypes. The hybrid model is computationally attractive for pedigree files containing millions of animals with a large proportion of those being genotyped. We demonstrate the practicality of both the original MEM and the hybrid model using real data with 6,179,960 animals in the pedigree, 4,934,101 phenotypes, and 31,453 animals genotyped at 40,214 informative loci. To complete a single-trait analysis on a desktop computer with four graphics cards required about 3 h using the hybrid model to obtain both preconditioned conjugate gradient solutions and 42,000 Markov chain Monte Carlo (MCMC) samples of breeding values, which allowed making inferences from posterior means, variances and covariances. The MCMC sampling required one quarter of the effort when the hybrid model was used compared to the published MEM. We present a hybrid model that fits a MEM for animals with genotypes and a BVM for those without genotypes. Its practicality and considerable reduction in computing effort was
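The preconditioned conjugate gradient (PCG) solutions mentioned above can be sketched generically. This is a minimal Jacobi-preconditioned CG solver on a toy symmetric positive-definite system standing in for the mixed model equations; it is an illustrative sketch, not the authors' single-step implementation, and the matrix and vector are invented example data.

```python
import numpy as np


def pcg(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A with the
    preconditioned conjugate gradient method (Jacobi preconditioner)."""
    M_inv = 1.0 / np.diag(A)            # diagonal (Jacobi) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = M_inv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x


# Toy SPD system standing in for the mixed model equations
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
print(x)                                # ~ [0.0909, 0.6364]
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, which is why it is attractive for the very large, sparse systems described in the abstract.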

  18. Microglial numbers attain adult levels after undergoing a rapid decrease in cell number in the third postnatal week.

    PubMed

    Nikodemova, Maria; Kimyon, Rebecca S; De, Ishani; Small, Alissa L; Collier, Lara S; Watters, Jyoti J

    2015-01-15

During postnatal development, microglia, the CNS-resident innate immune cells, are essential for synaptic pruning, neuronal apoptosis and remodeling. During this period microglia undergo morphological and phenotypic transformations; however, little is known about how microglial number and density are regulated during postnatal CNS development. We found that after an initial increase during the first 14 postnatal days, microglial numbers in mouse brain began declining in the third postnatal week and were reduced by 50% by 6 weeks of age; these "adult" levels were maintained until at least 9 months of age. Microglial CD11b levels increased, whereas CD45 and ER-MP58 declined between P10 and adulthood, consistent with a maturing microglial phenotype. Our data indicate that both increased microglial apoptosis and a decreased proliferative capacity contribute to the developmental reduction in microglial numbers. We found no correlation between developmental reductions in microglial numbers and brain mRNA levels of Cd200, Cx3Cl1, M-Csf or Il-34. We tested the ability of overexpression of M-Csf, a key growth factor promoting microglial proliferation and survival, to prevent microglial loss in the third postnatal week. Mice overexpressing M-Csf in astrocytes had higher numbers of microglia at all ages tested. However, the developmental decline in microglial numbers still occurred, suggesting that chronically elevated M-CSF is unable to overcome the developmental decrease in microglial numbers. Whereas the identity of the factor(s) regulating microglial number and density during development remains to be determined, it is likely that microglia respond to a "maturation" signal, since the reduction in microglial numbers coincides with CNS maturation. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Benzothienobenzothiophene-Based Molecular Conductors: High Conductivity, Large Thermoelectric Power Factor, and One-Dimensional Instability.

    PubMed

    Kiyota, Yasuhiro; Kadoya, Tomofumi; Yamamoto, Kaoru; Iijima, Kodai; Higashino, Toshiki; Kawamoto, Tadashi; Takimiya, Kazuo; Mori, Takehiko

    2016-03-23

On the basis of an excellent transistor material, [1]benzothieno[3,2-b][1]benzothiophene (BTBT), a series of highly conductive organic metals with the composition (BTBT)2XF6 (X = P, As, Sb, and Ta) are prepared and their structural and physical properties are investigated. The room-temperature conductivity amounts to 4100 S cm^-1 in the AsF6 salt, corresponding to a drift mobility of 16 cm^2 V^-1 s^-1. Owing to the high conductivity, this salt shows a thermoelectric power factor of 55-88 μW K^-2 m^-1, which is a large value when this compound is regarded as an organic thermoelectric material. The thermoelectric power and the reflectance spectrum indicate a large bandwidth of 1.4 eV. These salts exhibit an abrupt resistivity jump below 200 K, which turns into an insulating state below 60 K. The paramagnetic spin susceptibility and the Raman and IR spectra suggest 4kF charge-density waves as the origin of the low-temperature insulating state.

  20. Conditional Random Fields for Fast, Large-Scale Genome-Wide Association Studies

    PubMed Central

    Huang, Jim C.; Meek, Christopher; Kadie, Carl; Heckerman, David

    2011-01-01

Understanding the role of genetic variation in human diseases remains an important problem to be solved in genomics. An important component of such variation consists of variations at single sites in DNA, or single nucleotide polymorphisms (SNPs). Typically, the problem of associating particular SNPs to phenotypes has been confounded by hidden factors such as the presence of population structure, family structure or cryptic relatedness in the sample of individuals being analyzed. Such confounding factors lead to a large number of spurious associations and missed associations. Various statistical methods have been proposed to account for such confounding factors, such as linear mixed-effects models (LMMs) or methods that adjust data based on a principal components analysis (PCA), but these methods either suffer from low power or cease to be tractable for larger numbers of individuals in the sample. Here we present a statistical model for conducting genome-wide association studies (GWAS) that accounts for such confounding factors. Our method's runtime scales quadratically in the number of individuals being studied, with only a modest loss in statistical power as compared to LMM-based and PCA-based methods when testing on synthetic data that was generated from a generalized LMM. Applying our method to both real and synthetic human genotype/phenotype data, we demonstrate the ability of our model to correct for confounding factors while requiring significantly less runtime relative to LMMs. We have implemented methods for fitting these models, which are available at http://www.microsoft.com/science. PMID:21765897

  1. Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2016-12-01

The recent increased focus on small-scale seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily wastewater disposal and hydraulic fracturing for oil and gas recovery, and geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse network of stations. However, with large-N arrays we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.

  2. Features of precision slot cutting with a large number of calibers using the radiation of a single-mode fiber laser

    NASA Astrophysics Data System (ADS)

    Vitshas, A. A.; Zelentsov, A. G.; Lopota, V. A.; Menakhin, V. P.; Panchenko, V. P.; Soroka, A. M.

    2014-02-01

The results of the experimental and theoretical investigations aimed at determining the characteristics and features of precision slot cutting with a large number of calibers in sheets of low-carbon steel using the radiation of a single-mode fiber laser with pulse power up to 1 kW are presented. The experimental installation, the conditions of the investigations, and the variable parameters are described. Precision cutting of low-carbon steel up to 10 mm thick with the number of calibers ranging from 30 to 70 at a slot width of ≈60 μm is performed for the first time. Such cutting occurs only in the pulsed-periodic mode using single-mode radiation with a pulse duration of 2-3 ms, a pulse ratio of 2-4, and oxygen, whose influence differs in principle both in various cut regions over the sheet thickness and from cutting with a CO2 laser. The cutting velocity (100-50 mm/min) of sheet steel up to thicknesses of 10 mm with deep channeling, the roughness parameters, the hardness of the cut surface, which only slightly (by ≈20%) exceeds the hardness of untreated steel, the phase structure of the steel, and the scales of their variation inside the metal are measured. The efficiency (≈3%) of precision cutting and the efficiency of transportation of radiation (25%) in large-caliber slot orifices in the "waveguide" mode are determined from the experimental data. The useful specific energy contribution of the laser radiation is w_l = N_l/(h·b·v) ≈ 2 × 10^12 J/m^2 for all studied thicknesses of sheet samples, to within 20%. A qualitative model of laser-oxygen precision cutting with deep channeling, which explains the cyclic and interrupted character of the cutting and the necessity of using oxygen as the cutting gas, is proposed.
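The quoted specific-energy figure can be sanity-checked numerically, assuming the dimensional reading w_l = N_l/(h·b·v) with N_l the laser power, h the sheet thickness, b the slot width and v the cutting velocity; these variable meanings are inferred from the units, not stated in the abstract, and the 1 kW power is taken from the quoted pulse power.

```python
# Back-of-envelope check of w_l = N_l / (h * b * v), using values quoted
# in the abstract (variable meanings inferred; N_l = 1 kW is an assumption).
N_l = 1000.0            # laser power, W (quoted pulse power up to 1 kW)
h = 10e-3               # sheet thickness, m
b = 60e-6               # slot width, m
v = 50e-3 / 60.0        # cutting velocity: 50 mm/min converted to m/s
w_l = N_l / (h * b * v)
print(f"w_l = {w_l:.1e} J/m^2")   # ~2e12 J/m^2, matching the quoted value
```

The check is consistent: W / (m · m · m/s) reduces to J/m², and the quoted parameter ranges reproduce the stated ≈2 × 10^12 J/m² within the ≈20% quoted accuracy.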

  3. SU-F-J-205: Effect of Cone Beam Factor On Cone Beam CT Number Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, W; Hua, C; Farr, J

Purpose: To examine the suitability of a Catphan™ 700 phantom for image quality QA of a cone beam computed tomography (CBCT) system deployed for proton therapy. Methods: Catphan phantoms, particularly Catphan™ 504, are commonly used in image quality QA for CBCT. As a newer product, Catphan™ 700 offers more tissue-equivalent inserts, which may be useful for generating the electron density – CT number curve for CBCT-based treatment planning. The sensitometry-and-geometry module used in Catphan™ 700 is located at the end of the phantom and after the resolution line pair module. In Catphan™ 504 the line pair module is located at the end of the phantom and after the sensitometry-and-geometry module. To investigate the effect of difference in location on CT number accuracy due to the cone beam factor, we scanned the Catphan™ 700 with the central plane of CBCT at the center of the phantom, line pair and sensitometry-and-geometry modules of the phantom, respectively. The head and thorax scan protocol modes were used. For each position, scans were repeated 4 times. Results: For the head scan mode, the standard deviation (SD) of the CT numbers of each insert under 4 repeated scans was up to 20 HU, 11 HU, and 11 HU, respectively, for the central plane of CBCT located at the center of the phantom, line pair, and sensitometry-and-geometry modules of the phantom. The mean of the SD was 9.9 HU, 5.7 HU, and 5.9 HU, respectively. For the thorax mode, the mean of the SD was 4.5 HU, 4.4 HU, and 4.4 HU, respectively. The assessment of image quality based on resolution and spatial linearity was not affected by imaging location changes. Conclusion: When the Catphan™ 700 was aligned to the center of the imaging region, the CT number accuracy test may not meet expectations. We recommend reconfiguration of the modules.

  4. Factor structure and reliability of the depression, anxiety and stress scales in a large Portuguese community sample.

    PubMed

    Vasconcelos-Raposo, José; Fernandes, Helder Miguel; Teixeira, Carla M

    2013-01-01

    The purpose of the present study was to assess the factor structure and reliability of the Depression, Anxiety and Stress Scales (DASS-21) in a large Portuguese community sample. Participants were 1020 adults (585 women and 435 men), with a mean age of 36.74 (SD = 11.90) years. All scales revealed good reliability, with Cronbach's alpha values between .80 (anxiety) and .84 (depression). The internal consistency of the total score was .92. Confirmatory factor analysis revealed that the best-fitting model (*CFI = .940, *RMSEA = .038) consisted of a latent component of general psychological distress (or negative affectivity) plus orthogonal depression, anxiety and stress factors. The Portuguese version of the DASS-21 showed good psychometric properties (factorial validity and reliability) and thus can be used as a reliable and valid instrument for measuring depression, anxiety and stress symptoms.
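Cronbach's alpha, the reliability coefficient reported above, is computed as alpha = k/(k-1) · (1 - sum of item variances / variance of the total score). A minimal sketch on made-up item scores (not the study's DASS-21 data):

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


# Hypothetical responses (5 respondents x 4 items), for illustration only
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 3))            # -> 0.941
```

Values of about .80-.92, as reported for the DASS-21 scales and total score, indicate good internal consistency.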

  5. Exploratory Factor Analysis as a Construct Validation Tool: (Mis)applications in Applied Linguistics Research

    ERIC Educational Resources Information Center

    Karami, Hossein

    2015-01-01

    Factor analysis has been frequently exploited in applied research to provide evidence about the underlying factors in various measurement instruments. A close inspection of a large number of studies published in leading applied linguistic journals shows that there is a misconception among applied linguists as to the relative merits of exploratory…

  6. Unit Reynolds number, Mach number and pressure gradient effects on laminar-turbulent transition in two-dimensional boundary layers

    NASA Astrophysics Data System (ADS)

    Risius, Steffen; Costantini, Marco; Koch, Stefan; Hein, Stefan; Klein, Christian

    2018-05-01

The influence of unit Reynolds number (Re_1 = 17.5×10^6 to 80×10^6 m^-1), Mach number (M = 0.35-0.77) and incompressible shape factor (H_12 = 2.50-2.66) on laminar-turbulent boundary-layer transition was systematically investigated in the Cryogenic Ludwieg-Tube Göttingen (DNW-KRG). For this investigation the existing two-dimensional wind tunnel model, PaLASTra, which offers a quasi-uniform streamwise pressure gradient, was modified to reduce the size of the flow separation region at its trailing edge. The streamwise temperature distribution and the location of laminar-turbulent transition were measured by means of temperature-sensitive paint (TSP) with a higher accuracy than attained in earlier measurements. It was found that for the modified PaLASTra model the transition Reynolds number (Re_tr) exhibits a linear dependence on the pressure gradient, characterized by H_12. Due to this linear relation it was possible to quantify the so-called 'unit Reynolds number effect', which is an increase of Re_tr with Re_1. By a systematic variation of M, Re_1 and H_12 in combination with a spectral analysis of freestream disturbances, a stabilizing effect of compressibility on boundary-layer transition, as predicted by linear stability theory, was detected ('Mach number effect'). Furthermore, two expressions were derived which can be used to calculate the transition Reynolds number as a function of the amplitude of total pressure fluctuations, Re_1 and H_12. To determine critical N-factors, the measured transition locations were correlated with amplification rates calculated by incompressible and compressible linear stability theory. By taking into account the spectral level of total pressure fluctuations at the frequency of the most amplified Tollmien-Schlichting wave at the transition location, the scatter in the determined critical N-factors was reduced. Furthermore, the receptivity coefficients' dependence on the incidence angle of acoustic waves was used to

  7. Cold spray nozzle mach number limitation

    NASA Astrophysics Data System (ADS)

    Jodoin, B.

    2002-12-01

The classic one-dimensional isentropic flow approach is used along with a two-dimensional axisymmetric numerical model to show that the exit Mach number of a cold spray nozzle should be limited due to two factors. To show this, the two-dimensional model is validated with experimental data. Although both models show that the stagnation temperature is an important limiting factor, the one-dimensional approach fails to show how important the shock-particle interactions are in limiting the nozzle Mach number. It is concluded that for an air nozzle spraying solid powder particles, the nozzle Mach number should be set between 1.5 and 3 to limit the negative effects of the high stagnation temperature and of the shock-particle interactions.
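The stagnation-temperature limitation follows from the classic one-dimensional isentropic relation T0/T = 1 + ((gamma - 1)/2)·M². A quick illustration for air (gamma = 1.4) over the recommended Mach 1.5-3 range; the 300 K exit static temperature is an arbitrary example value, not a number from the study:

```python
GAMMA = 1.4  # ratio of specific heats for air


def stagnation_to_static_temperature(mach: float, gamma: float = GAMMA) -> float:
    """One-dimensional isentropic relation T0/T = 1 + (gamma - 1)/2 * M^2."""
    return 1.0 + 0.5 * (gamma - 1.0) * mach ** 2


# Stagnation temperature needed for a 300 K nozzle-exit static temperature
for mach in (1.5, 2.0, 3.0):
    ratio = stagnation_to_static_temperature(mach)
    print(f"M = {mach}: T0/T = {ratio:.2f} -> T0 = {300 * ratio:.0f} K")
```

The required stagnation temperature roughly doubles between M = 1.5 and M = 3, which illustrates why high exit Mach numbers become impractical for a heated carrier gas.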

  8. Implementation factors affecting the large-scale deployment of digital health and well-being technologies: A qualitative study of the initial phases of the 'Living-It-Up' programme.

    PubMed

    Agbakoba, Ruth; McGee-Lennon, Marilyn; Bouamrane, Matt-Mouley; Watson, Nicholas; Mair, Frances S

    2016-12-01

Little is known about the factors which facilitate or impede the large-scale deployment of health and well-being consumer technologies. The Living-It-Up project is a large-scale digital intervention led by NHS 24, aiming to transform health and well-being services delivery throughout Scotland. We conducted a qualitative study of the factors affecting the implementation and deployment of the Living-It-Up services. We collected a range of data during the initial phase of deployment, including semi-structured interviews (N = 6), participant observation sessions (N = 5), and meetings with key stakeholders (N = 3). We used the Normalisation Process Theory as an explanatory framework to interpret the social processes at play during the initial phases of deployment. Initial findings illustrate that it is clear - and perhaps not surprising - that the size and diversity of the Living-It-Up consortium made implementation processes more complex within a 'multi-stakeholder' environment. To overcome these barriers, there is a need to clearly define roles, tasks and responsibilities among the consortium partners. Furthermore, varying levels of expectations and requirements, as well as diverse cultures and ways of working, must be effectively managed. Factors which facilitated implementation included extensive stakeholder engagement, such as co-design activities, which can contribute to an increased 'buy-in' from users in the long term. An important lesson from the Living-It-Up initiative is that attempting to co-design innovative digital services while at the same time recruiting large numbers of users is likely to generate conflicting implementation priorities which hinder - or at least substantially slow down - the effective rollout of services at scale. The deployment of Living-It-Up services is ongoing, but our results to date suggest that - in order to be successful - the roll-out of digital health and well-being technologies at scale requires a delicate and pragmatic trade

  9. Number-space associations without language: Evidence from preverbal human infants and non-human animal species.

    PubMed

    Rugani, Rosa; de Hevia, Maria-Dolores

    2017-04-01

    It is well known that humans describe and think of numbers as being represented in a spatial configuration, known as the 'mental number line'. The orientation of this representation appears to depend on the direction of writing and reading habits present in a given culture (e.g., left-to-right oriented in Western cultures), which makes this factor an ideal candidate to account for the origins of the spatial representation of numbers. However, a growing number of studies have demonstrated that non-verbal subjects (preverbal infants and non-human animals) spontaneously associate numbers and space. In this review, we discuss evidence showing that pre-verbal infants and non-human animals associate small numerical magnitudes with short spatial extents and left-sided space, and large numerical magnitudes with long spatial extents and right-sided space. Together this evidence supports the idea that a more biologically oriented view can account for the origins of the 'mental number line'. In this paper, we discuss this alternative view and elaborate on how culture can shape a core, fundamental, number-space association.

  10. Small scale exact coherent structures at large Reynolds numbers in plane Couette flow

    NASA Astrophysics Data System (ADS)

    Eckhardt, Bruno; Zammert, Stefan

    2018-02-01

The transition to turbulence in plane Couette flow and several other shear flows is connected with saddle-node bifurcations in which fully three-dimensional, nonlinear solutions to the Navier-Stokes equation, so-called exact coherent states (ECS), appear. As the Reynolds number increases, the states undergo secondary bifurcations and their time evolution becomes increasingly complex. Their spatial complexity, in contrast, remains limited, so that these states cannot contribute to the spatial complexity and cascade to smaller scales expected at higher Reynolds numbers. We here present families of scaling ECS that exist on ever smaller scales as the Reynolds number is increased. We focus in particular on two such families for plane Couette flow, one centered near the midplane and the other close to a wall. We discuss their scaling and localization properties and the bifurcation diagrams. All solutions are localized in the wall-normal direction. In the spanwise and downstream directions, they are either periodic or localized as well. The family of scaling ECS localized near a wall is reminiscent of attached eddies, and indicates how self-similar ECS can contribute to the formation of boundary layer profiles.

  11. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    NASA Astrophysics Data System (ADS)

    Ooi, Seng-Keat

    2005-11-01

Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D Large Eddy Simulations (LES) at Grashof numbers up to 8×10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  12. The radial transmission line as a broad-band shielded exposure system for microwave irradiation of large numbers of culture flasks.

    PubMed

    Moros, E G; Straube, W L; Pickard, W F

    1999-01-01

The problem of simultaneously exposing large numbers of culture flasks at nominally equivalent incident power densities and with good thermal control is considered, and the radial transmission line (RTL) is proposed as a solution. The electromagnetic design of this structure is discussed, and an extensively bench-tested realization is described. Referred to 1 W of net forward power, the following specific absorption rate (SAR) data were obtained: at 835.62 MHz, 16.0+/-2.5 mW/kg (mean+/-SD) with range (11-22); at 2450 MHz, 245+/-50 mW/kg with range (130-323). Radio-frequency interference from an RTL driven at roughly 100 W is so low as to be compatible with a cellular base station only 500 m distant. To avoid potential confounding by temperature differences among as many as 144 T-75 flasks distributed over 9 RTLs (six irradiated and three sham), temperature within all flasks was controlled to 37.0+/-0.3 degrees C. Experience with over two years of trouble-free operation suggests that the RTL offers a robust, logistically friendly, and environmentally satisfactory solution to the problem of large-scale in vitro experiments in bioelectromagnetics.

  13. Ocular Angiogenesis: Vascular Endothelial Growth Factor and Other Factors.

    PubMed

    Rubio, Roman G; Adamis, Anthony P

    2016-01-01

    Systematic study of the mechanisms underlying pathological ocular neovascularization has yielded a wealth of knowledge about pro- and anti-angiogenic factors that modulate diseases such as neovascular age-related macular degeneration. The evidence implicating vascular endothelial growth factor (VEGF) in particular has led to the development of a number of approved anti-VEGF therapies. Additional proangiogenic targets that have emerged as potential mediators of ocular neovascularization include hypoxia-inducible factor-1, angiopoietin-2, platelet-derived growth factor-B and components of the alternative complement pathway. As for VEGF, knowledge of these factors has led to a product pipeline of many more novel agents that are in various stages of clinical development in the setting of ocular neovascularization. These agents are represented by a range of drug classes and, in addition to novel small- and large-molecule VEGF inhibitors, include gene therapies, small interfering RNA agents and tyrosine kinase inhibitors. In addition, combination therapy is beginning to emerge as a strategy to improve the efficacy of individual therapies. Thus, a variety of agents, whether administered alone or as adjunctive therapy with agents targeting VEGF, offer the promise of expanding the range of treatments for ocular neovascular diseases. © 2016 S. Karger AG, Basel.

  14. CNV-seq, a new method to detect copy number variation using high-throughput sequencing.

    PubMed

    Xie, Chao; Tammi, Martti T

    2009-03-06

DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for the detection of CNV. Our results show that the number of reads, not the length of the reads, is the key factor determining the resolution of detection. This favors the next-generation sequencing methods that rapidly produce large amounts of short reads. Simulation of various sequencing methods with coverage between 0.1× and 8× shows overall specificity between 91.7-99.9% and sensitivity between 72.2-96.5%. We also show the results for assessment of CNV between two individual human genomes.
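The core CNV-seq statistic is a ratio of total-count-normalised read counts in a genomic window between the two genomes. The sketch below shows only that ratio on hypothetical counts and totals; the published method adds the statistical model and confidence values on top of it.

```python
def cnv_ratio(count_x, count_y, total_x, total_y):
    """Copy-number ratio for one genomic window, following the CNV-seq idea:
    normalise each window's read count by the genome-wide total for that
    sample, then take the ratio. A sketch of the core statistic only, not
    the full published model with its confidence values."""
    return (count_x / total_x) / (count_y / total_y)


# Hypothetical genome-wide read totals and per-window counts for two samples
TOTAL_X, TOTAL_Y = 1_000_000, 1_200_000
windows = [(500, 600), (510, 612), (1000, 600), (495, 594)]  # 3rd: duplication?
for cx, cy in windows:
    print(round(cnv_ratio(cx, cy, TOTAL_X, TOTAL_Y), 2))     # 1.0 1.0 2.0 1.0
```

A ratio near 2 in a window (third window above) is the kind of signal the model would then test for significance, which is why the number of reads per window, and hence the total read count, sets the resolution.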

  15. Number of Diverticulitis Episodes Before Resection and Factors Associated With Earlier Interventions

    PubMed Central

    Simianu, Vlad V.; Fichera, Alessandro; Bastawrous, Amir L.; Davidson, Giana H.; Florence, Michael G.; Thirlby, Richard C.; Flum, David R.

    2016-01-01

IMPORTANCE Despite professional recommendations to delay elective colon resection for patients with uncomplicated diverticulitis, early surgery (after <3 preceding episodes) appears to be common. Several factors have been suggested to contribute to early surgery, including increasing numbers of younger patients, a lower threshold to operate laparoscopically, and growing recognition of “smoldering” (or nonrecovering) diverticulitis episodes. However, the relevance of these factors in early surgery has not been well tested, and most prior studies have focused on hospitalizations, missing outpatient events and making it difficult to assess guideline adherence in earlier interventions. OBJECTIVE To describe patterns of episodes of diverticulitis before surgery and factors associated with earlier interventions using inpatient, outpatient, and antibiotic prescription claims. DESIGN, SETTING, AND PARTICIPANTS This investigation was a nationwide retrospective cohort study from January 1, 2009, to December 31, 2012. The dates of the analysis were July 2014 to May 2015. Participants were immunocompetent adult patients (age range, 18-64 years) with incident, uncomplicated diverticulitis. EXPOSURE Elective colectomy for diverticulitis. MAIN OUTCOMES AND MEASURES Inpatient, outpatient, and antibiotic prescription claims for diverticulitis captured in the MarketScan (Truven Health Analytics) databases. RESULTS Of 87,461 immunocompetent patients having at least 1 claim for diverticulitis, 6.4% (n = 5604) underwent a resection. The final study cohort comprised 3054 nonimmunocompromised patients who underwent elective resection for uncomplicated diverticulitis, of whom 55.6% (n = 1699) were male. Before elective surgery, they had a mean (SD) of 1.0 (0.9) inpatient claims, 1.5 (1.5) outpatient claims, and 0.5 (1.2) antibiotic prescription claims related to diverticulitis. Resection occurred after fewer than 3 episodes in 94.9% (2897 of 3054) of patients if counting inpatient

  16. Large meniscus extrusion ratio is a poor prognostic factor of conservative treatment for medial meniscus posterior root tear.

    PubMed

    Kwak, Yoon-Ho; Lee, Sahnghoon; Lee, Myung Chul; Han, Hyuk-Soo

    2018-03-01

The purpose of this study was to find a prognostic factor of medial meniscus posterior root tear (MMPRT) for surgical decision making. Eighty-eight patients who were diagnosed with acute or subacute MMPRT without severe degeneration of the meniscus were treated conservatively for 3 months. Fifty-seven patients with MMPRT showed a good response to conservative treatment (group 1), while the remaining 31 patients, who failed conservative treatment (group 2), received arthroscopic meniscus repair. Their demographic characteristics and radiographic features, including hip-knee-ankle angle, joint line convergence angle, Kellgren-Lawrence grade in plain radiographs, meniscus extrusion (ME) ratio (ME-medial femoral condyle ratio, ME-medial tibial plateau ratio, ME-meniscus width ratio), the location of bony edema, and cartilage lesions in MRI were compared. Receiver operating characteristic (ROC) curve analysis was also performed to determine the cut-off values of risk factors. The degree of ME-medial femoral condyle and ME-medial tibial plateau ratio of group 2 was significantly higher than that of group 1 (0.08 and 0.07 vs. 0.1 and 0.09, respectively, both p < 0.001). No significant (n.s.) difference in other variables was found between the two groups. On ROC curve analysis, the ME-medial femoral condyle ratio was confirmed as the most reliable prognostic factor of conservative treatment for MMPRT (area under ROC = 0.8). The large meniscus extrusion ratio was the most reliable poor prognostic factor of conservative treatment for MMPRT. Therefore, for MMPRT patients with large meniscus extrusion, early surgical repair could be considered as the primary treatment option. Level of evidence: III.
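The ROC-based cutoff analysis used in the study can be sketched generically: compute the empirical AUC (via the Mann-Whitney statistic) and the cutoff maximising Youden's J. The marker values and labels below are invented for illustration; they are not the study's extrusion-ratio data.

```python
def roc_auc_and_cutoff(values, labels):
    """Empirical ROC for a continuous marker: returns (AUC, cutoff that
    maximises Youden's J = sensitivity + specificity - 1). A generic sketch
    of this style of analysis, not the study's actual computation."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    # AUC via the Mann-Whitney statistic: fraction of (pos, neg) pairs ranked correctly
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)   # true positive rate
        spec = sum(v < cut for v in neg) / len(neg)    # true negative rate
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, cut
    return auc, best_cut


# Hypothetical extrusion ratios; label 1 = failed conservative treatment
ratios = [0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12, 0.13]
labels = [0,    0,    0,    1,    0,    1,    1,    1]
auc, cut = roc_auc_and_cutoff(ratios, labels)
print(round(auc, 2), cut)   # -> 0.94 0.09
```

An AUC of 0.8, as reported for the ME-medial femoral condyle ratio, would indicate good but imperfect discrimination between responders and non-responders.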

  17. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

The security of the widely used RSA public-key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, many deterministic algorithms, such as Euler's algorithm, Kraitchik's algorithm, and variants of Pollard's algorithms, have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
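Pollard's rho, the deterministic baseline the study compares against, fits in a few lines. A minimal sketch using Floyd cycle detection (the function name and the small semiprime modulus are illustrative):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Return a non-trivial factor of composite n via Pollard's rho
    with Floyd's tortoise-and-hare cycle detection."""
    if n % 2 == 0:
        return 2
    f = lambda x: (x * x + c) % n        # the pseudo-random iteration map
    x, y, d = 2, 2, 1
    while d == 1:
        x = f(x)                         # tortoise: one step
        y = f(f(y))                      # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else pollard_rho(n, c + 1)  # retry with a new map

# Example: factor a toy RSA-style modulus n = p * q
n = 10403                                # 101 * 103
p = pollard_rho(n)
print(p, n // p)                         # prints: 101 103
```

Its expected running time is roughly O(n^(1/4)), which is why it comfortably outpaces undirected search heuristics on moduli of moderate size.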

  18. Operative factors associated with short-term outcome in horses with large colon volvulus: 47 cases from 2006 to 2013

    PubMed Central

    Gonzalez, L. M.; Fogle, C. A.; Baker, W. T.; Hughes, F. E.; Law, J. M.; Motsinger-Reif, A. A.; Blikslager, A. T.

    2014-01-01

Summary Reasons for performing the study: There is an important need for objective parameters that accurately predict the outcome of horses with large colon volvulus. Objectives: To evaluate the predictive value of a series of histomorphometric parameters on short-term outcome, as well as the impact of colonic resection, in horses with large colon volvulus. Study design: Retrospective cohort study. Methods: Adult horses admitted to the Equine and Farm Animal Veterinary Center at North Carolina State University, Peterson & Smith and Chino Valley Equine Hospitals between 2006 and 2013, undergoing an exploratory celiotomy, diagnosed with large colon volvulus of ≥360 degrees, where a pelvic flexure biopsy was obtained, and that recovered from general anaesthesia, were selected for inclusion in the study. Logistic regression was used to determine associations between signalment, histomorphometric measurements of interstitial:crypt ratio, degree of haemorrhage, percentage loss of luminal and glandular epithelium, as well as colonic resection, and short-term outcome (discharge from the hospital). Results: Pelvic flexure biopsies from 47 horses with large colon volvulus were evaluated. Factors that were significantly associated with short-term outcome on univariate logistic regression were Thoroughbred breed (P = 0.04), interstitial:crypt ratio >1 (P = 0.02) and haemorrhage score ≥3 (P = 0.005). Resection (P = 0.92) was not found to be significantly associated with short-term outcome. No combined factors increased the likelihood of death in forward stepwise logistic regression modelling. A digitally quantified haemorrhage area measurement strengthened the association of haemorrhage with non-survival in cases of large colon volvulus. Conclusions: Histomorphometric measurements of interstitial:crypt ratio and degree of haemorrhage predict short-term outcome in cases of large colon volvulus. Resection was not associated with short-term outcome in horses selected for this study.

  19. A study of the effectiveness of machine learning methods for classification of clinical interview fragments into a large number of categories.

    PubMed

    Hasan, Mehedi; Kotov, Alexander; Carcone, April; Dong, Ming; Naar, Sylvie; Hartlieb, Kathryn Brogan

    2016-08-01

This study examines the effectiveness of state-of-the-art supervised machine learning methods in conjunction with different feature types for the task of automatic annotation of fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, which were manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN), in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to SVM. When only lexical features were used, interview transcripts were automatically annotated by SVM, which achieved the highest classification accuracy among all classifiers: 70.8%, 61% and 53.7% based on the codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones, improved the accuracy of SVM for annotation of utterances in motivational interview transcripts with a codebook consisting of 17 classes to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of using machine learning methods in conjunction with lexical, semantic and contextual features for automatic annotation of clinical interview transcripts with near-human accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
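Naïve Bayes is the simplest of the classifiers compared above to sketch over purely lexical (bag-of-words) features. A from-scratch multinomial Naïve Bayes with add-one smoothing; the toy "utterances" and codebook labels below are invented for illustration and do not come from the study:

```python
from collections import Counter
from math import log

class NaiveBayes:
    """Multinomial Naive Bayes over bag-of-words features, add-one smoothing."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        words = doc.lower().split()
        def score(c):
            # log prior + smoothed log likelihood of each token
            total = sum(self.word_counts[c].values()) + len(self.vocab)
            return log(self.class_counts[c]) + sum(
                log((self.word_counts[c][w] + 1) / total) for w in words)
        return max(self.classes, key=score)

# Invented toy utterances with invented codebook labels.
docs = ["i want to change my diet", "i will exercise more",
        "nothing works for me", "this is hopeless"]
labels = ["change_talk", "change_talk", "sustain_talk", "sustain_talk"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("i want to exercise"))   # prints: change_talk
```

Contextual features of the kind the study used (the previous utterance's label) could be appended as extra pseudo-tokens to each document before fitting.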

  20. Exploratory factor analysis of pathway copy number data with an application towards the integration with gene expression data.

    PubMed

    van Wieringen, Wessel N; van de Wiel, Mark A

    2011-05-01

Realizing that genes often operate together, studies into the molecular biology of cancer shift focus from individual genes to pathways. In order to understand the regulatory mechanisms of a pathway, one must study its genes at all molecular levels. To facilitate such study at the genomic level, we developed exploratory factor analysis for the characterization of the variability of a pathway's copy number data. A latent variable model that describes the call probability data of a pathway is introduced and fitted with an EM algorithm. In two breast cancer data sets, it is shown that the first two latent variables of GO nodes, which inherit a clear interpretation from the call probabilities, are often related to the proportion of aberrations and a contrast of the probabilities of a loss and of a gain. Linking the latent variables to the node's gene expression data suggests that they capture the "global" effect of genomic aberrations on these transcript levels. In all, the proposed method provides a possibly insightful characterization of pathway copy number data, which may be fruitfully exploited to study the interaction between the pathway's DNA copy number aberrations and data from other molecular levels like gene expression.

  1. Next-generation prognostic assessment for diffuse large B-cell lymphoma

    PubMed Central

Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts. PMID:26289217

  2. Next-generation prognostic assessment for diffuse large B-cell lymphoma.

    PubMed

    Staton, Ashley D; Koff, Jean L; Chen, Qiushi; Ayer, Turgay; Flowers, Christopher R

    2015-01-01

    Current standard of care therapy for diffuse large B-cell lymphoma (DLBCL) cures a majority of patients with additional benefit in salvage therapy and autologous stem cell transplant for patients who relapse. The next generation of prognostic models for DLBCL aims to more accurately stratify patients for novel therapies and risk-adapted treatment strategies. This review discusses the significance of host genetic and tumor genomic alterations seen in DLBCL, clinical and epidemiologic factors, and how each can be integrated into risk stratification algorithms. In the future, treatment prediction and prognostic model development and subsequent validation will require data from a large number of DLBCL patients to establish sufficient statistical power to correctly predict outcome. Novel modeling approaches can augment these efforts.

  3. Ticks as a factor in nest desertion of California brown pelicans

    USGS Publications Warehouse

    King, Kirke A.; Keith, James O.; Mitchell, Christine A.; Keirans, James E.

    1977-01-01

    In summary, our observations suggest that O. denmarki may be an important environmental factor influencing the distribution and success of Brown Pelican nests in the Gulf of California. More information on these relationships may be unobtainable without seriously disturbing and destroying large numbers of nests.

  4. Multiscale factors affecting human attitudes toward snow leopards and wolves.

    PubMed

    Suryawanshi, Kulbhushansingh R; Bhatia, Saloni; Bhatnagar, Yash Veer; Redpath, Stephen; Mishra, Charudutt

    2014-12-01

    The threat posed by large carnivores to livestock and humans makes peaceful coexistence between them difficult. Effective implementation of conservation laws and policies depends on the attitudes of local residents toward the target species. There are many known correlates of human attitudes toward carnivores, but they have only been assessed at the scale of the individual. Because human societies are organized hierarchically, attitudes are presumably influenced by different factors at different scales of social organization, but this scale dependence has not been examined. We used structured interview surveys to quantitatively assess the attitudes of a Buddhist pastoral community toward snow leopards (Panthera uncia) and wolves (Canis lupus). We interviewed 381 individuals from 24 villages within 6 study sites across the high-elevation Spiti Valley in the Indian Trans-Himalaya. We gathered information on key explanatory variables that together captured variation in individual and village-level socioeconomic factors. We used hierarchical linear models to examine how the effect of these factors on human attitudes changed with the scale of analysis from the individual to the community. Factors significant at the individual level were gender, education, and age of the respondent (for wolves and snow leopards), number of income sources in the family (wolves), agricultural production, and large-bodied livestock holdings (snow leopards). At the community level, the significant factors included the number of smaller-bodied herded livestock killed by wolves and mean agricultural production (wolves) and village size and large livestock holdings (snow leopards). Our results show that scaling up from the individual to higher levels of social organization can highlight important factors that influence attitudes of people toward wildlife and toward formal conservation efforts in general. Such scale-specific information can help managers apply conservation measures at

  5. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    PubMed

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

6. Fasting insulin at baseline influences the number of cardiometabolic risk factors and R-R interval at 3 years in a healthy population: the RISC Study.

    PubMed

    Pataky, Z; Golay, A; Laville, M; Disse, E; Mitrakou, A; Guidone, C; Gabriel, R; Bobbioni-Harsch, E

    2013-09-01

This was a cross-sectional and longitudinal study of factors contributing to the number of cardiometabolic risk factors, common carotid artery intima-media thickness (CCA-IMT) and R-R interval in clinically healthy subjects without diabetes. Anthropometric and cardiometabolic parameters were measured in the Relationship between Insulin Sensitivity and Cardiovascular Disease (RISC) Study cohort at baseline (n=1211) and 3 years later (n=974). At baseline, insulin sensitivity was assessed by the euglycaemic clamp technique. The CCA-IMT was echographically measured and the R-R interval was electrocardiographically evaluated at baseline and at the 3-year follow-up. Higher baseline BMI, fasting insulin and tobacco use as well as greater changes in BMI and fasting insulin, but lower adiponectin levels, were associated with a greater number of cardiometabolic risk factors at the 3-year follow-up independently of insulin sensitivity (all P<0.02). The CCA-IMT increased with the number of cardiometabolic risk factors (P=0.008), but was not related to fasting insulin, whereas higher fasting insulinaemia and its 3-year changes were significantly associated with a smaller R-R interval (P=0.005 and P=0.002, respectively). These relationships were independent of baseline age, gender, BMI, adiponectin, insulin sensitivity, tobacco use and physical activity. In clinically healthy subjects, fasting insulinaemia, adiponectin and lifestyle parameters are related to the presence of one or two cardiometabolic risk factors before criteria for the metabolic syndrome are met. These results underline the importance of fasting insulinaemia as an independent cardiometabolic risk factor at an early stage of disease development in a healthy general population. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  7. Turbulent pipe flow at extreme Reynolds numbers.

    PubMed

    Hultmark, M; Vallikivi, M; Bailey, S C C; Smits, A J

    2012-03-02

Both the inherent intractability and complex beauty of turbulence reside in its large range of physical and temporal scales. This range of scales is captured by the Reynolds number, which in nature and in many engineering applications can be as large as 10^5-10^6. Here, we report turbulence measurements over an unprecedented range of Reynolds numbers using a unique combination of a high-pressure air facility and a new nanoscale anemometry probe. The results reveal previously unknown universal scaling behavior for the turbulent velocity fluctuations, which is remarkably similar to the well-known scaling behavior of the mean velocity distribution.

  8. DNA copy number gains at loci of growth factors and their receptors in salivary gland adenoid cystic carcinoma.

    PubMed

    Vékony, Hedy; Ylstra, Bauke; Wilting, Saskia M; Meijer, Gerrit A; van de Wiel, Mark A; Leemans, C René; van der Waal, Isaäc; Bloemena, Elisabeth

    2007-06-01

Adenoid cystic carcinoma (ACC) is a malignant salivary gland tumor with a high mortality rate due to late, distant metastases. This study aimed at unraveling common genetic abnormalities associated with ACC. Additionally, chromosomal changes were correlated with patient characteristics and survival. Microarray-based comparative genomic hybridization was done on a series of 18 paraffin-embedded primary ACCs using a genome-wide scanning BAC array. A total of 238 aberrations were detected, representing more gains than losses (205 versus 33, respectively). The most frequent gains (>60%) were observed at 9q33.3-q34.3, 11q13.3, 11q23.3, 19p13.3-p13.11, 19q12-q13.43, 21q22.3, and 22q13.33. These loci harbor numerous growth factor [fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF)] and growth factor receptor (FGFR3 and PDGFRβ) genes. Gains at the FGF(R) regions occurred significantly more frequently in the recurred/metastasized ACCs compared with indolent ACCs. Furthermore, patients with 17 or more chromosomal aberrations had a significantly less favorable outcome than patients with fewer chromosomal aberrations (log-rank = 5.2; P = 0.02). Frequent DNA copy number gains at loci of growth factors and their receptors suggest their involvement in ACC initiation and progression. Additionally, the presence of FGFR3 and PDGFRβ in gained chromosomal regions suggests a possible role for autocrine stimulation in ACC tumorigenesis.

  9. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    NASA Astrophysics Data System (ADS)

    Xue, Xiaofeng

    2017-11-01

In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and that each vertex is initially infective with probability θ, so the number of infective vertices follows a binomial distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
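The model above is straightforward to simulate. A discrete-time sketch (the paper's dynamics are continuous-time; the weight distribution, rates, and all parameter values here are invented for illustration):

```python
import random

def simulate_sir(n=500, p=0.01, theta=0.05, dt=0.01, steps=200,
                 beta=1.0, gamma=1.0, seed=0):
    """Discrete-time sketch of the SIR model on an Erdos-Renyi graph G(n, p)
    with i.i.d. positive vertex weights; the infection rate across an edge is
    proportional to the product of the endpoint weights."""
    rng = random.Random(seed)
    rho = [rng.uniform(0.5, 1.5) for _ in range(n)]           # vertex weights
    adj = [[] for _ in range(n)]
    for i in range(n):                                        # build G(n, p)
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j); adj[j].append(i)
    # initial states: each vertex infective with probability theta
    state = ['I' if rng.random() < theta else 'S' for _ in range(n)]
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 'I':
                if rng.random() < gamma * dt:                 # removal
                    new[i] = 'R'
                else:                                         # infection attempts
                    for j in adj[i]:
                        if state[j] == 'S' and rng.random() < beta * rho[i] * rho[j] * dt:
                            new[j] = 'I'
        state = new
    return [state.count(s) / n for s in 'SIR']                # proportions

frac_s, frac_i, frac_r = simulate_sir()
print(frac_s, frac_i, frac_r)
```

Running this for increasing n and averaging over seeds shows the proportions concentrating around deterministic trajectories, which is the content of the law of large numbers the paper proves.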

  10. The Isolation and Enrichment of Large Numbers of Highly Purified Mouse Spleen Dendritic Cell Populations and Their In Vitro Equivalents.

    PubMed

    Vremec, David

    2016-01-01

    Dendritic cells (DCs) form a complex network of cells that initiate and orchestrate immune responses against a vast array of pathogenic challenges. Developmentally and functionally distinct DC subtypes differentially regulate T-cell function. Importantly it is the ability of DC to capture and process antigen, whether from pathogens, vaccines, or self-components, and present it to naive T cells that is the key to their ability to initiate an immune response. Our typical isolation procedure for DC from murine spleen was designed to efficiently extract all DC subtypes, without bias and without alteration to their in vivo phenotype, and involves a short collagenase digestion of the tissue, followed by selection for cells of light density and finally negative selection for DC. The isolation procedure can accommodate DC numbers that have been artificially increased via administration of fms-like tyrosine kinase 3 ligand (Flt3L), either directly through a series of subcutaneous injections or by seeding with an Flt3L secreting murine melanoma. Flt3L may also be added to bone marrow cultures to produce large numbers of in vitro equivalents of the spleen DC subsets. Total DC, or their subsets, may be further purified using immunofluorescent labeling and flow cytometric cell sorting. Cell sorting may be completely bypassed by separating DC subsets using a combination of fluorescent antibody labeling and anti-fluorochrome magnetic beads. Our procedure enables efficient separation of the distinct DC subsets, even in cases where mouse numbers or flow cytometric cell sorting time is limiting.

  11. Symbolic Numerical Distance Effect Does Not Reflect the Difference between Numbers.

    PubMed

    Krajcsi, Attila; Kojouharova, Petia

    2017-01-01

In a comparison task, the larger the distance between the two numbers to be compared, the better the performance, a phenomenon termed the numerical distance effect. According to the dominant explanation, the distance effect is rooted in a noisy representation, and performance is proportional to the size of the overlap between the noisy representations of the two values. According to alternative explanations, the distance effect may be rooted in the association between the numbers and the small-large categories, and performance is better when the numbers show relatively high differences in their strength of association with the small-large properties. In everyday number use, the value of the numbers and the association between the numbers and the small-large categories strongly correlate; thus, the two explanations make the same predictions for the distance effect. To dissociate the two potential sources of the distance effect, in the present study, participants learned new artificial number digits only for the values between 1 and 3, and between 7 and 9, thus leaving out the numbers between 4 and 6. It was found that the omitted number range (the distance between 3 and 7) was treated in the distance effect as 1, and not as 4, suggesting that the distance effect does not follow the values of the numbers predicted by the dominant explanation, but instead follows the small-large property association predicted by the alternative explanations.

  12. Sex Differences and the Factor of Time in Solving Vandenberg and Kuse Mental Rotation Problems

    ERIC Educational Resources Information Center

    Peters, M.

    2005-01-01

    In accounting for the well-established sex differences on mental rotation tasks that involve cube stimuli of the Shepard and Metzler (Shepard & Metzler, 1971) kind, performance factors are frequently invoked. Three studies are presented that examine performance factors. In Study 1, analyses of the performance of a large number of subjects…

  13. Psychosocial work factors and sickness absence in 31 countries in Europe.

    PubMed

    Niedhammer, Isabelle; Chastang, Jean-François; Sultan-Taïeb, Hélène; Vermeylen, Greet; Parent-Thirion, Agnès

    2013-08-01

    The studies on the associations between psychosocial work factors and sickness absence have rarely included a large number of factors and European data. The objective was to examine the associations between a large set of psychosocial work factors following well-known and emergent concepts and sickness absence in Europe. The study population consisted of 14,881 male and 14,799 female workers in 31 countries from the 2005 European Working Conditions Survey. Psychosocial work factors included the following: decision latitude, psychological demands, social support, physical violence, sexual harassment, discrimination, bullying, long working hours, shift and night work, job insecurity, job promotion and work-life imbalance. Covariates were as follows: age, occupation, economic activity, employee/self-employed status and physical, chemical, biological and biomechanical exposures. Statistical analysis was performed using multilevel negative binomial hurdle models to study the occurrence and duration of sickness absence. In the models, including all psychosocial work factors together and adjustment for covariates, high psychological demands, discrimination, bullying, low-job promotion and work-life imbalance for both genders and physical violence for women were observed as risk factors of the occurrence of sickness absence. Bullying and shift work increased the duration of absence among women. Bullying had the strongest association with sickness absence. Various psychosocial work factors were found to be associated with sickness absence. A less conservative analysis exploring each factor separately provided a still higher number of risk factors. Preventive measures should take psychosocial work environment more comprehensively into account to reduce sickness absence and improve health at work at European level.

  14. Detrending Algorithms in Large Time Series: Application to TFRM-PSES Data

    NASA Astrophysics Data System (ADS)

    del Ser, D.; Fors, O.; Núñez, J.; Voss, H.; Rosich, A.; Kouprianov, V.

    2015-07-01

Certain instrumental effects and data reduction anomalies introduce systematic errors in photometric time series. Detrending algorithms such as the Trend Filtering Algorithm (TFA; Kovács et al. 2004) have played a key role in minimizing the effects caused by these systematics. Here we present the results obtained after applying the TFA and Savitzky-Golay (1964) detrending algorithms, and the Box Least Squares (BLS) phase-folding algorithm (Kovács et al. 2002), to the TFRM-PSES data (Fors et al. 2013). Tests performed on these data show that by applying these two filtering methods together the photometric RMS is on average improved by a factor of 3-4, with better efficiency towards brighter magnitudes, while applying TFA alone yields an improvement of a factor of 1-2. As a result of this improvement, we are able to detect and analyze a large number of stars per TFRM-PSES field which present some kind of variability. Also, after porting these algorithms to Python and parallelizing them, we have improved, even for large data samples, the computational performance of the overall detrending+BLS algorithm by a factor of ~10 with respect to Kovács et al. (2004).
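The Savitzky-Golay filter mentioned above is local least-squares polynomial smoothing; subtracting the smoothed trend from a light curve removes slow systematics while preserving short-timescale variability. A minimal numpy sketch (the function name, synthetic light curve, and parameter values are invented for illustration; this is not the TFRM-PSES pipeline itself):

```python
import numpy as np

def savgol_trend(y, window=11, order=2):
    """Estimate a smooth trend by local least-squares polynomial fits,
    the idea behind Savitzky-Golay smoothing (edges use clipped windows)."""
    half = window // 2
    trend = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        x = np.arange(lo, hi)
        coeffs = np.polyfit(x, y[lo:hi], order)   # fit a local polynomial
        trend[i] = np.polyval(coeffs, i)          # evaluate it at the center
    return trend

# Synthetic "light curve": a slow sinusoidal drift plus white noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 300)
y = 0.5 * np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(300)
detrended = y - savgol_trend(y, window=31, order=2)
print(np.std(y), np.std(detrended))   # RMS drops sharply after detrending
```

In practice `scipy.signal.savgol_filter` computes the same smoothing with fixed convolution weights, which is much faster; TFA additionally removes trends shared across many stars rather than per-star smooth trends.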

  15. Large-scale Cross-modality Search via Collective Matrix Factorization Hashing.

    PubMed

    Ding, Guiguang; Guo, Yuchen; Zhou, Jile; Gao, Yue

    2016-09-08

By transforming data into binary representations, i.e., hashing, we can perform high-speed search with low storage cost, and thus hashing has attracted increasing research interest in recent years. Recently, how to generate hash codes for multimodal data (e.g., images with textual tags, documents with photos, etc.) for large-scale cross-modality search (e.g., searching a database for images semantically related to a document query) has become an important research issue because of the fast growth of multimodal data on the Web. To address this issue, a novel framework for multimodal hashing is proposed, termed Collective Matrix Factorization Hashing (CMFH). The key idea of CMFH is to learn unified hash codes for the different modalities of one multimodal instance in a shared latent semantic space in which the different modalities can be effectively connected. Therefore, accurate cross-modality search is supported. Based on the general framework, we extend it in the unsupervised scenario, where it tries to preserve the Euclidean structure, and in the supervised scenario, where it fully exploits the label information of the data. The corresponding theoretical analysis and optimization algorithms are given. We conducted comprehensive experiments on three benchmark datasets for cross-modality search. The experimental results demonstrate that CMFH can significantly outperform several state-of-the-art cross-modality hashing methods, which validates the effectiveness of the proposed CMFH.
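The core idea, factoring every modality's feature matrix against one shared latent matrix and then binarizing it, can be sketched with alternating least squares. This toy numpy sketch omits the regularizers, balance constraints, and supervised terms of the actual CMFH method; all names and data are invented:

```python
import numpy as np

def cmfh_sketch(X1, X2, bits=8, iters=50, seed=0):
    """Toy sketch of collective matrix factorization hashing: factor both
    modality matrices X1, X2 (features x instances) against one shared
    latent matrix V, then binarize V columns into hash codes."""
    rng = np.random.default_rng(seed)
    n = X1.shape[1]
    V = rng.standard_normal((bits, n))
    eye = 1e-6 * np.eye(bits)                       # small ridge for stability
    for _ in range(iters):
        # closed-form least-squares updates, alternating U1, U2, and V
        U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + eye)
        U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + eye)
        G = U1.T @ U1 + U2.T @ U2 + eye
        V = np.linalg.solve(G, U1.T @ X1 + U2.T @ X2)
    return np.where(V > 0, 1, 0)                    # one hash code per column

# Two invented "modalities" (e.g. image and text features) of 40 instances
# sharing an 8-dimensional latent structure Z.
rng = np.random.default_rng(1)
Z = rng.standard_normal((8, 40))
X1 = rng.standard_normal((20, 8)) @ Z               # modality 1: 20-dim features
X2 = rng.standard_normal((15, 8)) @ Z               # modality 2: 15-dim features
codes = cmfh_sketch(X1, X2)
print(codes.shape)                                  # prints: (8, 40)
```

Because both modalities map to the same code column, a text query and an image of the same instance hash near each other, which is what enables cross-modality search with Hamming-distance lookups.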

  16. Tests of Sunspot Number Sequences: 4. Discontinuities Around 1946 in Various Sunspot Number and Sunspot-Group-Number Reconstructions

    NASA Astrophysics Data System (ADS)

    Lockwood, M.; Owens, M. J.; Barnard, L.

    2016-11-01

We use five test data series to search for, and quantify, putative discontinuities around 1946 in five different annual-mean sunspot-number or sunspot-group-number data sequences. The data series tested are the original and new versions of the Wolf/Zürich/International sunspot number composite [R_ISNv1 and R_ISNv2] (respectively Clette et al. in Adv. Space Res. 40, 919, 2007 and Clette et al. in The Solar Activity Cycle 35, Springer, New York, 2015); the corrected version of R_ISNv1 proposed by Lockwood, Owens, and Barnard (J. Geophys. Res. 119, 5193, 2014a) [R_C]; the new "backbone" group-number composite proposed by Svalgaard and Schatten (Solar Phys. 291, 2016) [R_BB]; and the new group-number composite derived by Usoskin et al. (Solar Phys. 291, 2016) [R_UEA]. The test data series used are the group number [N_G] and total sunspot area [A_G] from the Royal Observatory, Greenwich/Royal Greenwich Observatory (RGO) photoheliographic data; the Ca K index from the recent re-analysis of Mount Wilson Observatory (MWO) spectroheliograms in the Calcium II K ion line; the sunspot group number from the MWO sunspot drawings [N_MWO]; and the dayside ionospheric F2-region critical frequencies measured by the Slough ionosonde [foF2]. These test data all vary in close association with sunspot numbers, in some cases non-linearly. The tests are carried out using both the before-and-after fit-residual comparison method and the correlation method of Lockwood, Owens, and Barnard, applied to annual mean data for intervals iterated to minimise errors and to eliminate uncertainties associated with the precise date of the putative discontinuity. It is not assumed that the correction required is by a constant factor, nor even linear in sunspot number. It is shown that a non-linear correction is required by R_C, R_BB, and R_ISNv1, but not by R_ISNv2 or R_UEA. The five test datasets give very similar results in all cases. By multiplying the probability

  17. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    PubMed

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to the individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games, and gender, followed by reported alcohol consumption, age, and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  18. The effect of parental factors in children with large cup-to-disc ratios.

    PubMed

    Park, Hae-Young Lopilly; Ha, Min Ji; Shin, Sun Young

    2017-01-01

    To investigate large cup-to-disc ratios (CDR) in children and to determine the relationship between parental CDR and clinical characteristics associated with glaucoma. Two hundred thirty-six children aged 6 to 12 years with CDR ≥ 0.6 were enrolled in this study. Subjects were classified into two groups based on parental CDR: disc suspect children with disc suspect (CDR ≥ 0.6) parents and disc suspect children without disc suspect parents. Ocular variables were compared between the two groups. Of the 236 disc suspect children, 100 (42.4%) had at least one disc suspect parent. Intraocular pressure (IOP) was higher in disc suspect children with disc suspect parents (16.52±2.66 mmHg) than in those without (14.38±2.30 mmHg, p = 0.023). In the group with disc suspect parents, vertical CDR significantly correlated with IOP (R = -0.325, p = 0.001), average retinal nerve fiber layer (RNFL) thickness (R = -0.319, p = 0.001), rim area (R = -0.740, p = 0.001), and cup volume (R = 0.499, p = 0.001). In contrast, in disc suspect children without disc suspect parents, vertical CDR significantly correlated with spherical equivalent (R = 0.333, p = 0.001), axial length (AL; R = -0.223, p = 0.009), and disc area (R = 0.325, p = 0.001). Larger vertical CDR was associated with the presence of disc suspect parents (p = 0.001), larger disc area (p = 0.001), thinner rim area (p = 0.001), larger average CDR (p = 0.001), and larger cup volume (p = 0.021). A family history of large CDR was a significant factor associated with large vertical CDR in children. In children with disc suspect parents, vertical CDR correlated significantly with IOP and average RNFL thickness.

  19. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries.

    PubMed

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-09-01

    Individual and organizational factors influence traumatic occupational injuries. The aim of the present study was a path analysis of the severity of occupational injuries based on individual and organizational factors. This cross-sectional analytical study covered traumatic occupational injuries over a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational, and accident-type factors had significant effects on the severity of occupational injuries (P < 0.05). Path analysis of occupational injuries based on the SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries, so they should be considered in efforts to reduce the severity of occupational accidents in large construction industries.

  20. Factors associated with daily walking of dogs.

    PubMed

    Westgarth, Carri; Christian, Hayley E; Christley, Robert M

    2015-05-19

    Regular physical activity is beneficial to the health of both people and animals. The role of regular exercise undertaken together, such as dog walking, is a public health interest of mutual benefit. Exploration of barriers and incentives to regular dog walking by owners is now required so that effective interventions to promote it can be designed. This study explored a well-characterised cross-sectional dataset of 276 dogs and owners from Cheshire, UK, for evidence of factors associated with the dog being walked once or more per day. Factors independently associated with daily walking included: number of dogs owned (multiple (vs. single) dogs negatively associated); size (medium and possibly large dogs (vs. small) positively associated); and number of people in the household (more people negatively associated). Furthermore, a number of factors related to the dog-owner relationship and the dog's behaviour were associated with daily walking, including: having acquired the dog for a hobby (positively associated); dog lying on furniture (positively associated); dog lying on laps (negatively associated); growling at household members (negatively associated); and playing chase games with the dog (negatively associated). These findings are consistent with the hypothesis that the strength and nature of the human-dog relationship incentivises dog walking, and that behavioural and demographic factors may affect dog walking via this mechanism. Future studies need to investigate how dog demographic and behavioural factors, plus owner behavioural factors and perceptions of the dog, influence the dog-human relationship in respect to the perceived support and motivation a dog can provide for walking.

  1. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both the Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds, and increase even more strongly for small temporal subdivisions of catalogues. The Poisson distribution approaches the Gaussian law for large rate values; therefore, its skewness tends to zero as the rate increases.
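    The moment comparison described above can be sketched in a few lines of Python, using synthetic overdispersed counts rather than the actual catalogue data (the NBD parameters and number of time windows here are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "earthquake numbers" per time window, drawn from an overdispersed
# negative-binomial distribution (variance > mean), standing in for catalogue counts.
counts = rng.negative_binomial(n=5, p=0.2, size=240)

mean, var = counts.mean(), counts.var(ddof=1)
emp_skew = stats.skew(counts)

# Poisson theory: skewness = lambda**-0.5, so it vanishes for large rates.
pois_skew = mean ** -0.5

# NBD fitted by moments: var = mu + mu**2/r  =>  r = mu**2 / (var - mu),
# with p = r / (r + mu); NBD skewness = (2 - p) / sqrt(r * (1 - p)).
r = mean ** 2 / (var - mean)
p = r / (r + mean)
nbd_skew = (2 - p) / np.sqrt(r * (1 - p))

print(f"empirical skewness {emp_skew:.2f}, Poisson {pois_skew:.2f}, NBD {nbd_skew:.2f}")
```

    For overdispersed counts the empirical skewness stays well above the Poisson prediction and close to the fitted NBD value, which is the pattern the study reports for real catalogues.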

  2. Human factors in air traffic control: problems at the interfaces.

    PubMed

    Shouksmith, George

    2003-10-01

    The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers were randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface centered training.

  3. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    PubMed

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish Timoides agassizii occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 zooplankton samples from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if it had, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  4. Multi-factorial analysis of class prediction error: estimating optimal number of biomarkers for various classification rules.

    PubMed

    Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter

    2010-12-01

    Machine learning and statistical-model-based classifiers have increasingly been used with more complex and high-dimensional biological data obtained from high-throughput technologies. Understanding the impact of the various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive and under-investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray-based data characteristics on the predictive performance of various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking into account the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used to estimate the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package, optBiomarker, implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).

  5. What caused a large number of fatalities in the Tohoku earthquake?

    NASA Astrophysics Data System (ADS)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized it as the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to or influenced by earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect. Expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings. The first tsunami warnings were far smaller than the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  6. Factors influencing the perceived quality of clinical supervision of occupational therapists in a large Australian state.

    PubMed

    Martin, Priya; Kumar, Saravana; Lizarondo, Lucylynn; Tyack, Zephanie

    2016-10-01

    Clinical supervision is important for effective health service delivery, professional development and practice. Despite its importance, there is a lack of evidence regarding the factors that improve its quality. This study aimed to investigate the factors that influence the quality of clinical supervision of occupational therapists employed in a large public sector health service covering mental health, paediatrics, adult physical and other practice areas. A mixed-method, sequential explanatory study design was used, consisting of two phases. This article reports the quantitative phase (Phase One), which involved administration of the Manchester Clinical Supervision Scale (MCSS-26) to 207 occupational therapists. Frequency of supervision sessions, choice of supervisor and type of supervision were the predictor variables with a positive and significant influence on the quality of clinical supervision. Age, length of supervision and area of practice were the predictor variables with a negative and significant influence on the quality of clinical supervision. Factors that influence the perceived quality of clinical supervision among occupational therapists have been identified. High-quality clinical supervision is an important component of clinical governance and has been shown to benefit practitioners, patients and the organisation. Information on the factors that make clinical supervision effective, identified in this study, can be added to existing supervision training and practices to improve the quality of clinical supervision. © 2016 Occupational Therapy Australia.

  7. Microwave Readout Techniques for Very Large Arrays of Nuclear Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ullom, Joel

    During this project, we transformed the use of microwave readout techniques for nuclear sensors from a speculative idea to reality. The core of the project consisted of the development of a set of microwave electronics able to generate and process large numbers of microwave tones. The tones can be used to probe a circuit containing a series of electrical resonances whose frequency locations and widths depend on the state of a network of sensors, with one sensor per resonance. The amplitude and phase of the tones emerging from the circuit are processed by the same electronics and are reduced to the sensor signals after two demodulation steps. This approach allows a large number of sensors to be interrogated using a single pair of coaxial cables. We successfully developed hardware, firmware, and software to complete a scalable implementation of these microwave control electronics and demonstrated their use in two areas. First, we showed that the electronics can be used at room temperature to read out a network of diverse sensor types relevant to safeguards or process monitoring. Second, we showed that the electronics can be used to measure large numbers of ultrasensitive cryogenic sensors such as gamma-ray microcalorimeters. In particular, we demonstrated the undegraded readout of up to 128 channels and established a path to even higher multiplexing factors. These results have transformed the prospects for gamma-ray spectrometers based on cryogenic microcalorimeter arrays by enabling spectrometers whose collecting areas and count rates can be competitive with high-purity germanium but with 10x better spectral resolution.

  8. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy as reflected by different basic numerical competences in kindergarten, and its predictive value for mathematical achievement 6 years later, using data from large-scale assessment. This allowed analyses based on considerable sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6, even after controlling for influences of general cognitive ability. Thus, our results support a differentiated view of early numeracy considering basic numerical competences in kindergarten as reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also for mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. On counting the rational numbers

    NASA Astrophysics Data System (ADS)

    Almada, Carlos

    2010-12-01

    In this study, we show how to construct a function from the set ℕ of natural numbers that explicitly counts the set ℚ⁺ of all positive rational numbers using a very intuitive approach. The function has the appeal of Cantor's function and it has the advantage that any high school student can understand the main idea at a glance without any prior knowledge of the Unique Prime Factorization Theorem or other nonelementary results. Unlike Cantor's function, the one we propose makes it very easy to determine what rational number, in unreduced form, is in a given position on the list and vice versa.
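    A bijection with the same properties (listing unreduced fractions diagonal by diagonal, with easy conversion in both directions and no factorization) can be sketched as follows; the diagonal scheme here is an illustration, not necessarily the authors' construction:

```python
def rational_at(position):
    """Return the unreduced fraction (p, q) at a 1-based position, enumerating
    anti-diagonals of the grid of pairs: (1,1), (1,2), (2,1), (1,3), (2,2), ..."""
    s, count = 2, 0   # s = p + q indexes the diagonal, which holds s - 1 pairs
    while count + (s - 1) < position:
        count += s - 1
        s += 1
    p = position - count        # offset within the diagonal gives the numerator
    return p, s - p

def position_of(p, q):
    """Inverse map: 1-based position of the unreduced fraction p/q."""
    s = p + q
    return (s - 2) * (s - 1) // 2 + p

print([rational_at(k) for k in range(1, 7)])
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]
```

    Each diagonal p + q = s contributes s - 1 pairs, so both directions of the correspondence reduce to counting complete diagonals, exactly the "at a glance" property the abstract emphasizes.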

  10. The Number Density of Quiescent Compact Galaxies at Intermediate Redshift

    NASA Astrophysics Data System (ADS)

    Damjanov, Ivana; Hwang, Ho Seong; Geller, Margaret J.; Chilingarian, Igor

    2014-09-01

    Massive compact systems at 0.2 < z < 0.6 are the missing link between the predominantly compact population of massive quiescent galaxies at high redshift and their analogs and relics in the local volume. The evolution in number density of these extreme objects over cosmic time is the crucial constraining factor for the models of massive galaxy assembly. We select a large sample of ~200 intermediate-redshift massive compacts from the Baryon Oscillation Spectroscopic Survey (BOSS) spectroscopy by identifying point-like Sloan Digital Sky Survey photometric sources with spectroscopic signatures of evolved redshifted galaxies. A subset of our targets have publicly available high-resolution ground-based images that we use to augment the dynamical and stellar population properties of these systems by their structural parameters. We confirm that all BOSS compact candidates are as compact as their high-redshift massive counterparts and less than half the size of similarly massive systems at z ~ 0. We use the completeness-corrected numbers of BOSS compacts to compute lower limits on their number densities in narrow redshift bins spanning the range of our sample. The abundance of extremely dense quiescent galaxies at 0.2 < z < 0.6 is in excellent agreement with the number densities of these systems at high redshift. Our lower limits support the models of massive galaxy assembly through a series of minor mergers over the redshift range 0 < z < 2.

  11. Estimation of the emission factors of particle number and mass fractions from traffic at a site where mean vehicle speeds vary over short distances

    NASA Astrophysics Data System (ADS)

    Jones, Alan M.; Harrison, Roy M.

    Emission factors for particle number in three size ranges (11-30, 30-100 and >100 nm) as well as for PM2.5, PM2.5-10 and PM10 mass have been estimated separately for heavy- and light-duty vehicles in a heavily trafficked street canyon in London where traffic speeds vary considerably over short distances. Emissions of NOx were estimated from published emission factors, and emissions of other pollutants were estimated from their ratio to NOx in the roadside concentration after subtraction of the simultaneously measured urban background. The estimated emission factors are compared with other published data. Despite many differences in the design and implementation of the various studies, the results for particulate matter are broadly similar. Estimates of particle number emissions in this study for light-duty vehicles are very close to other published data, whilst those for heavy-duty vehicles are lower than in the more comparable studies. It is suggested that a contributory factor may be the introduction of diesel particle oxidation traps on some of the bus fleet in London. Estimates of emission factors for particle mass (PM2.5 and PM2.5-10) are within the range of other published data, and total mass emissions estimated from the ratio of concentration to NOx are tolerably close to those estimated using emission factors from the National Atmospheric Emissions Inventory (NAEI). However, the method leads to an estimate of carbon monoxide emissions 3-6 times larger than that derived using the NAEI factors.
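    The NOx-ratio method described above amounts to a single scaling step, sketched here with hypothetical concentrations and an assumed published NOx emission factor (none of the numbers come from the study):

```python
def emission_factor(pollutant_road, pollutant_bg, nox_road, nox_bg, ef_nox):
    """Scale the traffic increment of a pollutant by a published NOx emission
    factor: EF = EF_NOx * (roadside - background) / (NOx_roadside - NOx_background)."""
    return ef_nox * (pollutant_road - pollutant_bg) / (nox_road - nox_bg)

# Hypothetical values: pollutant concentrations in arbitrary units, NOx in ug/m3,
# and an assumed NOx emission factor of 1.2 g per vehicle-km.
ef = emission_factor(pollutant_road=4.0e4, pollutant_bg=1.0e4,
                     nox_road=120.0, nox_bg=40.0, ef_nox=1.2)
print(ef)  # pollutant emitted per vehicle-km, in pollutant units, under these assumptions
```

    Subtracting the urban background from both the pollutant and NOx concentrations attributes only the roadside increment to traffic, which is why the method needs simultaneous background measurements.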

  12. [The Role of Resilience Factors in Informal Caregivers of Dementia Patients - A Review on Selected Factors].

    PubMed

    Kunzler, Angela; Skoluda, Nadine; Nater, Urs

    2018-01-01

    In the face of demographic change, the informal care of dementia patients is becoming increasingly important. However, due to dementia symptoms as well as persisting care demands, this subgroup of informal caregivers is confronted with a large number of stressors, resulting in chronic stress and impaired physical and mental health in many caregivers. Based on current research on resilience (i.e., maintaining or regaining health despite stress and adversity), there is increasing interest in identifying resilience factors that may serve as resources for coping with informal care and protect caregivers against health problems. This review discusses the role of resilience factors in the association between ongoing caregiving stress and health. In analyzing the current state of research on resilience factors for dementia caregivers, we focus on self-efficacy, relationship quality, and social support. © Georg Thieme Verlag KG Stuttgart · New York.

  13. Prandtl-number Effects in High-Rayleigh-number Spherical Convection

    NASA Astrophysics Data System (ADS)

    Orvedahl, Ryan J.; Calkins, Michael A.; Featherstone, Nicholas A.; Hindman, Bradley W.

    2018-03-01

    Convection is the predominant mechanism by which energy and angular momentum are transported in the outer portion of the Sun. The resulting overturning motions are also the primary energy source for the solar magnetic field. An accurate solar dynamo model therefore requires a complete description of the convective motions, but these motions remain poorly understood. Studying stellar convection numerically remains challenging; it occurs within a parameter regime that is extreme by computational standards. The fluid properties of the convection zone are characterized in part by the Prandtl number Pr = ν/κ, where ν is the kinematic viscosity and κ is the thermal diffusivity; in stars, Pr is extremely low, Pr ≈ 10⁻⁷. The influence of Pr on the convective motions at the heart of the dynamo is not well understood, since most numerical studies are limited to Pr ≈ 1. We systematically vary Pr and the degree of thermal forcing, characterized through a Rayleigh number, to explore their influence on the convective dynamics. For sufficiently large thermal driving, the simulations reach a so-called convective free-fall state where diffusion no longer plays an important role in the interior dynamics. Simulations with a lower Pr generate faster convective flows and broader ranges of scales for equivalent levels of thermal forcing. Characteristics of the spectral distribution of the velocity remain largely insensitive to changes in Pr. Importantly, we find that Pr plays a key role in determining when the free-fall regime is reached by controlling the thickness of the thermal boundary layer.

  14. Risk Factors for Measles Virus Infection Among Adults During a Large Outbreak in Postelimination Era in Mongolia, 2015.

    PubMed

    Hagan, José E; Takashima, Yoshihiro; Sarankhuu, Amarzaya; Dashpagma, Otgonbayar; Jantsansengee, Baigalmaa; Pastore, Roberta; Nyamaa, Gunregjav; Yadamsuren, Buyanjargal; Mulders, Mick N; Wannemuehler, Kathleen A; Anderson, Raydel; Bankamp, Bettina; Rota, Paul; Goodson, James L

    2017-12-05

    In 2015, a large nationwide measles outbreak occurred in Mongolia, with very high incidence in the capital city of Ulaanbaatar and among young adults. We conducted an outbreak investigation including a matched case-control study of risk factors for laboratory-confirmed measles among young adults living in Ulaanbaatar. Young adults with laboratory-confirmed measles, living in Ulaanbaatar, were matched with 2-3 neighborhood controls. Conditional logistic regression was used to estimate adjusted matched odds ratios (aMORs) for risk factors, with 95% confidence intervals. During March 1-September 30, 2015, 20 077 suspected measles cases were reported; 14 010 cases were confirmed. Independent risk factors for measles included being unvaccinated (adjusted matched odds ratio [aMOR] 2.0, P < .01), being a high school graduate without college education (aMOR 2.6, P < .01), remaining in Ulaanbaatar during the outbreak (aMOR 2.5, P < .01), exposure to an inpatient healthcare facility (aMOR 4.5, P < .01), and being born outside of Ulaanbaatar (aMOR 1.8, P = .02). This large nationwide outbreak, occurring shortly after verification of elimination, had high incidence among young adults, particularly those born outside the national capital. In addition, the findings indicated that nosocomial transmission within health facilities helped amplify the outbreak. Published by Oxford University Press for the Infectious Diseases Society of America 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  15. Extrinsic and intrinsic factors controlling spermatogonial stem cell self-renewal and differentiation.

    PubMed

    Mei, Xing-Xing; Wang, Jian; Wu, Ji

    2015-01-01

    Spermatogonial stem cells (SSCs), the stem cells responsible for male fertility, are one of a small number of cell types with the ability both to self-renew and to generate large numbers of haploid cells. Technological improvements, most importantly transplantation assays and in vitro culture systems, have greatly expanded our understanding of SSC self-renewal and differentiation. Many molecules crucial for the balance between self-renewal and differentiation have recently been identified, although the exact mechanism(s) remain largely undefined. In this review, we give a brief introduction to SSCs and then focus on the extrinsic and intrinsic factors controlling SSC self-renewal and differentiation.

  16. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    PubMed

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit, enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and preferred for a computer cluster with the LSF and SLURM job scheduling systems. EUPAN together with its standard operating procedure (SOP) is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html . ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  17. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    NASA Astrophysics Data System (ADS)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021 for a range of Rayleigh numbers of 3 × 10⁵ ≤ Ra ≤ 4 × 10⁸. The momentum transfer response to these system parameters generates a fluid flow in the closed cell with a macroscopic flow Reynolds number that takes values in the range 1.8 × 10³ ≤ Re ≤ 4.6 × 10⁴. It is shown that the viscous boundary layers for the largest Ra in particular are highly transitional and obey some properties that are directly comparable to transitional channel flows at friction Reynolds numbers below 100. This work is supported by the Deutsche Forschungsgemeinschaft.

  18. How small firms contrast with large firms regarding perceptions, practices, and needs in the U.S

    Treesearch

    Urs Buehlmann; Matthew Bumgardner; Michael Sperber

    2013-01-01

    As many larger secondary woodworking firms have moved production offshore and been adversely impacted by the recent housing downturn, smaller firms have become important to driving U.S. hardwood demand. This study compared and contrasted small and large firms on a number of factors to help determine the unique characteristics of small firms and to provide insights into...

  19. Large Print Bibliography, 1990.

    ERIC Educational Resources Information Center

    South Dakota State Library, Pierre.

    This bibliography lists materials that are available in large print format from the South Dakota State Library. The annotated entries are printed in large print and include the title of the material and its author, call number, publication date, and type of story or subject area covered. Some recorded items are included in the list. The entries…

  20. Impact of primary antibiotic resistance on the effectiveness of sequential therapy for Helicobacter pylori infection: lessons from a 5-year study on a large number of strains.

    PubMed

    Gatta, L; Scarpignato, C; Fiorini, G; Belsey, J; Saracino, I M; Ricci, C; Vaira, D

    2018-05-01

    The increasing prevalence of strains resistant to antimicrobial agents is a critical issue in the management of Helicobacter pylori (H. pylori) infection. The aims were: (1) to evaluate the prevalence of primary resistance to clarithromycin, metronidazole and levofloxacin; (2) to assess the effectiveness of sequential therapy on resistant strains; and (3) to identify the minimum number of subjects to enrol for evaluating the effectiveness of an eradication regimen in patients harbouring resistant strains. A total of 1682 consecutive treatment-naïve H. pylori-positive patients referred for upper GI endoscopy between 2010 and 2015 were studied, and resistances were assessed by E-test. Sequential therapy was offered, and its effectiveness evaluated and analysed. H. pylori primary resistance to the antimicrobials tested was high, and increased between 2010 and 2015. Eradication rates were (estimates and 95% CIs): 97.3% (95.6-98.4) in strains susceptible to clarithromycin and metronidazole; 96.1% (91.7-98.2) in strains resistant to metronidazole but susceptible to clarithromycin; 93.4% (88.2-96.4) in strains resistant to clarithromycin but susceptible to metronidazole; and 83.1% (77.7-87.3) in strains resistant to both clarithromycin and metronidazole. For any treatment with a 75%-85% eradication rate, some 98-144 patients with resistant strains need to be studied to get reliable information on effectiveness in these patients. H. pylori primary resistance is increasing and represents the most critical factor affecting effectiveness. Sequential therapy eradicated 83% of strains resistant to clarithromycin and metronidazole. Reliable estimates of the effectiveness of a given regimen in patients harbouring resistant strains can be obtained only by assessing a large number of strains. © 2018 John Wiley & Sons Ltd.
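
    The quoted 98-144 patient range is of the order given by the standard normal-approximation sample size for estimating a proportion. A hedged sketch follows; the ±7% precision margin is an assumption of this illustration, not a figure stated in the abstract:

```python
import math

# Sample size needed to estimate a proportion p within +/- margin at
# ~95% confidence, via the normal approximation n = z^2 * p(1-p) / margin^2.
def sample_size(p, margin, z=1.96):
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

# With an assumed +/-7% margin, eradication rates of 85% and 75% give
# sample sizes of the same order as the 98-144 range quoted above.
print(sample_size(0.85, 0.07), sample_size(0.75, 0.07))
```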

  1. Calibrating the mental number line.

    PubMed

    Izard, Véronique; Dehaene, Stanislas

    2008-03-01

    Human adults are thought to possess two dissociable systems to represent numbers: an approximate quantity system akin to a mental number line, and a verbal system capable of representing numbers exactly. Here, we study the interface between these two systems using an estimation task. Observers were asked to estimate the approximate numerosity of dot arrays. We show that, in the absence of calibration, estimates are largely inaccurate: responses increase monotonically with numerosity, but underestimate the actual numerosity. However, insertion of a few inducer trials, in which participants are explicitly (and sometimes misleadingly) told that a given display contains 30 dots, is sufficient to calibrate their estimates on the whole range of stimuli. Based on these empirical results, we develop a model of the mapping between the numerical symbols and the representations of numerosity on the number line.
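
    The calibration effect described above can be illustrated by a simple multiplicative rescaling of raw estimates using the inducer trials. This is a sketch only; the paper's actual mapping model between symbols and the number line is more elaborate:

```python
# Recalibrate numerosity estimates using inducer trials in which the
# true numerosity (e.g. a 30-dot display) was announced to the participant.
def calibrate(raw_estimates, inducer_estimates, inducer_true=30):
    mean_inducer = sum(inducer_estimates) / len(inducer_estimates)
    scale = inducer_true / mean_inducer
    return [r * scale for r in raw_estimates]

# A participant who systematically underestimates by half:
print(calibrate([10, 20, 40], inducer_estimates=[15], inducer_true=30))
# -> [20.0, 40.0, 80.0]
```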

  2. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on"k"-…
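
    Traditional parallel analysis can be sketched in a few lines: retain factors as long as the sample eigenvalues exceed the mean eigenvalues of same-shape random data. This illustrates T-PA only, not the revised R-PA variant the article evaluates, and the synthetic data are invented:

```python
import numpy as np

# Traditional parallel analysis (T-PA): compare sorted sample eigenvalues of
# the correlation matrix against mean eigenvalues from random normal data.
def parallel_analysis(data, n_sims=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    sample = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.mean(
        [np.sort(np.linalg.eigvalsh(np.corrcoef(rng.standard_normal((n, p)),
                                                rowvar=False)))[::-1]
         for _ in range(n_sims)], axis=0)
    k = 0
    for s, r in zip(sample, rand):
        if s > r:
            k += 1
        else:
            break
    return k

# Synthetic data with one common factor shared by five indicators:
rng = np.random.default_rng(1)
factor = rng.standard_normal((300, 1))
data = factor @ np.ones((1, 5)) + 0.5 * rng.standard_normal((300, 5))
print(parallel_analysis(data))  # expect 1 retained factor
```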

  3. Large-scale fluctuations in the number density of galaxies in independent surveys of deep fields

    NASA Astrophysics Data System (ADS)

    Shirokov, S. I.; Lovyagin, N. Yu.; Baryshev, Yu. V.; Gorokhov, V. L.

    2016-06-01

    New arguments supporting the reality of large-scale fluctuations in the density of the visible matter in deep galaxy surveys are presented. A statistical analysis of the radial distributions of galaxies in the COSMOS and HDF-N deep fields is presented. Independent spectral and photometric surveys exist for each field, carried out in different wavelength ranges and using different observing methods. Catalogs of photometric redshifts in the optical (COSMOS-Zphot) and infrared (UltraVISTA) were used for the COSMOS field in the redshift interval 0.1 < z < 3.5, as well as the zCOSMOS (10kZ) spectroscopic survey and the XMM-COSMOS and ALHAMBRA-F4 photometric redshift surveys. The HDFN-Zphot and ALHAMBRA-F5 catalogs of photometric redshifts were used for the HDF-N field. The Pearson correlation coefficient for the fluctuations in the numbers of galaxies obtained for independent surveys of the same deep field reaches R = 0.70 ± 0.16. The presence of this positive correlation supports the reality of fluctuations in the density of visible matter with sizes of up to 1000 Mpc and amplitudes of up to 20% at redshifts z ~ 2. The absence of correlations between the fluctuations in different fields (the correlation coefficient between COSMOS and HDF-N is R = -0.20 ± 0.31) testifies to the independence of structures visible in different directions on the celestial sphere. This also indicates an absence of any influence from universal systematic errors (such as "spectral voids"), which could imitate the detection of correlated structures.
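
    The key statistic above is the Pearson correlation between radial galaxy-count fluctuations from two independent surveys of the same field. A self-contained sketch, with invented bin-count fluctuations standing in for the survey data:

```python
import math

# Pearson correlation coefficient between two equal-length series,
# e.g. galaxy number-density fluctuations in matching redshift bins.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented fluctuation series for two independent surveys of one field;
# similar structures in both surveys yield a strong positive correlation.
photo = [0.12, -0.05, 0.20, -0.10, 0.08]
spec = [0.10, -0.02, 0.18, -0.12, 0.05]
print(round(pearson_r(photo, spec), 2))
```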

  4. Common genetic factors among sexual orientation, gender nonconformity, and number of sex partners in female twins: implications for the evolution of homosexuality.

    PubMed

    Burri, Andrea; Spector, Tim; Rahman, Qazi

    2015-04-01

    Homosexuality is a stable population-level trait in humans that lowers direct fitness and yet is substantially heritable, resulting in a so-called Darwinian "paradox." Evolutionary models have proposed that polymorphic genes influencing homosexuality confer a reproductive benefit to heterosexual carriers, thus offsetting the fitness costs associated with persistent homosexuality. This benefit may consist of a "sex typicality" intermediate phenotype. However, there are few empirical tests of this hypothesis using genetically informative data in humans. This study aimed to test the hypothesis that common genetic factors can explain the association between measures of sex typicality, mating success, and homosexuality in a Western (British) sample of female twins. Here, we used data from 996 female twins (498 twin pairs) comprising 242 full dizygotic pairs and 256 full monozygotic pairs (mean age 56.8) and 1,555 individuals whose co-twin did not participate. Measures of sexual orientation, sex typicality (recalled childhood gender nonconformity), and mating success (number of lifetime sexual partners) were completed. Variables were subject to multivariate variance component analysis. We found that masculine women are more likely to be nonheterosexual, report more sexual partners, and, when heterosexual, also report more sexual partners. Multivariate twin modeling showed that common genetic factors explained the relationship between sexual orientation, sex typicality, and mating success through a shared latent factor. Our findings suggest that genetic factors responsible for nonheterosexuality are shared with genetic factors responsible for the number of lifetime sexual partners via a latent sex typicality phenotype in human females. These results may have implications for evolutionary models of homosexuality but are limited by potential mediating variables (such as personality traits) and measurement issues. © 2015 International Society for Sexual Medicine.

  5. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    NASA Astrophysics Data System (ADS)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  6. Simian Virus 40 Large T Antigen Interacts with Human TFIIB-Related Factor and Small Nuclear RNA-Activating Protein Complex for Transcriptional Activation of TATA-Containing Polymerase III Promoters

    PubMed Central

    Damania, Blossom; Mital, Renu; Alwine, James C.

    1998-01-01

    The TATA-binding protein (TBP) is common to the basal transcription factors of all three RNA polymerases, being associated with polymerase-specific TBP-associated factors (TAFs). Simian virus 40 large T antigen has previously been shown to interact with the TBP-TAFII complexes, TFIID (B. Damania and J. C. Alwine, Genes Dev. 10:1369–1381, 1996), and the TBP-TAFI complex, SL1 (W. Zhai, J. Tuan, and L. Comai, Genes Dev. 11:1605–1617, 1997), and in both cases these interactions are critical for transcriptional activation. We show a similar mechanism for activation of the class 3 polymerase III (pol III) promoter for the U6 RNA gene. Large T antigen can activate this promoter, which contains a TATA box and an upstream proximal sequence element but cannot activate the TATA-less, intragenic VAI promoter (a class 2, pol III promoter). Mutants of large T antigen that cannot activate pol II promoters also fail to activate the U6 promoter. We provide evidence that large T antigen can interact with the TBP-containing pol III transcription factor human TFIIB-related factor (hBRF), as well as with at least two of the three TAFs in the pol III-specific small nuclear RNA-activating protein complex (SNAPc). In addition, we demonstrate that large T antigen can cofractionate and coimmunoprecipitate with the hBRF-containing complex TFIIIB derived from HeLa cells infected with a recombinant adenovirus which expresses large T antigen. Hence, similar to its function with pol I and pol II promoters, large T antigen interacts with TBP-containing, basal pol III transcription factors and appears to perform a TAF-like function. PMID:9488448

  7. Determining the optimal number of independent components for reproducible transcriptomic data analysis.

    PubMed

    Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei

    2017-09-11

    Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
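
    The stability ranking underlying MSTD can be sketched as follows: components from a reference ICA run are scored by their best absolute correlation with the components of every other run. This is a simplified stand-in for the authors' protocol; real use would plug in component matrices produced by an ICA implementation such as FastICA, and the toy runs below are invented:

```python
import numpy as np

# Score each component of a reference run by its reproducibility: the best
# |correlation| with any component of each other run, averaged over runs.
# Higher scores mean more stable (and, per the abstract, more reproducible)
# components.
def stability_scores(runs):
    ref = runs[0]
    scores = []
    for comp in ref:
        per_run = [max(abs(np.corrcoef(comp, other)[0, 1]) for other in run)
                   for run in runs[1:]]
        scores.append(float(np.mean(per_run)))
    return scores

# Toy check: a second run identical to the first up to sign flips and
# component order, which ICA cannot distinguish; all scores should be ~1.0.
rng = np.random.default_rng(0)
run_a = rng.standard_normal((3, 100))  # 3 components x 100 genes
run_b = -run_a[::-1]                   # flipped signs, reversed order
print(stability_scores([run_a, run_b]))
```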

  8. Modeling of Individual and Organizational Factors Affecting Traumatic Occupational Injuries Based on the Structural Equation Modeling: A Case Study in Large Construction Industries

    PubMed Central

    Mohammadfam, Iraj; Soltanzadeh, Ahmad; Moghimbeigi, Abbas; Akbarzadeh, Mehdi

    2016-01-01

    Background Individual and organizational factors influence traumatic occupational injuries. Objectives The aim of the present study was a short path analysis of the severity of occupational injuries based on individual and organizational factors. Materials and Methods This cross-sectional analytical study covered traumatic occupational injuries over a ten-year timeframe in 13 large Iranian construction industries. Modeling and data analysis were done using the structural equation modeling (SEM) approach and the IBM SPSS AMOS statistical software version 22.0, respectively. Results The mean age and working experience of the injured workers were 28.03 ± 5.33 and 4.53 ± 3.82 years, respectively. Construction and installation activities accounted for 64.4% and 18.1% of traumatic occupational injuries, respectively. The SEM findings showed that individual, organizational, and accident-type factors had significant effects on the severity of occupational injuries (P < 0.05). Conclusions Path analysis of occupational injuries based on SEM reveals that individual and organizational factors and their indicator variables strongly influence the severity of traumatic occupational injuries, so they should be considered in order to reduce the severity of occupational accidents in large construction industries. PMID:27800465

  9. Drosophila mitochondrial transcription factor B1 modulates mitochondrial translation but not transcription or DNA copy number in Schneider cells.

    PubMed

    Matsushima, Yuichi; Adán, Cristina; Garesse, Rafael; Kaguni, Laurie S

    2005-04-29

    We report the cloning and molecular analysis of Drosophila mitochondrial transcription factor (d-mtTF) B1. An RNA interference (RNAi) construct was designed that reduces expression of d-mtTFB1 to 5% of its normal level in Schneider cells. In striking contrast with our previous study on d-mtTFB2, we found that RNAi knock-down of d-mtTFB1 does not change the abundance of specific mitochondrial RNA transcripts, nor does it affect the copy number of mitochondrial DNA. In a corollary manner, overexpression of d-mtTFB1 did not increase either the abundance of mitochondrial RNA transcripts or mitochondrial DNA copy number. Our data suggest that, unlike d-mtTFB2, d-mtTFB1 does not have a critical role in either transcription or regulation of the copy number of mitochondrial DNA. Instead, because we found that RNAi knockdown of d-mtTFB1 reduces mitochondrial protein synthesis, we propose that it serves its primary role in modulating translation. Our work represents the first study to document the role of mtTFB1 in vivo and establishes clearly functional differences between mtTFB1 and mtTFB2.

  10. Factors affecting wood energy consumption by U.S. households

    Treesearch

    Nianfu Song; Francisco X. Aguilar; Stephen R. Shifley; Michael E. Goerndt

    2012-01-01

    About 23% of energy derived from woody sources in the U.S. was consumed by households, of which 70% was used by households in rural areas in 2005. We investigated factors affecting household-level wood energy consumption in the four continental U.S. regions using data from the U.S. Residential Energy Consumption Survey. To account for a large number of zero...

  11. Genome-wide analysis of CNV (copy number variation) and their associations with narcolepsy in a Japanese population.

    PubMed

    Yamasaki, Maria; Miyagawa, Taku; Toyoda, Hiromi; Khor, Seik-Soon; Koike, Asako; Nitta, Aino; Akiyama, Kumi; Sasaki, Tsukasa; Honda, Yutaka; Honda, Makoto; Tokunaga, Katsushi

    2014-05-01

    In humans, narcolepsy with cataplexy (narcolepsy) is a sleep disorder that is characterized by sleepiness, cataplexy and rapid eye movement (REM) sleep abnormalities. Narcolepsy is caused by a reduction in the number of neurons that produce hypocretin (orexin) neuropeptide. Both genetic and environmental factors contribute to the development of narcolepsy. Rare and large copy number variations (CNVs) reportedly play a role in the etiology of a number of neuropsychiatric disorders. Narcolepsy is considered a neurological disorder; therefore, we sought to investigate any possible association between rare and large CNVs and human narcolepsy. We used DNA microarray data and a CNV detection software application, PennCNV-Affy, to detect CNVs in 426 Japanese narcoleptic patients and 562 healthy individuals. Overall, we found a significant enrichment of rare and large CNVs (frequency ≤1%, size ≥100 kb) in the patients (case-control ratio of CNV count = 1.54, P = 5.00 × 10^-4). Next, we extended a region-based association analysis by including CNVs with sizes ≥30 kb. Rare and large CNVs in the PARK2 region showed a significant association with narcolepsy. Four patients were assessed to carry duplications of the gene region, whereas no controls carried the duplication, which was further confirmed by quantitative PCR assay. This duplication was also found in 2 essential hypersomnia (EHS) patients out of 171 patients. Furthermore, a pathway analysis revealed enrichments of gene disruptions by rare and large CNVs in immune response, acetyltransferase activity, cell cycle regulation and regulation of cell development. This study constitutes the first report on the risk association between multiple rare and large CNVs and the pathogenesis of narcolepsy. In the future, replication studies are needed to confirm the associations.

  12. Impact Factors Show Increased Use of AGU Journals in 2008

    NASA Astrophysics Data System (ADS)

    Ford, Barbara Meyers

    2009-07-01

    The latest numbers released from Journal Citation Reports (JCR), published annually by Thomson Reuters, show large increases in the impact factor (IF) for several AGU journals. IFs are one way for publishers to know that readers have found their journals useful and of value in research. A journal's IF is calculated by taking the total number of citations to articles published by a given journal in the past 2 years and dividing it by the total number of papers published by the journal in the same time period. More generally, it can be seen as the frequency with which articles in a journal have been cited over the past year. The numbers speak for themselves (see Table 1).
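
    The JCR definition quoted above reduces to a single division. A sketch with a hypothetical journal's figures (the counts are invented for illustration):

```python
# Impact factor per the JCR definition: citations in a given year to items
# published in the previous two years, divided by the number of citable
# items published in those two years.
def impact_factor(citations_prev_two_years, items_prev_two_years):
    return citations_prev_two_years / items_prev_two_years

# Hypothetical journal: 1200 citations in 2008 to its 2006-2007 articles,
# of which there were 400.
print(impact_factor(1200, 400))  # -> 3.0
```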

  13. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    NASA Technical Reports Server (NTRS)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  14. Desired Numbers of Children, Fertility Preferences and Related Factors among Couples Who Referred to Pre-Marriage Counseling in Alborz Province, Iran.

    PubMed

    Lotfi, Razieh; Rajabi Naeeni, Masoumeh; Rezaei, Nasrin; Farid, Malihe; Tizvir, Afsoon

    2017-10-01

    The Islamic Republic of Iran has experienced a dramatic decrease in fertility rates in the past three decades. One of the main issues in the field of fertility is couples' preferences and their desire to bear children. This study aimed to determine the desired number of children, fertility preferences, and related factors among people referred for pre-marriage counseling, in order to clarify their presumed fertility behavior. This was a descriptive analytic cross-sectional survey conducted over 8 months. The participants were 300 couples who came to pre-marriage counseling centers at two health centers of Karaj and were asked to complete a 22-item questionnaire covering demographic characteristics, participants' interests, fertility preferences, and economic situation. The majority of the males were between 20 and 30 years of age (66.6%), while the majority of the females were below 25 years of age (57%). About 17 percent of men and 22.3 percent of women stated that they wanted to have 1 child, and 52.7 percent of both men and women wanted to have 2 children. The only factor that contributed to the female participants' decision on a desirable number of children was the number of siblings they have. In male participants, with increasing age at marriage and aspiration for a higher educational level, the time interval between marriage and the birth of the first child increased. There was convergence in the desired number of children between male and female participants. The majority of the participants expressed a desire to have only one or two children in the future, but considering that what one desires does not always come into reality, the risk of reduced fertility is generally present in the community. Appropriate policies should be implemented in order to create a favorable environment for children. Copyright© by Royan Institute. All rights reserved.

  15. [Analysis of factors related to the number of mesenchymal stem cells derived from synovial fluid of the temporomandibular joint].

    PubMed

    Sun, Y P; Zheng, Y H; Zhang, Z G

    2017-06-09

    Objective: To analyze factors related to the number of mesenchymal stem cells in the synovial fluid of the temporomandibular joint (TMJ) and provide a research basis for understanding the source and biological role of mesenchymal stem cells derived from synovial fluid in the TMJ. Methods: One hundred and twenty-two synovial fluid samples from 91 temporomandibular disorders (TMD) patients who visited the Department of TMJ Center, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University from March 2013 to December 2013 were collected in this study, together with 6 TMJ synovial fluid samples from 6 normal volunteers studying at the North Campus of Sun Yat-sen University, along with their clinical information. The relation between the number of mesenchymal stem cells derived from synovial fluid and the health status of the joint, age of donor, disc perforation, condylar bony destruction, presence of blood, and visual analogue scale score of pain was investigated using the Mann-Whitney U test and Spearman rank correlation test. Results: The number of mesenchymal stem cells derived from synovial fluid had no significant relation with visual analogue scale score of pain (r = 0.041, P = 0.672), presence of blood (P = 0.063), or condylar bony destruction (P = 0.371). The linear correlation between the number of mesenchymal stem cells derived from synovial fluid and age of donor was very weak (r = 0.186, P = 0.043). The number of mesenchymal stem cells was up-regulated when the joint was in a diseased state (P = 0.001). The disc perforation group had more mesenchymal stem cells in synovial fluid than the group without disc perforation (P = 0.042). Conclusions: The number of mesenchymal stem cells derived from synovial fluid in the TMJ has no correlation with peripheral blood circulation or condylar bony destruction, but is closely related to soft tissue structural damage of the joint.

  16. Diabetic foot complications and their risk factors from a large retrospective cohort study.

    PubMed

    Al-Rubeaan, Khalid; Al Derwish, Mohammad; Ouizi, Samir; Youssef, Amira M; Subhani, Shazia N; Ibrahim, Heba M; Alamri, Bader N

    2015-01-01

    Foot complications are considered to be a serious consequence of diabetes mellitus, posing a major medical and economical threat. Identifying the extent of this problem and its risk factors will enable health providers to set up better prevention programs. The Saudi National Diabetes Registry (SNDR), being a large database source, is the best tool to evaluate this problem. This is a cross-sectional study of a cohort of 62,681 patients aged ≥ 25 years from the SNDR database, selected for studying foot complications associated with diabetes and related risk factors. The overall prevalence of diabetic foot complications was 3.3% with a 95% confidence interval (95% CI) of (3.16%-3.44%), whilst the prevalences of foot ulcer, gangrene, and amputations were 2.05% (1.94%-2.16%), 0.19% (0.16%-0.22%), and 1.06% (0.98%-1.14%), respectively. The prevalence of foot complications increased with age and diabetes duration predominantly amongst the male patients. Diabetic foot complications are more commonly seen among type 2 patients, although their prevalence is higher among type 1 diabetic patients. Univariate analysis showed Charcot joints, peripheral vascular disease (PVD), neuropathy, diabetes duration ≥ 10 years, insulin use, retinopathy, nephropathy, age ≥ 45 years, cerebral vascular disease (CVD), poor glycemic control, coronary artery disease (CAD), male gender, smoking, and hypertension to be significant risk factors with odds ratios and 95% CIs of 42.53 (18.16-99.62), 14.47 (8.99-23.31), 12.06 (10.54-13.80), 7.22 (6.10-8.55), 4.69 (4.28-5.14), 4.45 (4.05-4.89), 2.88 (2.43-3.40), 2.81 (2.31-3.43), 2.24 (1.98-2.45), 2.02 (1.84-2.22), 1.54 (1.29-1.83), and 1.51 (1.38-1.65), respectively. Risk factors for diabetic foot complications are highly prevalent; they have put these complications at a higher rate and warrant primary and secondary prevention programs to minimize morbidity and mortality in addition to the economic impact of the complications. Other measurements, such as decompression of
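
    Univariate odds ratios with 95% CIs of the kind reported above are conventionally computed from a 2×2 table using the log-OR standard error. A generic sketch with invented counts (not the registry's data):

```python
import math

# Odds ratio and 95% CI from a 2x2 table:
#                outcome+  outcome-
#   exposed         a         b
#   unexposed       c         d
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 40/160 exposed vs 20/180 unexposed with the outcome.
print(odds_ratio_ci(40, 160, 20, 180))  # OR = 2.25 with its 95% CI
```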

  17. Data-Driven Identification of Risk Factors of Patient Satisfaction at a Large Urban Academic Medical Center.

    PubMed

    Li, Li; Lee, Nathan J; Glicksberg, Benjamin S; Radbill, Brian D; Dudley, Joel T

    2016-01-01

    The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is the first publicly reported nationwide survey to evaluate and compare hospitals. Increasing patient satisfaction is an important goal as it aims to achieve a more effective and efficient healthcare delivery system. In this study, we develop and apply an integrative, data-driven approach to identify clinical risk factors that associate with patient satisfaction outcomes. We included 1,771 unique adult patients who completed the HCAHPS survey and were discharged from the inpatient Medicine service from 2010 to 2012. We collected 266 clinical features including patient demographics, lab measurements, medications, disease categories, and procedures. We developed and applied a data-driven approach to identify risk factors that associate with patient satisfaction outcomes. We identified 102 significant risk factors associating with 18 surveyed questions. The most significantly recurrent clinical risk factors were: self-evaluation of health, education level, Asian or White race, treatment in the BMT oncology division, and being prescribed a new medication. Patients who were prescribed pregabalin were less satisfied, particularly in relation to communication with nurses and pain management. Explanation of medication usage was associated with communication with nurses (q = 0.001); however, explanation of medication side effects was associated with communication with doctors (q = 0.003). Overall hospital rating was associated with hospital environment, communication with doctors, and communication about medicines. However, patient likelihood to recommend the hospital was associated with hospital environment, communication about medicines, pain management, and communication with nurses. Our study identified a number of putatively novel clinical risk factors for patient satisfaction that suggest new opportunities to better understand and manage patient satisfaction. Hospitals can use a data-driven approach to
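
    The q-values quoted above are typically Benjamini-Hochberg adjusted p-values, which control the false discovery rate across the many tested associations. A minimal sketch of that adjustment (an illustration, not the authors' exact pipeline):

```python
# Benjamini-Hochberg adjusted p-values ("q-values"): scale each sorted
# p-value by m/rank, then enforce monotonicity from the largest rank down.
def benjamini_hochberg(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        q[i] = prev
    return q

print(benjamini_hochberg([0.001, 0.02, 0.03, 0.5]))
# -> approximately [0.004, 0.04, 0.04, 0.5]
```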

  18. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    NASA Astrophysics Data System (ADS)

    Kar, Leow Soo

    2014-07-01

Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of day (morning, afternoon and evening) and with the day of the week (weekdays or weekends), as do the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact) and choice of checkout counters (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block, where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
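The paper's model is built in SAS Simulation Studio, but the resource-pool/server structure it describes can be sketched as a toy discrete-event simulation in ordinary Python. In the sketch below, parking bays are a counted resource and the checkouts are parallel servers; all rates, distributions and parameter values are invented placeholders, not the paper's empirical data.

```python
import heapq
import random

def simulate(n_bays, n_checkouts, arrival_rate=1.0, shop_mean=30.0,
             service_mean=2.0, horizon=480.0, seed=0):
    """Toy discrete-event model: parking bays as a counted resource,
    checkouts as parallel servers.  Returns the number of cars that
    balked (no bay free) and the mean wait at the checkouts."""
    rng = random.Random(seed)
    events, seq = [], 0

    def push(t, kind):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind))
        seq += 1

    push(rng.expovariate(arrival_rate), "arrive")
    free_bays, free_tills = n_bays, n_checkouts
    queue = []                      # arrival times at the checkout queue
    balked, waits = 0, []

    while events:
        t, _, kind = heapq.heappop(events)
        if kind == "arrive":
            if t <= horizon:        # arrivals stop at closing time
                push(t + rng.expovariate(arrival_rate), "arrive")
                if free_bays == 0:
                    balked += 1     # car leaves: no parking bay free
                else:
                    free_bays -= 1  # seize a bay, then go shopping
                    push(t + rng.expovariate(1.0 / shop_mean), "to_checkout")
        elif kind == "to_checkout":
            if free_tills > 0:
                free_tills -= 1
                waits.append(0.0)
                push(t + rng.expovariate(1.0 / service_mean), "done")
            else:
                queue.append(t)
        else:                       # "done": customer exits, releases bay
            free_bays += 1
            if queue:
                waits.append(t - queue.pop(0))
                push(t + rng.expovariate(1.0 / service_mean), "done")
            else:
                free_tills += 1
    return balked, (sum(waits) / len(waits) if waits else 0.0)
```

Sweeping `n_bays` and `n_checkouts` over a grid and comparing the balking count and mean wait mirrors, in miniature, the sensitivity analysis described above.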

  19. Large Data at Small Universities: Astronomical processing using a computer classroom

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the university, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
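The abstract distributes work across networked lab machines; a minimal single-machine analogue of the "embarrassingly parallel" pattern can be written with Python's standard multiprocessing module. The `photometry` function here is an invented stand-in for a real per-image routine.

```python
from multiprocessing import Pool

def photometry(tile):
    """Stand-in for a per-image analysis routine: here it just sums the
    pixel values above a threshold (a real routine would extract sources)."""
    return sum(p for p in tile if p > 10)

def process_tiles(tiles, workers=2):
    # Each tile is independent, so the pool can map tiles onto worker
    # processes with no inter-task communication at all.
    with Pool(workers) as pool:
        return pool.map(photometry, tiles)

if __name__ == "__main__":
    tiles = [[1, 20, 30], [5, 15], [40]]
    print(process_tiles(tiles))  # [50, 15, 40]
```

Because the tasks share no state, the same map-over-independent-inputs structure scales from one machine's cores to many networked machines with almost no code change.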

  20. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    PubMed Central

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  1. Factors Predictive of Healing in Large Rotator Cuff Tears: Is It Possible to Predict Retear Preoperatively?

    PubMed

    Jeong, Ho Yeon; Kim, Hwan Jin; Jeon, Yoon Sang; Rhee, Yong Girl

    2018-03-01

Many studies have identified risk factors that cause retear after rotator cuff repair. However, it is still questionable whether retears can be predicted preoperatively. To determine the risk factors related to retear after arthroscopic rotator cuff repair and to evaluate whether it is possible to predict the occurrence of retear preoperatively. Case-control study; Level of evidence, 3. This study enrolled 112 patients who underwent arthroscopic rotator cuff repair with single-row technique for a large-sized tear, defined as a tear with a mediolateral length of 3 to 5 cm. All patients underwent routine magnetic resonance imaging (MRI) at 9 months postoperatively to assess tendon integrity. The sample included 61 patients (54.5%) in the healed group and 51 (45.5%) in the retear group. In multivariate analysis, the independent predictors of retears were supraspinatus muscle atrophy (P < .001) and fatty infiltration of the infraspinatus (P = .027), which could be preoperatively measured by MRI. A significant difference was found between the two groups in sex, the acromiohumeral interval, tendon tension, and preoperative or intraoperative mediolateral tear length and musculotendinous junction position in univariate analysis. However, these variables were not independent predictors in multivariate analysis. The cutoff values of the occupation ratio of supraspinatus and fatty infiltration of the infraspinatus were 43% and grade 2, respectively. An occupation ratio of supraspinatus <43% and grade ≥2 fatty infiltration of the infraspinatus were the strongest predictors of retear, with an area under the curve of 0.908, sensitivity of 98.0%, and specificity of 83.6% (accuracy = 90.2%). In patients with large rotator cuff tears, it was possible to predict retear before rotator cuff repair regardless of intraoperative factors. The retear could be predicted most effectively when the occupation ratio of supraspinatus was <43% or the fatty infiltration of infraspinatus was

  2. Source apportionment of stack emissions from research and development facilities using positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Ballinger, Marcel Y.; Larson, Timothy V.

    2014-12-01

    Research and development (R&D) facility emissions are difficult to characterize due to their variable processes, changing nature of research, and large number of chemicals. Positive matrix factorization (PMF) was applied to volatile organic compound (VOC) concentrations measured in the main exhaust stacks of four different R&D buildings to identify the number and composition of major contributing sources. PMF identified between 9 and 11 source-related factors contributing to stack emissions, depending on the building. Similar factors between buildings were major contributors to trichloroethylene (TCE), acetone, and ethanol emissions; other factors had similar profiles for two or more buildings but not all four. At least one factor for each building was identified that contained a broad mix of many species and constraints were used in PMF to modify the factors to resemble more closely the off-shift concentration profiles. PMF accepted the constraints with little decrease in model fit.
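Positive matrix factorization is closely related to non-negative matrix factorization (NMF). The sketch below illustrates the shared core idea with unweighted Lee-Seung multiplicative updates on synthetic data; real PMF additionally weights residuals by per-measurement uncertainties, which this simplified version omits.

```python
import numpy as np

def nmf(X, k, iters=1000, seed=0):
    """Factor a non-negative data matrix X (samples x species) as X ≈ G @ F
    using Lee-Seung multiplicative updates.  G holds factor contributions,
    F the factor (source) profiles.  Unlike PMF, residuals are unweighted."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 1e-3
    F = rng.random((k, m)) + 1e-3
    eps = 1e-9                      # guards against division by zero
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Synthetic check: mix two known "source" profiles into six observations.
true_F = np.array([[1.0, 0.0, 2.0],
                   [0.0, 3.0, 1.0]])
true_G = np.random.default_rng(1).random((6, 2))
X = true_G @ true_F
G, F = nmf(X, 2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)  # small reconstruction error
```

The multiplicative updates keep both factors non-negative by construction, which is what lets the recovered profiles be read as chemically meaningful source compositions.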

  3. Small on the left, large on the right: numbers orient visual attention onto space in preverbal infants.

    PubMed

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-05-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical cues are critical in orienting infants' visual attention towards a peripheral region of space that is congruent with the number's relative position on a left-to-right oriented representational continuum. This finding provides the first direct evidence that, in humans, the association between numbers and oriented spatial codes occurs before the acquisition of symbols or exposure to formal education, suggesting that the number line is not merely a product of human invention. © 2015 John Wiley & Sons Ltd.

  4. Large-scale production and properties of human plasma-derived activated Factor VII concentrate.

    PubMed

    Tomokiyo, K; Yano, H; Imamura, M; Nakano, Y; Nakagaki, T; Ogata, Y; Terano, T; Miyamoto, S; Funatsu, A

    2003-01-01

An activated Factor VII (FVIIa) concentrate, prepared from human plasma on a large scale, has to date not been available for clinical use for haemophiliacs with antibodies against FVIII and FIX. In the present study, we attempted to establish a large-scale manufacturing process to obtain plasma-derived FVIIa concentrate with high recovery and safety, and to characterize its biochemical and biological properties. FVII was purified from human cryoprecipitate-poor plasma by a combination of anion exchange and immunoaffinity chromatography, using a Ca2+-dependent anti-FVII monoclonal antibody. To activate FVII, a FVII preparation that was nanofiltered using a Bemberg Microporous Membrane-15 nm was partially converted to FVIIa by autoactivation on an anion-exchange resin. The residual FVII in the FVII and FVIIa mixture was completely activated by further incubating the mixture in the presence of Ca2+ for 18 h at 10 degrees C, without any additional activators. For preparation of the FVIIa concentrate, after dialysis of FVIIa against 20 mM citrate, pH 6.9, containing 13 mM glycine and 240 mM NaCl, the FVIIa preparation was supplemented with 2.5% human albumin (which was first pasteurized at 60 degrees C for 10 h) and lyophilized in vials. To inactivate viruses contaminating the FVIIa concentrate, the lyophilized product was further heated at 65 degrees C for 96 h in a water bath. Total recovery of FVII from 15 000 l of plasma was approximately 40%, and the FVII preparation was fully converted to FVIIa with trace amounts of degraded products (FVIIabeta and FVIIagamma). The specific activity of the FVIIa was approximately 40 U/µg. Furthermore, virus-spiking tests demonstrated that immunoaffinity chromatography, nanofiltration and dry-heating effectively removed and inactivated the spiked viruses in the FVIIa. These results indicated that the FVIIa concentrate had both high specific activity and safety.
We established a large-scale manufacturing process of human plasma

  5. Number Line Estimation: The Use of Number Line Magnitude Estimation to Detect the Presence of Math Disability in Postsecondary Students

    ERIC Educational Resources Information Center

    McDonald, Steven A.

    2010-01-01

    This study arose from an interest in the possible presence of mathematics disabilities among students enrolled in the developmental math program at a large university in the Mid-Atlantic region. Research in mathematics learning disabilities (MLD) has included a focus on the construct of working memory and number sense. A component of number sense…

  6. Improving the Inventory of Large Lunar Basins: Using Lola Data to Test Previous Candidates and Search for New Ones

    NASA Technical Reports Server (NTRS)

    Frey, Herbert V.; Meyer, H. M.

    2012-01-01

Topography and crustal thickness data from LOLA altimetry were used to test the validity of 98 candidate large lunar basins derived from photogeologic and earlier topographic and crustal thickness data, and to search for possible new candidates. We eliminate 23 previous candidates but find good evidence for 20 new candidates. The number of basins > 300 km in diameter on the Moon is almost certainly a factor of 2 (perhaps 3) larger than the number of named features having basin-like topography.

  7. Multiply to conquer: Copy number variations at Ppd-B1 and Vrn-A1 facilitate global adaptation in wheat.

    PubMed

    Würschum, Tobias; Boeven, Philipp H G; Langer, Simon M; Longin, C Friedrich H; Leiser, Willmar L

    2015-07-29

Copy number variation has been found to be a frequent type of DNA polymorphism in the human genome, often associated with disease, but its importance in crops and its effects on agronomic traits are still largely unknown. Here, we employed a large worldwide panel of 1110 winter wheat varieties to assess the frequency and the geographic distribution of copy number variants at the Photoperiod-B1 (Ppd-B1) and Vernalization-A1 (Vrn-A1) loci, as well as their effects on flowering time under field conditions. We identified a novel four-copy variant of Vrn-A1 and, based on the phylogenetic relationships among the lines, show that the higher copy variants at both loci are likely to have arisen independently multiple times. In addition, we found that the frequency of the different copy number variants at both loci reflects the environmental conditions in the varieties' region of origin and, based on multi-location field trials, show that Ppd-B1 copy number has a substantial effect on the fine-tuning of flowering time. In conclusion, our results show the importance of copy number variation at Ppd-B1 and Vrn-A1 for the global adaptation of wheat, making it a key factor in wheat's success in a broad range of environments, and in a wider context substantiate the significant role of copy number variation in crops.

  8. Number, position, and significance of the pseudouridines in the large subunit ribosomal RNA of Haloarcula marismortui and Deinococcus radiodurans

    PubMed Central

    DEL CAMPO, MARK; RECINOS, CLAUDIA; YANEZ, GISCARD; POMERANTZ, STEVEN C.; GUYMON, REBECCA; CRAIN, PAMELA F.; MCCLOSKEY, JAMES A.; OFENGAND, JAMES

    2005-01-01

The number and position of the pseudouridines of Haloarcula marismortui and Deinococcus radiodurans large subunit RNA have been determined by a combination of total nucleoside analysis by HPLC-mass spectrometry and pseudouridine sequencing by the reverse transcriptase method and by LC/MS/MS. Three pseudouridines were found in H. marismortui, located at positions 1956, 1958, and 2621, corresponding to Escherichia coli positions 1915, 1917, and 2586, respectively. The three pseudouridines are all in locations found in other organisms. Previous reports of a larger number of pseudouridines in this organism were incorrect. Three pseudouridines and one 3-methyl pseudouridine (m3Ψ) were found in D. radiodurans 23S RNA at positions 1894, 1898 (m3Ψ), 1900, and 2584, the m3Ψ site being determined by a novel application of mass spectrometry. These positions correspond to E. coli positions 1911, 1915, 1917, and 2605, which are also pseudouridines in E. coli (1915 is m3Ψ). The pseudouridines in the helix 69 loop, residues 1911, 1915, and 1917, are in positions highly conserved among all phyla. Pseudouridine 2584 in D. radiodurans is conserved in eubacteria and a chloroplast but is not found in archaea or eukaryotes, whereas pseudouridine 2621 in H. marismortui is more conserved in eukaryotes and is not found in eubacteria. All the pseudouridines are near, but not exactly at, nucleotides directly involved in various aspects of ribosome function. In addition, two D. radiodurans Ψ synthases responsible for the four Ψ were identified. PMID:15659360

  9. Experimental determination of Ramsey numbers.

    PubMed

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.

  10. Experimental Determination of Ramsey Numbers

    NASA Astrophysics Data System (ADS)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Clark, Lane; Gaitan, Frank

    2013-09-01

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
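These records compute Ramsey numbers by adiabatic quantum evolution; the smallest cases can also be verified classically by exhaustive search. The sketch below (not the papers' algorithm) confirms R(3,3) = 6 by checking every 2-coloring of the edges of K5 and K6 for a monochromatic triangle.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """True if some triangle of K_n has all three edges the same color."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    """Exhaustively test all 2-colorings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

# R(3,3) = 6: K_5 admits a triangle-free 2-coloring, K_6 does not.
print(every_coloring_has_mono_triangle(5))  # False
print(every_coloring_has_mono_triangle(6))  # True
```

The explosive growth the abstract mentions is visible here: K6 already has 2^15 = 32,768 colorings, and the search space roughly doubles with every added edge, which is why brute force fails almost immediately beyond the smallest cases.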

  11. A wide angle and high Mach number parabolic equation.

    PubMed

    Lingevitch, Joseph F; Collins, Michael D; Dacol, Dalcio K; Drob, Douglas P; Rogers, Joel C W; Siegmann, William L

    2002-02-01

    Various parabolic equations for advected acoustic waves have been derived based on the assumptions of small Mach number and narrow propagation angles, which are of limited validity in atmospheric acoustics. A parabolic equation solution that does not require these assumptions is derived in the weak shear limit, which is appropriate for frequencies of about 0.1 Hz and above for atmospheric acoustics. When the variables are scaled appropriately in this limit, terms involving derivatives of the sound speed, density, and wind speed are small but can have significant cumulative effects. To obtain a solution that is valid at large distances from the source, it is necessary to account for linear terms in the first derivatives of these quantities [A. D. Pierce, J. Acoust. Soc. Am. 87, 2292-2299 (1990)]. This approach is used to obtain a scalar wave equation for advected waves. Since this equation contains two depth operators that do not commute with each other, it does not readily factor into outgoing and incoming solutions. An approximate factorization is obtained that is correct to first order in the commutator of the depth operators.

  12. Pleiotropic Stabilizing Selection Limits the Number of Polymorphic Loci to at Most the Number of Characters

    PubMed Central

    Hastings, A.; Hom, C. L.

    1989-01-01

    We demonstrate that, in a model incorporating weak Gaussian stabilizing selection on n additively determined characters, at most n loci are polymorphic at a stable equilibrium. The number of characters is defined to be the number of independent components in the Gaussian selection scheme. We also assume linkage equilibrium, and that either the number of loci is large enough that the phenotypic distribution in the population can be approximated as multivariate Gaussian or that selection is weak enough that the mean fitness of the population can be approximated using only the mean and the variance of the characters in the population. Our results appear to rule out antagonistic pleiotropy without epistasis as a major force in maintaining additive genetic variation in a uniform environment. However, they are consistent with the maintenance of variability by genotype-environment interaction if a trait in different environments corresponds to different characters and the number of different environments exceeds the number of polymorphic loci that affect the trait. PMID:2767424

  13. Tradespace investigation of strategic design factors for large space telescopes

    NASA Astrophysics Data System (ADS)

    Karlow, Brandon; Jewison, Christopher; Sternberg, David; Hall, Sherrie; Golkar, Alessandro

    2015-04-01

Future large telescope arrays require careful balancing of satisfaction across the stakeholders' community. Development programs usually cannot afford to explicitly address all stakeholder tradeoffs during the conceptual design stage, but rather confine the analysis to performance, cost, and schedule discussions, treating policy and budget as constraints defining the envelope of the investigation. Thus, it is of interest to develop an integrated stakeholder analysis approach that explicitly addresses the impact of all stakeholder interactions on the design of large telescope arrays intended to meet future science and exploration needs. This paper offers a quantitative approach for modeling some of the stakeholder influences relevant to large telescope array designs: the linkages between a given mission and the wider NASA community. The main goal of the analysis is to explore the tradespace of large telescope designs and understand the effects of different design decisions in the stakeholders' network. Proposed architectures that offer benefits to existing constellations of systems, institutions, and mission plans are expected to yield political and engineering benefits for NASA stakeholders' wider objectives. If such synergistic architectures are privileged in subsequent analysis, regions of the tradespace that better meet the needs of the wider NASA community can be selected for further development.

  14. Chaotic behaviour of high Mach number flows

    NASA Technical Reports Server (NTRS)

    Varvoglis, H.; Ghosh, S.

    1985-01-01

The stability of the super-Alfvenic flow of a two-fluid plasma model with respect to the Mach number and the angle between the flow direction and the magnetic field is investigated. It is found that, in general, a large-scale chaotic region develops around the initial equilibrium of the laminar flow when the Mach number exceeds a certain threshold value. After reaching a maximum, the size of this region begins shrinking and goes to zero as the Mach number tends to infinity. As a result, high Mach number flows in time-independent astrophysical plasmas may lead to the formation of 'quasi-shocks' in the presence of little or no dissipation.

  15. The number of operations required for completing breast reconstruction.

    PubMed

    Eom, Jin Sup; Kobayashi, Mark Robert; Paydar, Keyianoosh; Wirth, Garrett A; Evans, Gregory R D

    2014-10-01

Breast reconstruction often requires multiple surgeries, which demand additional expense and time and are often contrary to the patient's expectations. The aim of this study was to review the number of operations that were needed for completion of breast reconstruction and to determine the patient and clinical factors that influenced this number. We retrospectively reviewed the medical records of 254 cases of breast reconstruction (in 185 patients) that were performed between February 2005 and August 2009. We investigated the number of operations that were performed for each case of breast reconstruction and analyzed the influence of variable factors. The purpose of the additional operations was also analyzed. The mean number of operations per breast was 2.37 (range, 1-9). The mean number of operations for mound creation was 2.24. Factors associated with an increased number of operations were use of an implant, contralateral symmetrization, complications, and nipple reconstruction. Considering the reconstruction method, either the use of a primary implant or the use of free abdominal tissue transfer required fewer surgeries than the use of an expander implant, and the number of operations using free transverse rectus abdominis musculocutaneous or deep inferior epigastric perforator flaps was less than the number using pedicled transverse rectus abdominis musculocutaneous flaps. These data will aid in planning breast reconstruction surgery and will enable patients to be more informed regarding the likelihood of multiple surgeries.

  16. Incidence, risk factors and causes of death in an HIV care programme with a large proportion of injecting drug users.

    PubMed

    Spillane, Heidi; Nicholas, Sarala; Tang, Zhirong; Szumilin, Elisabeth; Balkan, Suna; Pujades-Rodriguez, Mar

    2012-10-01

    To identify factors influencing mortality in an HIV programme providing care to large numbers of injecting drug users (IDUs) and patients co-infected with hepatitis C (HCV). A longitudinal analysis of monitoring data from HIV-infected adults who started antiretroviral therapy (ART) between 2003 and 2009 was performed. Mortality and programme attrition rates within 2 years of ART initiation were estimated. Associations with individual-level factors were assessed with multivariable Cox and piece-wise Cox regression. A total of 1671 person-years of follow-up from 1014 individuals was analysed. Thirty-four percent of patients were women and 33% were current or ex-IDUs. 36.2% of patients (90.8% of IDUs) were co-infected with HCV. Two-year all-cause mortality rate was 5.4 per 100 person-years (95% CI, 4.4-6.7). Most HIV-related deaths occurred within 6 months of ART start (36, 67.9%), but only 5 (25.0%) non-HIV-related deaths were recorded during this period. Mortality was higher in older patients (HR = 2.50; 95% CI, 1.42-4.40 for ≥40 compared to 15-29 years), and in those with initial BMI < 18.5 kg/m(2) (HR = 3.38; 95% CI, 1.82-5.32), poor adherence to treatment (HR = 5.13; 95% CI, 2.47-10.65 during the second year of therapy), or low initial CD4 cell count (HR = 4.55; 95% CI, 1.54-13.41 for <100 compared to ≥100 cells/μl). Risk of death was not associated with IDU status (P = 0.38). Increased mortality was associated with late presentation of patients. In this programme, death rates were similar regardless of injection drug exposure, supporting the notion that satisfactory treatment outcomes can be achieved when comprehensive care is provided to these patients. © 2012 Blackwell Publishing Ltd.

  17. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high-speed civil transport plane (HSCT) is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field where the jet is nonlinear and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. Since the large-scale structure in the noise-producing initial region of the jet is wavelike in nature, the net radiated sound is the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach

  18. The contributions of non-numeric dimensions to number encoding, representations, and decision-making factors.

    PubMed

    Odic, Darko

    2017-01-01

Leibovich et al. suggest that congruency effects in number perception (biases towards smaller, denser, etc., dots) are evidence that number depends on these dimensions. I argue that they fail to differentiate between effects at three distinct levels of number perception - encoding, representations, and decision making - and that differentiating between these levels allows number to be independent of, but correlated with, non-numeric dimensions.

  19. Critical success factors for competitiveness of construction companies: A critical review

    NASA Astrophysics Data System (ADS)

    Hanafi, Abdul Ghafur; Nawi, Mohd Nasrun Mohd

    2016-08-01

Making progress is fundamentally an issue for construction companies trying to survive in a highly competitive industry. Industry players face stiff competition due to the large number of participants, both existing and new, from varied backgrounds and with varied track records. Furthermore, a large number of components decide the competitiveness of contractors, whose organizational structures and governance have become more complicated. Different construction companies have their own unique criteria, which may differ from one company to another. The enormous number of issues needs to be brought down to a manageable set so that measures can be identified and scrutinized to enhance competitiveness. This paper discusses the results of a critical investigation of past studies in Asian countries, namely China, India, Thailand, Singapore and Malaysia. Several fundamental factors have been identified as critical success factors (CSFs) for construction companies in each country. It also presents a critical survey of the literature on this subject, using CSFs as a yardstick to gauge the relationships among CSFs in construction companies across the Asian region. Comprehensive measurement of an organization's performance, and the resulting feedback to its management, is crucial for business improvement. Measurement also enables organizations to be compared with one another on the basis of standardized information, allowing best practices to be identified and applied more widely. Different countries have their own sets of CSFs, which may differ in priority while sharing common elements of success for construction companies. The study, which is exploratory in nature, adopted content analysis and an inductive approach to accomplish its objectives.

  20. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    NASA Astrophysics Data System (ADS)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (~3) to very large (~10^7) particle numbers. We use two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and of the accuracy of the mean-field equations over a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with modified Gross-Pitaevskii results, with correlated Hartree hypernetted-chain calculations (which also utilize two-body correlated basis functions), and with diffusion Monte Carlo results for hard-sphere interactions. We observe the effect of the attractive tail of the van der Waals potential on the one-body density, relative to the purely repulsive zero-range potential used in the Gross-Pitaevskii equation, and discuss finite-size effects. We also present the low-lying collective excitations, which are well described by a hydrodynamic model in the large-particle limit.

  1. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    PubMed

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  2. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. 
In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can
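    The screening comparison described above can be sketched in a few lines. The data below are invented for illustration (an XOR-like two-locus interaction with weak marginal effects); they are not the authors' simulation models, and the SNP coding and noise level are assumptions:

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 500, 20
# Hypothetical SNP genotypes coded 0/1/2 (minor-allele counts)
X = rng.integers(0, 3, size=(n, p))
# Two-locus interaction with weak marginal effects: risk when the carrier
# statuses of SNP 0 and SNP 1 "disagree" (XOR-like epistasis), plus 10% noise
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
y ^= rng.random(n) < 0.1

# Univariate screen: Fisher exact test on carrier status for each SNP
pvals = []
for j in range(p):
    carrier = X[:, j] > 0
    table = [[np.sum(carrier & (y == 1)), np.sum(carrier & (y == 0))],
             [np.sum(~carrier & (y == 1)), np.sum(~carrier & (y == 0))]]
    pvals.append(fisher_exact(table)[1])

# Random forest importance captures the interaction without pre-specifying it
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
rank = np.argsort(rf.feature_importances_)[::-1]
print("top-5 SNPs by RF importance:", rank[:5])
```

    With data like these, the interacting SNPs tend to receive well-above-average forest importance even when their univariate Fisher p-values are unremarkable, which is the screening advantage the abstract reports.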

  3. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial, independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential of over-damped LES for fast explorations of the parameter space where large-scale structures are found.
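    The static Smagorinsky model mentioned above computes an eddy viscosity from the resolved strain rate, nu_t = (cs*Delta)^2 |S|. A minimal sketch on a uniform 2-D grid follows; the simple-shear test field is a hypothetical illustration, not the paper's TC flow:

```python
import numpy as np

def smagorinsky_nut(u, v, dx, cs=0.1):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| on a uniform 2-D grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor;
    cs = 0.1 matches the static-model constant quoted above.
    """
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dx, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    smag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * smag

# Simple shear u = y, v = 0 on a unit grid: |S| = 1 everywhere, so
# nu_t = (cs*dx)^2 uniformly
dx = 0.01
y, x = np.mgrid[0:1:dx, 0:1:dx]
nut = smagorinsky_nut(y, np.zeros_like(y), dx)
```

    Because nu_t scales with cs^2, raising the constant damps small scales quadratically; that is the knob the "over-damped" LES in the abstract turns.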

  4. Hyperfibrinogenemia is a poor prognostic factor in diffuse large B cell lymphoma.

    PubMed

    Niu, Jun-Ying; Tian, Tian; Zhu, Hua-Yuan; Liang, Jin-Hua; Wu, Wei; Cao, Lei; Lu, Rui-Nan; Wang, Li; Li, Jian-Yong; Xu, Wei

    2018-06-02

    Diffuse large B cell lymphoma (DLBCL) is the most common subtype of non-Hodgkin lymphomas worldwide. Previous studies indicated that hyperfibrinogenemia was a poor predictor in various tumors. The purpose of our study was to evaluate the prognostic effect of hyperfibrinogenemia in DLBCL. Data of 228 patients, who were diagnosed with DLBCL in our hospital between May 2009 and February 2016, were analyzed retrospectively. The Kaplan-Meier method and Cox regression were performed to find prognostic factors associated with progression-free survival (PFS) and overall survival (OS). Receiver operator characteristic (ROC) curve and the areas under the curve were used to evaluate the predictive accuracy of predictors. Comparison of characters between groups indicated that patients with high National Comprehensive Cancer Network-International Prognostic Index (NCCN-IPI) score (4-8) and advanced stage (III-IV) were more likely to suffer from hyperfibrinogenemia. The Kaplan-Meier method revealed that patients with hyperfibrinogenemia showed inferior PFS (P < 0.001) and OS (P < 0.001) than those without hyperfibrinogenemia. Multivariate analysis showed that hyperfibrinogenemia was an independent prognostic factor associated with poor outcomes (HR = 1.90, 95% CI: 1.15-3.16 for PFS, P = 0.013; HR = 2.65, 95% CI: 1.46-4.79 for OS, P = 0.001). We combined hyperfibrinogenemia and NCCN-IPI to build a new prognostic index (NPI). The NPI was demonstrated to have a superior predictive effect on prognosis (P = 0.0194 for PFS, P = 0.0034 for OS). Hyperfibrinogenemia was demonstrated to be able to predict poor outcome in DLBCL, especially for patients with advanced stage and high NCCN-IPI score. Adding hyperfibrinogenemia to NCCN-IPI could significantly improve the predictive effect of NCCN-IPI.
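    The Kaplan-Meier product-limit estimator used for the survival comparisons above can be sketched from scratch. The durations and event flags below are hypothetical, not study data:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Product-limit survival estimate S(t) at each distinct event time.

    durations: follow-up times; events: 1 = event observed (e.g. progression
    or death), 0 = censored. A minimal sketch of the named estimator, not the
    authors' analysis code.
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    times = np.unique(durations[events == 1])
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)                  # n_i
        died = np.sum((durations == t) & (events == 1))   # d_i
        s *= 1.0 - died / at_risk                         # S(t) *= (1 - d_i/n_i)
        surv.append(s)
    return times, np.array(surv)

# Hypothetical follow-up times in months for a small group
t_ev, s_ev = kaplan_meier([5, 8, 12, 12, 20, 30], [1, 1, 1, 0, 1, 0])
print(dict(zip(t_ev, np.round(s_ev, 3))))
```

    Comparing two such curves (e.g. with and without hyperfibrinogenemia) with a log-rank test, and then adjusting for covariates in a Cox model, is the standard pipeline the abstract describes.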

  5. Estimating finite-population reproductive numbers in heterogeneous populations.

    PubMed

    Keegan, Lindsay T; Dushoff, Jonathan

    2016-05-21

    The basic reproductive number, R0, is one of the most important epidemiological quantities. R0 provides a threshold for elimination and determines when a disease can spread or when a disease will die out. Classically, R0 is calculated assuming an infinite population of identical hosts. Previous work has shown that heterogeneity in the host mixing rate increases R0 in an infinite population. However, it has been suggested that in a finite population, heterogeneity in the mixing rate may actually decrease the finite-population reproductive numbers. Here, we outline a framework for discussing different types of heterogeneity in disease parameters, and how these affect disease spread and control. We calculate "finite-population reproductive numbers" with different types of heterogeneity, and show that in a finite population, heterogeneity has complicated effects on the reproductive number. We find that simple heterogeneity decreases the finite-population reproductive number, whereas heterogeneity in the intrinsic mixing rate (which affects both infectiousness and susceptibility) increases the finite-population reproductive number when R0 is small relative to the size of the population and decreases the finite-population reproductive number when R0 is large relative to the size of the population. Although heterogeneity has complicated effects on the finite-population reproductive numbers, its implications for control are straightforward: when R0 is large relative to the size of the population, heterogeneity decreases the finite-population reproductive numbers, making disease control or elimination easier than predicted by R0. Copyright © 2016 Elsevier Ltd. All rights reserved.
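    The claim that simple heterogeneity lowers the finite-population reproductive number can be illustrated with a toy calculation. The population size, per-pair hazards, and two-point heterogeneity below are assumptions for illustration, not the authors' model:

```python
import numpy as np

N = 51
R0 = 20.0              # infinite-population R0, large relative to N as in the text
base = R0 / (N - 1)    # per-pair transmission hazard over one infectious period

def finite_R(betas):
    # In a finite population repeated contacts are "wasted": the index case can
    # infect each of the other N-1 hosts at most once, with prob 1 - exp(-beta)
    return float(np.sum(1.0 - np.exp(-betas)))

homog = finite_R(np.full(N - 1, base))
# "Simple" heterogeneity: same mean hazard, but spread unevenly across hosts;
# since 1 - exp(-x) is concave, Jensen's inequality forces a smaller total
heter = finite_R(base * np.tile([0.2, 1.8], (N - 1) // 2))

print(f"homogeneous: {homog:.2f}, heterogeneous: {heter:.2f}  (both < R0 = {R0})")
```

    Both finite-population values fall well below R0, and the heterogeneous one falls below the homogeneous one, matching the paper's conclusion that heterogeneity makes control easier than R0 alone would predict when R0 is large relative to the population.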

  6. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    PubMed Central

    Lin, Chien-Hsing; Li, Ling-Hui; Ho, Sheng-Feng; Chuang, Tzu-Po; Wu, Jer-Yuarn; Chen, Yuan-Tsong; Fann, Cathy SJ

    2008-01-01

    Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), and together they covered 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.
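    The reported genome coverage is consistent with the other figures in the abstract; a quick sanity check, assuming a ~3.0 Gb reference genome length (an assumption, since the abstract does not state it):

```python
# Back-of-envelope check of the coverage figure reported above
n_regions = 230
mean_size_bp = 322_000      # average CNV region size (322 kb)
genome_bp = 3.0e9           # assumed human reference genome length

total_bp = n_regions * mean_size_bp
coverage = total_bp / genome_bp
print(f"{coverage:.2%} of the genome")   # close to the reported 2.47%
```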

  7. Mortality, Causes of Death and Associated Factors Relate to a Large HIV Population-Based Cohort.

    PubMed

    Garriga, César; García de Olalla, Patricia; Miró, Josep M; Ocaña, Inma; Knobel, Hernando; Barberá, Maria Jesús; Humet, Victoria; Domingo, Pere; Gatell, Josep M; Ribera, Esteve; Gurguí, Mercè; Marco, Andrés; Caylà, Joan A

    2015-01-01

    Antiretroviral therapy has led to a decrease in HIV-related mortality and to the emergence of non-AIDS defining diseases as competing causes of death. This study estimates the HIV mortality rate and their risk factors with regard to different causes in a large city from January 2001 to June 2013. We followed-up 3137 newly diagnosed HIV non-AIDS cases. Causes of death were classified as HIV-related, non-HIV-related and external. We examined the effect of risk factors on survival using mortality rates, Kaplan-Meier plots and Cox models. Finally, we estimated survival for each main cause of death groups through Fine and Gray models. 182 deaths were found [14.0/1000 person-years of follow-up (py); 95% confidence interval (CI):12.0-16.1/1000 py], 81.3% of them had a known cause of death. Mortality rate by HIV-related causes and non-HIV-related causes was the same (4.9/1000 py; CI:3.7-6.1/1000 py), external was lower [1.7/1000 py; (1.0-2.4/1000 py)]. Kaplan-Meier estimate showed worse survival in intravenous drug user (IDU) and heterosexuals than in men having sex with men (MSM). Factors associated with HIV-related causes of death include: IDU male (subHazard Ratio (sHR):3.2; CI:1.5-7.0) and <200 CD4 at diagnosis (sHR:2.7; CI:1.3-5.7) versus ≥500 CD4. Factors associated with non-HIV-related causes of death include: ageing (sHR:1.5; CI:1.4-1.7) and heterosexual female (sHR:2.8; CI:1.1-7.3) versus MSM. Factors associated with external causes of death were IDU male (sHR:28.7; CI:6.7-123.2) and heterosexual male (sHR:11.8; CI:2.5-56.4) versus MSM. There are important differences in survival among transmission groups. Improved treatment is especially necessary in IDUs and heterosexual males.
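    The headline mortality rate and its confidence interval can be approximately reconstructed from the reported death count. The person-years figure below is back-calculated from the rate rather than taken from the paper, and a normal (Wald) approximation to the Poisson interval is used, so the reproduced CI is only close to the published 12.0-16.1:

```python
import math

deaths = 182
person_years = deaths / 0.014     # implied by the reported 14.0/1000 py rate

rate = deaths / person_years * 1000              # per 1000 person-years
se = math.sqrt(deaths) / person_years * 1000     # Poisson standard error of the rate
lo, hi = rate - 1.96 * se, rate + 1.96 * se
print(f"{rate:.1f}/1000 py (95% CI {lo:.1f}-{hi:.1f})")  # reported: 14.0 (12.0-16.1)
```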

  8. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  9. Sex chromosome aneuploidies and copy-number variants: a further explanation for neurodevelopmental prognosis variability?

    PubMed

    Le Gall, Jessica; Nizon, Mathilde; Pichon, Olivier; Andrieux, Joris; Audebert-Bellanger, Séverine; Baron, Sabine; Beneteau, Claire; Bilan, Frédéric; Boute, Odile; Busa, Tiffany; Cormier-Daire, Valérie; Ferec, Claude; Fradin, Mélanie; Gilbert-Dussardier, Brigitte; Jaillard, Sylvie; Jønch, Aia; Martin-Coignard, Dominique; Mercier, Sandra; Moutton, Sébastien; Rooryck, Caroline; Schaefer, Elise; Vincent, Marie; Sanlaville, Damien; Le Caignec, Cédric; Jacquemont, Sébastien; David, Albert; Isidor, Bertrand

    2017-08-01

    Sex chromosome aneuploidies (SCAs) are a group of conditions in which individuals have an abnormal number of sex chromosomes. SCAs such as Klinefelter syndrome, XYY syndrome, and Triple X syndrome are associated with a large range of neurological outcomes. A second genetic event, such as an additional cytogenetic abnormality, may explain part of this variable expressivity. In this study, we recruited fourteen patients with intellectual disability or developmental delay carrying an SCA together with a copy-number variant (CNV). In our cohort (four patients 47,XXY, four patients 47,XXX, and six patients 47,XYY), seven patients carried a pathogenic CNV, two a likely pathogenic CNV, and five a variant of uncertain significance. Our analysis suggests that CNVs might be considered an additional independent genetic factor for intellectual disability and developmental delay in patients with SCA and a neurodevelopmental disorder.

  10. An Evaluation of the Numbers and Locations of Coronary Artery Disease with Some of the Major Atherosclerotic Risk Factors in Patients with Coronary Artery Disease

    PubMed Central

    Naghshtabrizi, Behshad; Moradi, Abbas; Amiri, Jalaleddin; Aarabi, Sepide

    2017-01-01

    Introduction Despite definite recognition of major atherosclerotic risk factors, the relationship between the pattern of coronary artery disease and these risk factors is unknown. Aim The aim of this study was to identify the relationship between some of the major atherosclerotic risk factors and the number and pattern of coronary artery disease in patients with coronary artery disease who presented to Farshchian Heart University Hospital, Hamadan, Iran. Materials and Methods In this descriptive cross-sectional study, we investigated some of the major atherosclerotic risk factors and their relationships with the type of coronary artery disease in terms of number and location of disease. A total of 1100 patients were enrolled with coronary artery disease confirmed by selective coronary angiography from 2010-2014. A p-value<0.05 was considered statistically significant. Results A total of 1100 patients enrolled in this study. The patient population consisted of 743 (67.5%) males and 357 (32.5%) females. A meaningful relationship existed between ageing, diabetes mellitus, hypertension and 3-Vessel Disease (3VD, p<0.001) as well as between hyperlipidemia and Single Vessel Disease (SVD, p<0.001). Patients diagnosed with diabetes mellitus, hypertension, and hyperlipidemia showed greater potential to develop coronary artery disease at the proximal section of the coronary arteries. Conclusion Based on the relationship between some of the major risk factors and the pattern of coronary artery disease in the current study, prospective studies should investigate other risk factors. We recommend that a plan should be developed to reduce adjustable risk factors such as diabetes mellitus, hypertension and hyperlipidemia in order to decrease coronary artery disease. PMID:28969179
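    Associations like those reported (e.g. diabetes mellitus versus the number of diseased vessels) are typically assessed with a chi-square test of a contingency table. The counts below are invented purely for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: diabetes status (rows: yes/no) against disease
# pattern (columns: SVD, 2-vessel, 3VD); counts are illustrative only
table = np.array([[120,  90, 160],
                  [210, 230, 290]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```

    A p-value below 0.05 from such a table is what statements like "a meaningful relationship existed between diabetes mellitus ... and 3-Vessel Disease (p<0.001)" summarize.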

  11. Large Nc equivalence and baryons

    NASA Astrophysics Data System (ADS)

    Blake, Mike; Cherman, Aleksey

    2012-09-01

    In the large Nc limit, gauge theories with different gauge groups and matter content sometimes turn out to be “large Nc equivalent,” in the sense of having a set of coincident correlation functions. Large Nc equivalence has mainly been explored in the glueball and meson sectors. However, a recent proposal to dodge the fermion sign problem of QCD with a quark number chemical potential using large Nc equivalence motivates investigating the applicability of large Nc equivalence to correlation functions involving baryon operators. Here we present evidence that large Nc equivalence extends to the baryon sector, under the same type of symmetry realization assumptions as in the meson sector, by adapting the classic Witten analysis of large Nc baryons.

  12. Factors Controlling the Properties of Multi-Phase Arctic Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Ackerman, Andrew; Menon, Surabi

    2005-01-01

    The 2004 Multi-Phase Arctic Cloud Experiment (M-PACE) IOP at the ARM NSA site focused on measuring the properties of autumn transition-season arctic stratus and the environmental conditions controlling them, including concentrations of heterogeneous ice nuclei. Our work aims to use a large-eddy simulation (LES) code with embedded size-resolved aerosol and cloud microphysics to identify factors controlling multi-phase arctic stratus. Our preliminary simulations of autumn transition-season clouds observed during the 1994 Beaufort and Arctic Seas Experiment (BASE) indicated that low concentrations of ice nuclei, which were not measured, may have significantly lowered liquid water content and thereby stabilized cloud evolution. However, cloud drop concentrations appeared to be virtually immune to changes in liquid water content, indicating an active Bergeron process with little effect of collection on drop number concentration. We will compare these results with preliminary simulations of October 8-13 during M-PACE. The sensitivity of cloud properties to uncertainty in other factors, such as large-scale forcings and aerosol profiles, will also be investigated. Based on the LES simulations with M-PACE data, preliminary results from the NASA GISS single-column model (SCM) will be used to examine the sensitivity of predicted cloud properties to changing cloud drop number concentrations for multi-phase arctic clouds. Present parameterizations assume fixed cloud droplet number concentrations; these will be modified using M-PACE data.

  13. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    ERIC Educational Resources Information Center

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  14. Large Payload Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.; Pope, James C.

    2011-01-01

    Ironically, the limiting factor to a national heavy lift strategy may not be the rocket technology needed to throw a heavy payload, but rather the terrestrial infrastructure - roads, bridges, airframes, and buildings - necessary to transport, acceptance test, and process large spacecraft. Failure to carefully consider how large spacecraft are designed, and where they are manufactured, tested, or launched, could result in unforeseen cost to modify/develop infrastructure, or incur additional risk due to increased handling or elimination of key verifications. During test and verification planning for the Altair project, a number of transportation and test issues related to the large payload diameter were identified. Although the entire Constellation Program - including Altair - was canceled in the 2011 NASA budget, issues identified by the Altair project serve as important lessons learned for future payloads that may be developed to support national "heavy lift" strategies. A feasibility study performed by the Constellation Ground Operations (CxGO) project found that neither the Altair Ascent nor Descent Stage would fit inside available transportation aircraft. Ground transportation of a payload this large over extended distances is generally not permitted by most states, so overland transportation alone would not have been an option. Limited ground transportation to the nearest waterway may be permitted, but water transportation could take as long as 66 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. 
Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA's Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary

  15. Vegetation-Induced Roughness in Low-Reynolds-Number Flows

    NASA Astrophysics Data System (ADS)

    Piercy, C. D.; Wynn, T. M.

    2008-12-01

    Wetlands are important ecosystems, providing habitat for wildlife and fish and shellfish production, water storage, erosion control, and water quality improvement and preservation. Models to estimate hydraulic resistance due to vegetation in emergent wetlands are crucial to good wetland design and analysis. The goal of this project is to improve modeling of emergent wetlands by linking properties of the vegetation to flow. Existing resistance equations such as Hoffmann (2004), Kadlec (1990), Moghadam and Kouwen (1997), Nepf (1999), and Stone and Shen (2002) were evaluated. A large outdoor vegetated flume was constructed at the Price's Fork Research Center near Blacksburg, Virginia to measure flow and water surface slope through woolgrass (Scirpus cyperinus), a common native emergent wetland plant. Measurements of clump and stem density, diameter, and volume, blockage factor, and stiffness were made after each set of flume runs. Flow rates through the flume were low (3-4 L/s), resulting in very low stem-Reynolds numbers (15-102), in the laminar to transitional range. At low stem-Reynolds numbers (<100), the drag coefficient is inversely proportional to the Reynolds number and can vary greatly with flow conditions. Most of the models considered assume a stem-Reynolds number in the 100-10^5 range, in which the drag coefficient is relatively constant; as a result, they did not predict velocity or stage accurately except for conditions in which the stem-Reynolds number approached 100. The only model that accurately predicted stem-layer velocity was the Kadlec (1990) model, since it makes no assumptions about flow regime; instead, its parameters are adjusted according to the site conditions.
Future work includes relating the parameters used to fit the Kadlec (1990) model to measured

  16. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids described by the power-law constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation can lead to large errors, which depend on the magnitude of the kinetic energy and friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds numbers.
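    The role of the correction factor relative to the ideal Bernoulli reading can be sketched as follows. The fluid properties and the C = 0.9 value are illustrative assumptions, not the paper's neural-network correlation:

```python
import math

def pitot_velocity(delta_p, rho, C=1.0):
    """Velocity from Pitot dynamic pressure, v = C * sqrt(2*delta_p/rho).

    C = 1 recovers ideal Bernoulli behaviour; at low Reynolds numbers viscous
    effects push C away from 1, which is what the study's correlation
    estimates from the power-law index, Re, and probe spacing.
    """
    return C * math.sqrt(2.0 * delta_p / rho)

# Hypothetical reading: 50 Pa dynamic pressure in water (rho = 1000 kg/m^3)
v_ideal = pitot_velocity(50.0, 1000.0)        # uncorrected Bernoulli estimate
v_corr = pitot_velocity(50.0, 1000.0, C=0.9)  # with an assumed correction factor
print(f"{v_ideal:.4f} m/s ideal, {v_corr:.4f} m/s corrected")
```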

  17. Identifying the necessary and sufficient number of risk factors for predicting academic failure.

    PubMed

    Lucio, Robert; Hunt, Elizabeth; Bornovalova, Marina

    2012-03-01

    Identifying the point at which individuals become at risk for academic failure (grade point average [GPA] < 2.0) involves an understanding of which and how many factors contribute to poor outcomes. School-related factors appear to be among the many factors that significantly impact academic success or failure. This study focused on 12 school-related factors. Using a thorough 5-step process, we identified which unique risk factors place one at risk for academic failure. Academic engagement, academic expectations, academic self-efficacy, homework completion, school relevance, school safety, teacher relationships (positive relationship), grade retention, school mobility, and school misbehaviors (negative relationship) were uniquely related to GPA even after controlling for all relevant covariates. Next, a receiver operating characteristic curve was used to determine a cutoff point for determining how many risk factors predict academic failure (GPA < 2.0). Results yielded a cutoff point of 2 risk factors for predicting academic failure, which provides a way for early identification of individuals who are at risk. Further implications of these findings are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
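    A common way to derive a cutoff from a receiver operating characteristic curve is to maximize Youden's J statistic (sensitivity + specificity - 1). The sketch below uses synthetic data and a generic procedure; it is not necessarily the authors' exact method:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: number of risk factors (0-12) per student and whether
# they experienced academic failure (GPA < 2.0); failure odds rise above ~2
rng = np.random.default_rng(2)
n_factors = rng.integers(0, 13, size=400)
p_fail = 1 / (1 + np.exp(-(n_factors - 2)))
failed = (rng.random(400) < p_fail).astype(int)

# The count of risk factors itself serves as the score for the ROC curve
fpr, tpr, thresholds = roc_curve(failed, n_factors)
cutoff = thresholds[np.argmax(tpr - fpr)]    # Youden's J statistic
print("risk-factor cutoff:", cutoff)
```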

  18. Prevalence and Associated Risk Factors of Asymptomatic Bacteriuria in Ante-Natal Clients in a Large Teaching Hospital in Ghana.

    PubMed

    Labi, A-K; Yawson, A E; Ganyaglo, G Y; Newman, M J

    2015-09-01

    Asymptomatic bacteriuria, the presence of bacteria in urine without symptoms of acute urinary tract infection, predisposes pregnant women to the development of urinary tract infections and pyelonephritis, with attendant pregnancy-related complications. To measure the prevalence of asymptomatic bacteriuria among ante-natal clients at the Korle-Bu Teaching Hospital in Ghana and its associated risk factors. A cross-sectional study involving 274 antenatal clients was conducted over a period of 4 weeks. A face-to-face questionnaire was completed and midstream urine collected for culture and antimicrobial susceptibility testing. The prevalence of asymptomatic bacteriuria was 5.5%. It was associated with sexual activity during pregnancy (Fisher's Exact 5.871, p-value 0.0135), but not with sexual frequency. There were no significant associations with educational status, parity, gestational age, marital status or the number of foetuses carried. The commonest organism isolated was Enterococcus spp. (26.7%), although the Enterobacteriaceae formed the majority of isolated organisms (46.7%). Nitrofurantoin was the antibiotic with the highest sensitivity against all the isolated organisms. The prevalence of asymptomatic bacteriuria among ante-natal clients at this large teaching hospital in Ghana is 5.5%, which is lower than what has been found in other African settings. Enterococcus spp. was the commonest causative organism. However, due to the complications associated with asymptomatic bacteriuria, a policy to screen and treat all pregnant women attending the hospital is worth considering.

  19. An approximately factored incremental strategy for calculating consistent discrete aerodynamic sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, V. M.; Taylor, A. C., III; Newman, P. A.; Hou, G. J.-W.; Jones, H. E.

    1992-01-01

    An incremental strategy is presented for iteratively solving very large systems of linear equations, which are associated with aerodynamic sensitivity derivatives for advanced CFD codes. It is shown that the left-hand side matrix operator and the well-known factorization algorithm used to solve the nonlinear flow equations can also be used to efficiently solve the linear sensitivity equations. Two airfoil problems are considered as an example: subsonic low Reynolds number laminar flow and transonic high Reynolds number turbulent flow.
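    The reuse idea can be illustrated with a direct LU factorization standing in for the approximately factored iterative operator of the paper: factor the left-hand-side matrix once during the flow solve, then back-substitute against each sensitivity right-hand side instead of refactoring. The matrix and right-hand sides below are random placeholders for the flow Jacobian and sensitivity sources:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n = 200
# Stand-in for the (diagonally dominant, hence well-conditioned) flow Jacobian
A = rng.standard_normal((n, n)) + n * np.eye(n)
b_flow = rng.standard_normal(n)

# Factor once, as the nonlinear flow solver already does...
lu, piv = lu_factor(A)
w = lu_solve((lu, piv), b_flow)           # flow update

# ...then reuse the same factorization for every sensitivity right-hand side
# (one column per design variable) instead of refactoring the large system
sens_rhs = rng.standard_normal((n, 5))    # 5 hypothetical design variables
dw = lu_solve((lu, piv), sens_rhs)

print(np.allclose(A @ dw, sens_rhs))
```

    The savings grow with the number of design variables, since the O(n^3) factorization is amortized over many O(n^2) back-substitutions.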

  20. Driver-related factors in crashes between large trucks and passenger vehicles

    DOT National Transportation Integrated Search

    1999-04-01

    Large trucks are involved in close to 400,000 police-reported crashes each year, of which 4,500 involve a fatality. About 60% of fatal truck crashes involve one large truck colliding with a single passenger vehicle. Prevention of these crashes requir...