Sample records for comparing large numbers

  1. Comparing spatial regression to random forests for large environmental data sets

    EPA Science Inventory

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputatio...

  2. The effects of large beach debris on nesting sea turtles

    USGS Publications Warehouse

    Fujisaki, Ikuko; Lamont, Margaret M.

    2016-01-01

A field experiment was conducted to understand the effects of large beach debris on sea turtle nesting behavior as well as the effectiveness of large debris removal for habitat restoration. Large natural and anthropogenic debris were removed from one of three sections of a sea turtle nesting beach, and distributions of nests and false crawls (non-nesting crawls) in pre- (2011–2012) and post- (2013–2014) removal years in the three sections were compared. The number of nests increased 200% and the number of false crawls increased 55% in the experimental section, whereas a corresponding increase in the number of nests and false crawls was not observed in the other two sections, where debris removal was not conducted. The proportion of nest and false crawl abundance across the three beach sections was significantly different between pre- and post-removal years. Nesting success, the percentage of successful nests among total nesting attempts (number of nests + false crawls), also increased from 24% to 38%; however, the magnitude of this increase was comparatively small because both the number of nests and the number of false crawls increased, and thus nesting success in the experimental section did not differ significantly between pre- and post-removal years. The substantial increase in sea turtle nesting activity after the removal of large debris indicates that large debris may have an adverse impact on sea turtle nesting behavior. Removal of large debris could be an effective restoration strategy to improve sea turtle nesting.

  3. The cost of large numbers of hypothesis tests on power, effect size and sample size.

    PubMed

    Lazzeroni, L C; Ray, A

    2012-01-01

    Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
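The scaling described in this abstract can be reproduced with a standard normal-approximation power calculation for a two-sided z-test under Bonferroni correction. This is a generic sketch, not the paper's exact model or its Excel calculator; `relative_sample_size` is a hypothetical helper name:

```python
from statistics import NormalDist

def relative_sample_size(num_tests, alpha=0.05, power=0.80):
    """Sample-size multiplier (relative to a single test) needed to keep
    the same power for a two-sided z-test with a Bonferroni-corrected
    threshold alpha / num_tests.  Required n is proportional to
    (z_{alpha/2} + z_{power})**2."""
    z = NormalDist().inv_cdf
    z_beta = z(power)

    def n_factor(m):
        # z-quantile for the Bonferroni-adjusted two-sided threshold
        return (z(1 - alpha / (2 * m)) + z_beta) ** 2

    return n_factor(num_tests) / n_factor(1)

# 10 tests vs. a single test: roughly a 70% larger sample is needed
print(relative_sample_size(10) / relative_sample_size(1))
# 10 million vs. 1 million tests: only about a 13% increase
print(relative_sample_size(1e7) / relative_sample_size(1e6))
```

The two printed ratios match the abstract's 70% and 13% figures, illustrating that once the number of tests is large, each further tenfold increase costs comparatively little additional sample size.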

  4. A comparison between IMSC, PI and MIMSC methods in controlling the vibration of flexible systems

    NASA Technical Reports Server (NTRS)

    Baz, A.; Poh, S.

    1987-01-01

A comparative study is presented of three active control algorithms which have proven successful in controlling the vibrations of large flexible systems: the Independent Modal Space Control (IMSC), the Pseudo-Inverse (PI), and the Modified Independent Modal Space Control (MIMSC). Emphasis is placed on demonstrating the effectiveness of the MIMSC method in controlling the vibration of large systems with a small number of actuators by using an efficient time-sharing strategy. Such a strategy favors the MIMSC over the IMSC method, which requires a large number of actuators to control an equal number of modes, and also over the PI method, which attempts to control a large number of modes with a smaller number of actuators through the use of an inexact statistical realization of a modal controller. Numerical examples are presented to illustrate the main features of the three algorithms and the merits of the MIMSC method.

  5. On the effects of viscosity on the stability of a trailing-line vortex

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.; Khorrami, Mehdi R.

    1991-01-01

    The linear stability of the Batchelor (1964) vortex is investigated. Particular emphasis is placed on modes found recently in a numerical study by Khorrami (1991). These modes have a number of features very distinct from those found previously for this vortex, including exhibiting small growth rates at large Reynolds numbers and susceptibility to destabilization by viscosity. These modes are described using asymptotic techniques, producing results which compare favorably with fully numerical results at large Reynolds numbers.

  6. [Tumor markers for bladder cancer: up-to-date study by the Kiel Tumor Bank].

    PubMed

    Hautmann, S; Eggers, J; Meyhoff, H; Melchior, D; Munk, A; Hamann, M; Naumann, M; Braun, P M; Jünemann, K P

    2007-11-01

The number of noninvasive diagnostic tests for bladder cancer has increased tremendously in recent years, with a large number of experimental and commercial tests. Comparative analyses of tests for diagnosis, follow-up, and recurrence detection of bladder cancer have been performed retrospectively as well as prospectively, both unicentrically and multicentrically. An analysis of multicentric studies with large patient numbers compared with our own Kiel Tumor Bank data is presented. The Kiel Tumor Bank data prospectively covered 106 consecutive bladder tumor patients from the year 2006. Special focus was put on urine cytology as a reference test, as well as on the commercial NMP 22 Bladder Chek. The analysis of the NMP 22 Bladder Chek showed an overall sensitivity of 69% for all tumor grades and stages, with a specificity of 76%. Comparison with multicentric data, which showed an overall sensitivity of 75% for all tumor grades and stages and a specificity of 73%, gave results similar to those in the literature. Urine cytology showed a comparable overall sensitivity of 73% for all tumor grades and stages, with a specificity of 80%. A large number of noninvasive tests for bladder cancer follow-up with reasonable sensitivity and specificity can currently be used. Because of the limited number of prospective randomized multicentric studies, no single particular marker for bladder cancer screening can be recommended at this point in time.

  7. A Benchmark Study of Large Contract Supplier Monitoring Within DOD and Private Industry

    DTIC Science & Technology

    1994-03-01

The study benchmarked large-contract supplier monitoring initiatives, including long-term supplier relationships, global sourcing, refocusing on customer quality, supplier monitoring and recognition, and a reduced number of suppliers; these initiatives were then compared to DCMC practices.

  8. Low Reynolds number airfoil survey, volume 1

    NASA Technical Reports Server (NTRS)

    Carmichael, B. H.

    1981-01-01

The differences in flow behavior of two-dimensional airfoils in the critical chord-length Reynolds number range, compared with behavior at lower and higher Reynolds numbers, are discussed. The large laminar separation bubble is discussed in view of its important influence on critical-Reynolds-number airfoil behavior. The shortcomings of applying theoretical boundary layer computations, which are successful at higher Reynolds numbers, to the critical regime are discussed. The large variation in experimental aerodynamic characteristic measurements due to small changes in ambient turbulence, vibration, and sound level is illustrated. The difficulties in obtaining accurate detailed measurements in free flight, and the dramatic performance improvements at critical Reynolds numbers achieved with various types of boundary layer tripping devices, are discussed.

  9. Poor correlation between the removal or deposition of pollen grains and frequency of pollinator contact with sex organs

    NASA Astrophysics Data System (ADS)

    Sakamoto, Ryota L.; Morinaga, Shin-Ichi

    2013-09-01

Pollinators deposit pollen grains on stigmas and remove pollen grains from anthers. The mechanics of these transfers can now be quantified with the use of high-speed video. We videoed hawkmoths, carpenter bees, and swallowtail butterflies pollinating Clerodendrum trichotomum. The number of grains deposited on stigmas did not vary significantly with the number of times pollinators contacted stigmas. In contrast, pollen removal from the anthers increased significantly with the number of contacts to anthers. Pollen removal varied among the three types of pollinators. Also, the three types carried pollen on different parts of their bodies. In hawkmoths and carpenter bees, the body part that made the largest number of contacts with the anthers differed significantly from the body part that carried the largest number of pollen grains. Our results indicate that a large number of contacts by pollinators does not increase either the male or female reproductive success of plants compared to a small number of contacts during a visit.

  10. A Reynolds Number Study of Wing Leading-Edge Effects on a Supersonic Transport Model at Mach 0.3

    NASA Technical Reports Server (NTRS)

    Williams, M. Susan; Owens, Lewis R., Jr.; Chu, Julio

    1999-01-01

A representative supersonic transport design was tested in the National Transonic Facility (NTF) in its original configuration with small-radius leading-edge flaps and also with modified large-radius inboard leading-edge flaps. Aerodynamic data were obtained over a range of Reynolds numbers at a Mach number of 0.3 and angles of attack up to 16 deg. Increasing the radius of the inboard leading-edge flap delayed nose-up pitching moment to a higher lift coefficient. Deflecting the large-radius leading-edge flap produced an overall decrease in lift coefficient and delayed nose-up pitching moment to even higher angles of attack as compared with the undeflected large-radius leading-edge flap. At angles of attack corresponding to the maximum untrimmed lift-to-drag ratio, lift and drag coefficients decreased while lift-to-drag ratio increased with increasing Reynolds number. At an angle of attack of 13.5 deg., the pitching-moment coefficient was nearly constant with increasing Reynolds number for both the small-radius leading-edge flap and the deflected large-radius leading-edge flap. However, the pitching-moment coefficient increased with increasing Reynolds number for the undeflected large-radius leading-edge flap above a chord Reynolds number of about 35 x 10^6.

  11. Solar concentration properties of flat fresnel lenses with large F-numbers

    NASA Technical Reports Server (NTRS)

    Cosby, R. M.

    1978-01-01

The solar concentration performances of flat, line-focusing, sun-tracking Fresnel lenses with selected f-numbers between 0.9 and 2.0 were analyzed. Lens transmittance was found to have a weak dependence on f-number, with a 2% increase occurring as the f-number is increased from 0.9 to 2.0. The geometric concentration ratio for perfectly tracking lenses peaked for an f-number near 1.35. Intensity profiles were more uniform over the image extent for large f-number lenses when compared to the f/0.9 lens results. Substantial decreases in geometric concentration ratios were observed for transverse tracking errors equal to or below 1 degree for all f-number lenses. With respect to tracking errors, the solar performance is optimum for f-numbers between 1.25 and 1.5.

  12. Symbolic Numerical Distance Effect Does Not Reflect the Difference between Numbers.

    PubMed

    Krajcsi, Attila; Kojouharova, Petia

    2017-01-01

    In a comparison task, the larger the distance between the two numbers to be compared, the better the performance-a phenomenon termed as the numerical distance effect. According to the dominant explanation, the distance effect is rooted in a noisy representation, and performance is proportional to the size of the overlap between the noisy representations of the two values. According to alternative explanations, the distance effect may be rooted in the association between the numbers and the small-large categories, and performance is better when the numbers show relatively high differences in their strength of association with the small-large properties. In everyday number use, the value of the numbers and the association between the numbers and the small-large categories strongly correlate; thus, the two explanations have the same predictions for the distance effect. To dissociate the two potential sources of the distance effect, in the present study, participants learned new artificial number digits only for the values between 1 and 3, and between 7 and 9, thus, leaving out the numbers between 4 and 6. It was found that the omitted number range (the distance between 3 and 7) was considered in the distance effect as 1, and not as 4, suggesting that the distance effect does not follow the values of the numbers predicted by the dominant explanation, but it follows the small-large property association predicted by the alternative explanations.

  13. The effect of peer-group size on the delivery of feedback in basic life support refresher training: a cluster randomized controlled trial.

    PubMed

    Cho, Youngsuk; Je, Sangmo; Yoon, Yoo Sang; Roh, Hye Rin; Chang, Chulho; Kang, Hyunggoo; Lim, Taeho

    2016-07-04

Students largely provide feedback to one another when the instructor facilitates peer feedback rather than teaching directly in group training, and the number of students in a group affects what students learn from the training. We aimed to investigate whether a larger group size increases students' scores on a post-training test with instructor-facilitated peer feedback after video-guided basic life support (BLS) refresher training. Students' one-rescuer adult BLS skills were assessed by a 2-min checklist-based test 1 year after the initial training. A cluster randomized controlled trial was conducted to evaluate the effect of student number in a group on BLS refresher training. Participants included 115 final-year medical students undergoing their emergency medicine clerkship. The median number of students was 8 in the large groups and 4 in the standard groups. The primary outcome was group differences in post-training test scores after video-guided BLS training. Secondary outcomes included feedback time, number of feedback topics, and results of end-of-training evaluation questionnaires. Scores on the post-training test increased over three consecutive tests with instructor-led peer feedback but did not differ between large and standard groups. The feedback time was longer and the number of feedback topics generated by students was higher in standard groups compared to large groups on the first and second tests. The end-of-training questionnaire revealed that the students in large groups preferred a smaller group size compared to their actual group size. In this BLS refresher training, instructor-led group feedback increased test scores after tutorial video-guided BLS learning, irrespective of group size. A smaller group size allowed more participation in peer feedback.

  14. Two Types of Well Followed Users in the Followership Networks of Twitter

    PubMed Central

    Saito, Kodai; Masuda, Naoki

    2014-01-01

In the Twitter blogosphere, the number of followers is probably the most basic and succinct quantity for measuring the popularity of users. However, the number of followers can be manipulated in various ways; follows can even be bought. Therefore, alternative popularity measures for Twitter users, based on, for example, users' tweets and retweets, have been developed. In the present work, we take a purely network-based approach to this fundamental question. First, we find that two relatively distinct types of users possessing a large number of followers exist, in particular for Japanese, Russian, and Korean users among the seven language groups that we examined. The first type of user follows a small number of other users. The second type of user follows approximately the same number of other users as the number of follows that the user receives. Then, we compare local (i.e., egocentric) followership networks around the two types of users with many followers. We show that the second type, which presumably consists of uninfluential users despite their large number of followers, is characterized by high link reciprocity, a large number of friends (i.e., those whom a user follows) for the followers, followers' high link reciprocity, a large clustering coefficient, a large fraction of the second type of users among the followers, and a small PageRank. Our network-based results support the view that the number of followers used alone is a misleading measure of a user's popularity. We propose that the number of friends, which is simple to measure, also helps us to assess the popularity of Twitter users. PMID:24416209
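The link reciprocity this abstract relies on can be illustrated on a toy directed graph. This is a generic sketch under the usual convention that an edge (a, b) means "a follows b"; it is not the authors' code, and the user names are invented:

```python
# Toy followership graph: edge (a, b) means "a follows b".
edges = {("alice", "bob"), ("bob", "alice"),
         ("carol", "bob"), ("bob", "carol"),
         ("dave", "bob")}

def followers(user):
    """Users who follow `user`."""
    return {a for (a, b) in edges if b == user}

def friends(user):
    """Users whom `user` follows."""
    return {b for (a, b) in edges if a == user}

def reciprocity(user):
    """Fraction of a user's followers that the user follows back."""
    f = followers(user)
    return len(f & friends(user)) / len(f) if f else 0.0

# bob has three followers and follows two of them back -> 2/3
print(reciprocity("bob"))
# dave has no followers -> 0.0
print(reciprocity("dave"))
```

A "second type" user in the paper's sense would show a reciprocity near 1 together with a follower count close to the friend count.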

  15. Evaluating Comparative Judgment as an Approach to Essay Scoring

    ERIC Educational Resources Information Center

    Steedle, Jeffrey T.; Ferrara, Steve

    2016-01-01

    As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…

  16. Responses of arthropods to large-scale manipulations of dead wood in loblolly pine stands of the Southeastern United States

    Treesearch

    Michael D. Ulyshen; James L. Hanula

    2009-01-01

    Large-scale experimental manipulations of deadwood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...

  17. Responses of Arthropods to large scale manipulations of dead wood in loblolly pine stands of the southeastern United States

    Treesearch

    Michael Ulyshen; James Hanula

    2009-01-01

Large-scale experimental manipulations of dead wood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...

  18. A large number of stepping motor network construction by PLC

    NASA Astrophysics Data System (ADS)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

In a flexible automatic line, the equipment is complex and the control modes are varied, so realizing information interaction among, and orderly control of, a large number of stepping and servo motors is a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategies. Following this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and stabilize the exchange of interaction information.

  19. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, J; Ma, L

    2015-06-15

Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease the irradiated normal tissue volume.

  20. A replacement for islet equivalents with improved reliability and validity.

    PubMed

    Huang, Han-Hung; Ramachandran, Karthik; Stehno-Bittel, Lisa

    2013-10-01

    Islet equivalent (IE), the standard estimate of isolated islet volume, is an essential measure to determine the amount of transplanted islet tissue in the clinic and is used in research laboratories to normalize results, yet it is based on the false assumption that all islets are spherical. Here, we developed and tested a new easy-to-use method to quantify islet volume with greater accuracy. Isolated rat islets were dissociated into single cells, and the total cell number per islet was determined by using computer-assisted cytometry. Based on the cell number per islet, we created a regression model to convert islet diameter to cell number with a high R2 value (0.8) and good validity and reliability with the same model applicable to young and old rats and males or females. Conventional IE measurements overestimated the tissue volume of islets. To compare results obtained using IE or our new method, we compared Glut2 protein levels determined by Western Blot and proinsulin content via ELISA between small (diameter≤100 μm) and large (diameter≥200 μm) islets. When normalized by IE, large islets showed significantly lower Glut2 level and proinsulin content. However, when normalized by cell number, large and small islets had no difference in Glut2 levels, but large islets contained more proinsulin. In conclusion, normalizing islet volume by IE overestimated the tissue volume, which may lead to erroneous results. Normalizing by cell number is a more accurate method to quantify tissue amounts used in islet transplantation and research.
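The conventional IE calculation this abstract criticizes treats every islet as a sphere and expresses its volume relative to a 150-µm reference islet. A minimal sketch of that standard convention follows; the abstract's diameter-to-cell-number regression coefficients are not given, so they are not reproduced here:

```python
def islet_equivalents(diameters_um):
    """Conventional islet equivalent (IE) estimate: treat each islet
    as a sphere and express its volume relative to a 150-um-diameter
    reference islet, i.e. IE = sum((d / 150) ** 3).  This is the
    spherical assumption the study argues overestimates tissue volume."""
    return sum((d / 150.0) ** 3 for d in diameters_um)

# One 150-um islet counts as exactly 1 IE;
# a single 300-um islet counts as 8 IE, since volume scales with d**3.
print(islet_equivalents([150]))
print(islet_equivalents([300]))
```

The cubic scaling is why large, non-spherical islets are the main source of the overestimation: any deviation from sphericity at a given diameter means the true volume is below the (d/150)^3 figure.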

  1. Direct determination of the number-weighted mean radius and polydispersity from dynamic light-scattering data.

    PubMed

    Patty, Philipus J; Frisken, Barbara J

    2006-04-01

    We compare results for the number-weighted mean radius and polydispersity obtained either by directly fitting number distributions to dynamic light-scattering data or by converting results obtained by fitting intensity-weighted distributions. We find that results from fits using number distributions are angle independent and that converting intensity-weighted distributions is not always reliable, especially when the polydispersity of the sample is large. We compare the results of fitting symmetric and asymmetric distributions, as represented by Gaussian and Schulz distributions, respectively, to data for extruded vesicles and find that the Schulz distribution provides a better estimate of the size distribution for these samples.
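The intensity-to-number conversion discussed in this abstract can be sketched for the simplest case of Rayleigh scatterers, where scattered intensity scales as r^6. This is a generic illustration of why the conversion is fragile for polydisperse samples, not the authors' fitting procedure:

```python
def intensity_to_number_weights(radii_nm, intensity_weights):
    """Convert an intensity-weighted size distribution to a
    number-weighted one, assuming Rayleigh scattering (I ~ r**6):
    divide each weight by r**6 and renormalize."""
    raw = [w / r**6 for r, w in zip(radii_nm, intensity_weights)]
    total = sum(raw)
    return [x / total for x in raw]

# Two equal intensity peaks at 50 nm and 100 nm: by number, the small
# particles dominate by a factor of 2**6 = 64.
num = intensity_to_number_weights([50, 100], [0.5, 0.5])
print(num)
```

Because the r^6 factor amplifies any error in the intensity-weighted fit, small distortions at large radii translate into large errors after conversion, consistent with the authors' finding that converting intensity-weighted results is unreliable at high polydispersity.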

  2. Human behaviour can trigger large carnivore attacks in developed countries

    PubMed Central

    Penteriani, Vincenzo; Delgado, María del Mar; Pinchera, Francesco; Naves, Javier; Fernández-Gil, Alberto; Kojola, Ilpo; Härkönen, Sauli; Norberg, Harri; Frank, Jens; Fedriani, José María; Sahlén, Veronica; Støen, Ole-Gunnar; Swenson, Jon E.; Wabakken, Petter; Pellegrini, Mario; Herrero, Stephen; López-Bao, José Vicente

    2016-01-01

    The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans in North America and Europe. Although rare compared to human fatalities by other wildlife, the media often overplay large carnivore attacks on humans, causing increased fear and negative attitudes towards coexisting with and conserving these species. Although large carnivore populations are generally increasing in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks by large carnivores. Here we show that an increasing number of people are involved in outdoor activities and, when doing so, some people engage in risk-enhancing behaviour that can increase the probability of a risky encounter and a potential attack. About half of the well-documented reported attacks have involved risk-enhancing human behaviours, the most common of which is leaving children unattended. Our study provides unique insight into the causes, and as a result the prevention, of large carnivore attacks on people. Prevention and information that can encourage appropriate human behaviour when sharing the landscape with large carnivores are of paramount importance to reduce both potentially fatal human-carnivore encounters and their consequences to large carnivores. PMID:26838467

  3. Human behaviour can trigger large carnivore attacks in developed countries.

    PubMed

    Penteriani, Vincenzo; Delgado, María del Mar; Pinchera, Francesco; Naves, Javier; Fernández-Gil, Alberto; Kojola, Ilpo; Härkönen, Sauli; Norberg, Harri; Frank, Jens; Fedriani, José María; Sahlén, Veronica; Støen, Ole-Gunnar; Swenson, Jon E; Wabakken, Petter; Pellegrini, Mario; Herrero, Stephen; López-Bao, José Vicente

    2016-02-03

    The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans in North America and Europe. Although rare compared to human fatalities by other wildlife, the media often overplay large carnivore attacks on humans, causing increased fear and negative attitudes towards coexisting with and conserving these species. Although large carnivore populations are generally increasing in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks by large carnivores. Here we show that an increasing number of people are involved in outdoor activities and, when doing so, some people engage in risk-enhancing behaviour that can increase the probability of a risky encounter and a potential attack. About half of the well-documented reported attacks have involved risk-enhancing human behaviours, the most common of which is leaving children unattended. Our study provides unique insight into the causes, and as a result the prevention, of large carnivore attacks on people. Prevention and information that can encourage appropriate human behaviour when sharing the landscape with large carnivores are of paramount importance to reduce both potentially fatal human-carnivore encounters and their consequences to large carnivores.

  4. Interaction between numbers and size during visual search.

    PubMed

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2017-05-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.

  5. Mipomersen preferentially reduces small low-density lipoprotein particle number in patients with hypercholesterolemia.

    PubMed

    Santos, Raul D; Raal, Frederick J; Donovan, Joanne M; Cromwell, William C

    2015-01-01

    Because of variability in lipoprotein cholesterol content, low-density lipoprotein (LDL) cholesterol frequently underrepresents or overrepresents the number of LDL particles. Mipomersen is an antisense oligonucleotide that reduces hepatic production of apolipoprotein B-100, the sole apolipoprotein of LDL. To characterize the effects of mipomersen on lipoprotein particle numbers as well as subclass distribution using nuclear magnetic resonance (NMR) spectroscopy. We compared the tertiary results for the direct measurement of LDL particle numbers by NMR among 4 placebo-controlled, phase 3 studies of mipomersen that had similar study designs but different patient populations: homozygous familial hypercholesterolemia (HoFH), severe hypercholesterolemia, heterozygous familial hypercholesterolemia with established coronary artery disease, or hypercholesterolemia with high risk for coronary heart disease (HC-CHD). HoFH patients had the highest median total LDL particles at baseline compared with HC-CHD patients, who had the lowest. At baseline, the HoFH population uniquely had a greater mean percentage of large LDL particles (placebo, 60.2%; mipomersen, 54.9%) compared with small LDL particles (placebo, 33.1%; mipomersen, 38.9%). In all 4 studies, mipomersen was associated with greater reductions from baseline in the concentrations of small LDL particles compared with those of large LDL particles, and both total LDL particles and small LDL particles were statistically significantly reduced. Mipomersen consistently reduced all LDL particle numbers and preferentially reduced the concentration of small LDL particles in all 4 phase 3 studies. Copyright © 2015 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  6. Heat and fluid flow characteristics of an oval fin-and-tube heat exchanger with large diameters for textile machine dryer

    NASA Astrophysics Data System (ADS)

    Bae, Kyung Jin; Cha, Dong An; Kwon, Oh Kyung

    2016-11-01

    The objectives of this paper are to develop correlations between heat transfer and pressure drop for an oval finned-tube heat exchanger with large diameters (larger than 20 mm) used in a textile machine dryer. Numerical tests using ANSYS CFX are performed for four different parameters: tube size, fin pitch, transverse tube pitch and longitudinal tube pitch. The numerical results showed that the Nusselt number and the friction factor agree with experimental results to within -16.2 ~ +3.1 % and -7.7 ~ +3.9 %, respectively. It was found that the Nusselt number linearly increased with increasing Reynolds number, but the friction factor slightly decreased with increasing Reynolds number. It was also found that the variation of longitudinal tube pitch has less effect on the Nusselt number and friction factor than the other parameters (below 2.0 and 2.5 %, respectively). This study proposes new Nusselt number and friction factor correlations for the oval finned-tube heat exchanger with large diameters for textile machine dryers.

  7. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ye; Thornber, Ben

    2016-04-12

    Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.

  8. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems after the necessary external programs are installed.
DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
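
    The linear-versus-quadratic scaling contrast described above can be sketched with a toy comparison count. This is an illustration only, not DeNoGAP code; treating the number of family models as a fixed constant `k` is an assumption made for the sketch.

```python
# Toy model of comparison counts (illustration only, not DeNoGAP code).
# An all-vs-all pairwise strategy performs n*(n-1)/2 genome comparisons,
# while an iterative strategy that matches each genome only against a
# roughly fixed set of k family models performs about n*k comparisons.

def pairwise_comparisons(n_genomes: int) -> int:
    """All-vs-all: scales quadratically with the number of genomes."""
    return n_genomes * (n_genomes - 1) // 2

def iterative_comparisons(n_genomes: int, n_models: int = 50) -> int:
    """Iterative model-based assignment: scales linearly."""
    return n_genomes * n_models

for n in (10, 100, 1000):
    print(n, pairwise_comparisons(n), iterative_comparisons(n))
```

    At 1000 genomes the pairwise count (499,500) already dwarfs the linear count (50,000), which is the practical motivation for the iterative clustering strategy.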

  9. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats

    PubMed Central

    Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
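
    The core idea of the paralogue ratio can be sketched numerically. This is a hypothetical simplification, not the published PRT protocol: the function name, the signal values, and the assumption that the reference locus is present at exactly 2 copies per diploid genome are all illustrative.

```python
# Hypothetical sketch of the paralogue-ratio idea (not the published PRT
# protocol): test and reference loci amplify with the same primer pair,
# so the ratio of their product signals estimates the ratio of their
# copy numbers. The reference locus is assumed fixed at 2 copies.

def estimate_copy_number(test_signal: float, reference_signal: float,
                         reference_copies: int = 2) -> int:
    """Round the scaled signal ratio to an integer copy number."""
    ratio = test_signal / reference_signal
    return round(ratio * reference_copies)

# A test peak twice as intense as the reference suggests 4 copies;
# noisy ratios are rounded to the nearest integer.
print(estimate_copy_number(2.0, 1.0))   # 4
print(estimate_copy_number(1.55, 1.0))  # 3
```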

  10. Exact diagonalization of quantum lattice models on coprocessors

    NASA Astrophysics Data System (ADS)

    Siro, T.; Harju, A.

    2016-10-01

    We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
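
    The single Lanczos step whose execution time is measured above is dominated by a matrix-vector product. A minimal sketch in NumPy, with a small dense symmetric matrix standing in for the sparse Hamiltonian of a quantum lattice model (the matrix size and random seed are arbitrary):

```python
import numpy as np

# One step of the three-term Lanczos recurrence; the dominant cost is
# the matrix-vector product H @ v, which is what coprocessor ports of
# the algorithm accelerate.

def lanczos_step(H, v_prev, v_curr, beta_prev):
    """Return (alpha, beta, v_next) for one Lanczos recurrence step."""
    w = H @ v_curr - beta_prev * v_prev
    alpha = v_curr @ w                 # diagonal entry of the tridiagonal matrix
    w -= alpha * v_curr
    beta = np.linalg.norm(w)           # off-diagonal entry
    return alpha, beta, w / beta

rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # symmetric, like a Hamiltonian
v0 = np.zeros(n)
v1 = rng.standard_normal(n)
v1 /= np.linalg.norm(v1)
alpha, beta, v2 = lanczos_step(H, v0, v1, 0.0)
print(float(v1 @ v2))                  # near 0: successive Lanczos vectors are orthogonal
```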

  11. Occupational cancer in the European part of the Commonwealth of Independent States.

    PubMed Central

    Bulbulyan, M A; Boffetta, P

    1999-01-01

    Precise information on the number of workers currently exposed to carcinogens in the Commonwealth of Independent States (CIS) is lacking. However, the large number of workers employed in high-risk industries such as the chemical and metal industries suggests that the number of workers potentially exposed to carcinogens may be large. In the CIS, women account for almost 50% of the industrial work force. Although no precise data are available on the number of cancers caused by occupational exposures, indirect evidence suggests that the magnitude of the problem is comparable to that observed in Western Europe, representing some 20,000 cases per year. The large number of women employed in the past and at present in industries that create potential exposure to carcinogens is a special characteristic of the CIS. In recent years an increasing amount of high-quality research has been conducted on occupational cancer in the CIS; there is, however, room for further improvement. International training programs should be established, and funds from international research and development programs should be devoted to this area. In recent years, following privatization of many large-scale industries, access to employment and exposure data is becoming increasingly difficult. PMID:10350512

  12. Comparing Medical Ecology, Utilization, and Expenditures Between 1996-1997 and 2011-2012.

    PubMed

    Johansen, Michael E

    2017-07-01

    This study compared ecology (number of individuals using a service), utilization (number of services used), and expenditures (dollars spent) for various categories of medical services between primarily 1996-1997 and 2011-2012. A repeated cross-sectional study was performed using nationally representative data mainly from the 1996, 1997, 2011, and 2012 Medical Expenditure Panel Survey (MEPS). These data were augmented with the 2002-2003 MEPS as well as the 1999-2000 and 2011-2012 National Health and Nutrition Examination Survey. Individuals (number per 1,000 people), utilization, and expenditures during an average month in 1996-1997 and 2011-2012 were determined for 15 categories of services. The number of individuals who used various medical services was unchanged for many categories of services (total, outpatient, outpatient physician, users of prescribed medications, primary care and specialty physicians, inpatient hospitalization, and emergency department). It was, however, increased for others (optometry/podiatry, therapy, and alternative/complementary medicine) and decreased for a few (dental and home health). The number of services used (utilization) largely mirrored the findings for individual use, with the exception of an increase in the number of prescribed medications and a decrease in number of primary care physician visits. There were large increases in dollars spent (expenditures) in every category with the exception of primary care physician and home health; the largest absolute increases were in prescribed medications, specialty physicians, emergency department visits, and likely inpatient hospitalizations. Although the number of individuals with visits during an average month and the total utilization of medical services were largely unchanged between the 2 time periods, total expenditures increased markedly. The increases in expenditure varied dramatically by category. © 2017 Annals of Family Medicine, Inc.

  13. FASTPM: a new scheme for fast simulations of dark matter and haloes

    NASA Astrophysics Data System (ADS)

    Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick

    2016-12-01

    We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the Comoving-Lagrangian acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: halo mass function from a friends-of-friends halo finder; halo and dark matter power spectrum; and cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) in computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
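
    The generic kick-drift time stepping that FASTPM builds on can be sketched with a toy one-particle leapfrog integrator. This is an illustration of plain kick-drift-kick stepping only; the modified kick and drift factors that enforce correct 1LPT growth in FASTPM are not reproduced here.

```python
# Toy kick-drift-kick (KDK) leapfrog step, the generic PM time stepping
# scheme. FASTPM's modified kick/drift factors are NOT reproduced here;
# this only illustrates the structure of one symplectic step.

def kdk_step(x, v, accel, dt):
    """Advance position x and velocity v by one KDK leapfrog step."""
    v = v + 0.5 * dt * accel(x)   # half kick
    x = x + dt * v                # full drift
    v = v + 0.5 * dt * accel(x)   # half kick
    return x, v

# Harmonic test force a(x) = -x: symplectic stepping keeps the energy
# close to its initial value of 0.5 over many steps.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = kdk_step(x, v, lambda x: -x, 0.01)
energy = 0.5 * v * v + 0.5 * x * x
print(round(energy, 4))
```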

  14. Abstract number and arithmetic in preschool children.

    PubMed

    Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Spelke, Elizabeth S

    2005-09-27

    Educated humans use language to express abstract number, applying the same number words to seven apples, whistles, or sins. Is language or education the source of numerical abstraction? Claims to the contrary must present evidence for numerical knowledge that applies to disparate entities, in people who have received no formal mathematics instruction and cannot express such knowledge in words. Here we show that preschool children can compare and add large sets of elements without counting, both within a single visual-spatial modality (arrays of dots) and across two modalities and formats (dot arrays and tone sequences). In two experiments, children viewed animations and either compared one visible array of dots to a second array or added two successive dot arrays and compared the sum to a third array. In further experiments, a dot array was replaced by a sequence of sounds, so that participants had to integrate quantity information presented aurally and visually. Children performed all tasks successfully, without resorting to guessing strategies or responding to continuous variables. Their accuracy varied with the ratio of the two quantities: a signature of large, approximate number representations in adult humans and animals. Addition was as accurate as comparison, even though children showed no relevant knowledge when presented with symbolic versions of the addition tasks. Abstract knowledge of number and addition therefore precedes, and may guide, language-based instruction in mathematics.
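
    The ratio signature described above is commonly modeled with scalar variability: each numerosity is represented as a Gaussian whose spread grows with its mean. A sketch under that assumption (the Weber fraction w = 0.2 is an arbitrary illustrative value, not a figure from the study):

```python
import math

# Scalar-variability model of approximate number comparison: set size n
# is represented as a Gaussian with mean n and standard deviation w*n.
# Predicted accuracy then depends only on the ratio of the two sets,
# not on their absolute sizes.

def p_correct(n1: float, n2: float, w: float = 0.2) -> float:
    """P(a sample for the larger set n2 exceeds a sample for n1)."""
    mu = n2 - n1
    sigma = w * math.hypot(n1, n2)     # independent Gaussian noise adds in quadrature
    return 0.5 * (1 + math.erf(mu / (sigma * math.sqrt(2))))

# Same 1:2 ratio -> same predicted accuracy at any scale,
# while a harder 3:4 ratio yields lower accuracy than 2:3.
print(round(p_correct(4, 8), 3), round(p_correct(50, 100), 3))
```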

  15. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations using a molecular computer. In order to achieve this, we propose three DNA-based algorithms, for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify the designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that cryptosystems using public keys are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.
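
    The hardness assumption behind RSA can be made concrete with a toy classical example (not the DNA-based algorithms of the paper): for a small modulus, trial division recovers the prime factors instantly, while for real RSA moduli of hundreds of digits the same search is computationally infeasible.

```python
# Toy classical illustration of the RSA hardness assumption: recovering
# p and q from n = p*q. Trial division works only because n is tiny;
# it is hopeless at real RSA key sizes.

def factor(n: int) -> tuple[int, int]:
    """Brute-force factorization by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n is prime

p, q = 2003, 2011          # two small primes standing in for "large" ones
n = p * q                  # the public modulus
print(factor(n))           # recovers (2003, 2011)
```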

  16. Rescaling citations of publications in physics

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Castellano, Claudio

    2011-04-01

    We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers.
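
    The rescaling procedure described above divides each paper's citation count by the average count c0 of its own field and publication year. A minimal sketch of that computation; the field names and citation counts below are made up for illustration:

```python
from collections import defaultdict

# Rescale citation counts by the average c0 of each (field, year) class,
# putting papers from different fields and years on a common scale.
# The records below are invented for illustration.

papers = [
    {"field": "hep-th", "year": 2000, "cites": 30},
    {"field": "hep-th", "year": 2000, "cites": 10},
    {"field": "nucl-ex", "year": 2005, "cites": 6},
    {"field": "nucl-ex", "year": 2005, "cites": 2},
]

totals, counts = defaultdict(int), defaultdict(int)
for p in papers:
    key = (p["field"], p["year"])
    totals[key] += p["cites"]
    counts[key] += 1
c0 = {k: totals[k] / counts[k] for k in totals}   # per-class averages

for p in papers:
    p["c_rescaled"] = p["cites"] / c0[(p["field"], p["year"])]

print([p["c_rescaled"] for p in papers])  # [1.5, 0.5, 1.5, 0.5]
```

    After rescaling, the top paper in each class scores 1.5 and the bottom one 0.5, even though their raw counts differ by a factor of five between fields.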

  18. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, C. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
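
    The principle behind automatic differentiation can be sketched with forward-mode dual numbers. Note this is only an illustration of the idea: ADIFOR itself is a source-to-source tool that generates Fortran derivative code rather than using operator overloading as below.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries a value and its derivative; arithmetic propagates
# both exactly (no finite-difference truncation error).

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):            # f(x) = 3x^2 + 2x, standing in for a flow solver output
    return 3 * x * x + 2 * x

x = Dual(4.0, 1.0)   # seed the derivative of the design variable
y = f(x)
print(y.val, y.dot)  # f(4) = 56, f'(4) = 6*4 + 2 = 26
```

    Seeding one design variable at a time with `dot = 1.0` yields one column of the sensitivity-derivative matrix per evaluation, which is why the cost grows with the number of design variables and motivates the parallel implementations compared above.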

  19. Large number discrimination by mosquitofish.

    PubMed

    Agrillo, Christian; Piffer, Laura; Bisazza, Angelo

    2010-12-22

    Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy of the number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.

  20. arrayCGHbase: an analysis platform for comparative genomic hybridization microarrays

    PubMed Central

    Menten, Björn; Pattyn, Filip; De Preter, Katleen; Robbrecht, Piet; Michels, Evi; Buysse, Karen; Mortier, Geert; De Paepe, Anne; van Vooren, Steven; Vermeesch, Joris; Moreau, Yves; De Moor, Bart; Vermeulen, Stefan; Speleman, Frank; Vandesompele, Jo

    2005-01-01

    Background The availability of the human genome sequence as well as the large number of physically accessible oligonucleotides, cDNA, and BAC clones across the entire genome has triggered and accelerated the use of several platforms for analysis of DNA copy number changes, among them microarray comparative genomic hybridization (arrayCGH). One of the challenges inherent to this new technology is the management and analysis of large numbers of data points generated in each individual experiment. Results We have developed arrayCGHbase, a comprehensive analysis platform for arrayCGH experiments consisting of a MIAME (Minimal Information About a Microarray Experiment) supportive database using MySQL underlying a data mining web tool, to store, analyze, interpret, compare, and visualize arrayCGH results in a uniform and user-friendly format. Owing to its flexible design, arrayCGHbase is compatible with all existing and forthcoming arrayCGH platforms. Data can be exported in a multitude of formats, including BED files to map copy number information on the genome using the Ensembl or UCSC genome browser. Conclusion ArrayCGHbase is a web-based and platform-independent arrayCGH data analysis tool that allows users to access the analysis suite through the internet or a local intranet after installation on a private server. ArrayCGHbase is available at . PMID:15910681

  1. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.

  2. Completing the mechanical energy pathways in turbulent Rayleigh-Bénard convection.

    PubMed

    Gayen, Bishakhdatta; Hughes, Graham O; Griffiths, Ross W

    2013-09-20

    A new, more complete view of the mechanical energy budget for Rayleigh-Bénard convection is developed and examined using three-dimensional numerical simulations at large Rayleigh numbers and a Prandtl number of 1. The driving role of available potential energy is highlighted. The relative magnitudes of different energy conversions or pathways change significantly over the range of Rayleigh numbers Ra ~ 10^7-10^13. At Ra < 10^7 small-scale turbulent motions are energized directly from available potential energy via turbulent buoyancy flux and kinetic energy is dissipated at comparable rates by both the large- and small-scale motions. In contrast, at Ra ≥ 10^10 most of the available potential energy goes into kinetic energy of the large-scale flow, which undergoes shear instabilities that sustain small-scale turbulence. The irreversible mixing is largely confined to the unstable boundary layer, its rate exactly equal to the generation of available potential energy by the boundary fluxes, and mixing efficiency is 50%.

  3. Exploration of multiphoton entangled states by using weak nonlinearities

    PubMed Central

    He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting

    2016-01-01

    We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes, the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044

  4. Large-scale influences in near-wall turbulence.

    PubMed

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.

  5. Quality of life assessment in interventional radiology.

    PubMed

    Monsky, Wayne L; Khorsand, Derek; Nolan, Timothy; Douglas, David; Khanna, Pavan

    2014-03-01

    The aim of this review was to describe quality of life (QoL) questionnaires relevant to interventional radiology. Interventional radiologists perform a large number of palliative procedures. The effect of these therapies on QoL is important. This is particularly true for cancer therapies where procedures with marginal survival benefits may result in tremendous QoL benefits. Image-guided minimally invasive procedures should be compared to invasive procedures, with respect to QoL, as part of comparative effectiveness assessment. A large number of questionnaires have been validated for measurement of overall and disease-specific quality of life. Use of applicable QoL assessments can aid in evaluating clinical outcomes and help to further substantiate the need for minimally invasive image-guided procedures. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  6. The number of striatal cholinergic interneurons expressing calretinin is increased in parkinsonian monkeys.

    PubMed

    Petryszyn, Sarah; Di Paolo, Thérèse; Parent, André; Parent, Martin

    2016-11-01

    The most abundant interneurons in the primate striatum are those expressing the calcium-binding protein calretinin (CR). The present immunohistochemical study provides detailed assessments of their morphological traits, number, and topographical distribution in normal monkeys (Macaca fascicularis) and in monkeys rendered parkinsonian (PD) by MPTP intoxication. In primates, the CR+ striatal interneurons comprise small (8-12μm), medium (12-20μm) and large-sized (20-45μm) neurons, each with distinctive morphologies. The small CR+ neurons were 2-3 times more abundant than the medium-sized CR+ neurons, which were 20-40 times more numerous than the large CR+ neurons. In normal and PD monkeys, the density of small and medium-sized CR+ neurons was twice as high in the caudate nucleus than in the putamen, whereas the inverse occurred for the large CR+ neurons. Double immunostaining experiments revealed that only the large-sized CR+ neurons expressed choline acetyltransferase (ChAT). The number of large CR+ neurons was found to increase markedly (4-12 times) along the entire anteroposterior extent of both the caudate nucleus and putamen of PD monkeys compared to controls. Comparison of the number of large CR-/ChAT+ and CR+/ChAT+ neurons together with experiments involving the use of bromo-deoxyuridine (BrdU) as a marker of newly generated cells showed that it is the expression of CR by the large ChAT+ striatal interneurons, and not their absolute number, that is increased in the dopamine-depleted striatum. These findings reveal the modulatory role of dopamine in the phenotypic expression of the large cholinergic striatal neurons, which are known to play a crucial role in PD pathophysiology. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Can Virtual Schools Thrive in the Real World?

    ERIC Educational Resources Information Center

    Wang, Yinying; Decker, Janet R.

    2014-01-01

    Despite the relatively large number of students enrolled in Ohio's virtual schools, it is unclear how virtual schools compare to their traditional school counterparts on measures of student achievement. To provide some insight, we compared the school performance from 2007-2011 at Ohio's virtual and traditional schools. The results suggest that…

  8. Birds have primate-like numbers of neurons in the forebrain

    PubMed Central

    Olkowicz, Seweryn; Kocourek, Martin; Lučan, Radek K.; Porteš, Michal; Fitch, W. Tecumseh; Herculano-Houzel, Suzana; Němec, Pavel

    2016-01-01

    Some birds achieve primate-like levels of cognition, even though their brains tend to be much smaller in absolute size. This poses a fundamental problem in comparative and computational neuroscience, because small brains are expected to have a lower information-processing capacity. Using the isotropic fractionator to determine numbers of neurons in specific brain regions, here we show that the brains of parrots and songbirds contain on average twice as many neurons as primate brains of the same mass, indicating that avian brains have higher neuron packing densities than mammalian brains. Additionally, corvids and parrots have much higher proportions of brain neurons located in the pallial telencephalon compared with primates or other mammals and birds. Thus, large-brained parrots and corvids have forebrain neuron counts equal to or greater than primates with much larger brains. We suggest that the large numbers of neurons concentrated in high densities in the telencephalon substantially contribute to the neural basis of avian intelligence. PMID:27298365

  9. Large eddy simulation of turbulent cavitating flows

    NASA Astrophysics Data System (ADS)

    Gnanaskandan, A.; Mahesh, K.

    2015-12-01

    Large Eddy Simulation is employed to study two turbulent cavitating flows: over a cylinder and over a wedge. A homogeneous mixture model is used to treat the mixture of water and water vapor as a compressible fluid. The governing equations are solved using a novel predictor-corrector method. The subgrid terms are modeled using the dynamic Smagorinsky model. Cavitating flow over a cylinder at Reynolds number (Re) = 3900 and cavitation number (σ) = 1.0 is simulated, and the wake characteristics are compared to single-phase results at the same Reynolds number. It is observed that cavitation suppresses turbulence in the near wake and delays three-dimensional breakdown of the vortices. Next, cavitating flow over a wedge at Re = 200,000 and σ = 2.0 is presented. The mean void fraction profiles obtained are compared to experiment and good agreement is obtained. Cavity auto-oscillation is observed, in which the sheet cavity periodically breaks up into a cloud cavity. The results suggest LES is an attractive approach for predicting turbulent cavitating flows.

  10. Direction Counts: A Comparative Study of Spatially Directional Counting Biases in Cultures with Different Reading Directions

    ERIC Educational Resources Information Center

    Shaki, Samuel; Fischer, Martin H.; Gobel, Silke M.

    2012-01-01

    Western adults associate small numbers with left space and large numbers with right space. Where does this pervasive spatial-numerical association come from? In this study, we first recorded directional counting preferences in adults with different reading experiences (left to right, right to left, mixed, and illiterate) and observed a clear…

  11. [Distributions of the numbers of monitoring stations in the surveillance of infectious diseases in Japan].

    PubMed

    Murakami, Y; Hashimoto, S; Taniguchi, K; Nagai, M

    1999-12-01

    To describe the characteristics of monitoring stations in the infectious disease surveillance system in Japan, we compared the distributions of the number of monitoring stations in terms of population, region, size of medical institution, and medical specialty. The distributions of the annual number of reported cases in terms of type of disease, size of medical institution, and medical specialty were also compared. We conducted a nationwide survey of the pediatric stations (16 diseases), ophthalmology stations (3 diseases) and sexually transmitted disease (STD) stations (5 diseases) in Japan. In the survey, we collected data on the monitoring stations and the annual reported cases of each disease. We also collected census data on the population served by the health center in whose area each monitoring station was located. First, we compared the present number of monitoring stations with the current standard established by the Ministry of Health and Welfare (MHW). Second, we compared the distribution of all medical institutions in Japan with that of the monitoring stations in terms of institution size. Third, we compared the average number of annually reported cases in terms of institution size and medical specialty. In most health centers, the number of monitoring stations met the current MHW standard, although a few health centers had no monitoring station despite serving large populations. Most prefectures also met the standard, but some were well below it. Among pediatric stations, the sampling proportion of large hospitals was higher than that of other categories. Among ophthalmology stations, the sampling proportion of hospitals was higher than that of other categories. Among STD stations, the sampling proportion of obstetrics and gynecology clinics was lower than that of other categories. Except for some diseases, the type of medical institution made little difference to the average number of annually reported cases. Among the STD stations, however, there were large differences in the average number of annually reported cases across medical specialties.

  12. An evaluation of Health of the Nation Outcome Scales data to inform psychiatric morbidity following the Canterbury earthquakes.

    PubMed

    Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J

    2017-11-01

    Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse consequences on mental health. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large District Health Boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and Child and Adolescent inpatient services compared to the seven other large district health boards. Admission Health of the Nation Outcome Scales scores for Child and Adolescent community services did not change significantly, while admission Health of the Nation Outcome Scales scores for Alcohol and Drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable with the exception of admissions to inpatient services for the group aged 4-17 years which showed a large increase. 
The Canterbury earthquakes were followed by an increase in Health of the Nation Outcome Scales scores for attendees of local mental health services compared to other large district health boards. This suggests that patients presented with greater degrees of psychiatric distress, social disruption, behavioural change and impairment as a result of the earthquakes.

  13. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability to rapidly solve the large non-linear system resulting from time-spectral discretizations, which becomes larger and stiffer as more time instances are employed or as the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner was developed to mitigate the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances; as a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and can therefore be inverted quite efficiently. 
This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
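    The frequency-space step can be illustrated with a toy calculation: the coupling matrix of a time-spectral discretization is circulant (each time instance is coupled to the others through the same periodic stencil), and every circulant matrix is diagonalized by the discrete Fourier basis, which is why it has entries only on the main diagonal in frequency space. A minimal pure-Python sketch with an illustrative stencil, not the authors' solver:

```python
import cmath

def circulant(c):
    """Build an N x N circulant matrix from its first row c."""
    n = len(c)
    return [[c[(j - i) % n] for j in range(n)] for i in range(n)]

def dft_vector(k, n):
    """k-th discrete Fourier basis vector of length n."""
    w = cmath.exp(2j * cmath.pi / n)
    return [w ** (j * k) for j in range(n)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

n = 8
c = [0.0, 1.0, -0.5, 0.25, 0.0, 0.0, 0.0, -0.3]  # illustrative periodic stencil
C = circulant(c)
w = cmath.exp(2j * cmath.pi / n)

# Each Fourier vector is an eigenvector of C; the eigenvalue is the DFT
# of the stencil, so in the Fourier basis C is purely diagonal and can
# be "inverted" frequency by frequency.
for k in range(n):
    f = dft_vector(k, n)
    lam = sum(c[m] * w ** (m * k) for m in range(n))
    Cf = matvec(C, f)
    assert max(abs(Cf[j] - lam * f[j]) for j in range(n)) < 1e-9
```

    Inverting the diagonalized operator costs O(N) per spatial unknown instead of the O(N^3) direct inversion the abstract describes, which is the source of the claimed speed-up.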

  14. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study comparing the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers running PostgreSQL and on a Windows server running MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone-search radii drawn on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete each query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. 
At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so fine that a large number of extraneous empty cells begins to degrade performance. Thus, for the use patterns studied here, the database performance is not critically dependent on the exact choice of index or level.
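    The two-stage pattern such an index supports (select the candidate cells overlapping the cone, then filter the candidates exactly by angular separation) can be sketched as follows. The grid here is a deliberately naive rectangular RA/Dec partition, not HTM or HEALPix, and the function names and toy catalog are illustrative:

```python
import math
from collections import defaultdict

def cell_id(ra_deg, dec_deg, cells_per_deg=4):
    """Map a sky position to a rectangular (RA, Dec) grid cell.
    A toy quadrature scheme: real systems use HTM or HEALPix,
    whose cells have near-uniform area on the sphere."""
    return (int(ra_deg * cells_per_deg), int((dec_deg + 90.0) * cells_per_deg))

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cosv = (math.sin(d1) * math.sin(d2) +
            math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosv))))

def build_index(catalog, cells_per_deg=4):
    """The 'index column': cell id -> records falling in that cell."""
    index = defaultdict(list)
    for rec_id, ra, dec in catalog:
        index[cell_id(ra, dec, cells_per_deg)].append((rec_id, ra, dec))
    return index

def cone_search(index, ra0, dec0, radius_deg, cells_per_deg=4):
    """Scan only cells overlapping the cone's bounding box, then
    filter exactly (ignores RA wrap-around for brevity)."""
    pad = radius_deg / max(math.cos(math.radians(dec0)), 1e-6)
    c0 = cell_id(ra0 - pad, dec0 - radius_deg, cells_per_deg)
    c1 = cell_id(ra0 + pad, dec0 + radius_deg, cells_per_deg)
    hits = []
    for cx in range(c0[0], c1[0] + 1):
        for cy in range(c0[1], c1[1] + 1):
            for rec_id, ra, dec in index.get((cx, cy), []):
                if angular_sep(ra0, dec0, ra, dec) <= radius_deg:
                    hits.append(rec_id)
    return sorted(hits)

catalog = [("a", 10.0, 20.0), ("b", 10.1, 20.05), ("c", 50.0, -30.0)]
idx = build_index(catalog)
print(cone_search(idx, 10.0, 20.0, 0.5))  # -> ['a', 'b']
```

    The trade-off the study measures falls out of `cells_per_deg`: coarse cells force the exact filter to reject many candidates, while very fine cells multiply the number of (mostly empty) cells the bounding-box scan must visit.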

  15. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    DOE R&D Accomplishments Database

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p⊥, eventually leveling off proportional to A^1.1.

  16. Long-Delayed Aftershocks in New Zealand and the 2016 M7.8 Kaikoura Earthquake

    NASA Astrophysics Data System (ADS)

    Shebalin, P.; Baranov, S.

    2017-10-01

    We study aftershock sequences of six major earthquakes in New Zealand, including the 2016 M7.8 Kaikoura and 2016 M7.1 North Island earthquakes. For the Kaikoura earthquake, we assess the expected number of long-delayed large aftershocks of M5+ and M5.5+ in two periods, 0.5 and 3 years after the main shock, using 75 days of available data. We compare these results with those obtained for the other sequences using the same 75-day period. We estimate the errors by considering a set of magnitude thresholds and corresponding periods of data completeness and consistency. To avoid overestimating the expected rates of large aftershocks, we presume a break of slope in the magnitude-frequency relation of the aftershock sequences, and compare two models, with and without the break of slope. Comparing these estimates to the actual number of long-delayed large aftershocks, we observe, in general, a significant underestimation of their expected number. We suppose that the long-delayed aftershocks may reflect larger-scale processes, including the interaction of faults, that complement an isolated relaxation process. In the spirit of this hypothesis, we search for symptoms of the capacity of the aftershock zone to generate large events months after the major earthquake. We adapt the algorithm EAST, which studies statistics of early aftershocks, to the case of secondary aftershocks within aftershock sequences of major earthquakes. In retrospective application to the considered cases, the algorithm demonstrates an ability to detect long-delayed aftershocks in advance, in both the time and space domains. Application of the EAST algorithm to the 2016 M7.8 Kaikoura earthquake zone indicates that the most likely area for a delayed aftershock of M5.5+ or M6+ is at the northern end of the zone in Cook Strait.

  17. Exciting New Images | Lunar Reconnaissance Orbiter Camera

    Science.gov Websites

    Erosion slowly and relentlessly reshapes the Moon's topography. How can a quantitative comparison of the shapes of lunar craters be derived, and how can we quantify and compare the topography of a large number of impact craters? A method for the quantitative characterization of impact crater topography is described in Mahanti, P. et al., 2014, Icarus v. 241.

  18. Improving the Comparability and Local Usefulness of International Assessments: A Look Back and a Way Forward

    ERIC Educational Resources Information Center

    Rutkowski, Leslie; Rutkowski, David

    2018-01-01

    Over time international large-scale assessments have grown in terms of number of studies, cycles, and participating countries, many of which are a heterogeneous mix of economies, languages, cultures, and geography. This heterogeneity has meaningful consequences for comparably measuring both achievement and non-achievement constructs, such as…

  19. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that, through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs closer to those of semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  20. Students Selection for University Course Admission at the Joint Admissions Board (Kenya) Using Trained Neural Networks

    ERIC Educational Resources Information Center

    Wabwoba, Franklin; Mwakondo, Fullgence M.

    2011-01-01

    Every year, the Joint Admission Board (JAB) is tasked to determine those students who are expected to join various Kenyan public universities under the government sponsorship scheme. This exercise is usually extensive because of the large number of qualified students compared to the very limited number of slots at various institutions and the…

  1. Extension of electronic speckle correlation interferometry to large deformations

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Sciammarella, Federico M.

    1998-07-01

    The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.

  2. Comparing Human and Automated Essay Scoring for Prospective Graduate Students with Learning Disabilities and/or ADHD

    ERIC Educational Resources Information Center

    Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael

    2016-01-01

    Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra

    Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.

  4. Comparative genome-wide polymorphic microsatellite markers in Antarctic penguins through next generation sequencing

    PubMed Central

    Vianna, Juliana A.; Noll, Daly; Mura-Jornet, Isidora; Valenzuela-Guerra, Paulina; González-Acuña, Daniel; Navarro, Cristell; Loyola, David E.; Dantas, Gisele P. M.

    2017-01-01

    Microsatellites are valuable molecular markers for evolutionary and ecological studies. Next-generation sequencing is responsible for the increasing number of microsatellites available for non-model species. The genus Pygoscelis comprises three species of penguin: Adélie (P. adeliae), Chinstrap (P. antarcticus) and Gentoo (P. papua), all distributed around Antarctica and the sub-Antarctic. The species have been affected differently by climate change, and the use of microsatellite markers will be crucial for monitoring population dynamics. We characterized a large set of genome-wide microsatellites and evaluated polymorphisms in all three species. SOLiD reads were generated from libraries of each species, identifying a large number of microsatellite loci: 33,677, 35,265 and 42,057 for P. adeliae, P. antarcticus and P. papua, respectively. A large number of dinucleotide (66,139), trinucleotide (29,490) and tetranucleotide (11,849) microsatellites are described. Microsatellite abundance, diversity and orthology were characterized in the penguin genomes. We evaluated polymorphisms in 170 tetranucleotide loci, obtaining 34 loci polymorphic in at least one species and 15 polymorphic in all three species, which allows comparative studies. The polymorphic markers presented here enable a range of ecological, population, individual-identification, parentage and evolutionary studies of Pygoscelis, with potential use in other penguin species. PMID:28898354
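    As a rough illustration of what characterizing microsatellites involves, a scanner for perfect tandem repeats can be sketched in a few lines. This is a toy with hypothetical function names; production pipelines (including SOLiD-based workflows like the one above) also handle imperfect repeats, quality filtering, and flanking-primer design:

```python
import re

def find_microsatellites(seq, motif_lengths=(2, 3, 4), min_repeats=4):
    """Find perfect tandem repeats of short motifs (di-, tri- and
    tetranucleotide, the classes counted in the study above).
    Returns (start, motif, n_repeats) tuples."""
    hits = []
    for k in motif_lengths:
        # ([ACGT]{k}) captures a motif; \1{min_repeats-1,} demands repeats of it
        for m in re.finditer(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1), seq):
            motif = m.group(1)
            if len(set(motif)) == 1:  # skip homopolymer runs like AAAA...
                continue
            hits.append((m.start(), motif, len(m.group(0)) // k))
    return hits

seq = "GGATTACACACACACACAGGTTCAGTAGTAGTAGTAGGC"
print(find_microsatellites(seq))  # -> [(5, 'AC', 6), (23, 'AGT', 4)]
```

    A real tool would additionally merge overlapping calls (a long AC repeat also matches as an ACAC tetranucleotide) and report the flanking sequence needed to design genotyping primers.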

  5. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. 
Copyright © 2015 Elsevier B.V. All rights reserved.

  6. YBYRÁ facilitates comparison of large phylogenetic trees.

    PubMed

    Machado, Denis Jacob

    2015-07-01

    The number and size of tree topologies being compared by phylogenetic systematists are increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with large numbers of terminals. The YBYRÁ project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, so they require no installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html.
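    The first of these tools, topological distance based on the number of shared splits or clades, can be sketched as the symmetric difference of the two trees' clade sets (a Robinson-Foulds-style distance). The sketch below is illustrative, not YBYRÁ's implementation:

```python
def leaves(tree):
    """Leaf set of a nested-tuple tree, e.g. (("A","B"),("C",("D","E")))."""
    if isinstance(tree, str):
        return frozenset([tree])
    out = set()
    for child in tree:
        out |= leaves(child)
    return frozenset(out)

def clades(tree):
    """The leaf set under every internal node: the tree's clades."""
    if isinstance(tree, str):
        return set()
    found = {leaves(tree)}
    for child in tree:
        found |= clades(child)
    return found

def shared_split_distance(t1, t2):
    """Count clades present in one tree but not the other; 0 means
    identical topologies on the same taxa."""
    c1, c2 = clades(t1), clades(t2)
    return len(c1 ^ c2)

t1 = (("A", "B"), ("C", ("D", "E")))
t2 = (("A", "C"), ("B", ("D", "E")))
print(shared_split_distance(t1, t1))  # -> 0
print(shared_split_distance(t1, t2))  # -> 4
```

    The distance grows linearly with the number of disagreeing clades, which is why it stays cheap to compute even for trees with very many terminals.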

  7. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents a deeper theoretical investigation of their dynamical properties. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
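    A minimal version of such a cycle reservoir can be sketched to show the property that makes fixed, deterministic reservoirs usable: when the cycle weight is below one, the state forgets its initial condition (the echo state property). The parameters and function name below are illustrative, and the trained readout layer is omitted:

```python
import math

def cycle_reservoir_run(inputs, n=20, r=0.5, v=0.1, x0=None):
    """Drive a reservoir whose only recurrent weights form a single
    directed cycle of weight r (the deterministic topology described
    above). Illustrative parameters, not a tuned model."""
    x = list(x0) if x0 is not None else [0.0] * n
    for u in inputs:
        # unit i reads from its cycle predecessor i-1 (mod n)
        x = [math.tanh(r * x[(i - 1) % n] + v * u) for i in range(n)]
    return x

inputs = [math.sin(0.3 * t) for t in range(200)]
a = cycle_reservoir_run(inputs, x0=[0.0] * 20)
b = cycle_reservoir_run(inputs, x0=[0.9] * 20)
# Echo state property: with |r| < 1 the map is a contraction, so two
# different starting states converge on the same input stream.
print(max(abs(ai - bi) for ai, bi in zip(a, b)) < 1e-9)  # -> True
```

    In a full model, a linear readout is trained (typically by ridge regression) on the recorded states; only that readout is learned, while the cycle weights stay fixed.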

  8. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  9. Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Huchtmeier, W. K.; Richter, O. G.; Materne, J.

    1981-09-01

    The large-scale structure of the universe is dominated by clustering. Most galaxies seem to be members of pairs, groups, clusters, and superclusters. To that degree we are able to recognize a hierarchical structure of the universe. Our local group of galaxies (LG) is centred on two large spiral galaxies: the Andromeda nebula and our own galaxy. Three smaller galaxies - like M 33 - and at least 23 dwarf galaxies (Kraan-Korteweg and Tammann, 1979, Astronomische Nachrichten, 300, 181) can be found in the environment of these two large galaxies. Neighbouring groups have comparable sizes (about 1 Mpc in extent) and comparable numbers of bright members. Small dwarf galaxies cannot at present be observed at great distances.

  10. Changes in numbers of large ovarian follicles, plasma luteinizing hormone and estradiol-17beta concentrations and egg production figures in farmed ostriches throughout the year.

    PubMed

    Bronneberg, R G G; Stegeman, J A; Vernooij, J C M; Dieleman, S J; Decuypere, E; Bruggeman, V; Taverne, M A M

    2007-06-01

    In this study we described and analysed changes in the numbers of large ovarian follicles (diameter 6.1-9.0 cm) and in the plasma concentrations of luteinizing hormone (LH) and estradiol-17beta (E(2)beta) in relation to the individual egg production figures of farmed ostriches (Struthio camelus spp.) throughout one year. Ultrasound scanning and blood sampling for plasma hormone analysis were performed in 9 hens on a monthly basis during the breeding season and in two periods of the non-breeding season. Our data demonstrated that: (1) large follicles were detected and LH concentrations were elevated as early as 1 month before the first ovipositions of the egg production season; (2) E(2)beta concentrations increased as soon as the egg production season started; (3) numbers of large follicles and LH and E(2)beta concentrations were elevated during the entire egg production season; and (4) numbers of large follicles and LH and E(2)beta concentrations decreased simultaneously with or following the last ovipositions of the egg production season. Comparing these parameters during the egg production season with their pre- and post-seasonal values revealed significant differences in the numbers of large follicles and E(2)beta concentrations between the pre-seasonal, seasonal and post-seasonal periods, while LH concentrations differed significantly between the seasonal and post-seasonal periods. In conclusion, our data demonstrate that changes in numbers of large follicles and in concentrations of LH and E(2)beta closely parallel individual egg production figures and provide some new cues that egg production in ostriches is confined to a marked reproductive season. Moreover, our data indicate that the mechanisms initiating, maintaining and terminating the egg production season in farmed breeding ostriches are quite similar to those already known for other seasonally breeding bird species.

  11. Supersonic jet noise generated by large scale instabilities

    NASA Technical Reports Server (NTRS)

    Seiner, J. M.; Mclaughlin, D. K.; Liu, C. H.

    1982-01-01

    The role of large-scale wavelike structures as the major mechanism of supersonic jet noise emission is examined. Using aerodynamic and acoustic data from low Reynolds number supersonic jets at and below Re = 70,000, comparisons are made with flow fluctuation and acoustic measurements in high Reynolds number supersonic jets. These comparisons show that a similar physical mechanism governs the generation of sound emitted in the principal noise direction. The experimental data are further compared with a linear instability theory whose prediction for the axial location of peak wave amplitude agrees satisfactorily with measured phase-averaged flow fluctuation data in the low Reynolds number jets. The agreement between theory and experiment in the high Reynolds number flow differs as to the axial location of peak flow fluctuations, and the theory predicts an apparent origin for sound emission far upstream of the measured acoustic data.

  12. Computational tools for copy number variation (CNV) detection using next-generation sequencing data: features and perspectives.

    PubMed

    Zhao, Min; Wang, Qingguo; Wang, Quan; Jia, Peilin; Zhao, Zhongming

    2013-01-01

    Copy number variation (CNV) is a prevalent form of critical genetic variation that leads to an abnormal number of copies of large genomic regions in a cell. Microarray-based comparative genome hybridization (arrayCGH) and genotyping arrays were the standard technologies for detecting large genomic regions subject to copy number changes until recently, when high-resolution sequence data became available through next-generation sequencing (NGS). During the last several years, NGS-based analysis has been widely applied to identify CNVs in both healthy and diseased individuals. Correspondingly, the strong demand for NGS-based CNV analyses has fuelled the development of numerous computational methods and tools for CNV detection. In this article, we review the recent advances in computational methods pertaining to CNV detection using whole genome and whole exome sequencing data. Additionally, we discuss their strengths and weaknesses and suggest directions for future development.
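Among the NGS-based strategies these tools implement, read-depth analysis is the most common for whole-genome data. As a generic illustrative sketch (not the algorithm of any particular tool reviewed here; the window depths, normalization, and threshold are simplified assumptions):

```python
from math import log2
from statistics import median

def call_cnv_windows(window_depths, log2_threshold=0.5):
    """Flag candidate CNV windows from per-window mapped-read depths.

    Depths are normalized by the genome-wide median window depth;
    windows whose log2 copy ratio exceeds +/- log2_threshold are
    reported as candidate gains or losses.
    """
    baseline = median(window_depths)
    calls = []
    for i, depth in enumerate(window_depths):
        ratio = log2(depth / baseline) if depth > 0 else float("-inf")
        if ratio >= log2_threshold:
            calls.append((i, ratio, "gain"))
        elif ratio <= -log2_threshold:
            calls.append((i, ratio, "loss"))
    return calls

# Synthetic example: ~100x diploid background, a duplicated segment
# (windows 3-5) and a heterozygous deletion (window 8)
depths = [100, 98, 102, 150, 152, 148, 101, 99, 50]
calls = call_cnv_windows(depths)  # windows 3-5 called as gains, 8 as a loss
```

Real tools add GC-bias correction, mappability filtering, and segmentation of adjacent windows, which is where the methods reviewed here differ most.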

  13. Computational tools for copy number variation (CNV) detection using next-generation sequencing data: features and perspectives

    PubMed Central

    2013-01-01

    Copy number variation (CNV) is a prevalent form of critical genetic variation that leads to an abnormal number of copies of large genomic regions in a cell. Microarray-based comparative genome hybridization (arrayCGH) and genotyping arrays were the standard technologies for detecting large genomic regions subject to copy number changes until recently, when high-resolution sequence data became available through next-generation sequencing (NGS). During the last several years, NGS-based analysis has been widely applied to identify CNVs in both healthy and diseased individuals. Correspondingly, the strong demand for NGS-based CNV analyses has fuelled the development of numerous computational methods and tools for CNV detection. In this article, we review the recent advances in computational methods pertaining to CNV detection using whole genome and whole exome sequencing data. Additionally, we discuss their strengths and weaknesses and suggest directions for future development. PMID:24564169

  14. The Panchromatic Comparative Exoplanetary Treasury Program

    NASA Astrophysics Data System (ADS)

    Sing, David

    2016-10-01

    HST has played the definitive role in the characterization of exoplanets, and from the first planets available we have learned that their atmospheres are incredibly diverse. The large number of transiting planets now available has prompted a new era of atmospheric studies, in which wide-scale comparative planetology is now possible. The atmospheric chemistry of cloud/haze formation and atmospheric mass loss are major outstanding issues in the field of exoplanets, and we seek to make progress by gaining insight into their underlying physical processes through comparative studies. Here we propose to use Hubble's full spectroscopic capabilities to produce the first large-scale, simultaneous UVOIR comparative study of exoplanets. With full wavelength coverage, an entire planet's atmosphere can be probed simultaneously, and with sufficient numbers of planets we can statistically compare their features with physical parameters for the first time. This panchromatic program will build a lasting HST legacy, providing the UV and blue-optical spectra unavailable to JWST. From these observations, chemistry over a wide range of physical environments will be probed, from the hottest condensates to much cooler planets where photochemical hazes could be present. Constraints on aerosol size and composition will help unlock our understanding of clouds and how they are suspended at such high altitudes. Notably, there have been no large transiting UV HST programs, and this panchromatic program will provide a fundamental legacy contribution to the study of atmospheric escape from small exoplanets, where the mass loss can be significant and have a major impact on the evolution of the planet itself.

  15. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 or less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with regions of high energy dissipation rate.
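The Eswaran-Pope scheme drives the large-scale modes with Ornstein-Uhlenbeck (OU) random processes whose correlation time is precisely the forcing time scale studied here. A minimal single-component sketch of such a process (an illustrative discretization with assumed parameters, not the paper's full spectral-space implementation):

```python
import math
import random

def ou_forcing_series(n_steps, dt, t_force, sigma, seed=0):
    """Exactly discretized Ornstein-Uhlenbeck process: zero mean,
    stationary variance sigma**2, correlation time t_force."""
    rng = random.Random(seed)
    decay = math.exp(-dt / t_force)                 # memory over one step
    kick = sigma * math.sqrt(1.0 - decay * decay)   # keeps variance stationary
    f, series = 0.0, []
    for _ in range(n_steps):
        f = decay * f + kick * rng.gauss(0.0, 1.0)
        series.append(f)
    return series

# A long series whose sample variance should sit near sigma**2 = 4
series = ou_forcing_series(n_steps=200_000, dt=0.01, t_force=0.5, sigma=2.0)
```

Shrinking `t_force` toward `dt` whitens the forcing, while lengthening it concentrates energy in slow large-scale fluctuations; that is the knob whose effect on dissipation rate and Reynolds number the paper quantifies.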

  16. Comparative Approaches to Genetic Discrimination: Chasing Shadows?

    PubMed

    Joly, Yann; Feze, Ida Ngueng; Song, Lingqiao; Knoppers, Bartha M

    2017-05-01

    Genetic discrimination (GD) is one of the most pervasive issues associated with genetic research and its large-scale implementation. An increasing number of countries have adopted public policies to address this issue. Our research presents a worldwide comparative review and typology of these approaches. We conclude with suggestions for public policy development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A 30-Year Prospective Follow-Up Study of Hyperactive Boys with Conduct Problems: Adult Criminality

    ERIC Educational Resources Information Center

    Satterfield, James H.; Faller, Katherine J.; Crinella, Francis M.; Schell, Anne M.; Swanson, James M.; Homer, Louis D.

    2007-01-01

    Objective: To compare the official arrest records for a large number of hyperactive boys (N = 179), most with conduct problems, and 75 control boys; to examine childhood IQ, socioeconomic status, and parent reports of childhood hyperactivity and conduct problems for their contribution to criminal behavior in adulthood; and to compare adult outcome…

  18. Some anomalies between wind tunnel and flight transition results

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.; Bobbitt, P. J.

    1981-01-01

    A review of environmental disturbance influences and boundary layer transition measurements on a large collection of reference sharp-cone tests in wind tunnels, and of recent transonic-supersonic cone flight results, has previously demonstrated the dominance of the free-stream disturbance level on the transition process from beginning to end. The variation with Mach number of the ratio of transition Reynolds number at onset to that at the end of transition has been shown to be consistently different between flight and wind tunnels. Previous correlations of the end of transition with disturbance level give good results for flight and for a large number of tunnels; however, anomalies occur for a similar correlation based on transition onset. The present cone results, obtained with a tunnel sonic throat, show the disturbance level reduced by an order of magnitude, with transition values comparable to flight.

  19. Resonance and intercombination lines in Mg-like ions of atomic numbers Z = 13 – 92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santana, Juan A.; Trabert, Elmar

    2015-02-05

    While prominent lines of various Na-like ions have been measured with an accuracy of better than 100 ppm and corroborate equally accurate calculations, there have been remarkably large discrepancies between calculations for Mg-like ions of high atomic number. We present ab initio calculations using the multireference Moller-Plesset approach for Mg-like ions of atomic numbers Z = 13-92 and compare the results with other calculations of this isoelectronic sequence as well as with experimental data. Our results come very close to experiment (typically within 100 ppm) over a wide range. Furthermore, data at high values of Z are sparse, which calls for further accurate measurements in this range, where relativistic and QED effects are large.

  20. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    PubMed Central

    Jardim-Messeder, Débora; Lambert, Kelly; Noctor, Stephen; Pestana, Fernanda M.; de Castro Leal, Maria E.; Bertelsen, Mads F.; Alagaili, Abdulaziz N.; Mohammad, Osama B.; Manger, Paul R.; Herculano-Houzel, Suzana

    2017-01-01

    Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal composition of carnivoran brains is not affected by domestication. Instead, large carnivorans appear to be particularly vulnerable to metabolic constraints that impose a trade-off between body size and number of cortical neurons. PMID:29311850

  1. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species.

    PubMed

    Jardim-Messeder, Débora; Lambert, Kelly; Noctor, Stephen; Pestana, Fernanda M; de Castro Leal, Maria E; Bertelsen, Mads F; Alagaili, Abdulaziz N; Mohammad, Osama B; Manger, Paul R; Herculano-Houzel, Suzana

    2017-01-01

    Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal composition of carnivoran brains is not affected by domestication. Instead, large carnivorans appear to be particularly vulnerable to metabolic constraints that impose a trade-off between body size and number of cortical neurons.

  2. Fighting for independence.

    PubMed

    Saxon, Emma

    2016-01-19

    Male crickets (Gryllus bimaculatus) establish dominance hierarchies within a population by fighting with one another. Larger males win fights more frequently than their smaller counterparts, and a previous study found that males recognise one another primarily through sensory input from the antennae. This study therefore investigated whether the success of larger crickets is influenced by sensory input from the antennae, in part by assessing the number of fights that large 'antennectomized' crickets won against small crickets, compared with the number that large, intact crickets won. The success rate was significantly lower in antennectomized males, though they still won the majority of fights (73/100 versus 58/100, Fisher's exact test P < 0.05); the authors thus conclude that sensory input from the antennae affects the fighting success of large males, but that other size-related factors also play a part.
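The reported comparison (73/100 wins for intact large males versus 58/100 for antennectomized ones) can be checked with a standard-library Fisher's exact test; this is a generic sketch, not the study's own analysis code:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def hyperg(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = hyperg(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # small tolerance guards against float round-off at p == p_obs
    return sum(p for p in (hyperg(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-12))

# Wins/losses out of 100 fights each: intact 73/27, antennectomized 58/42
p = fisher_exact_two_sided(73, 27, 58, 42)  # p < 0.05, matching the abstract
```

The same routine applies to any 2x2 win/loss comparison with fixed group sizes.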

  3. Comparative population genomics reveals the domestication history of the peach, Prunus persica, and human influences on perennial fruit crops.

    PubMed

    Cao, Ke; Zheng, Zhijun; Wang, Lirong; Liu, Xin; Zhu, Gengrui; Fang, Weichao; Cheng, Shifeng; Zeng, Peng; Chen, Changwen; Wang, Xinwei; Xie, Min; Zhong, Xiao; Wang, Xiaoli; Zhao, Pei; Bian, Chao; Zhu, Yinling; Zhang, Jiahui; Ma, Guosheng; Chen, Chengxuan; Li, Yanjun; Hao, Fengge; Li, Yong; Huang, Guodong; Li, Yuxiang; Li, Haiyan; Guo, Jian; Xu, Xun; Wang, Jun

    2014-07-31

    Recently, many studies utilizing next generation sequencing have investigated plant evolution and domestication in annual crops. Peach, Prunus persica, is a typical perennial fruit crop that has ornamental and edible varieties. Unlike other fruit crops, cultivated peach includes a large number of phenotypes but few polymorphisms. In this study, we explore the genetic basis of domestication in peach and the influence of humans on its evolution. We perform large-scale resequencing of 10 wild and 74 cultivated peach varieties, including 9 ornamental, 23 breeding, and 42 landrace lines. We identify 4.6 million SNPs, a large number of which could explain the phenotypic variation in cultivated peach. Population analysis shows a single domestication event, the speciation of P. persica from wild peach. Ornamental and edible peach both belong to P. persica, along with another geographically separated subgroup, Prunus ferganensis. Our analyses enhance our knowledge of the domestication history of perennial fruit crops, and the dataset we generated could be useful for future research on comparative population genomics.

  4. Who Does a Better Job? Work Quality and Quantity Comparison between Student Volunteers and Students Who Get Extra Credit

    ERIC Educational Resources Information Center

    Omori, Megumi; Feldhaus, Heather

    2015-01-01

    Although undergraduate students are often involved in academic research as volunteers, paid assistants or to receive extra-credit, very little attention has been paid to how well these students perform when they assist researchers. The current study compares the number of surveys gathered at a large local event and the number of missing entries…

  5. What Do Parents Think of Their Children's Schools? "EdNext" Poll Compares Charter, District, and Private Schools Nationwide

    ERIC Educational Resources Information Center

    West, Martin R.; Peterson, Paul E.; Barrows, Samuel

    2017-01-01

    Over the past 25 years, charter schools have offered an increasing number of families an alternative to their local district schools. The charter option has proven particularly popular in large cities, but charter-school growth is often constrained by state laws that limit the number of students the sector can serve. The charter sector is the most…

  6. TSPmap, a tool making use of traveling salesperson problem solvers in the efficient and accurate construction of high-density genetic linkage maps.

    PubMed

    Monroe, J Grey; Allen, Zachariah A; Tanger, Paul; Mullen, Jack L; Lovell, John T; Moyers, Brook T; Whitley, Darrell; McKay, John K

    2017-01-01

    Recent advances in nucleic acid sequencing technologies have led to a dramatic increase in the number of markers available to generate genetic linkage maps. This increased marker density can be used to improve genome assemblies as well as add much needed resolution for loci controlling variation in ecologically and agriculturally important traits. However, traditional genetic map construction methods from these large marker datasets can be computationally prohibitive and highly error prone. We present TSPmap, a method which implements both approximate and exact Traveling Salesperson Problem solvers to generate linkage maps. We demonstrate that for datasets with large numbers of genomic markers (e.g. 10,000) and in multiple population types generated from inbred parents, TSPmap can rapidly produce high quality linkage maps with low sensitivity to missing and erroneous genotyping data compared to two other benchmark methods, JoinMap and MSTmap. TSPmap is open source and freely available as an R package. With the advancement of low cost sequencing technologies, the number of markers used in the generation of genetic maps is expected to continue to rise. TSPmap will be a useful tool to handle such large datasets into the future, quickly producing high quality maps using a large number of genomic markers.
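The key idea behind TSPmap is that ordering markers along a chromosome is a Traveling Salesperson Problem over pairwise marker distances. A toy sketch with a greedy nearest-neighbour tour illustrates the reduction (TSPmap itself uses stronger approximate and exact solvers, and real distances would be estimated recombination fractions rather than the map positions assumed here):

```python
def nearest_neighbour_order(dist):
    """Greedy nearest-neighbour TSP heuristic for marker ordering.

    dist is a symmetric matrix of pairwise marker distances. The tour
    starts at the marker farthest from all others, which on a roughly
    linear map tends to be one end of the chromosome.
    """
    n = len(dist)
    start = max(range(n), key=lambda i: sum(dist[i]))
    order, remaining = [start], set(range(n)) - {start}
    while remaining:
        nxt = min(remaining, key=lambda j: dist[order[-1]][j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Markers at known positions along a chromosome, listed in scrambled order
positions = [7.0, 0.0, 12.5, 3.0, 9.5, 1.5]
dist = [[abs(p - q) for q in positions] for p in positions]
recovered = [positions[i] for i in nearest_neighbour_order(dist)]
```

On noiseless data the greedy tour already recovers the linear order (up to direction); the exact solvers TSPmap employs matter when genotyping errors and missing data distort the distance matrix.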

  7. The U.S. Farm Sector in the Mid-1980's. Agricultural Economic Report Number 548.

    ERIC Educational Resources Information Center

    Reimund, Donn A.; And Others

    This report compares several farm characteristics of the mid-1980s with those of a decade earlier to document the real amount of change in the farm sector. Farms are stratified into five groups based on their farm income: rural residence, small family, family, large family, and very large. Sources and levels of farm operator income and wealth are…

  8. How small firms contrast with large firms regarding perceptions, practices, and needs in the U.S

    Treesearch

    Urs Buehlmann; Matthew Bumgardner; Michael Sperber

    2013-01-01

    As many larger secondary woodworking firms have moved production offshore and been adversely impacted by the recent housing downturn, smaller firms have become important to driving U.S. hardwood demand. This study compared and contrasted small and large firms on a number of factors to help determine the unique characteristics of small firms and to provide insights into...

  9. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    NASA Astrophysics Data System (ADS)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    Data and information are currently growing rapidly in both volume and variety of media. This growth eventually produces very large collections of data, better known as Big Data. Business Intelligence (BI) analyzes large volumes of data and information to extract important information that can be used to support decision-making processes. In practice, a process that integrates existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting which application is more effective and efficient in terms of time, cost and computing power can be a challenge. Therefore, the objective of this study was to provide a comparative analysis of the ETL process using Microsoft SQL Server Integration Service (SSIS) and Pentaho Data Integration (PDI).
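Whatever tool performs it, an ETL job has the same three stages. A minimal standard-library sketch of the pattern, with an in-memory SQLite table standing in for the data warehouse (the CSV source and schema are invented for illustration):

```python
import csv
import io
import sqlite3

raw = """date,region,sales
2018-01-05,North,120.50
2018-01-05,South,abc
2018-01-06,North,98.00
"""

# Extract: read rows from the source (here, an in-memory CSV)
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: coerce types and drop rows that fail validation
clean = []
for r in rows:
    try:
        clean.append((r["date"], r["region"], float(r["sales"])))
    except ValueError:
        pass  # a real job would route bad rows to an error stream

# Load: insert into the warehouse table and query it
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (date TEXT, region TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
total = db.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

Tools such as SSIS and PDI wrap these same stages in visual dataflow designers; the comparison in this paper is about how efficiently each engine executes them, not about the pattern itself.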

  10. USAF solar thermal applications overview

    NASA Technical Reports Server (NTRS)

    Hauger, J. S.; Simpson, J. A.

    1981-01-01

    Process heat applications were compared to solar thermal technologies. The generic process heat applications were analyzed for solar thermal technology utilization using SERI's PROSYS/ECONOMAT model in an end-use matching analysis, and a separate analysis was made for solar ponds. Solar technologies appear attractive in a large number of applications. Low temperature applications at sites with high insolation and high fuel costs were found to be most attractive. No one solar thermal technology emerges as a clearly universal or preferred technology; however, solar ponds offer a potential high payoff in a few selected applications. It was shown that troughs and flat plate systems are cost effective in a large number of applications.

  11. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system CoCoNUT (Computational Comparative geNomics Utility Toolkit) that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools w.r.t. the quality of the results. The use of state of the art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large scale studies in comparative genomics. PMID:19014477

  12. Metabolic constraint imposes tradeoff between body size and number of brain neurons in human evolution

    PubMed Central

    Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana

    2012-01-01

    Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its number of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force behind the rapid increase in brain size in human evolution. PMID:23090991

  13. Metabolic constraint imposes tradeoff between body size and number of brain neurons in human evolution.

    PubMed

    Fonseca-Azevedo, Karina; Herculano-Houzel, Suzana

    2012-11-06

    Despite a general trend for larger mammals to have larger brains, humans are the primates with the largest brain and number of neurons, but not the largest body mass. Why are great apes, the largest primates, not also those endowed with the largest brains? Recently, we showed that the energetic cost of the brain is a linear function of its number of neurons. Here we show that metabolic limitations that result from the number of hours available for feeding and the low caloric yield of raw foods impose a tradeoff between body size and number of brain neurons, which explains the small brain size of great apes compared with their large body size. This limitation was probably overcome in Homo erectus with the shift to a cooked diet. Absent the requirement to spend most available hours of the day feeding, the combination of newly freed time and a large number of brain neurons affordable on a cooked diet may thus have been a major positive driving force behind the rapid increase in brain size in human evolution.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    This note derives the first and second strike magnitudes and costs for strikes between vulnerable missile forces with multiple warheads. The extension to mixes with invulnerable missiles is performed in a companion note. Stability increases as the number of weapons per missile is reduced. The optimal allocation of weapons between missiles and value is significant in predicting the stability impact of reducing the number of weapons per missile at large numbers of missiles, and less significant in reducing the number of missiles for fixed weapons per missile. At low numbers of missiles, the stability indices for singlet and triplet configurations are comparable, as are the number of weapons each would deliver on value targets.

  15. Vegetative characteristics of oak savannas in the southwestern United States: a comparative analysis with oak woodlands in the region [Abstract

    Treesearch

    Peter F. Ffolliott; Gerald J. Gottfried

    2005-01-01

    Much has been learned about the oak (encinal) woodlands of the southwestern United States in recent years. Ecological, hydrologic, and environmental characterizations have been obtained through collaborative efforts involving a large number of collaborators. This state-of-knowledge has been presented in a variety of publications and presentations. However, comparable...

  16. An approach to solve group-decision-making problems with ordinal interval numbers.

    PubMed

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
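The paper's exact possibility-degree definition is not reproduced here, but the underlying idea, comparing two ordinal intervals by the chance that one yields the better (smaller) rank under uniform independent draws, can be sketched by direct enumeration (a hypothetical illustration, not the authors' formula):

```python
def possibility_degree(a, b):
    """Chance that an ordinal rank drawn uniformly from interval a beats
    (is smaller than) one drawn independently from interval b, counting
    ties as half. Intervals are inclusive (lo, hi) pairs of rank numbers."""
    pairs = [(x, y) for x in range(a[0], a[1] + 1)
                    for y in range(b[0], b[1] + 1)]
    wins = sum(1 for x, y in pairs if x < y)
    ties = sum(1 for x, y in pairs if x == y)
    return (wins + 0.5 * ties) / len(pairs)

# An alternative ranked somewhere 1st-2nd always beats one ranked 4th-6th
p = possibility_degree((1, 2), (4, 6))  # 1.0
```

A collective pairwise matrix of such degrees is the kind of input the paper's optimization model ranks alternatives from; for wide intervals a closed-form expression would replace the enumeration.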

  17. The Shock and Vibration Digest. Volume 14, Number 12

    DTIC Science & Technology

    1982-12-01

    ...to evaluate the uses of statistical energy analysis for determining sound transmission performance. Coupling loss factors were measured and compared... measurements for the artificial cracks in mild-steel test pieces (Also see No. 2623). 82-2676 Improvement of the Method of Statistical Energy Analysis for ...eters, using a large number of free-response time histories simultaneously in one analysis, in the application of the statistical energy analysis theory.

  18. Temporal Variations of Different Solar Activity Indices Through the Solar Cycles 21-23

    NASA Astrophysics Data System (ADS)

    Göker, Ü. D.; Singh, J.; Nutku, F.; Priyal, M.

    2017-12-01

    Here, we compare the sunspot counts and the number of sunspot groups (SGs) with variations of total solar irradiance (TSI), magnetic activity, Ca II K-flux, and faculae and plage areas. We applied a time series method for extracting the data over the descending phases of solar activity cycles (SACs) 21, 22 and 23, and the ascending phases of cycles 22 and 23. Our results suggest that there is a strong correlation between solar activity indices and the changes in small (A, B, C and H-modified Zurich Classification) and large (D, E and F) SGs. This somewhat unexpected finding suggests that plage regions substantially decreased in SAC 23 in spite of its higher number of large SGs, while the Ca II K-flux, which relates to C and DEF type SGs, did not decrease by a large amount compared with SAC 22. In addition, the increase of facular areas, which are influenced by large SGs, caused a small percentage decrease in TSI, while the decrease of plage areas triggered a larger decrease in the magnetic field flux. Our results thus reveal the potential of such a detailed comparison of SG analysis with solar activity indices for better understanding and predicting future trends in the SACs.

  19. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using the complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.

  20. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1). 3: A comparison between characteristics predicted from wind-tunnel measurements and those measured in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.

    1980-01-01

    A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.

  1. Effectiveness of controlled internal drug release device treatment to alleviate reproductive seasonality in anestrus lactating or dry Barki and Rahmani ewes during non-breeding season.

    PubMed

    El-Mokadem, M Y; Nour El-Din, Anm; Ramadan, T A; Rashad, A M; Taha, T A; Samak, M A; Salem, M H

    2018-04-01

    This study aimed to evaluate the effectiveness of hormonal treatments on ovarian activity and reproductive performance in Barki and Rahmani ewes during the non-breeding season. Forty-eight multiparous ewes (24 Barki and 24 Rahmani) were divided into two groups of 12 lactating and 12 dry ewes for each breed. A controlled internal drug release (CIDR) device was inserted in all ewes for 14 days, in conjunction with an intramuscular injection of 500 IU equine chorionic gonadotrophin (eCG) on the day of CIDR removal. Data were analysed using PROC MIXED of SAS for repeated measures. Breed, physiological status and days were used as fixed effects and individual ewes as random effects. Barki ewes recorded a higher (p < .05) total number of follicles, number of large follicles, serum estradiol concentration and estradiol:progesterone (E2:P4) ratio compared to Rahmani ewes. Lactating ewes recorded a higher (p < .05) number of small follicles and a lower concentration of total antioxidant capacity (TAC) compared to dry ewes. Number and diameter of large follicles recorded the highest (p < .05) values, accompanied by disappearance of corpora lutea, on the day of mating. Serum progesterone concentration recorded a lower (p < .05) value on the day of mating and the highest (p < .05) value at day 35 after mating. The CIDR-eCG protocol induced 100% oestrous behaviour in both breeds, but Rahmani ewes recorded a longer (p < .05) oestrous duration compared to Barki. Conception failure was higher (p < .05) in Barki compared to Rahmani ewes. In conclusion, the CIDR-eCG protocol was more potent in improving ovarian activity in Barki than in Rahmani ewes, but it seems to induce a hormonal imbalance in Barki ewes that resulted in increased conception failure compared to Rahmani ewes. © 2017 Blackwell Verlag GmbH.

  2. An Admittance Survey of Large Volcanoes on Venus: Implications for Volcano Growth

    NASA Technical Reports Server (NTRS)

    Brian, A. W.; Smrekar, S. E.; Stofan, E. R.

    2004-01-01

    Estimates of the thickness of the venusian crust and elastic lithosphere are important in determining the rheological and thermal properties of Venus. These estimates offer insights into what conditions are needed for certain features, such as large volcanoes and coronae, to form. Lithospheric properties for much of the large volcano population on Venus are not well known. Previous studies of elastic thickness (Te) have concentrated on individual or small groups of edifices, or have used volcano models and fixed values of Te to match observations of volcano morphologies. In addition, previous studies used different methods to estimate lithospheric parameters, making it difficult to compare their results. Following recent global studies of the admittance signatures exhibited by the venusian corona population, we performed a similar survey of large volcanoes in an effort to determine the range of lithospheric parameters shown by these features. This survey of the entire large volcano population used the same method throughout so that all estimates could be directly compared. By analysing a large number of edifices and comparing our results to observations of their morphology and to models of volcano formation, we can help determine the controlling parameters that govern volcano growth on Venus.

  3. Family Size and Sex Preference of Children: A Bi-racial Comparison.

    ERIC Educational Resources Information Center

    Rao, V.V. Prakasa; Rao, V. Nandini

    1981-01-01

    Compares the attitudes of 409 White and Black undergraduate students toward the ideal and expected number of children, intentions of having a large family, sex preference for the first child, and sex preference for three children. (Author/CM)

  4. Demodex cati Hirst 1919: a redescription.

    PubMed

    Desch, C; Nutting, W B

    1979-07-01

    All life stages of Demodex cati are described and compared with D. canis. Presence of D. cati is reported for the first time from the external auditory meatus. In the two cases examined mites occurred in large numbers with little pathogenic effect.

  5. Effect of anti-vertigo granule on the opening number and blood flow of mouse ear capillary network

    NASA Astrophysics Data System (ADS)

    Li, Chongxian; Liu, Xiaobin; Li, Jun; Hao, Shaojun; Wang, Xidong; Li, Wenjun; Zhang, Zhengchen

    2018-04-01

    To observe the effects of anti-vertigo (Kangxuan) granule on the number of open capillaries and the blood flow in the auricle of mice with a microcirculation-disturbance model. Sixty mice, half male and half female, were randomly divided into 6 groups. The mice were given Kangxuan granule suspension, Yangxuannao granule suspension, or an equal volume of normal saline, respectively, once a day. One hour after the last administration, the mice were anesthetized by intraperitoneal injection of chloral hydrate, fixed on the observation platform, and the auricle was placed on the transmission stage. A BZ-2000 microcirculation microscope and microcirculation analysis system were used to observe the blood velocity and the number of open capillaries in the auricle before administration and 2 min after epinephrine injection into the tail vein. Results: Compared with values before epinephrine, the number of open capillaries in the auricular network decreased significantly in the large-dose Kangxuan granule group, the normal saline group and the middle-dose group (P<0.05), and decreased significantly in the small-dose Kangxuan granule group (P<0.01). Compared with the model group, the large-dose Kangxuan granule group significantly increased the number of open auricular capillaries (P<0.01), and the Yangxuannao granule group also significantly increased it (P<0.05, Ridit test). Both the Kangxuan granule and Yangxuannao granule groups significantly improved auricular capillary blood flow in mice with microcirculation disorder (P<0.01). Kangxuan granule has a good effect on the number of open auricular capillaries and on blood flow in mice with microcirculation disorder.

  6. The Large Super-Fast Rotators and Asteroidal Spin-Rate Distributions With Large Sky-Field Surveys Using iPTF

    NASA Astrophysics Data System (ADS)

    Chang, Chan-Kao; Lin, Hsing-Wen; Ip, Wing-Huen; iPTF Team

    2016-10-01

    In order to look for kilometer-sized super-fast rotators (large SFRs) and understand the spin-rate distributions of small (i.e., D of several kilometers) asteroids, we have been conducting asteroid rotation-period surveys over large sky areas using the intermediate Palomar Transient Factory (iPTF) since 2014. So far, we have observed 261 deg2 with 20 min cadence, 188 deg2 with 10 min cadence, and 65 deg2 with 5 min cadence. From these surveys, we found that the spin-rate distributions of small asteroids at different locations in the main belt are very similar. Moreover, the distributions for asteroids with 3 < D < 15 km show a decrease in number with increasing spin rate for frequencies > 5 rev/day, and those for asteroids with D < 3 km show a significant drop in number at frequency = 5 rev/day. However, we discovered only two new large SFRs and 24 candidates. Compared with ordinary asteroids, the population of large SFRs appears to be far smaller than the asteroid population as a whole, which might indicate that large SFRs are a peculiar group of asteroids.

  7. Large craters on the meteoroid and space debris impact experiment

    NASA Technical Reports Server (NTRS)

    Humes, Donald H.

    1991-01-01

    The distribution around the Long Duration Exposure Facility (LDEF) of 532 large craters in the Al plates from the Meteoroid and Space Debris Impact Experiment (S0001) is discussed along with 74 additional large craters in Al plates donated to the Meteoroid and Debris Special Investigation Group by other LDEF experimenters. The craters are 0.5 mm in diameter and larger. Crater shape is discussed. The number of craters and their distribution around the spacecraft are compared with values predicted with models of the meteoroid environment and the manmade orbital debris environment.

  8. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
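The constraint-lumping step can be illustrated with the discrete Kreisselmeier-Steinhauser (KS) function; note the abstract uses a continuum version, so the discrete form below is only an illustrative stand-in, with an assumed aggregation parameter rho.

```python
import math

def ks_aggregate(constraints, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0.
    Returns a smooth, conservative (upper-bound) estimate of max(g_i);
    a larger rho tracks the true maximum more closely at the cost of
    worse numerical conditioning."""
    g_max = max(constraints)  # shift for numerical stability of the exponentials
    s = sum(math.exp(rho * (g - g_max)) for g in constraints)
    return g_max + math.log(s) / rho

g = [-0.8, -0.1, -0.3]
print(ks_aggregate(g))  # slightly above max(g) = -0.1, still negative
```

Because the KS value is a smooth upper envelope of the individual constraints, one adjoint solve against the lumped constraint can replace one solve per original constraint, which is where the efficiency gain for many load cases comes from.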

  9. Phylogenetics of modern birds in the era of genomics

    PubMed Central

    Edwards, Scott V; Bryan Jennings, W; Shedlock, Andrew M

    2005-01-01

    In the 14 years since the first higher-level bird phylogenies based on DNA sequence data, avian phylogenetics has witnessed the advent and maturation of the genomics era, the completion of the chicken genome and a suite of technologies that promise to add considerably to the agenda of avian phylogenetics. In this review, we summarize current approaches and data characteristics of recent higher-level bird studies and suggest a number of as yet untested molecular and analytical approaches for the unfolding tree of life for birds. A variety of comparative genomics strategies, including adoption of objective quality scores for sequence data, analysis of contiguous DNA sequences provided by large-insert genomic libraries, and the systematic use of retroposon insertions and other rare genomic changes all promise an integrated phylogenetics that is solidly grounded in genome evolution. The avian genome is an excellent testing ground for such approaches because of the more balanced representation of single-copy and repetitive DNA regions than in mammals. Although comparative genomics has a number of obvious uses in avian phylogenetics, its application to large numbers of taxa poses a number of methodological and infrastructural challenges, and can be greatly facilitated by a ‘community genomics’ approach in which the modest sequencing throughputs of single PI laboratories are pooled to produce larger, complementary datasets. Although the polymerase chain reaction era of avian phylogenetics is far from complete, the comparative genomics era—with its ability to vastly increase the number and type of molecular characters and to provide a genomic context for these characters—will usher in a host of new perspectives and opportunities for integrating genome evolution and avian phylogenetics. PMID:16024355

  10. TRAFFIC EMISSION IMPACTS ON AIR QUALITY NEAR LARGE ROADWAYS

    EPA Science Inventory

    Recent epidemiological studies have examined associations between living near major roads and different health endpoints. They find that populations living, working or going to school near major roads may be at increased risk for a number of adverse health effects. When compare...

  11. Sorption of triazine and organophosphorus pesticides on soil and biochar

    USDA-ARS?s Scientific Manuscript database

    Although a large number of reports are available on sorption and degradation of triazine and organophosphorus pesticides in soils, systematic studies are lacking to directly compare and predict the fate of agrochemicals having different susceptibilities for hydrolysis and other degradation pathways....

  12. Inactive Wells: Economic and Policy Issues

    NASA Astrophysics Data System (ADS)

    Krupnick, A.

    2016-12-01

    This paper examines the economic and policy issues associated with various types of inactive oil and gas wells. It covers the costs of decommissioning wells, and compares them to the bonding requirements on these wells, looking at a large number of states. It also reviews the detailed regulations governing treatment of inactive wells by states and the federal government and compares them according to their completeness and stringency.

  13. Comparative Academic Performance at the Baccalaureate Level Between Graduate Transferees at Nassau Community College and Native Students at Selected Public/Private Institutions.

    ERIC Educational Resources Information Center

    Fernandez, Thomas V., Jr.; And Others

    The academic performance of 1973 graduate transferees from Nassau Community College (NCC) was compared at the baccalaureate level with 1975 native student graduates from 12 four-year public and private colleges and universities. These senior institutions were selected on the basis of the large number of NCC transferees they received. The…

  14. INTERACTIONS OF RAPIDLY MOVING BODIES IN TERRESTRIAL ATMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chopra, K.P.

    1960-03-31

    The drag of a moving body or satellite in the upper atmosphere where the molecular mean free paths are large is studied with special reference to interactions with magnetic fields. The various models for aerodynamic drag are reviewed, and some theoretical expectations for cone and cylindrical satellites (Sputnik III and Explorer IV) are tabulated, tumbling effects included. Gyration of charged particles in a magnetic field is studied; at the altitudes of interest, electrons but not ions are free to spiral. Satellites will become charged because of their contact with charged particles; they usually become negatively charged and, since their velocity is greater than that of ions, they behave like enormous ions with large charges. There is also drag due to Coulomb interaction of the satellite with charged particles, which describe hyperbolic orbits around the satellite. Present theories of Coulomb drag are critically reviewed. According to the Chopra-Singer theory, Coulomb drag contributes significantly to the total drag at 350 km, becomes comparable to the neutral drag at 500 km, and is predominant above 650 km. The next kind of drag considered is induction drag, caused by electric currents induced by the motion through the magnetic field. Induction drag tends to damp out rotational as well as translational motion and is negligible compared to neutral drag at 250 km but becomes large at 500 km. A sphere in strong magnetic fields does not affect the magnetic fields if the Reynolds number of flow is large and the magnetic Reynolds number is small, and a cylinder of fluid with radius equal to that of the sphere is pushed out in front of the sphere. Large magnetic Reynolds numbers are also considered. Another kind of drag is that caused by generation of electromagnetic waves from the satellite; they propagate along the direction of the magnetic field at a velocity slightly less than that of the satellite. The contribution of this drag is negligible at 250 km but is comparable to the Coulomb drag at 800 km. Experimental apparatus for the simulation of electron and ion bombardment and aerodynamical testing of a satellite are described. A bibliography of 103 references is given. (D.L.C.)

  15. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. On the stability of a time dependent boundary layer

    NASA Technical Reports Server (NTRS)

    Otto, S. R.

    1993-01-01

    The aim of this article is to determine the stability characteristics of a Rayleigh layer, which is known to occur when the fluid above a flat plate has a velocity imparted to it (parallel to the plate). This situation is intrinsically unsteady; however, as a first approximation, we consider the instantaneous stability of the flow. The Orr-Sommerfeld equation is found to govern linear perturbations of fixed downstream wavelength to the basic flow profile. By solving this equation, we can determine the Reynolds numbers at which the flow is neutrally stable; this quasisteady approach is only formally applicable for infinite Reynolds numbers. We then consider the large Reynolds number limit of the original problem and use a triple-deck approach to determine the form of the modes. The results of the two calculations are compared, and the linear large Reynolds number analysis is extended to include the effect of weak nonlinearity in order to determine whether the system is subcritical or supercritical.

  17. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
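The selection step in multi-focus fusion can be illustrated with a much simpler stand-in: per-pixel choice by a local-variance focus measure, in plain Python. This is not the SWT method the paper proposes (that method selects among stationary-wavelet subband coefficients); it is only a minimal sketch of the underlying idea, with toy data.

```python
def fuse_by_focus(images, radius=1):
    """Fuse multi-focus grayscale images (lists of lists, same shape) by
    picking, per pixel, the source image with the highest local variance
    (a simple focus/sharpness measure)."""
    h, w = len(images[0]), len(images[0][0])

    def local_var(img, y, x):
        vals = [img[j][i]
                for j in range(max(0, y - radius), min(h, y + radius + 1))
                for i in range(max(0, x - radius), min(w, x + radius + 1))]
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    return [[max(images, key=lambda im: local_var(im, y, x))[y][x]
             for x in range(w)] for y in range(h)]

# Toy 1x4 "images": each source is sharp (high-contrast) in different places.
a = [[10, 200, 100, 100]]
b = [[100, 100, 10, 200]]
print(fuse_by_focus([a, b]))
```

The paper's SWT approach plays the same role, but selects in the wavelet domain, which avoids the blocking and ringing that naive spatial selection or DWT-based fusion can introduce.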

  18. Transcriptome and ultrastructural changes in dystrophic Epidermolysis bullosa resemble skin aging

    PubMed Central

    Trost, Andrea; Weber, Manuela; Klausegger, Alfred; Gruber, Christina; Bruckner, Daniela; Reitsamer, Herbert A.; Bauer, Johann W.; Breitenbach, Michael

    2015-01-01

    The aging process of skin has been investigated recently with respect to mitochondrial function and oxidative stress. We have here observed striking phenotypic and clinical similarity between skin aging and recessive dystrophic Epidermolysis bullosa (RDEB), which is caused by recessive mutations in the gene coding for collagen VII, COL7A1. Ultrastructural changes, defects in wound healing, and inflammation markers are in part shared with aged skin. We have here compared the skin transcriptomes of young adults suffering from RDEB with that of sex‐ and age‐matched healthy probands. In parallel we have compared the skin transcriptome of healthy young adults with that of elderly healthy donors. Quite surprisingly, there was a large overlap of the two gene lists that concerned a limited number of functional protein families. Most prominent among the proteins found are a number of proteins of the cornified envelope or proteins mechanistically involved in cornification and other skin proteins. Further, the overlap list contains a large number of genes with a known role in inflammation. We are documenting some of the most prominent ultrastructural and protein changes by immunofluorescence analysis of skin sections from patients, old individuals, and healthy controls. PMID:26143532

  19. Transcriptome and ultrastructural changes in dystrophic Epidermolysis bullosa resemble skin aging.

    PubMed

    Breitenbach, Jenny S; Rinnerthaler, Mark; Trost, Andrea; Weber, Manuela; Klausegger, Alfred; Gruber, Christina; Bruckner, Daniela; Reitsamer, Herbert A; Bauer, Johann W; Breitenbach, Michael

    2015-06-01

    The aging process of skin has been investigated recently with respect to mitochondrial function and oxidative stress. We have here observed striking phenotypic and clinical similarity between skin aging and recessive dystrophic Epidermolysis bullosa (RDEB), which is caused by recessive mutations in the gene coding for collagen VII, COL7A1. Ultrastructural changes, defects in wound healing, and inflammation markers are in part shared with aged skin. We have here compared the skin transcriptomes of young adults suffering from RDEB with that of sex- and age-matched healthy probands. In parallel we have compared the skin transcriptome of healthy young adults with that of elderly healthy donors. Quite surprisingly, there was a large overlap of the two gene lists that concerned a limited number of functional protein families. Most prominent among the proteins found are a number of proteins of the cornified envelope or proteins mechanistically involved in cornification and other skin proteins. Further, the overlap list contains a large number of genes with a known role in inflammation. We are documenting some of the most prominent ultrastructural and protein changes by immunofluorescence analysis of skin sections from patients, old individuals, and healthy controls.

  20. Network-Centric Interventions to Contain the Syphilis Epidemic in San Francisco.

    PubMed

    Juher, David; Saldaña, Joan; Kohn, Robert; Bernstein, Kyle; Scoglio, Caterina

    2017-07-25

    The number of reported early syphilis cases in San Francisco has increased steadily since 2005. It is not yet clear what factors are responsible for this increase. A recent analysis of the sexual contact network of men who have sex with men with syphilis in San Francisco discovered a large connected component, members of which have a significantly higher chance of syphilis and HIV compared to non-member individuals. This study investigates whether it is possible to exploit the existence of the largest connected component to design new notification strategies that can potentially contribute to reducing the number of cases. We develop a model capable of incorporating multiple types of notification strategies and compare the corresponding incidence of syphilis. Through extensive simulations, we show that notifying the community of the infection state of a few central nodes appears to be the most effective approach, balancing the cost of notification against the reduction in syphilis incidence. Additionally, among the different measures of centrality, eigenvector centrality proves to be the best at reducing incidence in the long term, as long as the number of missing links (non-disclosed contacts) is not very large.
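Eigenvector centrality, which the study finds best for long-term incidence reduction, can be sketched with a shifted power iteration on a toy contact graph. The adjacency-list representation, the A + I shift, and the iteration count below are illustrative choices, not the study's implementation.

```python
def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality via shifted power iteration; the (A + I)
    shift avoids oscillation on bipartite graphs. adj maps each node to
    the set of its neighbours (undirected graph)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        x_new = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = max(x_new.values())          # rescale each step
        x = {v: s / norm for v, s in x_new.items()}
    return x

# Toy contact network: node 0 is the hub of a star.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
scores = eigenvector_centrality(star)
print(max(scores, key=scores.get))  # the hub is the most central node
```

Notifying the infection state of the top-ranked nodes first is then a direct translation of the "few central nodes" strategy the abstract describes.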

  1. Comparing the Happiness Effects of Real and On-Line Friends

    PubMed Central

    Helliwell, John F.; Huang, Haifang

    2013-01-01

    A recent large Canadian survey permits us to compare face-to-face (‘real-life’) and on-line social networks as sources of subjective well-being. The sample of 5,000 is drawn randomly from an on-line pool of respondents, a group well placed to have and value on-line friendships. We find three key results. First, the number of real-life friends is positively correlated with subjective well-being (SWB) even after controlling for income, demographic variables and personality differences. Doubling the number of friends in real life has an equivalent effect on well-being as a 50% increase in income. Second, the size of online networks is largely uncorrelated with subjective well-being. Third, we find that real-life friends are much more important for people who are single, divorced, separated or widowed than they are for people who are married or living with a partner. Findings from large international surveys (the European Social Surveys 2002–2008) are used to confirm the importance of real-life social networks to SWB; they also indicate a significantly smaller value of social networks to married or partnered couples. PMID:24019875

  2. The natural number bias and its role in rational number understanding in children with dyscalculia. Delay or deficit?

    PubMed

    Van Hoof, Jo; Verschaffel, Lieven; Ghesquière, Pol; Van Dooren, Wim

    2017-12-01

    Previous research indicated that in several cases learners' errors on rational number tasks can be attributed to learners' tendency to (wrongly) apply natural number properties. There exists a large body of literature both on learners' struggle with understanding the rational number system and on the role of the natural number bias in this struggle. However, little is known about this phenomenon in learners with dyscalculia. We investigated the rational number understanding of learners with dyscalculia and compared it with the rational number understanding of learners without dyscalculia. Three groups of learners were included: sixth graders with dyscalculia, a chronological age match group, and an ability match group. The results showed that the rational number understanding of learners with dyscalculia is significantly lower than that of typically developing peers, but not significantly different from younger learners, even after statistically controlling for mathematics achievement. Next to a delay in their mathematics achievement, learners with dyscalculia seem to have an extra delay in their rational number understanding, compared with peers. This is especially the case in those rational number tasks where one has to inhibit natural number knowledge to come to the right answer. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
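The cost trade-off the abstract weighs can be made concrete with back-of-the-envelope arithmetic. Every figure below is hypothetical (none comes from the paper), chosen only to show how utilization drives the in-house versus cloud break-even point.

```python
# Hypothetical figures for illustration only: a $5,000 workstation amortized
# over 3 years versus an on-demand cloud instance billed at $0.40/hour.
workstation_cost = 5000.0
lifetime_hours = 3 * 365 * 24
utilization = 0.25            # fraction of the lifetime actually spent computing
cloud_rate = 0.40             # assumed USD per instance-hour

# Effective hourly cost of the in-house machine over its useful compute hours.
in_house_rate = workstation_cost / (lifetime_hours * utilization)
print(f"in-house: ${in_house_rate:.2f}/h, cloud: ${cloud_rate:.2f}/h")
```

At this (assumed) 25% utilization the cloud instance is cheaper per hour; as utilization approaches 100%, the in-house rate drops and the comparison reverses, which mirrors the abstract's conclusion that large sustained workloads still favor traditional infrastructure.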

  4. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    PubMed

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
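The drastic scenario the abstract identifies (large inactivation followed by large growth) can be reproduced with a small Monte Carlo sketch. All parameter values below are hypothetical and deliberately exaggerated to make the mechanism visible, and the exponential dose-response is an illustrative choice, not the paper's model.

```python
import random

random.seed(1)
n_units, n0 = 100_000, 5           # hypothetical: 5 cells per contaminated unit
p_survive, growth = 0.01, 1e6      # drastic inactivation, then large growth
r = 1e-4                           # illustrative per-cell infection probability

# Concentration model: scale the expected level; extinction is ignored.
risk_conc = 1 - (1 - r) ** (n0 * p_survive * growth)

# Count model: integer survivors; a unit that loses every cell cannot regrow.
total = 0.0
for _ in range(n_units):
    survivors = sum(random.random() < p_survive for _ in range(n0))
    total += 1 - (1 - r) ** (survivors * growth)
risk_count = total / n_units

print(risk_conc, risk_count)  # the concentration model overestimates the risk
```

The concentration model overestimates because it lets fractional "expected" survivors regrow in every unit, whereas in the count model a unit whose last cell is inactivated contributes zero risk, which is the mechanism behind the >10-fold overestimation the abstract reports for such scenarios.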

  5. Single-wave-number representation of nonlinear energy spectrum in elastic-wave turbulence of the Föppl-von Kármán equation: energy decomposition analysis and energy budget.

    PubMed

    Yokoyama, Naoto; Takaoka, Masanori

    2014-12-01

    A single-wave-number representation of a nonlinear energy spectrum, i.e., a stretching-energy spectrum, is found in elastic-wave turbulence governed by the Föppl-von Kármán (FvK) equation. The representation enables energy decomposition analysis in the wave-number space and analytical expressions of detailed energy budgets in the nonlinear interactions. We numerically solved the FvK equation and observed the following facts. Kinetic energy and bending energy are comparable with each other at large wave numbers as the weak turbulence theory suggests. On the other hand, stretching energy is larger than the bending energy at small wave numbers, i.e., the nonlinearity is relatively strong. The strong correlation between a mode a(k) and its companion mode a(-k) is observed at the small wave numbers. The energy is input into the wave field through stretching-energy transfer at the small wave numbers, and dissipated through the quartic part of kinetic-energy transfer at the large wave numbers. Total-energy flux consistent with energy conservation is calculated directly by using the analytical expression of the total-energy transfer, and the forward energy cascade is observed clearly.

  6. Higher first Chern numbers in one-dimensional Bose-Fermi mixtures

    NASA Astrophysics Data System (ADS)

    Knakkergaard Nielsen, Kristian; Wu, Zhigang; Bruun, G. M.

    2018-02-01

    We propose to use a one-dimensional system consisting of identical fermions in a periodically driven lattice immersed in a Bose gas, to realise topological superfluid phases with Chern numbers larger than 1. The bosons mediate an attractive induced interaction between the fermions, and we derive a simple formula to analyse the topological properties of the resulting pairing. When the coherence length of the bosons is large compared to the lattice spacing and there is a significant next-nearest neighbour hopping for the fermions, the system can realise a superfluid with Chern number ±2. We show that this phase is stable in a large region of the phase diagram as a function of the filling fraction of the fermions and the coherence length of the bosons. Cold atomic gases offer the possibility to realise the proposed system using well-known experimental techniques.

  7. Devil's staircases, quantum dimer models, and stripe formation in strong coupling models of quantum frustration.

    NASA Astrophysics Data System (ADS)

    Raman, Kumar; Papanikolaou, Stefanos; Fradkin, Eduardo

    2007-03-01

We construct a two-dimensional microscopic model of interacting quantum dimers that displays an infinite number of periodic striped phases in its T=0 phase diagram. The phases form an incomplete devil's staircase and the period becomes arbitrarily large as the staircase is traversed. The Hamiltonian has purely short-range interactions, does not break any symmetries, and is generic in that it does not involve the fine tuning of a large number of parameters. Our model, a quantum mechanical analog of the Pokrovsky-Talapov model of fluctuating domain walls in two-dimensional classical statistical mechanics, provides a mechanism by which striped phases with periods large compared to the lattice spacing can, in principle, form in frustrated quantum magnetic systems with only short-ranged interactions and no explicitly broken symmetries. Please see cond-mat/0611390 for more details.

  8. Random sampling of constrained phylogenies: conducting phylogenetic analyses when the phylogeny is partially known.

    PubMed

    Housworth, E A; Martins, E P

    2001-01-01

    Statistical randomization tests in evolutionary biology often require a set of random, computer-generated trees. For example, earlier studies have shown how large numbers of computer-generated trees can be used to conduct phylogenetic comparative analyses even when the phylogeny is uncertain or unknown. These methods were limited, however, in that (in the absence of molecular sequence or other data) they allowed users to assume that no phylogenetic information was available or that all possible trees were known. Intermediate situations where only a taxonomy or other limited phylogenetic information (e.g., polytomies) are available are technically more difficult. The current study describes a procedure for generating random samples of phylogenies while incorporating limited phylogenetic information (e.g., four taxa belong together in a subclade). The procedure can be used to conduct comparative analyses when the phylogeny is only partially resolved or can be used in other randomization tests in which large numbers of possible phylogenies are needed.
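
The constrained-sampling idea can be sketched in a few lines: build a random topology for the constrained subclade, then treat that subtree as a single leaf when building a random topology for the remaining taxa. This is a minimal illustration of the concept, not the authors' procedure, and the join-based sampler below makes no claim of sampling topologies uniformly.

```python
import random

def random_tree(leaves, rng):
    """Random rooted binary topology built by repeatedly joining two subtrees."""
    nodes = list(leaves)
    while len(nodes) > 1:
        i, j = sorted(rng.sample(range(len(nodes)), 2), reverse=True)
        nodes.append((nodes.pop(i), nodes.pop(j)))
    return nodes[0]

def constrained_tree(taxa, clade, rng):
    """Random tree in which the taxa in `clade` are forced to be monophyletic."""
    sub = random_tree(clade, rng)             # resolve inside the clade
    rest = [t for t in taxa if t not in clade]
    return random_tree(rest + [sub], rng)     # the whole clade acts as one leaf

def leaf_set(node):
    if isinstance(node, tuple):
        return leaf_set(node[0]) | leaf_set(node[1])
    return {node}

def clades(node):
    """Leaf sets of all internal nodes (the clades the topology implies)."""
    if not isinstance(node, tuple):
        return []
    return [leaf_set(node)] + clades(node[0]) + clades(node[1])

rng = random.Random(42)
t = constrained_tree(["a", "b", "c", "d", "e", "f"], ["a", "b", "c"], rng)
print(t)
```

Every tree generated this way contains {a, b, c} as a clade, while the rest of the topology varies from draw to draw, which is exactly the property a randomization test with partial phylogenetic knowledge needs.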

  9. Novel applications of array comparative genomic hybridization in molecular diagnostics.

    PubMed

    Cheung, Sau W; Bi, Weimin

    2018-05-31

In 2004, the implementation of array comparative genomic hybridization (array CGH) into clinical practice marked a new milestone for genetic diagnosis. Array CGH and single-nucleotide polymorphism (SNP) arrays enable genome-wide detection of copy number changes at high resolution; microarray has therefore been recognized as the first-tier test for patients with intellectual disability or multiple congenital anomalies, and has also been applied prenatally for detection of clinically relevant copy number variations in the fetus. Area covered: In this review, the authors summarize the evolution of array CGH technology in their diagnostic laboratory, highlighting exonic SNP arrays developed in the past decade, which detect small intragenic copy number changes as well as large DNA segments in regions of homozygosity. The applications of array CGH to human diseases with different modes of inheritance, with emphasis on autosomal recessive disorders, are discussed. Expert commentary: An exonic array is a powerful and highly efficient clinical tool for genome-wide detection of small copy number variants in both dominant and recessive disorders. However, whole-genome sequencing may become the single integrated platform for detection of copy number changes, single-nucleotide changes, and balanced chromosomal rearrangements in the near future.

  10. The elephant brain in numbers

    PubMed Central

    Herculano-Houzel, Suzana; Avelino-de-Souza, Kamilla; Neves, Kleber; Porfírio, Jairo; Messeder, Débora; Mattos Feijó, Larissa; Maldonado, José; Manger, Paul R.

    2014-01-01

What explains the superior cognitive abilities of the human brain compared to other, larger brains? Here we investigate the possibility that the human brain has a larger number of neurons than even larger brains by determining the cellular composition of the brain of the African elephant. We find that the African elephant brain, which is about three times larger than the human brain, contains 257 billion (10⁹) neurons, three times more than the average human brain; however, 97.5% of the neurons in the elephant brain (251 billion) are found in the cerebellum. This makes the elephant an outlier in regard to the number of cerebellar neurons compared to other mammals, which might be related to sensorimotor specializations. In contrast, the elephant cerebral cortex, which has twice the mass of the human cerebral cortex, holds only 5.6 billion neurons, about one third of the number of neurons found in the human cerebral cortex. This finding supports the hypothesis that the larger absolute number of neurons in the human cerebral cortex (but not in the whole brain) is correlated with the superior cognitive abilities of humans compared to elephants and other large-brained mammals. PMID:24971054

  11. Extensive Error in the Number of Genes Inferred from Draft Genome Assemblies

    PubMed Central

    Denton, James F.; Lugo-Martinez, Jose; Tucker, Abraham E.; Schrider, Daniel R.; Warren, Wesley C.; Hahn, Matthew W.

    2014-01-01

    Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process. PMID:25474019

  12. Extensive error in the number of genes inferred from draft genome assemblies.

    PubMed

    Denton, James F; Lugo-Martinez, Jose; Tucker, Abraham E; Schrider, Daniel R; Warren, Wesley C; Hahn, Matthew W

    2014-12-01

    Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process.
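
The fragmentation effect the authors describe can be reproduced with a toy simulation (all sizes and counts below are invented for illustration): place genes on a genome, cut the genome into contigs at random breakpoints, and count a gene once for every contig it overlaps, as naive per-contig annotation would.

```python
import random

rng = random.Random(1)
genome_len, n_genes, gene_len = 1_000_000, 200, 3_000
gene_starts = rng.sample(range(genome_len - gene_len), n_genes)

# Fragment the "genome" into contigs at random breakpoints, as in a
# draft assembly.
breaks = sorted(rng.sample(range(1, genome_len), 500))
contigs = list(zip([0] + breaks, breaks + [genome_len]))

# Naive per-contig annotation counts a gene once for every contig it
# overlaps, so genes split across contigs inflate the total gene count.
annotated = sum(
    1
    for start in gene_starts
    for lo, hi in contigs
    if start < hi and start + gene_len > lo
)
print(annotated, "annotated gene copies vs", n_genes, "true genes")
```

Each breakpoint that lands inside a gene body adds one spurious gene copy, which is the same mechanism the paper identifies as the major cause of inflated gene numbers in draft genomes.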

  13. Preterm Cord Blood Contains a Higher Proportion of Immature Hematopoietic Progenitors Compared to Term Samples

    PubMed Central

    Podestà, Marina; Bruschettini, Matteo; Cossu, Claudia; Sabatini, Federica; Dagnino, Monica; Romantsik, Olga; Spaggiari, Grazia Maria; Ramenghi, Luca Antonio; Frassoni, Francesco

    2015-01-01

Background: Cord blood contains a high number of hematopoietic cells that disappear after birth. In this paper we studied the functional properties of umbilical cord blood progenitor cells collected from term and preterm neonates to establish whether quantitative and/or qualitative differences exist between the two groups. Methods and Results: Our results indicate that the percentage of total CD34+ cells was significantly higher in preterm infants compared to full term: 0.61% (range 0.15–4.8) vs 0.3% (0.032–2.23), p = 0.0001, and in neonates <32 weeks of gestational age (GA) compared to those ≥32 wks GA: 0.95% (range 0.18–4.8) and 0.36% (0.15–3.2) respectively, p = 0.0025. The majority of CD34+ cells co-expressed the CD71 antigen (p<0.05 preterm vs term) and grew large BFU-E in vitro, mostly in the second generation. The subpopulations CD34+CD38- and CD34+CD45- were more represented in preterm samples compared to term; conversely, the Side Population (SP) did not show any difference between the two groups. The absolute number of preterm colonies (CFCs/10 microL) was higher compared to term (p = 0.004), and these progenitors were able to grow until the third generation while maintaining a higher proportion of CD34+ cells (p = 0.0017). The number of colonies also inversely correlated with gestational age (Pearson r = -0.3001, p<0.0168). Conclusions: We found no differences in the isolation and expansion capacity of Endothelial Colony Forming Cells (ECFCs) from cord blood of term and preterm neonates: both groups grew large numbers of endothelial cells in vitro until the third generation and showed a transitional phenotype between mesenchymal stem cells and endothelial progenitors (CD73, CD31, CD34 and CD144). The presence, in the cord blood of preterm babies, of a high number of immature hematopoietic progenitors and endothelial/mesenchymal stem cells with high proliferative potential makes this tissue an important source of cells for developing new cell therapies. PMID:26417990

  14. Preterm Cord Blood Contains a Higher Proportion of Immature Hematopoietic Progenitors Compared to Term Samples.

    PubMed

    Podestà, Marina; Bruschettini, Matteo; Cossu, Claudia; Sabatini, Federica; Dagnino, Monica; Romantsik, Olga; Spaggiari, Grazia Maria; Ramenghi, Luca Antonio; Frassoni, Francesco

    2015-01-01

Cord blood contains a high number of hematopoietic cells that disappear after birth. In this paper we studied the functional properties of umbilical cord blood progenitor cells collected from term and preterm neonates to establish whether quantitative and/or qualitative differences exist between the two groups. Our results indicate that the percentage of total CD34+ cells was significantly higher in preterm infants compared to full term: 0.61% (range 0.15-4.8) vs 0.3% (0.032-2.23), p = 0.0001, and in neonates <32 weeks of gestational age (GA) compared to those ≥32 wks GA: 0.95% (range 0.18-4.8) and 0.36% (0.15-3.2) respectively, p = 0.0025. The majority of CD34+ cells co-expressed the CD71 antigen (p<0.05 preterm vs term) and grew large BFU-E in vitro, mostly in the second generation. The subpopulations CD34+CD38- and CD34+CD45- were more represented in preterm samples compared to term; conversely, the Side Population (SP) did not show any difference between the two groups. The absolute number of preterm colonies (CFCs/10 microL) was higher compared to term (p = 0.004), and these progenitors were able to grow until the third generation while maintaining a higher proportion of CD34+ cells (p = 0.0017). The number of colonies also inversely correlated with gestational age (Pearson r = -0.3001, p<0.0168). We found no differences in the isolation and expansion capacity of Endothelial Colony Forming Cells (ECFCs) from cord blood of term and preterm neonates: both groups grew large numbers of endothelial cells in vitro until the third generation and showed a transitional phenotype between mesenchymal stem cells and endothelial progenitors (CD73, CD31, CD34 and CD144). The presence, in the cord blood of preterm babies, of a high number of immature hematopoietic progenitors and endothelial/mesenchymal stem cells with high proliferative potential makes this tissue an important source of cells for developing new cell therapies.

  15. The Comparative Efficacy of the Masquelet versus Titanium Mesh Cage Reconstruction Techniques for the Treatment of Large Long Bone Deficiencies

    DTIC Science & Technology

    2017-10-01

AWARD NUMBER: W81XWH-13-1-0492. TITLE: The Comparative Efficacy of the Masquelet versus Titanium Mesh Cage Reconstruction Techniques for the Treatment of Large Long Bone Deficiencies. REPORT TYPE: Annual. DATES COVERED: 30 Sept 2016 to 29 Sept 2017.

  16. Shock wave structure in rarefied polyatomic gases with large relaxation time for the dynamic pressure

    NASA Astrophysics Data System (ADS)

    Taniguchi, Shigeru; Arima, Takashi; Ruggeri, Tommaso; Sugiyama, Masaru

    2018-05-01

The shock wave structure in rarefied polyatomic gases is analyzed based on extended thermodynamics (ET). In particular, the case with a large relaxation time for the dynamic pressure, which corresponds to a large bulk viscosity, is considered by adopting the simplest version of extended thermodynamics with only 6 independent fields (ET6): the mass density, the velocity, the temperature, and the dynamic pressure. Recently, the validity of the theoretical predictions by ET was confirmed by numerical analysis based on the kinetic theory [S. Kosuge and K. Aoki, Phys. Rev. Fluids 3, 023401 (2018)]. It was shown that numerical results using the polyatomic version of the ellipsoidal statistical model agree with the theoretical predictions by ET for small or moderately large Mach numbers. In the present paper, first, we compare the theoretical predictions by ET6 with those of the kinetic theory for large Mach number under the same assumptions, that is, the gas is polytropic and the bulk viscosity is proportional to the temperature. Second, the shock wave structure for large Mach number in a non-polytropic gas is analyzed, with particular interest in the effect of the temperature dependence of the specific heat and the bulk viscosity on the shock wave structure. Through the analysis of the case of a rarefied carbon dioxide (CO2) gas, it is shown that these temperature dependences play important roles in the precise analysis of the structure of strong shock waves.

  17. Planning multi-arm screening studies within the context of a drug development program

    PubMed Central

    Wason, James M S; Jaki, Thomas; Stallard, Nigel

    2013-01-01

    Screening trials are small trials used to decide whether an intervention is sufficiently promising to warrant a large confirmatory trial. Previous literature examined the situation where treatments are tested sequentially until one is considered sufficiently promising to take forward to a confirmatory trial. An important consideration for sponsors of clinical trials is how screening trials should be planned to maximize the efficiency of the drug development process. It has been found previously that small screening trials are generally the most efficient. In this paper we consider the design of screening trials in which multiple new treatments are tested simultaneously. We derive analytic formulae for the expected number of patients until a successful treatment is found, and propose methodology to search for the optimal number of treatments, and optimal sample size per treatment. We compare designs in which only the best treatment proceeds to a confirmatory trial and designs in which multiple treatments may proceed to a multi-arm confirmatory trial. We find that inclusion of a large number of treatments in the screening trial is optimal when only one treatment can proceed, and a smaller number of treatments is optimal when more than one can proceed. The designs we investigate are compared on a real-life set of screening designs. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23529936
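
A minimal version of the expected-patient calculation can be written down under strongly simplifying assumptions (independent treatments, a fixed per-treatment success probability, one shared control arm; none of these numbers come from the paper): if each screening round tests k treatments and each passes with probability q, the number of rounds to a first success is geometric with success probability 1 - (1-q)^k.

```python
def expected_patients(k, n, q):
    """Expected enrollment until at least one treatment passes screening.

    k: treatments tested per screening round (plus one shared control arm)
    n: patients per arm
    q: probability that any single treatment passes the screen
    """
    p_round = 1 - (1 - q) ** k           # P(at least one success per round)
    patients_per_round = n * (k + 1)     # k treatment arms + 1 control arm
    return patients_per_round / p_round  # geometric expected number of rounds

# Illustrative trade-off: more arms per round vs fewer expected rounds.
for k in (1, 2, 4, 8):
    print(k, round(expected_patients(k, n=30, q=0.2), 1))
```

Even this toy version shows the qualitative trade-off the paper optimizes: adding arms raises the per-round cost but lowers the expected number of rounds, so the expected total enrollment is minimized at an intermediate number of treatments.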

  18. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
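
The LSS endmember step is ordinary linear least squares on the linear mixing model X = A E, with abundances A known for the sample pixels. The sketch below uses synthetic data (all dimensions, spectra, and noise levels are invented for illustration) to derive endmembers and then unmix a new pixel with unconstrained SMA.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_end, n_samples = 6, 3, 200

# Synthetic truth: endmember spectra E and sample abundances A (rows sum to 1).
E_true = rng.uniform(0.1, 0.9, (n_end, n_bands))
A = rng.dirichlet(np.ones(n_end), n_samples)
X = A @ E_true + rng.normal(0.0, 0.01, (n_samples, n_bands))  # mixed pixels

# LSS step: with abundances known for the samples, the endmember spectra
# are the least-squares solution of A @ E = X.
E_hat, *_ = np.linalg.lstsq(A, X, rcond=None)

# Unconstrained SMA step: unmix a new pixel against the derived endmembers.
pixel = np.array([0.6, 0.3, 0.1]) @ E_true
abund, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None)
print(abund)  # roughly [0.6, 0.3, 0.1]
```

The fully constrained variant mentioned in the abstract would additionally impose sum-to-one and non-negativity on the abundances, which requires a constrained solver rather than plain `lstsq`.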

  19. Large-Scale Comparative Phenotypic and Genomic Analyses Reveal Ecological Preferences of Shewanella Species and Identify Metabolic Pathways Conserved at the Genus Level ▿ †

    PubMed Central

    Rodrigues, Jorge L. M.; Serres, Margrethe H.; Tiedje, James M.

    2011-01-01

    The use of comparative genomics for the study of different microbiological species has increased substantially as sequence technologies become more affordable. However, efforts to fully link a genotype to its phenotype remain limited to the development of one mutant at a time. In this study, we provided a high-throughput alternative to this limiting step by coupling comparative genomics to the use of phenotype arrays for five sequenced Shewanella strains. Positive phenotypes were obtained for 441 nutrients (C, N, P, and S sources), with N-based compounds being the most utilized for all strains. Many genes and pathways predicted by genome analyses were confirmed with the comparative phenotype assay, and three degradation pathways believed to be missing in Shewanella were confirmed as missing. A number of previously unknown gene products were predicted to be parts of pathways or to have a function, expanding the number of gene targets for future genetic analyses. Ecologically, the comparative high-throughput phenotype analysis provided insights into niche specialization among the five different strains. For example, Shewanella amazonensis strain SB2B, isolated from the Amazon River delta, was capable of utilizing 60 C compounds, whereas Shewanella sp. strain W3-18-1, isolated from deep marine sediment, utilized only 25 of them. In spite of the large number of nutrient sources yielding positive results, our study indicated that except for the N sources, they were not sufficiently informative to predict growth phenotypes from increasing evolutionary distances. Our results indicate the importance of phenotypic evaluation for confirming genome predictions. This strategy will accelerate the functional discovery of genes and provide an ecological framework for microbial genome sequencing projects. PMID:21642407

  20. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
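
The decompose-solve-stitch structure of distributed reconstructors can be illustrated in one dimension. The sketch below is a cumulative (CuRe-like) toy, not D-SABRE's multivariate-spline solver: each partition integrates its own slope measurements up to an unknown local piston, and the partitions are then stitched by enforcing the slope across the boundary.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
phase = np.cumsum(rng.normal(0.0, 0.1, n))   # true 1-D wavefront
slopes = np.diff(phase)                      # the sensor measures differences

def local_reconstruct(s):
    """Integrate local slopes; the partition's piston (offset) stays unknown."""
    return np.concatenate([[0.0], np.cumsum(s)])

# Decompose the sensor domain into two partitions and solve each locally.
mid = n // 2
left = local_reconstruct(slopes[: mid - 1])   # samples 0 .. mid-1
right = local_reconstruct(slopes[mid:])       # samples mid .. n-1

# Stitch: shift the right partition so the boundary slope is honored.
right += left[-1] + slopes[mid - 1]

recon = np.concatenate([left, right])
recon += phase[0] - recon[0]   # the global piston is unobservable anyway
print(np.max(np.abs(recon - phase)))
```

In the noise-free 1-D case the stitched reconstruction is exact up to the global piston; the point of the exercise is that each partition's problem is independent, which is what lets methods like D-SABRE scale with the number of partitions.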

  1. Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.

    ERIC Educational Resources Information Center

    Colley, J. R.; And Others

    1996-01-01

    Using 1989-92 student records from three colleges accepting large numbers of transfers from junior schools into accounting, regression analyses compared grades of transfer and nontransfer students. Quality of accounting principle grades of transfer students was not equivalent to that of nontransfer students. (SK)

  2. Inner-outer predictive wall model for wall-bounded turbulence in hypersonic flow

    NASA Astrophysics Data System (ADS)

    Martin, M. Pino; Helm, Clara M.

    2017-11-01

The inner-outer predictive wall model of Mathis et al. is modified for hypersonic turbulent boundary layers. The model is based on a modulation of the energized motions in the inner layer by large-scale momentum fluctuations in the logarithmic layer. Using direct numerical simulation (DNS) data of turbulent boundary layers with free-stream Mach numbers 3 to 10, it is shown that the variation of the fluid properties in compressible flows leads to large Reynolds number (Re) effects in the outer layer and facilitates the modulation observed in high-Re incompressible flows. The modulation effect by the large scales increases with increasing free-stream Mach number. The model is extended to include spanwise and wall-normal velocity fluctuations and is generalized through Morkovin scaling. Temperature fluctuations are modeled using an appropriate Reynolds analogy. Density fluctuations are calculated using an equation of state and a scaling with Mach number. DNS data are used to obtain the universal signal and parameters. The model is tested by using the universal signal to reproduce the flow conditions of Mach 3 and Mach 7 turbulent boundary layer DNS data and comparing turbulence statistics between the modeled flow and the DNS data. This work is supported by the Air Force Office of Scientific Research under Grant FA9550-17-1-0104.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

Fundamental Alloying. Studies of crystal structures, reactions at metal surfaces, spectroscopy of molten salts, mechanical deformation, and alloy theory are reported. Long-Range Applied Metallurgy. A thermal comparator is described and the characteristic temperature of UO2 determined. Sintering studies were carried out on ThO2. The diffusion of fission products in fuel and of Al-26 and Mn-54 in Al, and the reaction of Be with UC, were studied. Transformation and oxidation data were obtained for a number of Zr alloys. Reactor Metallurgy. A large number of ceramic technology projects are described. Some corrosion data are given for metals exposed to impure He and molten fluorides. Studies were made of the fission-gas-retention properties of ceramic fuel bodies. A large number of materials compatibility studies are described. The mechanical properties of some reactor materials were studied. Fabrication work was conducted to develop materials for application in low-, medium-, and high-temperature reactors or systems. A large number of new metallographic and nondestructive testing techniques are reported. Studies were carried out on the oxidation, carburization, and stability of alloys. Equipment for postirradiation examination is described. Preparation of some alloys and dispersion fuels by powder metallurgy methods was studied. The development of welding and brazing techniques for reactor materials is described. (D.L.C.)

  4. Investigation of Rossby-number similarity in the neutral boundary layer using large-eddy simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohmstede, W.D.; Cederwall, R.T.; Meyers, R.E.

One special case of particular interest, especially to theoreticians, is the steady-state, horizontally homogeneous, autobarotropic planetary boundary layer (PBL), hereafter referred to as the neutral boundary layer (NBL). The NBL is in fact a 'rare' atmospheric phenomenon, generally associated with high-wind situations. Nevertheless, there is a disproportionate interest in this problem because Rossby-number similarity theory provides a sound approach for addressing it. Rossby-number similarity theory has rather wide acceptance, but because of the rarity of the 'true' NBL state, there remains an inadequate experimental database for quantifying the constants associated with the Rossby-number similarity concept. Although it remains a controversial issue, it has been proposed that large-eddy simulation (LES) is an alternative to physical experimentation for obtaining basic atmospheric 'data'. The objective of the study reported here is to investigate Rossby-number similarity in the NBL using LES. Previous studies have not addressed Rossby-number similarity explicitly, although they made use of it in the interpretation of their results. The intent is to calculate several sets of NBL solutions that differ in their respective Rossby numbers and compare the results for similarity, or the lack of it. 14 refs., 1 fig.

  5. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.

    2017-01-01

A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for thermal ice protection systems. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber number based scaling methods resulted in smaller runback ice mass than the Reynolds number based scaling method, and the ice accretions from the Weber number based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber number based scaling methods, and the difference became greater as the speed was increased. This indicates that there may be some Reynolds number effects that are not fully accounted for, which warrants further study.
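
The two dimensionless groups behind these scaling methods are straightforward to compute. The sketch below only evaluates the standard definitions We = ρv²L/σ and Re = ρvL/μ; the test-point values are invented for illustration and are not conditions from the IRT or AIWT campaigns.

```python
def weber(rho, v, length, sigma):
    """Weber number: inertial forces relative to surface tension."""
    return rho * v**2 * length / sigma

def reynolds(rho, v, length, mu):
    """Reynolds number: inertial forces relative to viscous forces."""
    return rho * v * length / mu

# Hypothetical sea-level test point (values invented for illustration):
rho, mu = 1.225, 1.81e-5     # air density [kg/m^3], dynamic viscosity [Pa s]
sigma_water = 0.072          # surface tension of water [N/m]
v, chord = 77.0, 0.9144      # airspeed [m/s], model chord [m]

print(weber(rho, v, chord, sigma_water))
print(reynolds(rho, v, chord, mu))
```

Because We scales with v² and Re with v, matching one group at altitude generally mismatches the other, which is why Weber-based and Reynolds-based scaling methods diverge as speed increases.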

  6. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method, based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
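
A two-level version of the multilevel idea fits in a few lines: combine many cheap low-fidelity samples with a small number of coupled high/low pairs that estimate the correction E[Q_H - Q_L]. The toy "model" below is invented for illustration (it is generic two-level MLMC, not the paper's rescaled rMLMC) and stands in for a resolution-dependent simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x, n_grid):
    """Toy resolution-dependent simulation: bias shrinks as n_grid grows."""
    return np.sin(x) * (1 + 1.0 / n_grid)

# Plain Monte Carlo at high fidelity: many expensive samples.
x = rng.uniform(0, np.pi, 10_000)
q_mc = model(x, n_grid=256).mean()

# Two-level estimator, E[Q_H] = E[Q_L] + E[Q_H - Q_L]: many cheap
# low-fidelity samples plus only 100 coupled high-fidelity corrections.
x_low = rng.uniform(0, np.pi, 10_000)
x_pair = rng.uniform(0, np.pi, 100)
q_ml = model(x_low, 16).mean() + (model(x_pair, 256) - model(x_pair, 16)).mean()

print(q_mc, q_ml)  # both approximate (2/pi) * (1 + 1/256)
```

Because the high/low difference has much smaller variance than the quantity itself, only a handful of high-fidelity runs are needed for the correction term, which is the source of the cost savings the abstract describes.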

  7. The aerodynamic characteristics of eight very thick airfoils from tests in the variable density wind tunnel

    NASA Technical Reports Server (NTRS)

    Jacobs, Eastman N

    1932-01-01

Report presents the results of wind tunnel tests on a group of eight very thick airfoils having sections of the same thickness as those used near the roots of tapered airfoils. The tests were made to study certain discontinuities in the characteristic curves that had been obtained from previous tests of these airfoils, and to compare the characteristics of the different sections at values of the Reynolds number comparable with those attained in flight. The discontinuities were found to disappear as the Reynolds number was increased. The large-scale results indicated that a symmetrical airfoil having a thickness ratio of 21 per cent has the best general characteristics.

  8. Comparing Homicide-Suicides in the United States and Sweden.

    PubMed

    Regoeczi, Wendy C; Granath, Sven; Issa, Rania; Gilson, Thomas; Sturup, Joakim

    2016-11-01

    Research on homicides followed by suicides has largely relied on very localized samples and relatively short time spans of data. As a result, little is known about the extent to which patterns within cases of homicide-suicides are geographically specific. The current study seeks to help fill this gap by comparing twenty years of homicide-suicide data for Sweden and a large U.S. county. Although some of the underlying patterns in the two countries are similar (e.g., decreasing rates), a number of important differences emerge, particularly with respect to incidence, weapons used, perpetrator age, and relationship of the perpetrator to the victim. © 2016 American Academy of Forensic Sciences.

  9. Monitoring Shifts in Campus Image and Recruitment Efforts in Small Colleges.

    ERIC Educational Resources Information Center

    Murray, David

    1991-01-01

    Without an institutional research office to look at recruitment and retention programs, DePauw University (Indiana) relies heavily on institutional trend data and comparative information to assess the effectiveness of three initiatives: recruitment of large numbers of minority students, strategic use of financial aid, and use of market…

  10. Extracting Valuable Data from Classroom Trading Pits

    ERIC Educational Resources Information Center

    Bergstrom, Theodore C.; Kwok, Eugene

    2005-01-01

    How well does competitive theory explain the outcome in experimental markets? The authors examined the results of a large number of classroom trading experiments that used a pit-trading design found in Experiments with Economic Principles, an introductory economics textbook by Bergstrom and Miller. They compared experimental outcomes with…

  11. The Changing Nature of Division III Athletics

    ERIC Educational Resources Information Center

    Beaver, William

    2014-01-01

    Non-selective Division III institutions often face challenges in meeting their enrollment goals. To ensure their continued viability, these schools recruit large numbers of student athletes. As a result, when compared to FBS (Football Bowl Division) institutions these schools have a much higher percentage of student athletes on campus and a…

  12. Learner Washback Variability in Standardized Exit Tests

    ERIC Educational Resources Information Center

    Pan, Yi-Ching

    2014-01-01

    In much of the world, the issue of accountability and measurement of educational outcomes is highly controversial. Exit testing is part of the movement to ascertain what students have learned and hold institutions and teachers to account. However, compared to the large number of teacher washback studies, learner washback research is lacking…

  13. Genetic structure of populations and differentiation in forest trees

    Treesearch

    Raymond P. Guries; F. Thomas Ledig

    1981-01-01

    Electrophoretic techniques permit population biologists to analyze genetic structure of natural populations by using large numbers of allozyme loci. Several methods of analysis have been applied to allozyme data, including chi-square contingency tests, F-statistics, and genetic distance. This paper compares such statistics for pitch pine (Pinus rigida...

  14. How Effective Is the Multidisciplinary Approach? A Follow-Up Study.

    ERIC Educational Resources Information Center

    Hochstadt, Neil J.; Harwicke, Neil J.

    1985-01-01

    The effectiveness of the multidisciplinary approach was assessed by examining the number of recommended services obtained by 180 children one year after multidisciplinary evaluation. Results indicated that a large percentage of services recommended were obtained, compared with the low probability reported in samples of abused and neglected…

  15. A comparison of soil moisture characteristics predicted by the Arya-Paris model with laboratory-measured data

    NASA Technical Reports Server (NTRS)

    Arya, L. M.; Richter, J. C.; Davidson, S. A. (Principal Investigator)

    1982-01-01

    Soil moisture characteristics predicted by the Arya-Paris model were compared with the laboratory measured data for 181 New Jersey soil horizons. For a number of soil horizons, the predicted and the measured moisture characteristic curves are almost coincident; for a large number of other horizons, despite some disparity, their shapes are strikingly similar. Uncertainties in the model input and laboratory measurement of the moisture characteristic are indicated, and recommendations for additional experimentation and testing are made.

  16. Orientational alignment in cavity quantum electrodynamics

    NASA Astrophysics Data System (ADS)

    Keeling, Jonathan; Kirton, Peter G.

    2018-05-01

    We consider the orientational alignment of dipoles due to strong matter-light coupling for a nonvanishing density of excitations. We compare various approaches to this problem in the limit of large numbers of emitters and show that direct Monte Carlo integration, mean-field theory, and large deviation methods match exactly in this limit. All three results show that orientational alignment develops in the presence of a macroscopically occupied polariton mode and that the dipoles asymptotically approach perfect alignment in the limit of high density or low temperature.

  17. Compiler-directed cache management in multiprocessors

    NASA Technical Reports Server (NTRS)

    Cheong, Hoichi; Veidenbaum, Alexander V.

    1990-01-01

    The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.

  18. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can largely remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
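    The Hill numbers used above have a closed form, qD = (Σ p_i^q)^(1/(1-q)), with the q = 1 case taken as the limit exp(Shannon entropy). A minimal sketch of the plug-in (empirical) version, without the singleton correction or unseen-taxa estimation the paper develops; the function and example counts are my own:

```python
import numpy as np

def hill_number(counts, q):
    """Hill number (effective number of taxa) of order q from observed
    taxon counts. Plug-in estimator only: no correction for unseen taxa
    or spurious singletons, unlike the estimators developed in the paper."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if q == 1:  # limit case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

abund = [50, 30, 10, 5, 3, 1, 1]   # toy taxon counts
richness = hill_number(abund, 0)   # taxa richness: number of observed taxa
shannon = hill_number(abund, 1)    # exponential of Shannon entropy
simpson = hill_number(abund, 2)    # inverse of the Simpson index
```

    As q grows, the measure increasingly discounts rare taxa, so a diversity profile is non-increasing in q: richness ≥ Shannon diversity ≥ Simpson diversity.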

  19. [An analysis of existing terminology towards constructing ontology in the field of the radiological technology].

    PubMed

    Tsuji, Shintaro; Fukuda, Akihisa; Yagahara, Ayako; Nishimoto, Naoki; Homma, Katsumi; Ogasawara, Katsuhiko

    2015-03-01

    In 1994, the Japanese Society of Radiological Technology (JSRT) constructed a lexicon for the field of radiologic technology; however, the lexicon has not been updated since. The purpose of this article is to compare terminologies in clinical medicine with those of other fields and to consider reconstructing the lexicon of radiological technology. Our study selected three categories from the database of academic societies: Clinical Medicine (hereafter CM, 167 societies, including JSRT), Psychology/Education (hereafter P/E, 104 societies), and Comprehensive Synthetic Engineering (hereafter CSE, 40 societies). First, all societies were surveyed to determine whether a lexicon was available on their official websites. Second, these terminologies were surveyed on the following criteria: (a) media of the lexicon, (b) number of terms, (c) file type of the lexicon, (d) whether terms were translated into English, (e) means of searching terms, and (f) number of terminology committees. CM, P/E, and CSE had 20, 4, and 7 lexicons, respectively. Compared with P/E and CSE, CM showed the following trends: (a) electronic media were used frequently, (b) lexicons stored large numbers of terms (about 5,000 to 11,000), (c) downloads were frequently available, and (d) alphabetical and Japanese syllabary ordering were used frequently. Compared with the lexicons of P/E and CSE, terminology in CM tended to adopt electronic media and to contain large numbers of terms; additionally, many lexicons gave English terms along with Japanese terms. Following the massive lexicons of SNOMED-CT and RadLex, it is necessary to consider applying web-based term searching and ontological techniques to the lexicon of radiological technology.

  20. Effects of the oral administration of the exopolysaccharide produced by Lactobacillus kefiranofaciens on the gut mucosal immunity.

    PubMed

    Vinderola, Gabriel; Perdigón, Gabriela; Duarte, Jairo; Farnworth, Edward; Matar, Chantal

    2006-12-01

    The probiotic effects ascribed to lactic acid bacteria (LAB) and their fermented dairy products arise not only from whole microorganisms and cell wall components but also from peptides and extracellular polysaccharides (exopolysaccharides) produced during the fermentation of milk. There is a lack of knowledge concerning the immune mechanisms induced by exopolysaccharides produced by lactic acid bacteria, knowledge that would allow a better understanding of the functional effects ascribed to them. The aim of this study was to investigate the in vivo immunomodulating capacity of the exopolysaccharide produced by Lactobacillus kefiranofaciens by analyzing the profile of cytokines and immunoglobulins induced at the intestinal mucosa level, in the intestinal fluid and in blood serum. BALB/c mice received the exopolysaccharide produced by L. kefiranofaciens for 2, 5 or 7 consecutive days. At the end of each period of administration, control and treated mice were sacrificed and the numbers of IgA+ and IgG+ cells were determined on histological slices of the small and large intestine by immunofluorescence. Cytokines (IL-4, IL-6, IL-10, IL-12, IFNgamma and TNFalpha) were also determined in the gut lamina propria as well as in the intestinal fluid and blood serum. There was an increase of IgA+ cells in the small and large intestine lamina propria, without change in the number of IgG+ cells in the small intestine. This study reports the effects of the oral administration of the exopolysaccharide produced by L. kefiranofaciens on the number of IgA+ cells in the small and large intestine, comparing simultaneously the production of cytokines by cells of the lamina propria, in the intestinal fluid and in blood serum. The increase in the number of IgA+ cells was not accompanied by a simultaneous increase in the number of IL-4+ cells in the small intestine. This finding is in accordance with the fact that, in general, polysaccharide antigens elicit a T-independent immune response. 
For IL-10+, IL-6+ and IL-12+ cells, the values found were slightly increased compared to control values, while IFNgamma+ and TNFalpha+ cells did not change compared to control values. The effects observed on immunoglobulins and on all the cytokines assayed in the large intestine after kefiran administration were of greater magnitude than those observed in the small intestine lamina propria, which may be due to the saccharolytic action of the colonic microflora. In the intestinal fluid, only IL-4 and IL-12 increased compared to control values. In blood serum, all the cytokines assayed followed a pattern of production quite similar to that found in the small intestine lamina propria. We observed that the exopolysaccharide induced a gut mucosal response and was able to up- and down-regulate it for protective immunity, maintaining intestinal homeostasis, enhancing IgA production at both the small and large intestine level and influencing systemic immunity through the cytokines released into the circulating blood.

  1. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    PubMed

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.

  2. Effects of gonadotropin-releasing hormone agonist/recombinant follicle-stimulating hormone versus gonadotropin-releasing hormone antagonist/recombinant follicle-stimulating hormone on follicular fluid levels of adhesion molecules during in vitro fertilization.

    PubMed

    Fornaro, Felice; Cobellis, Luigi; Mele, Daniela; Tassou, Argyrò; Badolati, Barbara; Sorrentino, Simona; De Lucia, Domenico; Colacurci, Nicola

    2007-01-01

    To compare the effects of GnRH-agonist/recombinant FSH (rFSH) versus GnRH-antagonist/rFSH stimulation on follicular fluid levels of soluble intercellular adhesion molecule-1 (sICAM-1) and vascular cell adhesion molecule-1 (sVCAM-1) during in vitro fertilization (IVF). Prospective, randomized study. University hospital. Seventy-three women underwent IVF. GnRH-agonist/rFSH or GnRH-antagonist/rFSH administration and collection of follicular fluid from 3 small (11-14 mm in diameter) and 3 large (18-21 mm in diameter) follicles on the day of oocyte retrieval. Follicular fluid levels of sICAM-1 and sVCAM-1 and intrafollicular estradiol and progesterone were measured. Women who underwent GnRH-agonist/rFSH stimulation showed higher concentrations of sICAM-1 in both small and large follicles compared with patients who received GnRH-antagonist/rFSH treatment; follicular fluid levels of sVCAM-1 were similar between the 2 stimulation protocols. The sICAM-1 content of small and large follicles correlated positively with the number of follicles of > or =15 mm and the number of oocytes retrieved in both study groups. Concentrations of follicular fluid sVCAM-1 and progesterone were higher in large than in small follicles and were positively correlated with each other in both follicular classes. In IVF, GnRH-agonist/rFSH is associated with higher follicular fluid levels of sICAM-1 compared with the GnRH-antagonist/rFSH regimen. Intrafollicular sICAM-1 content may predict ovarian response, and sVCAM-1 appears to be an indicator of the degree of follicular luteinization.

  3. On large time step TVD scheme for hyperbolic conservation laws and its efficiency evaluation

    NASA Astrophysics Data System (ADS)

    Qian, ZhanSen; Lee, Chun-Hian

    2012-08-01

    A large time step (LTS) TVD scheme originally proposed by Harten is modified and further developed in the present paper and applied to the Euler equations in multidimensional problems. After first revealing the drawbacks of Harten's original LTS TVD scheme and explaining the occurrence of the spurious oscillations, a modified formulation of its characteristic transformation is proposed and a high-resolution, strongly robust LTS TVD scheme is formulated. The modified scheme is shown to be capable of taking larger time steps than the original one. Following the modified strategy, LTS TVD schemes for Yee's upwind TVD scheme and Yee-Roe-Davis's symmetric TVD scheme are constructed. The family of LTS schemes is then extended to multidimensional problems by a time-splitting procedure, and boundary condition treatments suitable for the LTS scheme are also imposed. Numerical experiments on Sod's shock tube problem and on inviscid flows over the NACA0012 airfoil and the ONERA M6 wing are performed to validate the developed schemes. Computational efficiencies of the respective schemes under different CFL numbers are also evaluated and compared. The results reveal that the improvement is sizable compared to the respective single-time-step schemes, especially for CFL numbers ranging from 1.0 to 4.0.

  4. Large amplitude forcing of a high speed 2-dimensional jet

    NASA Technical Reports Server (NTRS)

    Bernal, L.; Sarohia, V.

    1984-01-01

    The effect of large amplitude forcing on the growth of a high speed two dimensional jet was investigated experimentally. Two forcing techniques were utilized: mass flow oscillations and a mechanical system. The mass flow oscillation tests were conducted at Strouhal numbers from 0.00052 to 0.045, and peak to peak amplitudes up to 50 percent of the mean exit velocity. The exit Mach number was varied in the range 0.15 to 0.8. The corresponding Reynolds numbers were 8,400 and 45,000. The results indicate no significant change of the jet growth rate or centerline velocity decay compared to the undisturbed free jet. The mechanical forcing system consists of two counter rotating hexagonal cylinders located parallel to the span of the nozzle. Forcing frequencies up to 1,500 Hz were tested. Both symmetric and antisymmetric forcing can be implemented. The results for antisymmetric forcing showed a significant (75 percent) increase of the jet growth rate at an exit Mach number of 0.25 and a Strouhal number of 0.019. At higher rotational speeds, the jet deflected laterally. A deflection angle of 39 deg with respect to the centerline was measured at the maximum rotational speed.

  5. Wall-Resolved Large-Eddy Simulation of Flow Separation Over NASA Wall-Mounted Hump

    NASA Technical Reports Server (NTRS)

    Uzun, Ali; Malik, Mujeeb R.

    2017-01-01

    This paper reports the findings from a study that applies wall-resolved large-eddy simulation to investigate flow separation over the NASA wall-mounted hump geometry. Despite its conceptually simple flow configuration, this benchmark problem has proven to be a challenging test case for various turbulence simulation methods that have attempted to predict flow separation arising from the adverse pressure gradient on the aft region of the hump. The momentum-thickness Reynolds number of the incoming boundary layer has a value that is near the upper limit achieved by recent direct numerical simulation and large-eddy simulation of incompressible turbulent boundary layers. The high Reynolds number of the problem necessitates a significant number of grid points for wall-resolved calculations. The present simulations show a significant improvement in the separation-bubble length prediction compared to Reynolds-Averaged Navier-Stokes calculations. The current simulations also provide good overall prediction of the skin-friction distribution, including the relaminarization observed over the front portion of the hump due to the strong favorable pressure gradient. We discuss a number of problems that were encountered during the course of this work and present possible solutions. A systematic study regarding the effect of domain span, subgrid-scale model, tunnel back pressure, upstream boundary layer conditions and grid refinement is performed. The predicted separation-bubble length is found to be sensitive to the span of the domain. Despite the large number of grid points used in the simulations, some differences between the predictions and experimental observations still exist (particularly for Reynolds stresses) in the case of the wide-span simulation, suggesting that additional grid resolution may be required.

  6. Simplified methods of determining treatment retention in Malawi: ART cohort reports vs. pharmacy stock cards.

    PubMed

    Chan, A K; Singogo, E; Changamire, R; Ratsma, Y E C; Tassie, J-M; Harries, A D

    2012-06-21

    Rapid scale-up of antiretroviral therapy (ART) has challenged the health system in Malawi to monitor large numbers of patients effectively. To compare two methods of determining retention on treatment: quarterly ART clinic data aggregation vs. pharmacy stock cards. Between October 2010 and March 2011, data on ART outcomes were extracted from monitoring tools at five facilities, and pharmacy data on ART consumption were extracted. The workload for each method was observed and timed. We used intraclass correlation and Bland-Altman plots to compare the agreement of the two methods in determining treatment retention. There is wide variability between ART clinic cohort data and pharmacy data in determining treatment retention, due to divergence in data at sites with large numbers of patients. However, there is a non-significant trend towards agreement between the two methods (intraclass correlation coefficient > 0.9; P > 0.05). Pharmacy stock card monitoring is more time-efficient than quarterly ART data aggregation (81 min vs. 573 min). In low-resource settings, pharmacy records could be used to improve drug forecasting and estimate ART retention in a more time-efficient manner than quarterly data aggregation; however, a necessary precondition would be capacity building around pharmacy data management, particularly for large cohorts.

  7. Gene Conversion Violates the Stepwise Mutation Model for Microsatellites in Y-Chromosomal Palindromic Repeats

    PubMed Central

    Balaresque, Patricia; King, Turi E; Parkin, Emma J; Heyer, Evelyne; Carvalho-Silva, Denise; Kraaijenbrink, Thirsa; de Knijff, Peter; Tyler-Smith, Chris; Jobling, Mark A

    2014-01-01

    The male-specific region of the human Y chromosome (MSY) contains eight large inverted repeats (palindromes), in which high-sequence similarity between repeat arms is maintained by gene conversion. These palindromes also harbor microsatellites, considered to evolve via a stepwise mutation model (SMM). Here, we ask whether gene conversion between palindrome microsatellites contributes to their mutational dynamics. First, we study the duplicated tetranucleotide microsatellite DYS385a,b lying in palindrome P4. We show, by comparing observed data with simulated data under a SMM within haplogroups, that observed heteroallelic combinations in which the modal repeat number difference between copies was large, can give rise to homoallelic combinations with zero-repeats difference, equivalent to many single-step mutations. These are unlikely to be generated under a strict SMM, suggesting the action of gene conversion. Second, we show that the intercopy repeat number difference for a large set of duplicated microsatellites in all palindromes in the MSY reference sequence is significantly reduced compared with that for nonpalindrome-duplicated microsatellites, suggesting that the former are characterized by unusual evolutionary dynamics. These observations indicate that gene conversion violates the SMM for microsatellites in palindromes, homogenizing copies within individual Y chromosomes, but increasing overall haplotype diversity among chromosomes within related groups. PMID:24610746
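    The strict SMM against which the observed haplotypes are compared can be sketched as a toy forward simulation with no gene conversion. All parameter values here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_smm(n_lineages=10_000, generations=200, mu=1e-3, start=(14, 18)):
    """Strict stepwise mutation model (SMM) for a duplicated microsatellite:
    each copy independently gains or loses one repeat unit with probability
    mu per generation, with no gene conversion between copies.
    Mutation rate, time span and starting repeat numbers are hypothetical."""
    a = np.full(n_lineages, start[0])
    b = np.full(n_lineages, start[1])
    for _ in range(generations):
        for arr in (a, b):
            hit = rng.random(n_lineages) < mu
            arr[hit] += rng.choice((-1, 1), size=hit.sum())
    return a, b

a, b = simulate_smm()
# Fraction of lineages whose two copies ended with zero repeat difference.
frac_homoallelic = float(np.mean(a == b))
```

    Under this strict SMM, copies starting four repeats apart essentially never reach a zero-repeat difference over a few hundred generations, so an excess of observed homoallelic combinations points to a homogenizing process such as gene conversion.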

  8. Anthropogenic effects on marine mollusks diversity and abundance; mangrove mollusks along an environmental gradient at Teyab, Persian gulf

    NASA Astrophysics Data System (ADS)

    Azarmanesh, H.; Javanshir, A.

    2009-04-01

    Management of coastal environments requires understanding of ecological relationships among different habitats and their biotas. The mollusk diversity and density and the sedimentological properties of mangrove (Avicennia marina) stands at Teyab were compared across two seasons. The polluted area and the cleaner area showed clear separation on the basis of environmental characteristics and benthic mollusks. Numbers of mollusk taxa were generally larger at cleaner sites, and numbers of individuals of several taxa were also larger at other sites. The total number of individuals did not differ between the two seasons, largely due to the presence of large numbers of the mud-living gastropod Cerithium cingulata at the polluted sites. Differences in the mollusks were coincident with differences in the nature of the sediment: sediments in cleaner stands were more compacted and contained less organic matter and leaf litter. Analysis of sediment chemistry suggested that mangrove sediment at the cleaner sites was able to take up more N and P than that at the other sites. Key Words: Sustainable development, Impact, Gastropods, Bivalves, Persian Gulf

  9. Comparison of a large and small-calibre tube drain for managing spontaneous pneumothoraces.

    PubMed

    Benton, Ian J; Benfield, Grant F A

    2009-10-01

    To compare the treatment success of large- and small-bore chest drains in the treatment of spontaneous pneumothoraces, the case-notes were reviewed of patients admitted to our hospital with a total of 73 pneumothoraces, who were treated by trainee doctors of varying experience. Both a large- and a small-bore intercostal tube drain system were in use during the two-year period reviewed. Similar pneumothorax profiles and numbers treated with both drains were recorded, resulting in similar drain times and numbers of successful and failed re-expansions of pneumothoraces. Successful pneumothorax resolution was the same for both drain types, and the negligible tube drain complications observed with the small-bore drain reflected previously reported experiences. However, the large-bore drain was associated with a high complication rate (32%), with more infectious complications (24%). The small-bore drain was prone to displacement (21%). There was generally no evidence of increased failure and morbidity, reflecting poorer expertise, among the non-specialist trainees managing the pneumothoraces. A practical finding, however, was that in those large pneumothoraces where re-expansion failed, the tip of the drain had not been sited at the apex of the pleural cavity, irrespective of the drain type inserted.

  10. Large-scale motions in the universe: Using clusters of galaxies as tracers

    NASA Technical Reports Server (NTRS)

    Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard

    1995-01-01

    Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distributions of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace the large-scale velocity field well and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters, 689 +/- 178 km/s on an approximately 150/h Mpc scale (Lauer & Postman 1994), is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).

  11. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    PubMed

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formula for the power of a nested case-control design and the power of a case-cohort design is directly connected to the power of a cohort study using the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients using a sampling design instead of analyzing the full cohort.
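    The Schoenfeld formula referred to above can be evaluated directly. A minimal sketch for a binary covariate; the function name and default arguments are my own:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, p_exposed=0.5, alpha=0.05, power=0.8):
    """Schoenfeld's formula for the total number of events D needed to
    detect a hazard ratio HR for a binary covariate in a Cox model:
        D = (z_{1-alpha/2} + z_{power})^2 / (p (1 - p) (ln HR)^2),
    where p is the proportion of subjects in the exposed group."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) ** 2 / (
        p_exposed * (1 - p_exposed) * math.log(hazard_ratio) ** 2
    )

# Detecting HR = 2 with equal-sized groups at 80% power needs ~65 events.
d = schoenfeld_events(2.0)
```

    Because the required number of events, rather than the number of subjects, drives power, the same formula underlies the sampling-design comparisons: a design that retains most events retains most of the power.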

  12. Improvements in medical quality and patient safety through implementation of a case bundle management strategy in a large outpatient blood collection center.

    PubMed

    Zhao, Shuzhen; He, Lujia; Feng, Chenchen; He, Xiaoli

    2018-06-01

    Laboratory errors in a blood collection center (BCC) are most common in the preanalytical phase. It is, therefore, of vital importance for administrators to take measures to improve healthcare quality and patient safety. In 2015, a case bundle management strategy was applied in a large outpatient BCC to improve its medical quality and patient safety. Unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, patient complaints, and patient satisfaction were compared over the period from 2014 to 2016. The strategy reduced unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, and patient complaints, while improving patient satisfaction. This strategy was effective in improving BCC healthcare quality and patient safety.

  13. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  14. Reduction of the field-aligned potential drop in the polar cap during large geomagnetic storms

    NASA Astrophysics Data System (ADS)

    Kitamura, N.; Seki, K.; Nishimura, Y.; Hori, T.; Terada, N.; Ono, T.; Strangeway, R. J.

    2013-12-01

    We have studied photoelectron flows and the inferred field-aligned potential drop in the polar cap during five large geomagnetic storms that occurred in periods when photoelectron observations in the polar cap were available near the apogee of the FAST satellite (~4000 km) at solar maximum, and the footprint of the satellite paths in the polar cap was under sunlit conditions most of the time. In contrast to the ~20 V potential drop during geomagnetically quiet periods at solar maximum identified by Kitamura et al. [JGR, 2012], the field-aligned potential drop frequently became smaller than ~5 V during the main and early recovery phases of the large geomagnetic storms. Because the potential acts to inhibit photoelectron escape, this result indicates that the corresponding acceleration of ions by the field-aligned potential drop in the polar cap and the lobe region is smaller during the main and early recovery phases of large geomagnetic storms than during geomagnetically quiet periods. Under small field-aligned current conditions, the number flux of outflowing ions should be nearly equal to the net escaping electron number flux. Since ions with large flux originating from the cusp/cleft ionosphere convect into the polar cap during geomagnetic storms [e.g., Kitamura et al., JGR, 2010], the net escaping electron number flux should increase to balance the enhanced ion outflows. The magnitude of the field-aligned potential drop would be reduced to let a larger fraction of photoelectrons escape.

  15. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
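
    The core "sketching" idea described above can be illustrated with a generic toy problem: a random matrix compresses many observations into a few sketched rows, and the compressed least-squares solution stays close to the full one. This is a minimal Gaussian-sketch sketch in Python/NumPy (the RGA itself is in Julia and geostatistical); all sizes and names here are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 10_000, 50, 200          # many observations, few parameters, sketch size

    A = rng.standard_normal((m, n))    # hypothetical linear forward model
    x_true = rng.standard_normal(n)
    d = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations

    # Gaussian sketching matrix: compresses the m observations to k rows.
    S = rng.standard_normal((k, m)) / np.sqrt(k)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ d, rcond=None)
    x_full, *_ = np.linalg.lstsq(A, d, rcond=None)

    # The difference is small: the sketch preserves the least-squares solution
    # while the solve now costs O(k*n^2) instead of O(m*n^2).
    print(np.linalg.norm(x_sketch - x_full))
    ```

    The cost reduction is the point: the sketched system has k rows regardless of m, so the work scales with the chosen sketch size (the retained information content) rather than with the number of raw observations.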

  16. Genetic analysis of rice mutants responsible for narrow leaf phenotype and reduced vein number.

    PubMed

    Kubo, Fumika Clara; Yasui, Yukiko; Kumamaru, Toshihiro; Sato, Yutaka; Hirano, Hiro-Yuki

    2017-03-17

    Leaves are a major site for photosynthesis and a key determinant of plant architecture. Rice produces thin and slender leaves, which consist of the leaf blade and leaf sheath separated by the lamina joint. Two types of vasculature, the large and small vascular bundles, run in parallel, together with a strong structure, the midrib. In this paper, we examined the function of four genes that regulate the width of the leaf blade and the vein number: NARROW LEAF1 (NAL1), NAL2, NAL3 and NAL7. We backcrossed original mutants of these genes with the standard wild-type rice, Taichung 65. We then compared the effect of each mutation on similar genetic backgrounds and examined genetic interactions of these genes. The nal1 single mutation and the nal2 nal3 double mutation showed a severe effect on leaf width, resulting in very narrow leaves. Although vein number was also reduced in the nal1 and nal2 nal3 mutants, the small vein number was more strongly reduced than the large vein number. In contrast, the nal7 mutation showed a milder effect on leaf width and vein number, and both the large and small veins were similarly affected. Thus, the genes responsible for narrow leaf phenotype seem to play distinct roles. The nal7 mutation showed additive effects on both leaf width and vein number, when combined with the nal1 single or the nal2 nal3 double mutation. In addition, observations of inner tissues revealed that cell differentiation was partially compromised in the nal2 nal3 nal7 mutant, consistent with the severe reduction in leaf width in this triple mutant.

  17. In Vitro Evaluation of the Size, Knot Holding Capacity, and Knot Security of the Forwarder Knot Compared to Square and Surgeon's Knots Using Large Gauge Suture.

    PubMed

    Gillen, Alex M; Munsterman, Amelia S; Hanson, R Reid

    2016-11-01

    To investigate the strength, size, and holding capacity of the self-locking forwarder knot compared to surgeon's and square knots using large gauge suture. In vitro mechanical study. Knotted suture. Forwarder, surgeon's, and square knots were tested on a universal testing machine under linear tension using 2 and 3 USP polyglactin 910 and 2 USP polydioxanone. Knot holding capacity (KHC) and mode of failure were recorded and relative knot security (RKS) was calculated as a percentage of KHC. Knot volume and weight were assessed by digital micrometer and balance, respectively. ANOVA and post hoc testing were used to compare strength between number of throws, suture, suture size, and knot type. P<.05 was considered significant. Forwarder knots had a higher KHC and RKS than surgeon's or square knots for all suture types and numbers of throws. No forwarder knots unraveled, but a proportion of square and surgeon's knots with <6 throws did unravel. Forwarder knots had a smaller volume and weight than surgeon's and square knots with an equal number of throws. The forwarder knot of 4 throws using 3 USP polyglactin 910 had the highest KHC and RKS and the smallest size and weight. Forwarder knots may be an alternative for commencing continuous patterns in large gauge suture, without sacrificing knot integrity, but further in vivo and ex vivo testing is required to assess the effects of this sliding knot on tissue perfusion before clinical application. © Copyright 2016 by The American College of Veterinary Surgeons.

  18. Review of comparative LCAs of food waste management systems - Current status and potential improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstad, A., E-mail: anna.bernstad@chemeng.lth.se; Cour Jansen, J. la

    2012-12-15

    Highlights: • GHG emissions from different treatment alternatives vary widely in 25 reviewed comparative LCAs of bio-waste management. • System-boundary settings often vary widely among the reviewed studies. • Existing LCA guidelines give varying recommendations in relation to several key issues. - Abstract: Twenty-five comparative life cycle assessments (LCAs) addressing food waste treatment were reviewed, including the treatment alternatives landfill, thermal treatment, composting (small and large scale) and anaerobic digestion. The global warming potential related to these treatment alternatives varies widely amongst the studies. Large differences in the setting of system boundaries, methodological choices and variations in the input data used were seen between the studies. Also, a number of internal contradictions were identified, often resulting in biased comparisons between alternatives. Thus, the noted differences in global warming potential are found to result not from actual differences in the environmental impacts of the studied systems, but rather from differences in how the studies were performed. A number of key issues with a high impact on the overall global warming potential of different treatment alternatives for food waste were identified through one-way sensitivity analyses in relation to a previously performed LCA of food waste management. Assumptions related to the characteristics of the treated waste; losses and emissions of carbon, nutrients and other compounds during collection, storage and pretreatment; potential energy recovery through combustion; emissions from composting; emissions from storage and land use of bio-fertilizers and chemical fertilizers; and the eco-profiles of substituted goods were all identified as highly relevant to the outcomes of this type of comparison. As the use of LCA in this area is likely to increase in coming years, it is highly relevant to establish more detailed guidelines within this field in order to increase both the general quality of assessments and the potential for cross-study comparisons.

  19. Influences of Problem Format and SES on Preschoolers' Understanding of Approximate Addition

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Fuhs, Mary Wagner; Keultjes, M. Claire; Gibson, Matthew H.

    2011-01-01

    Recent studies suggest that 5-year-olds can add and compare large numerical quantities through approximate representations of number. However, the nature of this understanding and its susceptibility to environmental influences remain unclear. We examined whether children's early competence depends on the canonical problem format (i.e., arithmetic…

  20. Assaultive Behavior in State Psychiatric Hospitals: Differences Between Forensic and Nonforensic Patients

    ERIC Educational Resources Information Center

    Linhorst, Donald M.; Scott, Lisa Parker

    2004-01-01

    Forensic patients are occupying an increasingly large number of beds in state psychiatric hospitals. The presence of these mentally ill offenders has raised concerns about the risk they present to nonforensic patients. This study compared the rate of assaults and factors associated with assaultive behavior among 308 nonforensic patients and two…

  1. Silvicultural options for young-growth Douglas-fir forests: the Capitol Forest study—establishment and first results.

    Treesearch

    Robert O. Curtis; David D. Marshall; Dean S. DeBell

    2004-01-01

    This report describes the origin, design, establishment and measurement procedures and first results of a large long-term cooperative study comparing a number of widely different silvicultural regimes applied to young-growth Douglas-fir (Pseudotsuga menziesii) stands managed for multiple objectives. Regimes consist of (1) conventional clearcutting...

  2. Computing Integrated Ratings from Heterogeneous Phenotypic Assessments: A Case Study of Lettuce Postharvest Quality and Downy Mildew Resistance

    USDA-ARS?s Scientific Manuscript database

    Comparing performance of a large number of accessions simultaneously is not always possible. Typically, only subsets of all accessions are tested in separate trials with only some (or none) of the accessions overlapping between subsets. Using standard statistical approaches to combine data from such...

  3. First comprehensive inventory of a tropical site for a megadiverse group of insects, the true flies (Diptera)

    USDA-ARS?s Scientific Manuscript database

    Tropical insect biodiversity remains largely unknown for most research sites in the world, due to the overwhelming number of species and lack of focus by taxonomists. Single-site studies are necessary, however to establish baselines for comparative studies across space, and as the basis for ecologic...

  4. A Survey of the 1986 Canadian Library Systems Marketplace.

    ERIC Educational Resources Information Center

    Merilees, Bobbie

    1987-01-01

    This analysis of trends in the Canadian library systems marketplace in 1986 compares installations of large integrated systems and microcomputer-based systems by relative market share, and number of installations by type of library. Canadian vendors' sales in international markets are also analyzed, and a directory of vendors is provided. (Author/CLB)

  5. Development of laboratory testing facility for evaluation of base-soil behavior under repeated loading : phase 1 : feasibility study.

    DOT National Transportation Integrated Search

    2005-02-01

    Accelerated load testing of paved and unpaved roads is the application of a large number of load repetitions in a short period of time. This type of testing is an economic way to determine the behavior of roads and compare different materials, struct...

  6. Cardiac Reactivity and Stimulant Use in Adolescents with Autism Spectrum Disorders with Comorbid ADHD Versus ADHD

    ERIC Educational Resources Information Center

    Bink, M.; Popma, A.; Bongers, I. L.; van Boxtel, G. J. M.; Denissen, A.; van Nieuwenhuizen, Ch.

    2015-01-01

    A large number of youngsters with autism spectrum disorders (ASD) display comorbid attention deficit/hyperactivity disorder (ADHD) symptoms. However, previous studies are not conclusive whether psychophysiological correlates, like cardiac reactivity, are different for ASD with comorbid ADHD (ASD+) compared to ADHD. Therefore, the current study…

  7. Implementing Collaborative Learning across the Engineering Curriculum

    ERIC Educational Resources Information Center

    Ralston, Patricia A. S.; Tretter, Thomas R.; Kendall-Brown, Marie

    2017-01-01

    Active and collaborative teaching methods increase student learning, and it is broadly accepted that almost any active or collaborative approach will improve learning outcomes as compared to lecture. Yet, large numbers of faculty have not embraced these methods. Thus, the challenge to encourage evidence-based change in teaching is not only how to…

  8. Using experimental design and spatial analyses to improve the precision of NDVI estimates in upland cotton field trials

    USDA-ARS?s Scientific Manuscript database

    Controlling for spatial variability is important in high-throughput phenotyping studies that enable large numbers of genotypes to be evaluated across time and space. In the current study, we compared the efficacy of different experimental designs and spatial models in the analysis of canopy spectral...

  9. The Effects of Career Magnet Schools. IEE Brief Number 22.

    ERIC Educational Resources Information Center

    Crain, Robert L.; Allen, Anna; Little, Judith Warren; Sullivan, Debora; Thaler, Robert; Quigley, Denise; Zellman, Gail

    A research study compared graduates of career magnet programs to graduates of comprehensive high schools in a large metropolitan area. The career magnet programs studied are located either within regular comprehensive high schools or combined with other magnet programs to fill an entire building. Research was conducted through school records of…

  10. Social and Individual Frame Factors in L2 Learning: Comparative Aspects.

    ERIC Educational Resources Information Center

    Ekstrand, Lars H.

    A large number of factors are considered in their role in second language learning. Individual factors include language aptitude, personality, attitudes and motivation, and the role of the speaker's native language. Teacher factors involve the method of instruction, the sex of the teacher, and a teacher's training and competence, while…

  11. A Self-Determination Perspective on Chinese Fifth-Graders' Task Disengagement

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2017-01-01

    Engagement in academic tasks is important. However, compared to the large body of research on task engagement, the number of studies on task disengagement is quite limited. The aim of this study is to examine the associations between the motivational (self-determination) and attitudinal antecedents (learning orientations) of task disengagement.…

  12. Testing of transition-region models: Test cases and data

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Dinavahi, Surya; Iyer, Venkit

    1991-01-01

    Mean flow quantities in the laminar-turbulent transition region and in the fully turbulent region are predicted with different models incorporated into a 3-D boundary-layer code. The predicted quantities are compared with experimental data for a large number of different flows, and the suitability of the models for each flow is evaluated.

  13. Effects of Camera Arrangement on Perceptual-Motor Performance in Minimally Invasive Surgery

    ERIC Educational Resources Information Center

    Delucia, Patricia R.; Griswold, John A.

    2011-01-01

    Minimally invasive surgery (MIS) is performed for a growing number of treatments. Whereas open surgery requires large incisions, MIS relies on small incisions through which instruments are inserted and tissues are visualized with a camera. MIS results in benefits for patients compared with open surgery, but degrades the surgeon's perceptual-motor…

  14. Civic Journalism and Nonelite Sourcing: Making Routine Newswork of Community Connectedness.

    ERIC Educational Resources Information Center

    Massey, Brian L.

    1998-01-01

    Compares the number of "average" citizens brought into the news in three newspapers. Finds nonelite information sources in numerical parity with elite sources in a civic-journalism newspaper, but finds the frequency and directness of their news voices largely unchanged. Finds that routine civic journalism did more to tone down elites'…

  15. Developmental Stuttering in Children Who Are Hard of Hearing

    ERIC Educational Resources Information Center

    Arena, Richard M.; Walker, Elizabeth A.; Oleson, Jacob J.

    2017-01-01

    Purpose: A number of studies with large sample sizes have reported lower prevalence of stuttering in children with significant hearing loss compared to children without hearing loss. This study used a parent questionnaire to investigate the characteristics of stuttering (e.g., incidence, prevalence, and age of onset) in children who are hard of…

  16. Use of the disease severity index for null hypothesis testing

    USDA-ARS?s Scientific Manuscript database

    A disease severity index (DSI) is a single number for summarizing a large amount of disease severity information. It is used to indicate relative resistance of cultivars, to relate disease severity to yield loss, or to compare treatments. The DSI has most often been based on a special type of ordina...
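
    The abstract above describes the DSI as a single number summarizing ordinal severity ratings; a common way such an index is computed (shown here as an illustrative sketch, since the abstract is truncated before its specific formula) is the sum of per-plant class ratings expressed as a percentage of the maximum possible score:

    ```python
    def disease_severity_index(ratings, max_class):
        """DSI (%) from ordinal severity ratings on a 0..max_class scale.
        Illustrative formula: 100 * (sum of ratings) / (n * max_class)."""
        return 100 * sum(ratings) / (len(ratings) * max_class)

    # Five plants rated on a hypothetical 0-4 ordinal scale:
    print(disease_severity_index([0, 1, 3, 4, 2], max_class=4))  # 50.0
    ```

    Because the DSI collapses an ordinal scale into one percentage, treatments with different rating distributions can share a DSI value, which is the statistical subtlety the paper's null-hypothesis-testing discussion concerns.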

  17. The single-scattering properties of black carbon aggregates determined from the geometric-optics surface-wave approach and the T-matrix method

    NASA Astrophysics Data System (ADS)

    Takano, Y.; Liou, K. N.; Kahnert, M.; Yang, P.

    2013-08-01

    The single-scattering properties of eight black carbon (BC, soot) fractal aggregates, composed of 7 to 600 primary spheres, computed by the geometric-optics surface-wave (GOS) approach coupled with the Rayleigh-Gans-Debye (RGD) adjustment for size parameters smaller than approximately 2, are compared with those determined from the superposition T-matrix method. We show that under the condition of random orientation, the results from GOS/RGD are in general agreement with those from the T-matrix method in terms of the extinction and absorption cross-sections, the single-scattering co-albedo, and the asymmetry factor. When compared with the specific absorption (m²/g) measured in the laboratory, we illustrate that using the observed radii of primary spheres ranging from 3.3 to 25 nm, the theoretical values determined from GOS/RGD for primary sphere numbers of 100-600 are within the range of measured values. The GOS approach can be effectively applied to aggregates composed of a large number of primary spheres (e.g., >6000) and large size parameters (≫2) in terms of computational effort.

  18. Identification of linearised RMS-voltage dip patterns based on clustering in renewable plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, Edward

    Generation units connected to the grid are currently required to meet low-voltage ride-through (LVRT) requirements. In most developed countries, these requirements also apply to renewable sources, mainly wind power plants and photovoltaic installations connected to the grid. This study proposes an alternative characterisation solution to classify and visualise a large number of collected events in light of current limits and requirements. The authors' approach is based on linearised root-mean-square (RMS) voltage trajectories, taking into account LVRT requirements, and a clustering process to identify the most likely pattern trajectories. The proposed solution gives extensive information on an event's severity by providing a simple but complete visualisation of the linearised RMS-voltage patterns. In addition, these patterns are compared to current LVRT requirements to determine similarities or discrepancies. A large number of collected events can then be automatically classified and visualised for comparative purposes. Real disturbances collected from renewable sources in Spain are used to assess the proposed solution. Extensive results and discussions are also included in this study.
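
    The clustering step described above can be sketched on synthetic data: each voltage-dip event becomes a short vector of per-unit RMS samples, and a plain k-means loop recovers the dominant dip patterns. This is a generic two-cluster toy, not the authors' algorithm; the dip depths, vector length, and initialization are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical linearised RMS-voltage dip trajectories (per-unit):
    # 20 shallow dips around 0.8 p.u. and 20 deep dips around 0.3 p.u.
    shallow = 0.8 + 0.02 * rng.standard_normal((20, 5))
    deep = 0.3 + 0.02 * rng.standard_normal((20, 5))
    events = np.vstack([shallow, deep])

    # Minimal k-means with 2 clusters; seed one center in each group.
    centers = np.vstack([events[0], events[-1]])
    for _ in range(10):
        dist = np.linalg.norm(events[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        centers = np.array([events[labels == k].mean(axis=0) for k in range(2)])

    # The two recovered pattern depths, ≈ [0.3, 0.8] p.u.
    print(np.sort(centers.mean(axis=1)))
    ```

    In the real application the cluster centroids play the role of the "most likely pattern trajectories", which can then be overlaid on the LVRT limit curve for comparison.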

  19. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Kinetic Boltzmann approach adapted for modeling highly ionized matter created by x-ray irradiation of a solid.

    PubMed

    Ziaja, Beata; Saxena, Vikrant; Son, Sang-Kil; Medvedev, Nikita; Barbrel, Benjamin; Woloncewicz, Bianca; Stransky, Michal

    2016-05-01

    We report on the kinetic Boltzmann approach adapted for simulations of highly ionized matter created from a solid by its x-ray irradiation. X rays can excite inner-shell electrons, which leads to the creation of deeply lying core holes. Their relaxation, especially in heavier elements, can take complicated paths, leading to a large number of active configurations. Their number can be so large that solving the set of respective evolution equations becomes computationally inefficient and another modeling approach should be used instead. To circumvent this complexity, the commonly used continuum models employ a superconfiguration scheme. Here, we propose an alternative approach which still uses "true" atomic configurations but limits their number by restricting the sample relaxation to the predominant relaxation paths. We test its reliability, performing respective calculations for a bulk material consisting of light atoms and comparing the results with a full calculation including all relaxation paths. Prospective application for heavy elements is discussed.

  1. Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  2. Cruise noise of the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  3. Reproductive potential of Spodoptera eridania (Stoll) (Lepidoptera: Noctuidae) in the laboratory: effect of multiple couples and the size.

    PubMed

    Specht, A; Montezano, D G; Sosa-Gómez, D R; Paula-Moraes, S V; Roque-Specht, V F; Barros, N M

    2016-06-01

    This study aimed to evaluate the effect of keeping three couples in the same cage, and of the size of adults that emerged from small, medium-sized and large pupae (278.67 mg, 333.20 mg and 381.58 mg, respectively), on the reproductive potential of S. eridania (Stoll, 1782) adults under controlled conditions (25 ± 1 °C, 70% RH and a 14-hour photophase). We evaluated the survival, number of copulations, fecundity and fertility of the adult females. The survival of females from these different pupal sizes did not differ statistically, but the survival of males from large pupae was statistically shorter than that of males from small pupae. Fecundity differed significantly and correlated positively with size. The number of effective copulations (spermatophores) and fertility did not vary significantly with pupal size. Our results emphasize the importance of reporting the number of copulations and the size of the insects when reproductive parameters are compared.

  4. Space Construction System Analysis. Special Emphasis Studies

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Generic concepts were analyzed to determine: (1) the maximum size of a deployable solar array which might be packaged into a single orbiter payload bay; (2) the optimal overall shape of a large erectable structure for large satellite projects; (3) the optimization of electronic communication, with emphasis on the number of antennas and their diameters; and (4) the number of beams, traffic growth projections, and frequencies. It was found feasible to package a deployable solar array which could generate over 250 kilowatts of electrical power. Also, it was found that the linear-shaped erectable structure is better for ease of construction and installation of systems, and compares favorably on several other counts. The study of electronic communication technology indicated that proliferation of individual satellites will crowd the spectrum by the early 1990's, so that there will be a strong tendency toward a small number of communications platforms over the continental U.S.A. with many antennas and multiple spot beams.

  5. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
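
    The map/shuffle/reduce structure that lets Hydra distribute spectrum-database matching can be illustrated with a pure-Python toy. This is not Hadoop and not the K-score algorithm; the "masses", the naive mass-error score, and all names below are hypothetical, chosen only to show how the workload partitions.

    ```python
    from collections import defaultdict

    # Hypothetical spectrum and peptide masses (the real search matches
    # full spectra against sequences and modifications, not single numbers).
    spectra = {"s1": 8.0, "s2": 3.0}
    peptides = {"PEP": 7.9, "TIDE": 3.1}

    def mapper(item):
        """Map step: score one spectrum against every candidate peptide,
        emitting (spectrum_id, (peptide, score)) pairs."""
        sid, mass = item
        return [(sid, (p, -abs(mass - pm))) for p, pm in peptides.items()]

    # Shuffle step: group emitted pairs by spectrum id.
    shuffled = defaultdict(list)
    for item in spectra.items():
        for key, value in mapper(item):
            shuffled[key].append(value)

    # Reduce step: keep the best-scoring peptide per spectrum.
    best = {sid: max(hits, key=lambda h: h[1]) for sid, hits in shuffled.items()}
    print({sid: hit[0] for sid, hit in best.items()})  # {'s1': 'PEP', 's2': 'TIDE'}
    ```

    The scalability claim in the abstract follows from this shape: map tasks are independent per spectrum, so adding cluster nodes adds map capacity, and the shuffle/reduce stages aggregate without any task needing the whole database in memory.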

  6. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass-spectrometry-based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.

  7. Implicit Large Eddy Simulation of a wingtip vortex at Re_c = 1.2×10^6

    NASA Astrophysics Data System (ADS)

    Lombard, Jean-Eloi; Moxey, Dave; Sherwin, Spencer; SherwinLab Team

    2015-11-01

    We present recent developments in numerical methods for performing a Large Eddy Simulation (LES) of the formation and evolution of a wingtip vortex. The development of these vortices in the near wake, in combination with the large Reynolds numbers present in these cases, makes this type of test case particularly challenging to investigate numerically. To demonstrate the method's viability, we present results from numerical simulations of flow over a NACA 0012 profile wingtip at Re_c = 1.2×10^6 and compare them against experimental data; this is, to date, the highest Reynolds number achieved for an LES that has been correlated with experiments for this test case. Our model correlates favorably with experiment, both for the characteristic jetting in the primary vortex and the pressure distribution on the wing surface. The proposed method is of general interest for the modeling of transitioning vortex-dominated flows over complex geometries. McLaren Racing/Royal Academy of Engineering Research Chair.

  8. Gene selection and cancer type classification of diffuse large-B-cell lymphoma using a bivariate mixture model for two-species data.

    PubMed

    Su, Yuhua; Nielsen, Dahlia; Zhu, Lei; Richards, Kristy; Suter, Steven; Breen, Matthew; Motsinger-Reif, Alison; Osborne, Jason

    2013-01-05

    A bivariate mixture model utilizing information across two species was proposed to solve the fundamental problem of identifying differentially expressed genes in microarray experiments. The model's utility was illustrated using a dog and human lymphoma data set prepared by a group of scientists in the College of Veterinary Medicine at North Carolina State University. A small number of genes were identified as being differentially expressed in both species, and the human genes in this cluster serve as a good predictor for classifying diffuse large-B-cell lymphoma (DLBCL) patients into two subgroups: germinal center B-cell-like DLBCL and activated B-cell-like DLBCL. The number of human genes observed to be significantly differentially expressed (21) in the two-species analysis was very small compared to the number of human genes (190) identified with the one-species analysis (human data alone). These genes may be clinically relevant/important, as this small set achieved low misclassification rates for DLBCL subtypes. Additionally, the two subgroups defined by this cluster of human genes had significantly different survival functions, indicating that the stratification based on gene-expression profiling using the proposed mixture model provided improved insight into the clinical differences between the two cancer subtypes.
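The mixture-model idea behind the gene selection can be illustrated with a deliberately simplified sketch: a univariate two-component Gaussian mixture fit by EM (the paper's model is bivariate across the two species, and all numbers below are made up). Genes whose posterior probability of belonging to the second component exceeds 0.5 are flagged as "differentially expressed".

```python
import math

def npdf(x, mu, sd):
    """Normal density."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_gaussians(xs, iters=100):
    """Plain EM for a two-component 1-D Gaussian mixture; returns the
    posterior probability that each point belongs to component 2."""
    mu1, mu2 = min(xs), max(xs)
    sd1 = sd2 = (mu2 - mu1) / 4 or 1.0
    w = 0.5  # mixing weight of component 2 ("differential")
    for _ in range(iters):
        # E-step: responsibility of component 2 for each point
        r = [w * npdf(x, mu2, sd2) /
             (w * npdf(x, mu2, sd2) + (1 - w) * npdf(x, mu1, sd1))
             for x in xs]
        # M-step: update weights, means and (clamped) std deviations
        n2 = sum(r); n1 = len(xs) - n2
        mu1 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n1
        mu2 = sum(ri * x for ri, x in zip(r, xs)) / n2
        sd1 = max(1e-3, math.sqrt(sum((1 - ri) * (x - mu1) ** 2
                                      for ri, x in zip(r, xs)) / n1))
        sd2 = max(1e-3, math.sqrt(sum(ri * (x - mu2) ** 2
                                      for ri, x in zip(r, xs)) / n2))
        w = n2 / len(xs)
    return r

# Toy expression scores: most genes near 0, three clearly shifted
scores = [0.1, -0.2, 0.0, 0.3, -0.1, 4.8, 5.1, 5.0]
post = em_two_gaussians(scores)
flagged = [i for i, p in enumerate(post) if p > 0.5]
print(flagged)  # → [5, 6, 7]
```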

  9. Assessment of large copy number variants in patients with apparently isolated congenital left-sided cardiac lesions reveals clinically relevant genomic events.

    PubMed

    Hanchard, Neil A; Umana, Luis A; D'Alessandro, Lisa; Azamian, Mahshid; Poopola, Mojisola; Morris, Shaine A; Fernbach, Susan; Lalani, Seema R; Towbin, Jeffrey A; Zender, Gloria A; Fitzgerald-Butt, Sara; Garg, Vidu; Bowman, Jessica; Zapata, Gladys; Hernandez, Patricia; Arrington, Cammon B; Furthner, Dieter; Prakash, Siddharth K; Bowles, Neil E; McBride, Kim L; Belmont, John W

    2017-08-01

    Congenital left-sided cardiac lesions (LSLs) are a significant contributor to the mortality and morbidity of congenital heart disease (CHD). Structural copy number variants (CNVs) have been implicated in LSL without extra-cardiac features; however, non-penetrance and variable expressivity have created uncertainty over the use of CNV analyses in such patients. High-density SNP microarray genotyping data were used to infer large, likely-pathogenic, autosomal CNVs in a cohort of 1,139 probands with LSL and their families. CNVs were molecularly confirmed and the medical records of individual carriers reviewed. The gene content of novel CNVs was then compared with public CNV data from CHD patients. Large CNVs (>1 MB) were observed in 33 probands (∼3%). Six of these were de novo and 14 were not observed in the only available parent sample. Associated cardiac phenotypes spanned a broad spectrum without clear predilection. Candidate CNVs were largely non-recurrent, associated with heterozygous loss of copy number, and overlapped known CHD genomic regions. Novel CNV regions were enriched for cardiac development genes, including seven that have not been previously associated with human CHD. CNV analysis can be a clinically useful and molecularly informative tool in LSLs without obvious extra-cardiac defects, and may identify a clinically relevant genomic disorder in a small but important proportion of these individuals. © 2017 Wiley Periodicals, Inc.

  10. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, together with keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0×10^-3, where actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
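Since the abstract states the computational load is inversely proportional to the Capillary number, the saving implied by raising Ca from the upper end of the operating range to the validated limit is simple arithmetic (illustrative only):

```python
# Load ~ 1/Ca, so the speed-up is the ratio of the Capillary numbers.
Ca_actual = 1e-5     # upper end of the quoted operating range 10^-5..10^-8
Ca_limit = 3.0e-3    # maximum Ca shown to preserve the flow pattern
speedup = Ca_limit / Ca_actual
print(speedup)       # a factor of about 300
```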

  11. LES-ODT Simulations of Turbulent Reacting Shear Layers

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas; Echekki, Tarek

    2012-11-01

    Large-eddy simulations (LES) combined with the one-dimensional turbulence (ODT) simulations of a spatially developing turbulent reacting shear layer with heat release and high Reynolds numbers were conducted and compared to results from direct numerical simulations (DNS) of the same configuration. The LES-ODT approach is based on LES solutions for momentum on a coarse grid and solutions for momentum and reactive scalars on a fine ODT grid, which is embedded in the LES computational domain. The shear layer is simulated with a single-step, second-order reaction with an Arrhenius reaction rate. The transport equations are solved using a low Mach number approximation. The LES-ODT simulations yield reasonably accurate predictions of turbulence and passive/reactive scalars' statistics compared to DNS results.

  12. Comparative analysis of geological features and seasonal processes in "Inca City" and "Pityusa Patera" regions on Mars

    NASA Astrophysics Data System (ADS)

    Manrubia, S. C.; Prieto Ballesteros, O.; González Kessler, C.; Fernández Remolar, D.; Córdoba-Jabonero, C.; Selsis, F.; Bérczi, S.; Gánti, T.; Horváth, A.; Sik, A.; Szathmáry, E.

    2004-03-01

    We carry out a comparative analysis of the morphological and seasonal features of two regions in the Martian Southern Polar Region: Inca City (82S 65W) and the Pityusa Patera zone (66S 37E). These two sites are representative of a large number of areas that are subjected to dynamical, seasonal processes which deeply modify the local conditions of those regions. Due to variations in sunlight, seasonal CO2 accumulates during autumn and winter and starts defrosting in spring. By mid-summer the seasonal ice has disappeared. Despite a number of relevant differences in the morphology of the seasonal features observed, they seem to result from similar processes.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, P.D.

    Intercontinental ballistic missiles based in silos have become relatively vulnerable, at least in theory, to counter-force attacks. This theoretical vulnerability may not, in fact, be a serious practical concern; it is nonetheless troubling both to policy-makers and to the public. Furthermore, the present generation of ICBMs is aging (the Minuteman II single-warhead missile will exceed its operational life-span early in the next decade) and significant restructuring of the ballistic missile force may well be necessary if a Strategic Arms Reduction Treaty (START) is signed. This paper compares several proposed schemes for modernizing the ICBM force. Because the rail-garrison MIRVed mobile system is the least costly alternative to secure a large number of strategic warheads, it receives a comparatively large amount of attention.

  14. Simulating spin models on GPU

    NASA Astrophysics Data System (ADS)

    Weigel, Martin

    2011-09-01

    Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating spin models, such as the Ising model, on GPU as compared to conventional simulations on CPU.
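The kind of parallelism referred to above can be made concrete with the checkerboard Metropolis update for the 2-D Ising model: sites of one sublattice color interact only with the other color, so an entire color can be updated simultaneously, which maps directly onto GPU threads. The NumPy sketch below is an illustrative CPU stand-in for such a kernel, not the author's implementation.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2-D Ising model using the checkerboard
    decomposition: all sites of one color are updated at once, which is
    exactly the data parallelism a GPU kernel exploits."""
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        # sum of the four nearest neighbours (periodic boundaries)
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2 * spins * nb  # energy cost of flipping each spin
        accept = (dE <= 0) | (rng.random(spins.shape) < np.exp(-beta * dE))
        spins[mask & accept] *= -1
    return spins

rng = np.random.default_rng(0)
spins = np.ones((32, 32), dtype=np.int64)         # start fully ordered
for _ in range(100):
    checkerboard_sweep(spins, beta=0.8, rng=rng)  # well below T_c
print(abs(spins.mean()))                          # stays close to 1
```

On a GPU each thread would own one site of the active color; the NumPy masked update plays that role here.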

  15. Identification of both copy number variation-type and constant-type core elements in a large segmental duplication region of the mouse genome

    PubMed Central

    2013-01-01

    Background Copy number variation (CNV), an important source of diversity in genomic structure, is frequently found in clusters called CNV regions (CNVRs). CNVRs are strongly associated with segmental duplications (SDs), but the composition of these complex repetitive structures remains unclear. Results We conducted self-comparative-plot analysis of all mouse chromosomes using the high-speed and large-scale-homology search algorithm SHEAP. For eight chromosomes, we identified various types of large SD as tartan-checked patterns within the self-comparative plots. A complex arrangement of diagonal split lines in the self-comparative-plots indicated the presence of large homologous repetitive sequences. We focused on one SD on chromosome 13 (SD13M), and developed SHEPHERD, a stepwise ab initio method, to extract longer repetitive elements and to characterize repetitive structures in this region. Analysis using SHEPHERD showed the existence of 60 core elements, which were expected to be the basic units that form SDs within the repetitive structure of SD13M. The demonstration that sequences homologous to the core elements (>70% homology) covered approximately 90% of the SD13M region indicated that our method can characterize the repetitive structure of SD13M effectively. Core elements were composed largely of fragmented repeats of a previously identified type, such as long interspersed nuclear elements (LINEs), together with partial genic regions. Comparative genome hybridization array analysis showed that whereas 42 core elements were components of CNVR that varied among mouse strains, 8 did not vary among strains (constant type), and the status of the others could not be determined. The CNV-type core elements contained significantly larger proportions of long terminal repeat (LTR) types of retrotransposon than the constant-type core elements, which had no CNV. 
The higher divergence rates observed in the CNV-type core elements than in the constant type indicate that the CNV-type core elements have a longer evolutionary history than constant-type core elements in SD13M. Conclusions Our methodology for the identification of repetitive core sequences simplifies characterization of the structures of large SDs and detailed analysis of CNV. The results of detailed structural and quantitative analyses in this study might help to elucidate the biological role of one of the SDs on chromosome 13. PMID:23834397

  16. Diversification dynamics of rhynchostomatian ciliates: the impact of seven intrinsic traits on speciation and extinction in a microbial group.

    PubMed

    Vďačný, Peter; Rajter, Ľubomír; Shazib, Shahed Uddin Ahmed; Jang, Seok Won; Shin, Mann Kyoon

    2017-08-30

    Ciliates are a suitable microbial model to investigate trait-dependent diversification because of their comparatively complex morphology and high diversity. We examined the impact of seven intrinsic traits on speciation, extinction, and net-diversification of rhynchostomatians, a group of comparatively large, predatory ciliates with proboscis carrying a dorsal brush (sensoric structure) and toxicysts (organelles used to kill the prey). Bayesian estimates under the binary-state speciation and extinction model indicate that two types of extrusomes and two-rowed dorsal brush raise diversification through decreasing extinction. On the other hand, the higher number of contractile vacuoles and their dorsal location likely increase diversification via elevating speciation rate. Particular nuclear characteristics, however, do not significantly differ in their diversification rates and hence lineages with various macronuclear patterns and number of micronuclei have similar probabilities to generate new species. Likelihood-based quantitative state diversification analyses suggest that rhynchostomatians conform to Cope's rule in that their diversity linearly grows with increasing body length and relative length of the proboscis. Comparison with other litostomatean ciliates indicates that rhynchostomatians are not among the cladogenically most successful lineages and their survival over several hundred million years could be associated with their comparatively large and complex bodies that reduce the risk of extinction.

  17. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    NASA Astrophysics Data System (ADS)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021 for a range of Rayleigh numbers of 3×10^5 ≤ Ra ≤ 4×10^8. The momentum transfer response to these system parameters generates a fluid flow in the closed cell with a macroscopic flow Reynolds number that takes values in the range 1.8×10^3 ≤ Re ≤ 4.6×10^4. It is shown that the viscous boundary layers in particular, for the largest Ra, are highly transitional and obey some properties that are directly comparable to transitional channel flows at friction Reynolds numbers below 100. This work is supported by the Deutsche Forschungsgemeinschaft.
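Assuming the endpoints of the quoted Re range correspond to the endpoints of the quoted Ra range (an assumption, not stated in the abstract), the effective scaling exponent γ in Re ~ Ra^γ implied by these numbers can be read off:

```python
import math

# Parameter ranges quoted in the abstract
Ra_lo, Ra_hi = 3e5, 4e8
Re_lo, Re_hi = 1.8e3, 4.6e4

# Effective exponent, assuming the extreme Re values occur at the extreme Ra
gamma = math.log(Re_hi / Re_lo) / math.log(Ra_hi / Ra_lo)
print(round(gamma, 2))  # → 0.45
```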

  18. LES of High-Reynolds-Number Coanda Flow Separating from a Rounded Trailing Edge of a Circulation Control Airfoil

    NASA Technical Reports Server (NTRS)

    Nichino, Takafumi; Hahn, Seonghyeon; Shariff, Karim

    2010-01-01

    This slide presentation reviews the Large Eddy Simulation of a high-Reynolds-number Coanda flow separating from the rounded trailing edge of a circulation control airfoil. The objectives of the study are: (1) to investigate the detailed physics (flow structures and statistics) of the fully turbulent Coanda jet applied to a CC airfoil, by using LES; and (2) to compare LES and RANS results to determine how to improve the performance of existing RANS models for this type of flow.

  19. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    PubMed

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine than in the small intestine, and in fact, our data suggest eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). 
We demonstrate for the first time that there are regional differences in the requirement of eosinophils for maintaining IgA+ cells between the large and small intestine, which are more pronounced during inflammation. This is an important step towards further delineation of the enigmatic functions of gut-resident eosinophils.

  20. Optimal placement of tuning masses on truss structures by genetic algorithms

    NASA Technical Reports Server (NTRS)

    Ponslet, Eric; Haftka, Raphael T.; Cudney, Harley H.

    1993-01-01

    Optimal placement of tuning masses, actuators, and other peripherals on large space structures is a combinatorial optimization problem. This paper surveys several techniques for solving this problem. The genetic algorithm approach to the solution of the placement problem is described in detail. An example of minimizing the difference between the two lowest frequencies of a laboratory truss by adding tuning masses is used to demonstrate some of the advantages of genetic algorithms. The relative efficiencies of different codings are compared using the results of a large number of optimization runs.
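A minimal sketch of the genetic-algorithm placement idea, on a made-up spring-mass chain rather than the laboratory truss (all parameters hypothetical): bitstrings encode which nodes receive a tuning mass, and the fitness to minimize is the gap between the two lowest natural frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8        # candidate nodes on a toy fixed-fixed spring-mass chain
M_ADD = 0.5  # tuning mass added at each selected node

def stiffness(n, k=1.0):
    """Tridiagonal stiffness matrix of a fixed-fixed spring chain."""
    return 2 * k * np.eye(n) - k * (np.eye(n, k=1) + np.eye(n, k=-1))

def gap(bits):
    """Gap between the two lowest natural frequencies after placing
    tuning masses at the selected nodes (the quantity to minimize)."""
    m = 1.0 + M_ADD * np.asarray(bits, float)
    d = np.diag(1.0 / np.sqrt(m))            # symmetrized M^-1 K
    w2 = np.linalg.eigvalsh(d @ stiffness(N) @ d)
    w = np.sqrt(np.clip(w2, 0.0, None))
    return w[1] - w[0]

def ga(pop_size=30, gens=40, pmut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, N))
    best_fit, best = np.inf, None
    for _ in range(gens):
        fit = np.array([gap(ind) for ind in pop])
        i = int(fit.argmin())
        if fit[i] < best_fit:                # track the best ever seen
            best_fit, best = fit[i], pop[i].copy()
        # tournament selection: the lower gap wins
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for j in range(0, pop_size - 1, 2):
            c = rng.integers(1, N)
            children[j, c:] = parents[j + 1, c:]
            children[j + 1, c:] = parents[j, c:]
        # bit-flip mutation
        flip = rng.random(children.shape) < pmut
        children[flip] ^= 1
        pop = children
    return best, best_fit

best, best_gap = ga()
print(best, round(best_gap, 4))
```

With only 2^8 = 256 possible placements this toy problem could be enumerated; the GA machinery (selection, crossover, mutation) is what the paper applies at truss scale, where enumeration is infeasible.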

  1. Impact of operator experience and training strategy on procedural outcomes with leadless pacing: Insights from the Micra Transcatheter Pacing Study.

    PubMed

    El-Chami, Mikhael; Kowal, Robert C; Soejima, Kyoko; Ritter, Philippe; Duray, Gabor Z; Neuzil, Petr; Mont, Lluis; Kypta, Alexander; Sagi, Venkata; Hudnall, John Harrison; Stromberg, Kurt; Reynolds, Dwight

    2017-07-01

    Leadless pacemaker systems have been designed to avoid the need for a pocket and transvenous lead. However, delivery of this therapy requires a new catheter-based procedure. This study evaluates the role of operator experience and different training strategies on procedural outcomes. A total of 726 patients underwent implant attempt with the Micra transcatheter pacing system (TPS; Medtronic, Minneapolis, MN, USA) by 94 operators trained in a teaching laboratory using a simulator, cadaver, and large animal models (lab training) or locally at the hospital with simulator/demo model and proctorship (hospital training). Procedure success, procedure duration, fluoroscopy time, and safety outcomes were compared between training methods and experience (implant case number). The Micra TPS procedure was successful in 99.2% of attempts and did not differ between the 55 operators trained in the lab setting and the 39 operators trained locally at the hospital (P = 0.189). Implant case number was also not a determinant of procedural success (P = 0.456). Each operator performed between one and 55 procedures. Procedure time and fluoroscopy duration decreased by 2.0% (P = 0.002) and 3.2% (P < 0.001) compared to the previous case. Major complication rate and pericardial effusion rate were not associated with case number (P = 0.755 and P = 0.620, respectively). There were no differences in the safety outcomes by training method. Among a large group of operators, implantation success was high regardless of experience. While procedure duration and fluoroscopy times decreased with implant number, complications were low and not associated with case number. Procedure and safety outcomes were similar between distinct training methodologies. © 2017 Wiley Periodicals, Inc.

  2. Efficient Manufacturing of Therapeutic Mesenchymal Stromal Cells Using the Quantum Cell Expansion System

    PubMed Central

    Hanley, Patrick J.; Mei, Zhuyong; Durett, April G.; Cabreira-Harrison, Marie da Graca; Klis, Mariola; Li, Wei; Zhao, Yali; Yang, Bing; Parsha, Kaushik; Mir, Osman; Vahidy, Farhaan; Bloom, Debra; Rice, R. Brent; Hematti, Peiman; Savitz, Sean I; Gee, Adrian P.

    2014-01-01

    Background The use of bone marrow-derived mesenchymal stromal cells (MSCs) as a cellular therapy for various diseases, such as graft-versus-host disease, diabetes, ischemic cardiomyopathy, and Crohn's disease, has produced promising results in early-phase clinical trials. However, for widespread application and use in later-phase studies, manufacture of these cells needs to be cost effective, safe, and reproducible. Current methods of manufacturing in flasks or cell factories are labor-intensive, involve a large number of open procedures, and require prolonged culture times. Methods We evaluated the Quantum Cell Expansion system for the expansion of large numbers of MSCs from unprocessed bone marrow in a functionally closed system and compared the results to a flask-based method currently in clinical trials. Results After only two passages, we were able to reproducibly expand a mean of 6.6×10^8 MSCs from 25 mL of bone marrow. The mean expansion time was 21 days, and the cells obtained were able to differentiate into all three lineages: chondrocytes, osteoblasts, and adipocytes. The Quantum was able to generate the target cell number of 2.0×10^8 cells in an average of 9 fewer days and in half the number of passages required during flask-based expansion. We estimated the Quantum would involve 133 open procedures versus 54,400 in flasks when manufacturing for a clinical trial. Quantum-expanded MSCs infused into an ischemic stroke rat model were therapeutically active. Discussion The Quantum is a novel method of generating high numbers of MSCs in less time and at lower passage numbers when compared to flasks. In the Quantum, the risk of contamination is substantially reduced due to the substantial decrease in open procedures. PMID:24726657

  3. Estimating Sea Surface Temperature Measurement Methods Using Characteristic Differences in the Diurnal Cycle

    NASA Astrophysics Data System (ADS)

    Carella, G.; Kennedy, J. J.; Berry, D. I.; Hirahara, S.; Merchant, C. J.; Morak-Bozzo, S.; Kent, E. C.

    2018-01-01

    Lack of reliable observational metadata represents a key barrier to understanding sea surface temperature (SST) measurement biases, a large contributor to uncertainty in the global surface record. We present a method to identify SST measurement practice by comparing the observed SST diurnal cycle from individual ships with a reference from drifting buoys under similar conditions of wind and solar radiation. Compared to existing estimates, we found a larger number of engine room-intake (ERI) reports post-World War II and in the period 1960-1980. Differences in the inferred mixture of observations lead to a systematic warmer shift of the bias adjusted SST anomalies from 1980 compared to previous estimates, while reducing the ensemble spread. Changes in mean field differences between bucket and ERI SST anomalies in the Northern Hemisphere over the period 1955-1995 could be as large as 0.5°C and are not well reproduced by current bias adjustment models.

  4. Combined treatment of rapamycin and dietary restriction has a larger effect on the transcriptome and metabolome of liver.

    PubMed

    Fok, Wilson C; Bokov, Alex; Gelfond, Jonathan; Yu, Zhen; Zhang, Yiqiang; Doderer, Mark; Chen, Yidong; Javors, Martin; Wood, William H; Zhang, Yongqing; Becker, Kevin G; Richardson, Arlan; Pérez, Viviana I

    2014-04-01

    Rapamycin (Rapa) and dietary restriction (DR) have consistently been shown to increase lifespan. To investigate whether Rapa and DR affect similar pathways in mice, we compared the effects of feeding mice ad libitum (AL), Rapa, DR, or a combination of Rapa and DR (Rapa + DR) on the transcriptome and metabolome of the liver. The principal component analysis shows that Rapa and DR are distinct groups. Over 2500 genes are significantly changed with either Rapa or DR when compared with mice fed AL; more than 80% are unique to DR or Rapa. A similar observation was made when genes were grouped into pathways; two-thirds of the pathways were uniquely changed by DR or Rapa. The metabolome shows an even greater difference between Rapa and DR; no metabolites in Rapa-treated mice were changed significantly from AL mice, whereas 173 metabolites were changed in the DR mice. Interestingly, the number of genes significantly changed by Rapa + DR when compared with AL is twice as large as the number of genes significantly altered by either DR or Rapa alone. In summary, the global effects of DR or Rapa on the liver are quite different and a combination of Rapa and DR results in alterations in a large number of genes and metabolites that are not significantly changed by either manipulation alone, suggesting that a combination of DR and Rapa would be more effective in extending longevity than either treatment alone. © 2013 The Authors. Aging Cell published by the Anatomical Society and John Wiley & Sons Ltd.

  5. A comparative analysis of biclustering algorithms for gene expression data

    PubMed Central

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837

  6. Semiconductor neutron detectors

    NASA Astrophysics Data System (ADS)

    Gueorguiev, Andrey; Hong, Huicong; Tower, Joshua; Kim, Hadong; Cirignano, Leonard; Burger, Arnold; Shah, Kanai

    2016-09-01

    Lithium indium selenide (LiInSe2) has been under development at RMD Inc. and Fisk University for room-temperature thermal neutron detection due to a number of promising properties. Recent advances in crystal growth, material processing, and detector fabrication technologies have allowed us to fabricate large detectors with a 100 mm^2 active area. The thermal neutron detection sensitivity and gamma rejection ratio (GRR) were comparable to those of a 3He tube with 10 atm gas pressure at comparable dimensions. The synthesis, crystal growth, detector fabrication, and characterization are reported in this paper.

  7. Clustering of galaxies near damped Lyman-alpha systems with (z) = 2.6

    NASA Technical Reports Server (NTRS)

    Wolfe, A. M.

    1993-01-01

    The galaxy two-point correlation function, xi, at (z) = 2.6 is determined by comparing the number of Ly-alpha-emitting galaxies in narrowband CCD fields selected for the presence of damped Ly-alpha absorption to their number in randomly selected control fields. Comparisons between the presented determination of (xi), a density-weighted volume average of xi, and model predictions for (xi) at large redshifts show that models in which the clustering pattern is fixed in proper coordinates are highly unlikely, while better agreement is obtained if the clustering pattern is fixed in comoving coordinates. Therefore, clustering of Ly-alpha-emitting galaxies around damped Ly-alpha systems at large redshifts is strong. It is concluded that the faint blue galaxies are drawn from a parent population different from normal galaxies, the presumed offspring of damped Ly-alpha systems.

  8. Interacting Bosons in a Double-Well Potential: Localization Regime

    NASA Astrophysics Data System (ADS)

    Rougerie, Nicolas; Spehner, Dominique

    2018-06-01

    We study the ground state of a large bosonic system trapped in a symmetric double-well potential, letting the distance between the two wells increase to infinity with the number of particles. In this context, one should expect an interaction-driven transition between a delocalized state (particles are independent and all live in both wells) and a localized state (particles are correlated, half of them live in each well). We start from the full many-body Schrödinger Hamiltonian in a large-filling situation where the on-site interaction and kinetic energies are comparable. When tunneling is negligible against interaction energy, we prove a localization estimate showing that the particle number fluctuations in each well are strongly suppressed. The modes in which the particles condense are minimizers of nonlinear Schrödinger-type functionals.

  9. Study of 3-D Dynamic Roughness Effects on Flow Over a NACA 0012 Airfoil Using Large Eddy Simulations at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Guda, Venkata Subba Sai Satish

    There have been several advancements in the aerospace industry in areas such as aerodynamics, design, controls and propulsion, all aimed at one common goal: increasing efficiency, i.e., greater range and scope of operation with less fuel consumption. Several methods of flow control have been tried; some were successful, some failed and many were deemed impractical. The low Reynolds number regime of 10^4 - 10^5 is a particularly interesting range, where the flow physics differs markedly from that at higher Reynolds numbers. Mid- and high-altitude UAVs, MAVs, sailplanes, jet engine fan blades, inboard helicopter rotor blades and wind turbine rotors are some of the aerodynamic applications that fall in this range. The current study uses dynamic roughness as a means of flow control over a NACA 0012 airfoil at low Reynolds numbers. Dynamic 3-D surface roughness elements placed near the leading edge of an airfoil aim to increase efficiency by suppressing leading-edge separation effects, such as leading-edge stall, by delaying or entirely eliminating flow separation. A numerical study of this method was carried out by means of Large Eddy Simulation, a mathematical model for turbulence in Computational Fluid Dynamics, owing to the highly unsteady nature of the flow. A user-defined function was developed for the 3-D dynamic roughness element motion. Results from the simulations are compared to experimental PIV data. The large eddy simulations captured the leading-edge stall relatively well. For the clean cases, i.e. with the dynamic roughness (DR) not actuated, the LES reproduced experimental results in a reasonable fashion. However, the DR simulations fail to reattach the flow and suppress flow separation, in contrast to experiments. Several novel techniques of grid design and hump creation are also introduced in this study.

  10. Comparative rearing parameters for bisexual and genetic sexing strains of Zeugodacus cucurbitae and Bactrocera dorsalis (Diptera: Tephritidae) on an artificial diet

    USDA-ARS?s Scientific Manuscript database

    The Sterile Insect Technique (SIT) is an important component of area wide programs to control invading or established populations of pestiferous tephritids. The SIT involves the production, sterilization, and release of large numbers of the target species, with the goal of obtaining sterile male x w...

  11. Incidence and degree of Salmonella Heidelberg colonization of day old broiler chicks using several methods of inoculation

    USDA-ARS?s Scientific Manuscript database

    Before beginning a study that involves a large number of birds, it may be helpful to know what method of inoculation would be best for the experiment in question. The objective of this study was to compare several methods of Salmonella challenge (oral gavage, intracloacal inoculation and the seeder ...

  12. The importance of natural habitats to Brazilian free-tailed bats in intensive agricultural landscapes in the Winter Garden Region of Texas, United States

    USDA-ARS?s Scientific Manuscript database

    The conversion of natural lands to agriculture affects the distribution of biological diversity across the landscape. In particular, cropland monocultures alter insect abundance and diversity compared to adjacent natural habitats, but nevertheless can provide large numbers of insect pests as prey i...

  13. Neural Signatures of Number Processing in Human Infants: Evidence for Two Core Systems Underlying Numerical Cognition

    ERIC Educational Resources Information Center

    Hyde, Daniel C.; Spelke, Elizabeth S.

    2011-01-01

    Behavioral research suggests that two cognitive systems are at the foundations of numerical thinking: one for representing 1-3 objects in parallel and one for representing and comparing large, approximate numerical magnitudes. We tested for dissociable neural signatures of these systems in preverbal infants by recording event-related potentials…

  14. Reference test methods for total water in lint cotton by Karl Fischer Titration and low temperature distillation

    USDA-ARS?s Scientific Manuscript database

    In a study of comparability of total water contents (%) of conditioned cottons by Karl Fischer Titration (KFT) and Low Temperature Distillation (LTD) reference methods, we demonstrated a match of averaged results based on a large number of replications and weighing the test specimens at the same tim...

  15. The Role of Social Capital and School Structure on Latino Access to Elite Colleges

    ERIC Educational Resources Information Center

    Gonzalez, Jeremiah J.

    2013-01-01

    Latinos make up the fastest growing population in the United States. However, this group has some of the lowest educational outcomes (Gandara & Contreras, 2009). Although large numbers of Latinos fail to achieve high levels of academic success, some Latinos are able to accomplish educational outcomes that compare with those of the most…

  16. Are the Gates Open to All? Teacher Licensure Accessibility at a Large Midwestern Urban University

    ERIC Educational Resources Information Center

    van den Hoogenhof, Suzanne

    2012-01-01

    The percentage of ethnically and linguistically diverse teachers in public education is very low, especially when compared to students. This is problematic for a number of reasons. First, the racial mismatch between the student body and teacher workforce in public schools perpetuates the achievement gap. Second, research shows that not only…

  17. Different Anaphoric Expressions Are Investigated by Event-Related Brain Potentials

    ERIC Educational Resources Information Center

    Streb, Judith; Hennighausen, Erwin; Rosler, Frank

    2004-01-01

    Event-related potentials were recorded to substantiate the claim of a distinct psycholinguistic status of (a) pronouns vs. proper names and (b) ellipses vs. proper names. In two studies 41 students read sentences in which the number of intervening words between the anaphor and its antecedent was either small or large. Comparing the far with the…

  18. An analysis of large chaparral fires in San Diego County, CA

    Treesearch

    Bob Eisele

    2015-01-01

    San Diego County, California, holds the records for the largest area burned and greatest number of structures destroyed in California. This paper analyzes 102 years of fire history, population growth, and weather records from 1910 through 2012 to examine the factors that are driving the wildfire system. Annual area burned is compared with precipitation during the...

  19. Music Education and the Brain: What Does It Take to Make a Change?

    ERIC Educational Resources Information Center

    Collins, Anita

    2014-01-01

    Neuroscientists have worked for over two decades to understand how the brain processes music, affects emotions, and changes brain development. Much of this research has been based on a model that compares the brain function of participants classified as musicians and nonmusicians. This body of knowledge reveals a large number of benefits from…

  20. Comparative immunogenomics of molluscs.

    PubMed

    Schultz, Jonathan H; Adema, Coen M

    2017-10-01

    Comparative immunology, studying both vertebrates and invertebrates, provided the earliest descriptions of phagocytosis as a general immune mechanism. However, the large scale of animal diversity challenges all-inclusive investigations, and the field of immunology has developed largely by emphasizing the study of a few vertebrate species. In addressing the resulting lack of comprehensive understanding of animal immunity, especially that of invertebrates, comparative immunology supports the management of invertebrates that are food sources, agricultural pests, pathogens, or disease vectors, and helps interpret the evolution of animal immunity. Initial studies showed that the Mollusca (the second largest animal phylum), and invertebrates in general, possess innate defenses but lack the lymphocytic immune system that characterizes vertebrate immunology. Recognizing the reality of both common and taxon-specific immune features, and applying up-to-date cell and molecular research capabilities, in-depth studies of a select number of bivalve and gastropod species continue to reveal novel aspects of molluscan immunity. The genomics era heralded a new stage of comparative immunology; large-scale efforts yielded an initial set of full molluscan genome sequences that is available for analyses of full complements of immune genes and regulatory sequences. Next-generation sequencing (NGS), owing to the lower cost and effort required, allows individual researchers to generate large sequence datasets for growing numbers of molluscs. RNAseq provides expression profiles that enable discovery of immune genes, and genome sequences reveal the distribution and diversity of immune factors across molluscan phylogeny. Although computational de novo sequence assembly will benefit from continued development and automated annotation may require some experimental validation, NGS is a powerful tool for comparative immunology, especially in increasing coverage of the extensive molluscan diversity. To date, immunogenomics has revealed new levels of complexity in molluscan defense, indicating sequence heterogeneity in individual snails and bivalves, with members of expanded immune gene families expressed differentially to generate pathogen-specific defense responses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Quantum Monte Carlo with very large multideterminant wavefunctions.

    PubMed

    Scemama, Anthony; Applencourt, Thomas; Giner, Emmanuel; Caffarel, Michel

    2016-07-01

    An algorithm for efficiently computing the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The calculation of determinants and their derivatives is performed using the Sherman-Morrison formula for updating the inverse Slater matrix. An improved implementation based on reducing the number of column substitutions and on a very efficient implementation of the calculation of the scalar products involved is presented. It is emphasized that multideterminant expansions contain in general a large number of identical spin-specific determinants: for typical configuration interaction-type wavefunctions the number of unique spin-specific determinants Ndet^σ (σ = ↑, ↓) with a non-negligible weight in the expansion is of order O(√Ndet). We show that a careful implementation of the calculation of the Ndet-dependent contributions can make this step negligible enough that in practice the algorithm scales as the total number of unique spin-specific determinants, Ndet^↑ + Ndet^↓, over a wide range of total numbers of determinants (here, Ndet up to about one million), thus greatly reducing the total computational cost. Finally, a new truncation scheme for the multideterminant expansion is proposed so that larger expansions can be considered without increasing the computational time. The algorithm is illustrated with all-electron fixed-node diffusion Monte Carlo calculations of the total energy of the chlorine atom. Calculations using a trial wavefunction including about 750,000 determinants, with a computational increase of only ∼400 compared to a single-determinant calculation, are shown to be feasible. © 2016 Wiley Periodicals, Inc.
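    As background to the Sherman-Morrison update mentioned in the abstract, the sketch below shows a single column substitution in a matrix, updating its inverse in O(N^2) instead of re-inverting in O(N^3), and obtaining the determinant ratio as a by-product. This is a generic textbook illustration, not the authors' optimized implementation:

```python
import numpy as np

def replace_column_update(A_inv, u, j):
    """Given A_inv = A^{-1}, return the inverse of A with column j
    replaced by u, via Sherman-Morrison, plus det(A')/det(A).
    Writing A' = A + (u - A e_j) e_j^T, the determinant ratio is
    R = (A^{-1} u)_j and
    A'^{-1} = A^{-1} - (A^{-1} u - e_j) (e_j^T A^{-1}) / R."""
    v = A_inv @ u
    R = v[j]                      # determinant ratio det(A')/det(A)
    w = v.copy()
    w[j] -= 1.0                   # w = A^{-1}(u - A e_j)
    new_inv = A_inv - np.outer(w, A_inv[j]) / R
    return new_inv, R

# verify against direct inversion on a small random matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
u = rng.standard_normal(5)
new_inv, R = replace_column_update(np.linalg.inv(A), u, 2)
A2 = A.copy()
A2[:, 2] = u                      # the matrix with column 2 replaced
```

    In QMC, each single-electron move changes one column of the Slater matrix, so this update (rather than a full re-inversion) is what makes large determinant expansions tractable.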

  2. Using next-generation sequencing for high resolution multiplex analysis of copy number variation from nanogram quantities of DNA from formalin-fixed paraffin-embedded specimens.

    PubMed

    Wood, Henry M; Belvedere, Ornella; Conway, Caroline; Daly, Catherine; Chalkley, Rebecca; Bickerdike, Melissa; McKinley, Claire; Egan, Phil; Ross, Lisa; Hayward, Bruce; Morgan, Joanne; Davidson, Leslie; MacLennan, Ken; Ong, Thian K; Papagiannopoulos, Kostas; Cook, Ian; Adams, David J; Taylor, Graham R; Rabbitts, Pamela

    2010-08-01

    The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regard to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
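    The abstract does not detail the paper's pipeline; the usual read-depth idea behind sequencing-based copy number calling is to bin read counts along the genome, normalise each profile by its total depth, and take a per-bin log2 ratio against a reference. A minimal sketch under those assumptions (the pseudocount and toy bins are ours):

```python
import math

def log2_ratio_profile(sample_counts, reference_counts):
    """Per-bin log2 copy-number ratio of a sample versus a reference,
    after normalising each profile to its own total read depth.
    A small pseudocount avoids log(0) in empty bins."""
    s_total = sum(sample_counts)
    r_total = sum(reference_counts)
    ratios = []
    for s, r in zip(sample_counts, reference_counts):
        s_frac = (s + 0.5) / s_total
        r_frac = (r + 0.5) / r_total
        ratios.append(math.log2(s_frac / r_frac))
    return ratios

# identical profiles give a flat ratio of 0 in every bin
flat = log2_ratio_profile([100] * 10, [100] * 10)
# doubling the reads in one bin pushes its ratio toward +1
# (slightly less, because total-depth normalisation shifts all bins)
gain = log2_ratio_profile([100] * 9 + [200], [100] * 10)
```

    Real callers then segment these ratios along the genome; the karyogram described in the abstract is essentially such a segmented profile.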

  3. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools in a variety of research fields. Large numbers of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics and genomics. It is therefore necessary to select versatile, low-cost, efficient, time- and labor-saving methods for developing large panels of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods, derived from three strategies, for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library had poor efficiency, whereas the microsatellite-enriched strategy greatly improved isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers this way, mainly because of the limited sequence data for non-model species deposited in public databases. Based on the results of this study, we recommend two methods, microsatellite-enriched library construction and FIASCO-colony hybridization, for large-scale microsatellite marker development; both derive from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  4. A role for autophagic protein beclin 1 early in lymphocyte development.

    PubMed

    Arsov, Ivica; Adebayo, Adeola; Kucerova-Levisohn, Martina; Haye, Joanna; MacNeil, Margaret; Papavasiliou, F Nina; Yue, Zhenyu; Ortiz, Benjamin D

    2011-02-15

    Autophagy is a highly regulated and evolutionarily conserved process of cellular self-digestion. Recent evidence suggests that this process plays an important role in regulating T cell homeostasis. In this study, we used Rag1(-/-) (recombination activating gene 1(-/-)) blastocyst complementation and in vitro embryonic stem cell differentiation to address the role of Beclin 1, one of the key autophagic proteins, in lymphocyte development. Beclin 1-deficient Rag1(-/-) chimeras displayed a dramatic reduction in thymic cellularity compared with control mice. Using embryonic stem cell differentiation in vitro, we found that the inability to maintain normal thymic cellularity is likely caused by impaired maintenance of thymocyte progenitors. Interestingly, despite drastically reduced thymocyte numbers, the peripheral T cell compartment of Beclin 1-deficient Rag1(-/-) chimeras is largely normal. Peripheral T cells displayed normal in vitro proliferation despite significantly reduced numbers of autophagosomes. In addition, these chimeras had greatly reduced numbers of early B cells in the bone marrow compared with controls. However, the peripheral B cell compartment was not dramatically impacted by Beclin 1 deficiency. Collectively, our results suggest that Beclin 1 is required for maintenance of undifferentiated/early lymphocyte progenitor populations. In contrast, Beclin 1 is largely dispensable for the initial generation and function of the peripheral T and B cell compartments. This indicates that normal lymphocyte development involves Beclin 1-dependent, early-stage and distinct, Beclin 1-independent, late-stage processes.

  5. Sorting protein lists with nwCompare: a simple and fast algorithm for n-way comparison of proteomic data files.

    PubMed

    Pont, Frédéric; Fournié, Jean Jacques

    2010-03-01

    MS, the reference technology for proteomics, routinely produces large numbers of protein lists whose fast comparison would prove very useful. Unfortunately, most software packages only allow comparisons of two to three lists at once. We introduce here nwCompare, a simple tool for n-way comparison of several protein lists without any query language, and exemplify its use with differential and shared cancer cell proteomes. Because the software compares character strings, it can be applied to any type of data mining, such as genomic or metabolomic data lists.
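    nwCompare is distributed as its own tool; the underlying idea, treating each protein list as a set of identifier strings and comparing n lists at once, can be sketched as follows (the sample names and accession strings are invented for illustration):

```python
def nway_compare(lists):
    """Given a dict mapping list name -> list of identifier strings,
    return for each identifier the set of lists that contain it."""
    membership = {}
    for name, ids in lists.items():
        for ident in set(ids):            # de-duplicate within a list
            membership.setdefault(ident, set()).add(name)
    return membership

# three hypothetical proteome lists
samples = {
    "cellA": ["P001", "P002", "P003"],
    "cellB": ["P002", "P003", "P004"],
    "cellC": ["P003", "P005"],
}
m = nway_compare(samples)
# shared proteome: identifiers present in every list
shared_all = {p for p, hits in m.items() if len(hits) == len(samples)}
# differential proteome: identifiers unique to cellA
unique_A = {p for p, hits in m.items() if hits == {"cellA"}}
```

    Because only string equality is used, the same few lines work unchanged for gene or metabolite identifier lists, which is the generality the abstract claims.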

  6. The AzTEC millimeter-wave camera: Design, integration, performance, and the characterization of the (sub-)millimeter galaxy population

    NASA Astrophysics Data System (ADS)

    Austermann, Jason Edward

    One of the primary drivers in the development of large format millimeter detector arrays is the study of sub-millimeter galaxies (SMGs) - a population of very luminous high-redshift dust-obscured starbursts that are widely believed to be the dominant contributor to the Far-Infrared Background (FIB). The characterization of such a population requires the ability to map large patches of the (sub-)millimeter sky to high sensitivity within a feasible amount of time. I present this dissertation on the design, integration, and characterization of the 144-pixel AzTEC millimeter-wave camera and its application to the study of the sub-millimeter galaxy population. In particular, I present an unprecedented characterization of the "blank-field" (fields with no known mass bias) SMG number counts by mapping over 0.5 deg^2 to 1.1mm depths of ~1mJy - a previously unattained depth on these scales. This survey provides the tightest SMG number counts available, particularly for the brightest and rarest SMGs that require large survey areas for a significant number of detections. These counts are compared to the predictions of various models of the evolving mm/sub-mm source population, providing important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation. I also present the results of an AzTEC 0.15 deg^2 survey of the COSMOS field, which uncovers a significant over-density of bright SMGs that are spatially correlated to foreground mass structures, presumably as a result of gravitational lensing. Finally, I compare the results of the available SMG surveys completed to date and explore the effects of cosmic variance on the interpretation of individual surveys.

  7. Cloud computing for comparative genomics

    PubMed Central

    2010-01-01

    Background: Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results: We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions: The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786

  8. Cloud computing for comparative genomics.

    PubMed

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
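    A back-of-envelope check of the throughput and cost figures reported in the abstract (the derived per-job and per-node-hour numbers below are our arithmetic, not figures from the paper):

```python
# figures reported in the abstract
jobs = 300_000        # RSD-cloud processes run on EC2
nodes = 100           # high-capacity compute nodes
hours = 70            # total wall-clock time (just under 70 h)
cost_usd = 6302       # total cost

jobs_per_node = jobs / nodes                 # 3000 ortholog jobs per node
node_hours = nodes * hours                   # 7000 node-hours consumed
cost_per_node_hour = cost_usd / node_hours   # roughly $0.90 per node-hour
cost_per_job = cost_usd / jobs               # roughly 2 cents per job
```

    Framed this way, the abstract's "manageable cost" claim amounts to about two cents per ortholog calculation, which is the comparison a reader would make against maintaining equivalent local hardware.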

  9. Increased numbers of total nucleated and CD34+ cells in blood group O cord blood: an analysis of neonatal innate factors in the Korean population.

    PubMed

    Lee, Hye Ryun; Park, Jeong Su; Shin, Sue; Roh, Eun Youn; Yoon, Jong Hyun; Han, Kyou Sup; Kim, Byung Jae; Storms, Robert W; Chao, Nelson J

    2012-01-01

    We analyzed neonatal factors that could affect hematopoietic variables of cord blood (CB) donated from Korean neonates. The numbers of total nucleated cells (TNCs), CD34+ cells, and CD34+ cells/TNCs of CB in neonates were compared according to sex, gestational age, birth weight, birth weight centile for gestational age, and ABO blood group. With 11,098 CB units analyzed, blood group O CB showed an increased number of TNCs, CD34+ cells, and CD34+ cells/TNCs compared with other blood groups. Although TNC counts were lower in males, no difference in the number of CD34+ cells was demonstrated because the number of CD34+ cells/TNCs was higher in males. An increase in the gestational age resulted in an increase in the number of TNCs and decreases in the number of CD34+ cells and CD34+ cells/TNCs. The numbers of TNCs, CD34+ cells, and CD34+ cells/TNCs increased according to increased birth weight centile as well as birth weight. CB with blood group O has unique hematologic variables in this large-scale analysis of Korean neonates, although the impact on the storage policies of CB banks or the clinical outcome of transplantation remains to be determined. © 2011 American Association of Blood Banks.

  10. The QSE-Reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Hix, W. Raphael; Parete-Koon, Suzanne T.; Freiburghaus, Christian; Thielemann, Friedrich-Karl

    2007-09-01

    Iron and neighboring nuclei are formed in massive stars shortly before core collapse and during their supernova outbursts, as well as during thermonuclear supernovae. Complete and incomplete silicon burning are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present a new hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. This allows accurate prediction of the nuclear abundance evolution, deleptonization, and energy generation at a greatly reduced computational cost when compared to a conventional nuclear reaction network. During silicon burning, the resultant QSE-reduced network is approximately an order of magnitude faster than the full network it replaces and requires the tracking of less than a third as many abundance variables, without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multidimensional applications.

  11. A statistical approach to detection of copy number variations in PCR-enriched targeted sequencing data.

    PubMed

    Demidov, German; Simakova, Tamara; Vnuchkova, Julia; Bragin, Anton

    2016-10-22

    Multiplex polymerase chain reaction (PCR) is a common enrichment technique for targeted massive parallel sequencing (MPS) protocols. MPS is widely used in biomedical research and clinical diagnostics as a fast and accurate tool for the detection of short genetic variations. However, identification of larger variations, such as structural variants and copy number variations (CNVs), remains a challenge for targeted MPS. Some approaches and tools for structural variant detection have been proposed, but they have limitations and often require datasets of a certain type and size, with an expected number of amplicons affected by CNVs. In this paper, we describe a novel algorithm for high-resolution germline CNV detection in PCR-enriched targeted sequencing data and present an accompanying tool. We developed a machine learning algorithm for the detection of large duplications and deletions in targeted sequencing data generated with a PCR-based enrichment step. We performed verification studies and established the algorithm's sensitivity and specificity. We compared the developed tool with other available methods applicable to such data and found that it performed better. We showed that our method has high specificity and sensitivity for high-resolution copy number detection in targeted sequencing data, using a large cohort of samples.
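    The abstract does not describe the algorithm itself; a common baseline for amplicon-level CNV screening, against which such tools are compared, is to depth-normalise each sample's amplicon coverage and z-score each amplicon across the cohort. The sketch below is purely illustrative of that baseline, not the authors' method:

```python
import statistics

def amplicon_zscores(cohort):
    """cohort: list of per-sample coverage lists (same amplicon order).
    Normalise each sample by its total depth, then z-score each
    amplicon across the cohort; strongly negative values suggest
    deletions, strongly positive values suggest duplications."""
    norm = [[c / sum(sample) for c in sample] for sample in cohort]
    n_amp = len(cohort[0])
    zs = [[0.0] * n_amp for _ in cohort]
    for a in range(n_amp):
        col = [s[a] for s in norm]
        mu = statistics.mean(col)
        sd = statistics.stdev(col)
        for i, s in enumerate(norm):
            zs[i][a] = (s[a] - mu) / sd if sd > 0 else 0.0
    return zs

# toy cohort of 4 samples x 5 amplicons; sample 0 has amplicon 0
# at roughly half coverage, mimicking a heterozygous deletion
cohort = [
    [50, 100, 100, 100, 100],
    [100, 100, 100, 100, 100],
    [100, 100, 100, 100, 100],
    [110, 95, 105, 100, 90],
]
zs = amplicon_zscores(cohort)
```

    Real tools add GC correction, per-amplicon variance models and segmentation on top of this, which is where the machine learning described in the abstract would come in.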

  12. Characterization of the crystal structure, kinematics, stresses and rotations in angular granular quartz during compaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurley, Ryan C.; Herbold, Eric B.; Pagan, Darren C.

    Three-dimensional X-ray diffraction (3DXRD), a method for quantifying the position, orientation and elastic strain of large ensembles of single crystals, has recently emerged as an important tool for studying the mechanical response of granular materials during compaction. Applications have demonstrated the utility of 3DXRD and X-ray computed tomography (XRCT) for assessing strains, particle stresses and orientations, inter-particle contacts and forces, particle fracture mechanics, and porosity evolution in situ. Although past studies employing 3DXRD and XRCT have elucidated the mechanics of spherical particle packings and angular particle packings with a small number of particles, there has been limited effort to date in studying angular particle packings with a large number of particles and in comparing the mechanics of these packings with those composed of a large number of spherical particles. Therefore, the focus of the present paper is on the mechanics of several hundred angular particles during compaction, using in situ 3DXRD to study the crystal structure, kinematics, stresses and rotations of angular quartz grains. Comparisons are also made between the compaction response of angular grains and that of spherical grains, and stress-induced twinning within individual grains is discussed.

  13. Superconducting Optoelectronic Circuits for Neuromorphic Computing

    NASA Astrophysics Data System (ADS)

    Shainline, Jeffrey M.; Buckley, Sonia M.; Mirin, Richard P.; Nam, Sae Woo

    2017-03-01

    Neural networks have proven effective for solving many difficult computational problems, yet implementing complex neural networks in software is computationally expensive. To explore the limits of information processing, it is necessary to implement new hardware platforms with large numbers of neurons, each with a large number of connections to other neurons. Here we propose a hybrid semiconductor-superconductor hardware platform for the implementation of neural networks and large-scale neuromorphic computing. The platform combines semiconducting few-photon light-emitting diodes with superconducting-nanowire single-photon detectors to behave as spiking neurons. These processing units are connected via a network of optical waveguides, and variable weights of connection can be implemented using several approaches. The use of light as a signaling mechanism overcomes fanout and parasitic constraints on electrical signals while simultaneously introducing physical degrees of freedom which can be employed for computation. The use of supercurrents achieves the low power density (1 mW/cm^2 at a 20-MHz firing rate) necessary to scale to systems with enormous entropy. Estimates comparing the proposed hardware platform to a human brain show that with the same number of neurons (10^11) and 700 independent connections per neuron, the hardware presented here may achieve an order of magnitude improvement in synaptic events per second per watt.

  14. Characterization of the crystal structure, kinematics, stresses and rotations in angular granular quartz during compaction

    DOE PAGES

    Hurley, Ryan C.; Herbold, Eric B.; Pagan, Darren C.

    2018-06-28

    Three-dimensional X-ray diffraction (3DXRD), a method for quantifying the position, orientation and elastic strain of large ensembles of single crystals, has recently emerged as an important tool for studying the mechanical response of granular materials during compaction. Applications have demonstrated the utility of 3DXRD and X-ray computed tomography (XRCT) for assessing strains, particle stresses and orientations, inter-particle contacts and forces, particle fracture mechanics, and porosity evolution in situ. Although past studies employing 3DXRD and XRCT have elucidated the mechanics of spherical particle packings and angular particle packings with a small number of particles, there has been limited effort to date in studying angular particle packings with a large number of particles and in comparing the mechanics of these packings with those composed of a large number of spherical particles. Therefore, the focus of the present paper is on the mechanics of several hundred angular particles during compaction using in situ 3DXRD to study the crystal structure, kinematics, stresses and rotations of angular quartz grains. Comparisons are also made between the compaction response of angular grains and that of spherical grains, and stress-induced twinning within individual grains is discussed.

  15. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    NASA Astrophysics Data System (ADS)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.

  16. Factors associated with sustained remission in patients with rheumatoid arthritis.

    PubMed

    Martire, María Victoria; Marino Claverie, Lucila; Duarte, Vanesa; Secco, Anastasia; Mammani, Marta

    2015-01-01

    To identify the factors present at the time of rheumatoid arthritis diagnosis that are associated with sustained remission, measured by DAS28 and the Boolean ACR/EULAR 2011 criteria. Medical records of patients with rheumatoid arthritis in sustained remission according to DAS28 were reviewed. They were compared with patients who did not achieve DAS28<2.6 at any visit during the first 3 years after diagnosis. We also evaluated whether patients met the Boolean ACR/EULAR criteria. Variables analyzed: sex, age, smoking, comorbidities, rheumatoid factor, anti-CCP, ESR, CRP, erosions, HAQ, DAS28, extra-articular manifestations, time to initiation of treatment, involvement of large joints, number of tender joints, number of swollen joints, and pharmacological treatment. Forty-five patients who achieved sustained remission were compared with 44 controls. The variables present at diagnosis that were significantly associated with remission by DAS28 were: lower values of DAS28, HAQ, ESR, number of tender joints and number of swollen joints; negative CRP; absence of erosions; male sex; and absence of involvement of large joints. Only 24.71% achieved the Boolean criteria. The variables associated with sustained remission by these criteria were: lower values of DAS28, HAQ, ESR, number of tender joints and number of swollen joints; negative CRP; and absence of erosions. The factors associated with sustained remission were lower baseline disease activity, a low degree of functional disability and lower joint involvement. We consider it important to recognize these factors to optimize treatment. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  17. Genetic effects of habitat restoration in the Laurentian Great Lakes: an assessment of lake sturgeon origin and genetic diversity

    USGS Publications Warehouse

    Jamie Marie Marranca,; Amy Welsh,; Roseman, Edward F.

    2015-01-01

    Lake sturgeon (Acipenser fulvescens) have experienced significant habitat loss, resulting in reduced population sizes. Three artificial reefs were built in the Huron-Erie corridor in the Great Lakes to replace lost spawning habitat. Genetic data were collected to determine the source and numbers of adult lake sturgeon spawning on the reefs and to determine if the founder effect resulted in reduced genetic diversity. DNA was extracted from larval tail clips and 12 microsatellite loci were amplified. Larval genotypes were then compared to 22 previously studied spawning lake sturgeon populations in the Great Lakes to determine the source of the parental population. The effective number of breeders (Nb) was calculated for each reef cohort. The larval genotypes were then compared to the source population to determine if there were any losses in genetic diversity that are indicative of the founder effect. The St. Clair and Detroit River adult populations were found to be the source parental population for the larvae collected on all three artificial reefs. There were large numbers of contributing adults relative to the number of sampled larvae. There was no significant difference between levels of genetic diversity in the source population and larval samples from the artificial reefs; however, there is some evidence for a genetic bottleneck in the reef populations likely due to the founder effect. Habitat restoration in the Huron-Erie corridor is likely resulting in increased habitat for the large lake sturgeon population in the system and in maintenance of the population's genetic diversity.

  18. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  19. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  20. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    NASA Technical Reports Server (NTRS)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  1. Large-scale urban point cloud labeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN where the rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up the convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons through dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
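
    The ReLU-plus-dropout combination described in the abstract can be sketched in a few lines of NumPy. The layer sizes, dropout rate, and three class scores below are illustrative placeholders, not parameters from the paper, and this is a forward pass only, not the full training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Rectified linear unit, used in place of sigmoid for faster convergence."""
    return np.maximum(0.0, x)

def forward(x, W1, W2, drop_p=0.5, train=True):
    """One hidden-layer forward pass with inverted dropout during training."""
    h = relu(x @ W1)
    if train:
        # Randomly silence a fraction of hidden units to combat over-fitting
        # on sparse point-cloud features; rescale so expectations match.
        mask = rng.random(h.shape) >= drop_p
        h = h * mask / (1.0 - drop_p)
    return h @ W2

x = rng.normal(size=(4, 16))          # 4 points, 16-dim feature descriptors
W1 = rng.normal(size=(16, 32)) * 0.1  # hidden layer of 32 units
W2 = rng.normal(size=(32, 3)) * 0.1   # 3 hypothetical class scores
print(forward(x, W1, W2).shape)       # (4, 3)
```

    At inference time (`train=False`) the dropout mask is skipped, so the network becomes deterministic; that is the standard inverted-dropout convention.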

  2. Origin and heterogeneity of HDL subspecies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, A.V.; Gong, E.L.; Blanche, P.J.

    1987-09-01

    A major determinant of mature HDL particle size and apolar core content, in the absence of remodeling factors, is most likely the size and apolipoprotein content of the precursor particle. Depending on the number of apoA-I molecules per analog particle, the LCAT-induced transformation follows either a fusion pathway (for precursors with 2 apoA-I per particle) or a pathway (for precursors with more than 2 apoA-I per particle) that conserves the apolipoprotein number. According to our analog results, small nascent HDL probably serve as precursors to the major (apoA-I without apoA-II)-subpopulation in the size interval. Our studies with the large discoidal analog suggest that HDL2 (apoA-I without apoA-II)-subpopulations probably originate from the large discoidal nascent HDL that contain a higher number of apolipoprotein molecules per particle than the small nascent HDL. Intermediate transformation products of the large discoidal analog, described in the present study, resemble deformable species found in human lymph and are characterized by a relatively high surface-to-core lipid ratio. Whether large discoidal precursors containing apoE transform in comparable manner but with eventual interchange of apoA-I for apoE (10,15) is under investigation in our laboratory. Likewise, detailed delineation of pathways whereby the (apoA-I with apoA-II)-HDL subpopulations are formed is yet to be accomplished. 23 refs., 6 figs., 2 tabs.

  3. Scuba: scalable kernel-based gene prioritization.

    PubMed

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .

  4. Transformation of Escherichia coli with large DNA molecules by electroporation.

    PubMed Central

    Sheng, Y; Mancino, V; Birren, B

    1995-01-01

    We have examined bacterial electroporation with a specific interest in the transformation of large DNA, i.e. molecules > 100 kb. We have used DNA from bacterial artificial chromosomes (BACs) ranging from 7 to 240 kb, as well as BAC ligation mixes containing a range of different-sized molecules. The efficiency of electroporation with large DNA is strongly dependent on the strain of Escherichia coli used; strains which offer comparable efficiencies for 7 kb molecules differ in their uptake of 240 kb DNA by as much as 30-fold. Even with a host strain that transforms relatively well with large DNA, transformation efficiency drops dramatically with increasing size of the DNA. Molecules of 240 kb transform approximately 30-fold less well, on a molar basis, than molecules of 80 kb. Maximum transformation of large DNA occurs with different voltage gradients and with different time constants than are optimal for smaller DNA. This provides the opportunity to increase the yield of transformants which have taken up large DNA relative to the number incorporating smaller molecules. We have demonstrated that conditions may be selected which increase the average size of BAC clones generated by electroporation and compare the overall efficiency of each of the conditions tested. PMID:7596828

  5. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    NASA Astrophysics Data System (ADS)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept used to make predictions by averaging the number of events or risks across a sample or population. The larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of those 100 participants must be sufficient to provide the sum assured for at least one accident claim. The more insurance participants included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear similar risks in large numbers; here the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss. Its use therefore allows losses to be predicted more accurately.
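
    The convergence the abstract describes is easy to demonstrate with a short simulation. The 1-in-100 claim probability mirrors the abstract's example; the pool sizes and seed are arbitrary choices for illustration.

```python
import random

def observed_claim_rate(n_policies, p_claim=0.01, seed=0):
    """Fraction of a pool of n_policies that files a claim, treating each
    policy as an independent Bernoulli(p_claim) event."""
    rng = random.Random(seed)
    claims = sum(rng.random() < p_claim for _ in range(n_policies))
    return claims / n_policies

# As the pool grows, the observed rate settles toward the true 1%,
# which is what lets an insurer price the premium from predicted loss.
for n in (100, 10_000, 1_000_000):
    print(n, observed_claim_rate(n))
```

    With a small pool the observed rate can swing far from 1%; with a million policies it sits very close to it, which is the sense in which predicted loss approaches actual loss.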

  6. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape the large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size the obtained networks share similarities with the uniform random networks as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
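
    The qualitative contrast drawn above, spatially constrained connectivity versus a uniform random network with the same number of edges, can be checked with networkx. The node count and radius below are arbitrary illustrative choices, not parameters from the model.

```python
import random
import networkx as nx

random.seed(1)
# 200 "neurons" scattered in the unit square.
pos = {i: (random.random(), random.random()) for i in range(200)}

def radius_graph(r):
    """Connect every pair of nodes closer than the effective radius r."""
    g = nx.Graph()
    g.add_nodes_from(pos)
    nodes = sorted(pos)
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            (xa, ya), (xb, yb) = pos[nodes[a]], pos[nodes[b]]
            if (xa - xb) ** 2 + (ya - yb) ** 2 <= r * r:
                g.add_edge(nodes[a], nodes[b])
    return g

local = radius_graph(0.15)  # small effective radius: only local contacts
uniform = nx.gnm_random_graph(200, local.number_of_edges(), seed=1)

# The spatially constrained network is far more clustered than a uniform
# random network with identical node and edge counts.
print(nx.average_clustering(local), nx.average_clustering(uniform))
```

    The radius graph's clustering coefficient comes out much higher than the uniform graph's, matching the paper's point that a large effective radius relative to neuron size yields many local connections.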

  7. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
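
    A toy version of this screening idea is easy to set up with scikit-learn. The genotype matrix, effect sizes, and the choice of SNPs 0 and 1 as the risk pair below are simulated stand-ins, not data or parameters from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_snps = 300, 50
# Genotypes coded 0/1/2 copies of the minor allele, all unassociated by
# default.
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)

# Hypothetical risk pair: carrying a minor allele at both SNP 0 and
# SNP 1 raises disease probability from 0.1 to 0.9.
risk = (X[:, 0] > 0) & (X[:, 1] > 0)
y = (rng.random(n_samples) < np.where(risk, 0.9, 0.1)).astype(int)

# The forest's importance measure ranks predictors without any
# pre-specified interaction model.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most important first
print("top-ranked SNPs:", ranking[:5])
```

    Screening then amounts to keeping only the top-ranked SNPs for follow-up, discarding the bulk of the unassociated ones.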

  8. Comparing Simplification Strategies for the Skeletal Muscle Proteome

    PubMed Central

    Geary, Bethany; Young, Iain S.; Cash, Phillip; Whitfield, Phillip D.; Doherty, Mary K.

    2016-01-01

    Skeletal muscle is a complex tissue that is dominated by the presence of a few abundant proteins. This wide dynamic range can mask the presence of lower abundance proteins, which can be a confounding factor in large-scale proteomic experiments. In this study, we have investigated a number of pre-fractionation methods, at both the protein and peptide level, for the characterization of the skeletal muscle proteome. The analyses revealed that the use of OFFGEL isoelectric focusing yielded the largest number of protein identifications (>750) compared to alternative gel-based and protein equalization strategies. Further, OFFGEL led to a substantial enrichment of a different sub-population of the proteome. Filter-aided sample preparation (FASP), coupled to peptide-level OFFGEL provided more confidence in the results due to a substantial increase in the number of peptides assigned to each protein. The findings presented here support the use of a multiplexed approach to proteome characterization of skeletal muscle, which has a recognized imbalance in the dynamic range of its protein complement. PMID:28248220

  9. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data.

    PubMed

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M; Song, Jiuzhou; Liu, George E

    2013-06-25

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  10. Will future climate favor more erratic wildfires in the western United States?

    Treesearch

    Lifeng Luo; Ying Tang; Shiyuan Zhong; Xindi Bian; Warren E. Heilman

    2013-01-01

    Wildfires that occurred over the western United States during August 2012 were fewer in number but larger in size when compared with all other Augusts in the twenty-first century. This unique characteristic, along with the tremendous property damage and potential loss of life that occur with large wildfires with erratic behavior, raised the question of whether future...

  11. Paving the Way and Passing the Torch: Mentors' Motivation and Experience of Supporting Women in Optical Engineering

    ERIC Educational Resources Information Center

    Kodate, Naonori; Kodate, Kashiko; Kodate, Takako

    2014-01-01

    The phenomenon of women's underrepresentation in engineering is well known. However, the slow progress in achieving better gender equality here compared with other domains has accentuated the "numbers" issue, while the quality aspects have been largely ignored. This study aims to shed light on both these aspects via the lens of mentors,…

  12. Organizational Commitment Mediating the Effects of Big Five Personality Compliance to Occupational Safety Standard Operating Procedure

    ERIC Educational Resources Information Center

    Prayitno, Hadi; Suwandi, Tjipto; Hamidah

    2016-01-01

    The incidence of work accidents in the construction industry, in Indonesia in particular and internationally in general, shows an increasingly worrying trend, given that both quantitatively and qualitatively the number of work accidents is very large compared to that in other industrial sectors. The main contributing factors are mostly due to the workers'…

  13. Effects of Honours Programme Participation in Higher Education: A Propensity Score Matching Approach

    ERIC Educational Resources Information Center

    Kool, Ada; Mainhard, Tim; Jaarsma, Debbie; van Beukelen, Peter; Brekelmans, Mieke

    2017-01-01

    Honours programmes have become part of higher education systems around the globe, and an increasing number of students are enrolled in such programmes. So far, effects of these programmes are largely under-researched. Two gaps in previous research on the effects of such programmes were addressed: (1) most studies lack a comparable control group of…

  14. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed true score analysis produced a large number of Type-I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…

  15. Assessment of Aerobic Endurance: A Comparison between CD-ROM and Laboratory-Based Instruction.

    ERIC Educational Resources Information Center

    Kirkwood, Margaret; Sharp, Bob; de Vito, Giuseppe; Nimmo, Myra A.

    2002-01-01

    Describes a CD-ROM version of a basic course in exercise physiology that was developed in the United Kingdom to overcome problems of staff time, expense, ethical considerations, and large student numbers. Compares it to a traditional course and concludes that adding more active learning approaches to the CD-ROM would enhance student learning. (LRW)

  16. Health Outcomes among Hispanic Subgroups: Data from the National Health Interview Survey, 1992-95. Advance Data, Number 310.

    ERIC Educational Resources Information Center

    Hajat, Anjum; Lucas, Jacqueline B.; Kington, Raynard

    In this report, various health measures are compared across Hispanic subgroups in the United States. National Health Interview Survey (NHIS) data aggregated from 1992 through 1995 were analyzed. NHIS is one of the few national surveys with a sample sufficiently large to allow such comparisons. Both age-adjusted and unadjusted estimates…

  17. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.

  18. Ultrasonographic ovarian dynamic, plasma progesterone, and non-esterified fatty acids in lame postpartum dairy cows

    PubMed Central

    Gomez, Veronica; Bothe, Hans; Rodriguez, Francisco; Velez, Juan; Lopez, Hernando; Bartolome, Julian; Archbald, Louis

    2018-01-01

    The objective of this study was to compare ovulation rate, number of large ovarian follicles, and concentrations of plasma progesterone (P4) and non-esterified fatty acids (NEFA) between lame (n = 10) and non-lame (n = 10) lactating Holstein cows. The study was conducted in an organic dairy farm, and cows were evaluated by undertaking ultrasonography and blood sampling every 3 days from 30 days postpartum for a period of 34 days. Cows which became lame during the first 30 days postpartum experienced a lower ovulation rate determined by the presence of a corpus luteum (50% presence for lame cows and 100% for non-lame cows, p ≤ 0.05). The number of large ovarian follicles in the ovaries was 5 for lame cows and 7 for non-lame cows (p = 0.09). Compared to non-lame cows, lame cows had significantly lower (p ≤ 0.05) concentrations of plasma P4. Furthermore, NEFA concentrations were lower (p ≤ 0.05) in lame cows than in non-lame cows. It is concluded that lameness in postpartum dairy cows is associated with ovulation failure and lower concentrations of P4 and NEFA. PMID:29486532

  19. Promiscuity in the Enzymatic Catalysis of Phosphate and Sulfate Transfer

    PubMed Central

    2016-01-01

    The enzymes that facilitate phosphate and sulfate hydrolysis are among the most proficient natural catalysts known to date. Interestingly, a large number of these enzymes are promiscuous catalysts that exhibit both phosphatase and sulfatase activities in the same active site and, on top of that, have also been demonstrated to efficiently catalyze the hydrolysis of other additional substrates with varying degrees of efficiency. Understanding the factors that underlie such multifunctionality is crucial both for understanding functional evolution in enzyme superfamilies and for the development of artificial enzymes. In this Current Topic, we have primarily focused on the structural and mechanistic basis for catalytic promiscuity among enzymes that facilitate both phosphoryl and sulfuryl transfer in the same active site, while comparing this to how catalytic promiscuity manifests in other promiscuous phosphatases. We have also drawn on the large number of experimental and computational studies of selected model systems in the literature to explore the different features driving the catalytic promiscuity of such enzymes. Finally, on the basis of this comparative analysis, we probe the plausible origins and determinants of catalytic promiscuity in enzymes that catalyze phosphoryl and sulfuryl transfer. PMID:27187273

  20. Spatial Holmboe Instability

    NASA Astrophysics Data System (ADS)

    Sabine, Ortiz; Marc, Chomaz Jean; Thomas, Loiseleux

    2001-11-01

    In mixing layers between two parallel streams of different densities, shear and gravity effects interplay. When the Rossby number, which compares the nonlinear acceleration terms to the Coriolis forces, is large enough, buoyancy acts as a restoring force and the Kelvin-Helmholtz mode is known to be stabilized by the stratification. If the density interface is sharp enough, two new instability modes, known as Holmboe modes, propagating in opposite directions, appear. This mechanism has been studied in the temporal instability framework. We analyze the associated spatial instability problem, in the Boussinesq approximation, for two immiscible inviscid fluids with a broken-line velocity profile. We show how the classical scenario for the transition between absolute and convective instability should be modified due to the presence of propagating waves. In the convective region, the spatial theory is relevant, and the slowest propagating wave is shown to be the most spatially amplified, as intuition suggests. The spatial theory is compared with mixing layer experiments (C.G. Koop and Browand, J. Fluid Mech. 93, part 1, 135 (1979)) and wedge flows (G. Pawlak and L. Armi, J. Fluid Mech. 376, 1 (1999)). The physical mechanism for the Holmboe mode destabilization is analyzed via an asymptotic expansion that explains precisely the absolute instability domain at large Richardson number.

  1. Statistical comparison of coherent structures in fully developed turbulent pipe flow with and without drag reduction

    NASA Astrophysics Data System (ADS)

    Sogaro, Francesca; Poole, Robert; Dennis, David

    2014-11-01

    High-speed stereoscopic particle image velocimetry has been performed in fully developed turbulent pipe flow at moderate Reynolds numbers with and without a drag-reducing additive (an aqueous solution of high molecular weight polyacrylamide). Three-dimensional large and very large-scale motions (LSM and VLSM) are extracted from the flow fields by a detection algorithm and the characteristics for each case are statistically compared. The results show that the three-dimensional extent of VLSMs in drag reduced (DR) flow appears to increase significantly compared to their Newtonian counterparts. A statistical increase in azimuthal extent of DR VLSM is observed by means of two-point spatial autocorrelation of the streamwise velocity fluctuation in the radial-azimuthal plane. Furthermore, a remarkable increase in length of these structures is observed by three-dimensional two-point spatial autocorrelation. These results are accompanied by an analysis of the swirling strength in the flow field that shows a significant reduction in strength and number of the vortices for the DR flow. The findings suggest that the damping of the small scales due to polymer addition results in the undisturbed development of longer flow structures.
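    The two-point spatial autocorrelation used to measure the extent of LSMs and VLSMs can be sketched generically for a 1-D signal (uniform spacing assumed; this is an illustrative estimator, not the authors' PIV processing code):

```python
def two_point_autocorr(u, max_lag):
    """Normalized two-point autocorrelation R(lag) of the fluctuating
    part u' = u - <u>; R(0) = 1 by construction."""
    n = len(u)
    mean = sum(u) / n
    up = [x - mean for x in u]                 # fluctuating component
    var = sum(x * x for x in up) / n
    r = []
    for lag in range(max_lag + 1):
        cov = sum(up[i] * up[i + lag] for i in range(n - lag)) / (n - lag)
        r.append(cov / var)
    return r
```

    A structure length scale is then commonly read off as the separation at which R first drops below some small threshold, so longer structures show up as a slower decay of R with lag.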

  2. Canine and feline hematology reference values for the ADVIA 120 hematology system.

    PubMed

    Moritz, Andreas; Fickenscher, Yvonne; Meyer, Karin; Failing, Klaus; Weiss, Douglas J

    2004-01-01

    The ADVIA 120 is a laser-based hematology analyzer with software applications for animal species. Accurate reference values would be useful for the assessment of new hematologic parameters and for interlaboratory comparisons. The goal of this study was to establish reference intervals for CBC results and new parameters for RBC morphology, reticulocytes, and platelets in healthy dogs and cats using the ADVIA 120 hematology system. The ADVIA 120, with multispecies software (version 1.107-MS), was used to analyze whole blood samples from clinically healthy dogs (n=46) and cats (n=61). Data distribution was determined and reference intervals were calculated as 2.5 to 97.5 percentiles and 25 to 75 percentiles. Most data showed Gaussian or log-normal distribution. The numbers of RBCs falling outside the normocytic-normochromic range were slightly higher in cats than in dogs. Both dogs and cats had reticulocytes with low, medium, and high absorbance. Mean numbers of large platelets and platelet clumps were higher in cats compared with dogs. Reference intervals obtained on the ADVIA 120 provide valuable baseline information for assessing new hematologic parameters and for interlaboratory comparisons. Differences compared with previously published reference values can be attributed largely to differences in methodology.
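    The 2.5 to 97.5 (and 25 to 75) percentile intervals above are standard nonparametric reference intervals; a minimal sketch, assuming the common linear-interpolation percentile convention (not the software used in the study):

```python
def reference_interval(values, lower=2.5, upper=97.5):
    """Nonparametric reference interval from observed values, using
    linearly interpolated percentiles on the sorted sample."""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        k = (p / 100.0) * (n - 1)   # fractional rank on the 0..n-1 scale
        i = int(k)
        frac = k - i
        if i + 1 < n:
            return xs[i] + frac * (xs[i + 1] - xs[i])
        return xs[-1]
    return pct(lower), pct(upper)
```

    Calling it with lower=25, upper=75 reproduces the interquartile interval also reported in the study.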

  3. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
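    The low-rank "prototype" idea can be illustrated with a Nyström-style sketch on a toy RBF kernel (a generic construction under assumed parameters, not the PVM code itself): the full kernel matrix is approximated as K ≈ C W⁻¹ Cᵀ, where W is the kernel among prototypes and C the kernel between data points and prototypes. With two prototypes, W⁻¹ has a closed form.

```python
import math

def rbf(x, y, gamma=1.0):
    """Toy 1-D Gaussian (RBF) kernel."""
    return math.exp(-gamma * (x - y) ** 2)

def nystrom_approx(points, prototypes, gamma=1.0):
    """Nystrom low-rank approximation K ~= C W^{-1} C^T using exactly two
    prototype points, with the 2x2 matrix W inverted in closed form."""
    a, b = prototypes
    W = [[rbf(a, a, gamma), rbf(a, b, gamma)],
         [rbf(b, a, gamma), rbf(b, b, gamma)]]
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    Winv = [[W[1][1] / det, -W[0][1] / det],
            [-W[1][0] / det, W[0][0] / det]]
    C = [[rbf(x, a, gamma), rbf(x, b, gamma)] for x in points]
    n = len(points)
    return [[sum(C[i][r] * Winv[r][s] * C[j][s]
                 for r in range(2) for s in range(2))
             for j in range(n)] for i in range(n)]
```

    When the prototypes coincide with actual data points, the corresponding rows and columns of K are reproduced exactly; the remaining entries are approximated from below, which is the low-rank trade-off the abstract describes.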

  4. Presaddle and postsaddle dissipative effects in fission using complete kinematics measurements

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, J. L.; Benlliure, J.; Taïeb, J.; Alvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Boutoux, G.; Casarejos, E.; Chatillon, A.; Cortina-Gil, D.; Gorbinet, T.; Heinz, A.; Kelić-Heil, A.; Laurent, B.; Martin, J.-F.; Paradela, C.; Pellereau, E.; Pietras, B.; Ramos, D.; Rodríguez-Tajes, C.; Rossi, D. M.; Simon, H.; Vargas, J.; Voss, B.

    2016-12-01

    A complete kinematics measurement of the two fission fragments was used for the first time to investigate fission dynamics at small and large deformations. Fissioning systems with high excitation energies, compact shapes, and low angular momenta were produced in inverse kinematics by using spallation reactions of lead projectiles. A new generation experimental setup allowed for the first full and unambiguous identification in mass and atomic number of both fission fragments. This measurement permitted us to accurately determine fission cross sections, the charge distribution, and the neutron excess of the fission fragments as a function of the atomic number of the fissioning system. These data are compared with different model calculations to extract information on the value of the dissipation parameter at small and large deformations. The present results do not show any sizable dependence of the nuclear dissipation parameter on temperature or deformation.

  5. Dynamic Responses of Flexible Cylinders with Low Mass Ratio

    NASA Astrophysics Data System (ADS)

    Olaoye, Abiodun; Wang, Zhicheng; Triantafyllou, Michael

    2017-11-01

    Flexible cylinders with low mass ratios such as composite risers are attractive in the offshore industry because they require lower top tension and are less likely to buckle under self-weight compared to steel risers. However, their relatively low stiffness characteristics make them more vulnerable to vortex induced vibrations. Additionally, numerical investigation of the dynamic responses of such structures based on realistic conditions is limited by high Reynolds number, complex sheared flow profile, large aspect ratio and low mass ratio challenges. In the framework of Fourier spectral/hp element method, the current technique employs entropy-viscosity method (EVM) based large-eddy simulation approach for flow solver and fictitious added mass method for structure solver. The combination of both methods can handle fluid-structure interaction problems at high Reynolds number with low mass ratio. A validation of the numerical approach is provided by comparison with experiments.

  6. [Antibacterial prevention of suppurative complications after operations on the large intestine].

    PubMed

    Kuzin, M I; Pomelov, V S; Vandiaev, G K; Ialgashev, T Ia; Blatun, L A

    1983-05-01

    The data on comparative study of complications after operations on the large intestine are presented. During the preoperative period, 62 patients of the control group were treated with phthalylsulfathiazole, nevigramon and nystatin. Thirty-nine patients of the test group were treated with metronidazole and kanamycin monosulfate. Kanamycin monosulfate was used 3 days before the operation in a dose of 0.5 g orally 4 times a day whereas metronidazole in a dose of 0.5 g 3 times a day. The last doses of the drugs were administered 4-5 hours before the operation. After the operations the patients were treated with kanamycin sulfate for 3-5 days in a daily dose of 2 g intramuscularly. The number of the postoperative suppurative complications decreased from 22 to 5 per cent. No lethal outcomes were registered in the test group. The number of lethal outcomes in the control group amounted to 8 per cent.

  7. Numerical study of axial turbulent flow over long cylinders

    NASA Technical Reports Server (NTRS)

    Neves, J. C.; Moin, P.; Moser, R. D.

    1991-01-01

    The effects of transverse curvature are investigated by means of direct numerical simulations of turbulent axial flow over cylinders. Two cases of Reynolds number of about 3400 and layer-thickness-to-cylinder-radius ratios of 5 and 11 were simulated. All essential turbulence scales were resolved in both calculations, and a large number of turbulence statistics were computed. The results are compared with the plane channel results of Kim et al. (1987) and with experiments. With transverse curvature the skin friction coefficient increases and the turbulence statistics, when scaled with wall units, are lower than in the plane channel. The momentum equation provides a scaling that collapses the cylinder statistics, and allows the results to be interpreted in light of the plane channel flow. The azimuthal and radial length scales of the structures in the flow are of the order of the cylinder diameter. Boomerang-shaped structures with large spanwise length scales were observed in the flow.

  8. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks

    PubMed Central

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709
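    A social-potential-fields deployment step can be sketched generically (illustrative force constants, not the parameters evaluated in the paper): each node is repelled by close neighbours and attracted by distant ones, and moves a small step along the net force, so the network spreads out without a global optimizer.

```python
import math

def step(nodes, c_rep=1.0, c_att=0.05, dt=0.1):
    """One synchronous update: inverse-square repulsion plus linear
    attraction between every pair of nodes; each node moves along the
    net force. `nodes` is a list of (x, y) positions."""
    forces = []
    for i, (xi, yi) in enumerate(nodes):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            d = math.hypot(dx, dy)
            mag = c_rep / d ** 2 - c_att * d   # repel when close, attract when far
            fx += mag * dx / d
            fy += mag * dy / d
        forces.append((fx, fy))
    return [(x + dt * fx, y + dt * fy)
            for (x, y), (fx, fy) in zip(nodes, forces)]
```

    Iterating this rule disperses clustered nodes toward the equilibrium spacing where repulsion and attraction balance, which is the behavior-based alternative to rule-based dispersion compared in the paper.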

  9. Secretory vesicles of immune cells contain only a limited number of interleukin 6 molecules.

    PubMed

    Verboogen, Daniëlle R J; Ter Beest, Martin; Honigmann, Alf; van den Bogaart, Geert

    2018-05-01

    Immune cells communicate by releasing large quantities of cytokines. Although the mechanisms of cytokine secretion are increasingly understood, quantitative knowledge of the number of cytokines per vesicle is still lacking. Here, we measured with quantitative microscopy the release rate of vesicles potentially carrying interleukin-6 (IL-6) in human dendritic cells. By comparing this to the total secreted IL-6, we estimate that secretory vesicles contain about 0.5-3 IL-6 molecules, but with a large spread among cells/donors. Moreover, IL-6 did not accumulate within most cells, indicating that synthesis and not trafficking is the bottleneck for IL-6 production. IL-6 accumulated in the Golgi apparatus only in ~ 10% of the cells. Understanding how immune cells produce cytokines is important for designing new immunomodulatory drugs. © 2018 The Authors. FEBS Letters published by John Wiley & Sons Ltd on behalf of Federation of European Biochemical Societies.
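    The per-vesicle estimate follows from dividing the bulk secretion rate by the vesicle release rate; a sketch of the unit conversion (the ~21 kDa molar mass for IL-6 is an assumed round figure, and the rates below are made-up illustrative numbers, not data from the study):

```python
AVOGADRO = 6.022e23
IL6_KDA = 21.0  # assumed approximate molar mass of IL-6, in kDa

def molecules_per_vesicle(secreted_pg_per_cell_h, vesicles_per_cell_h):
    """Bulk cytokine secretion (pg/cell/hour) divided by the vesicle
    release rate (vesicles/cell/hour), converted to molecules/vesicle."""
    grams = secreted_pg_per_cell_h * 1e-12          # pg -> g
    molecules = (grams / (IL6_KDA * 1000.0)) * AVOGADRO  # g -> mol -> molecules
    return molecules / vesicles_per_cell_h
```

    The point of the calculation is that even a seemingly large bulk secretion, once split over the measured vesicle flux, can leave only a handful of molecules per vesicle.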

  10. Glyoxalase 1 copy number variation in patients with well differentiated gastro-entero-pancreatic neuroendocrine tumours (GEP-NET)

    PubMed Central

    Xue, Mingzhan; Shafie, Alaa; Qaiser, Talha; Rajpoot, Nasir M.; Kaltsas, Gregory; James, Sean; Gopalakrishnan, Kishore; Fisk, Adrian; Dimitriadis, Georgios K.; Grammatopoulos, Dimitris K.; Rabbani, Naila; Thornalley, Paul J.; Weickert, Martin O.

    2017-01-01

    Background The glyoxalase-1 gene (GLO1) is a hotspot for copy-number variation (CNV) in human genomes. Increased GLO1 copy number is associated with multidrug resistance in tumour chemotherapy, but the prevalence of GLO1 CNV in gastro-entero-pancreatic neuroendocrine tumours (GEP-NET) is unknown. Methods GLO1 copy-number variation was measured in 39 patients with GEP-NET (midgut NET, n = 25; pancreatic NET, n = 14) after curative or debulking surgical treatment. Primary tumour tissue, surrounding healthy tissue and, where applicable, additional metastatic tumour tissue were analysed using real-time qPCR. Progression and survival following surgical treatment were monitored over 4.2 ± 0.5 years. Results In the pooled GEP-NET cohort, GLO1 copy number in healthy tissue was 2.0 in all samples but significantly increased in primary tumour tissue in 43% of patients with pancreatic NET and in 72% of patients with midgut NET, mainly driven by significantly higher GLO1 copy number in midgut NET. In tissue from additional metastasis resections (18 midgut NET and one pancreatic NET), GLO1 copy number was also increased compared with healthy tissue, but was not significantly different from that in primary tumour tissue. During a mean follow-up of 3-5 years, 8 patients died and 16 patients showed radiological progression. In midgut NET, a high GLO1 copy number was associated with earlier progression. In NETs with increased GLO1 copy number, Glo1 protein expression was increased compared with non-malignant tissue. Conclusions GLO1 copy number was increased in a large percentage of patients with GEP-NET and correlated positively with increased Glo1 protein in tumour tissue. Analysis of GLO1 copy-number variation, particularly in patients with midgut NET, could provide a novel prognostic marker for tumour progression. PMID:29100361

  11. Minimal Absent Words in Four Human Genome Assemblies

    PubMed Central

    Garcia, Sara P.; Pinho, Armando J.

    2011-01-01

    Minimal absent words have been computed in genomes of organisms from all domains of life. Here, we aim to contribute to the catalogue of human genomic variation by investigating the variation in number and content of minimal absent words within a species, using four human genome assemblies. We compare the reference human genome GRCh37 assembly, the HuRef assembly of the genome of Craig Venter, the NA12878 assembly from cell line GM12878, and the YH assembly of the genome of a Han Chinese individual. We find the variation in number and content of minimal absent words between assemblies more significant for large and very large minimal absent words, where the biases of sequencing and assembly methodologies become more pronounced. Moreover, we find generally greater similarity between the human genome assemblies sequenced with capillary-based technologies (GRCh37 and HuRef) than between the human genome assemblies sequenced with massively parallel technologies (NA12878 and YH). Finally, as expected, we find the overall variation in number and content of minimal absent words within a species to be generally smaller than the variation between species. PMID:22220210
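    A minimal absent word (MAW) is a string that does not occur in the sequence while all of its proper factors do; equivalently, it is absent while both its longest proper prefix and its longest proper suffix occur. A brute-force sketch, fine for toy sequences but not the indexing structures used for whole genomes:

```python
from itertools import product

def minimal_absent_words(s, alphabet="ACGT", max_len=4):
    """Brute-force MAWs up to max_len: w is a MAW iff w is absent from s
    while both w[:-1] and w[1:] occur as factors of s."""
    present = set()
    for L in range(1, max_len + 1):
        for i in range(len(s) - L + 1):
            present.add(s[i:i + L])
    maws = []
    for L in range(2, max_len + 1):
        for tup in product(alphabet, repeat=L):
            w = "".join(tup)
            if w not in present and w[:-1] in present and w[1:] in present:
                maws.append(w)
    return sorted(maws)
```

    For the hypothetical sequence "AACGT", every length-2 string built from the four occurring bases that is itself absent is a MAW, and "AAA" is the single longer one.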

  12. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the forseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately the, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors, evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
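    The effect of leaving scales unresolved can be quantified on a toy signal: a sharp spectral cutoff keeps modes with |k| ≤ k_c, and Parseval's relation gives the fraction of the signal's energy the filtered field retains (a generic DFT sketch, not the simulation codes discussed in the paper):

```python
import cmath

def resolved_energy_fraction(u, cutoff):
    """Fraction of the discrete energy sum |u_hat_k|^2 retained when a
    sharp spectral filter keeps only modes with |k| <= cutoff."""
    n = len(u)
    total = kept = 0.0
    for k in range(n):
        uk = sum(u[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
        e = abs(uk) ** 2
        total += e
        if min(k, n - k) <= cutoff:   # signed wavenumber magnitude
            kept += e
    return kept / total
```

    The energy discarded by the filter is exactly the subgrid contribution whose acoustic radiation the paper argues cannot simply be neglected.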

  13. Monitoring the environment and human sentiment on the Great Barrier Reef: Assessing the potential of collective sensing.

    PubMed

    Becken, Susanne; Stantic, Bela; Chen, Jinyan; Alaei, Ali Reza; Connolly, Rod M

    2017-12-01

    With the growth of smartphone usage, the number of social media posts has significantly increased and represents potentially valuable information for management, including of natural resources and the environment. Already, evidence of using 'human sensors' in crisis management suggests that collective knowledge could be used to complement traditional monitoring. This research uses Twitter data posted from the Great Barrier Reef region, Australia, to assess whether the extent and type of data could be of use to Great Barrier Reef organisations as part of their monitoring programs. The analysis reveals that large amounts of tweets, covering the geographic area of interest, are available and that the pool of information providers is greatly enhanced by the large number of tourists to this region. A keyword and sentiment analysis demonstrates the usefulness of the Twitter data, but also highlights that the actual number of Reef-related tweets is comparatively small and lacks specificity. Suggestions for further steps towards the development of an integrative data platform that incorporates social media are provided. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Measuring mercury and other elemental components in tree rings

    USGS Publications Warehouse

    Gillan, C.; Hollerman, W.A.; Doyle, T.W.; Lewis, T.E.

    2004-01-01

    There has been considerable interest in measuring heavy metal pollution, such as mercury, using tree ring analysis. Since 1970, this method has provided a historical snapshot of pollutant concentrations near hazardous waste sites. Traditional methods of analysis have long been used with heavy metal pollutants such as mercury. These methods, such as atomic fluorescence and laser ablation, are sometimes time consuming and expensive to implement. In recent years, ion beam techniques, such as Particle Induced X-Ray Emission (PIXE), have been used to measure large numbers of elements. Most of the existing research in this area has been completed for low to medium atomic number pollutants, such as titanium, cobalt, nickel, and copper. Due to the reduction of sensitivity, it is often difficult or impossible to use traditional low energy (few MeV) PIXE analysis for pollutants with large atomic numbers. For example, the PIXE detection limit for mercury was recently measured to be about 1 ppm for a spiked Southern Magnolia wood sample [ref. 1]. This presentation will compare PIXE and standard chemical concentration results for a variety of wood samples.

  16. Portraiture lens concept in a mobile phone camera

    NASA Astrophysics Data System (ADS)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within a smartphone casing. The current general requirement that mobile cameras deliver good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With the main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with a double ray path; hence, the field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.
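    The geometric relationship invoked here can be made explicit. In the standard thin-lens approximation (a textbook result, not a formula given in this record), for a subject at distance u much larger than the focal length f, circle of confusion c, and f-number N = f/D:

```latex
% Thin-lens depth-of-field approximation (standard textbook form):
\mathrm{DoF} \;\approx\; \frac{2\,u^{2} N c}{f^{2}} \;=\; \frac{2\,u^{2} c}{f\,D},
% so at fixed focal length f and subject distance u, DoF falls in
% inverse proportion to the pupil (aperture) diameter D.
```

    This is why enlarging the aperture diameter, rather than pushing the f-number alone, is the direct route to the shallow depth of field sought for portraiture.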

  17. Enhanced cellular transport and drug targeting using dendritic nanostructures

    NASA Astrophysics Data System (ADS)

    Kannan, R. M.; Kolhe, Parag; Kannan, Sujatha; Lieh-Lai, Mary

    2003-03-01

    Dendrimers and hyperbranched polymers possess highly branched architectures, with a large number of controllable, tailorable, 'peripheral' functionalities. Since the surface chemistry of these materials can be modified with relative ease, these materials have tremendous potential in targeted drug delivery. The large density of end groups can also be tailored to create enhanced affinity to targeted cells, and can also encapsulate drugs and deliver them in a controlled manner. We are developing tailor-modified dendritic systems for drug delivery. Synthesis, drug/ligand conjugation, in vitro cellular and in vivo drug delivery, and the targeting efficiency to the cell are being studied systematically using a wide variety of experimental tools. Results on PAMAM dendrimers and polyol hyperbranched polymers suggest that: (1) These materials complex/encapsulate a large number of drug molecules and release them at tailorable rates; (2) The drug-dendrimer complex is transported very rapidly through an A549 lung epithelial cancer cell line, compared to free drug, perhaps by endocytosis. The ability of the drug-dendrimer-ligand complexes to target specific asthma and cancer cells is currently being explored using in vitro and in vivo animal models.

  18. Clonal evolution in relapsed and refractory diffuse large B-cell lymphoma is characterized by high dynamics of subclones.

    PubMed

    Melchardt, Thomas; Hufnagl, Clemens; Weinstock, David M; Kopp, Nadja; Neureiter, Daniel; Tränkenschuh, Wolfgang; Hackl, Hubert; Weiss, Lukas; Rinnerthaler, Gabriel; Hartmann, Tanja N; Greil, Richard; Weigert, Oliver; Egle, Alexander

    2016-08-09

    Little information is available about the role of certain mutations for clonal evolution and the clinical outcome during relapse in diffuse large B-cell lymphoma (DLBCL). Therefore, we analyzed formalin-fixed-paraffin-embedded tumor samples from first diagnosis, relapsed or refractory disease from 28 patients using next-generation sequencing of the exons of 104 coding genes. Non-synonymous mutations were present in 74 of the 104 genes tested. Primary tumor samples showed a median of 8 non-synonymous mutations (range: 0-24) with the used gene set. Lower numbers of non-synonymous mutations in the primary tumor were associated with a better median OS compared with higher numbers (28 versus 15 months, p=0.031). We observed three patterns of clonal evolution during relapse of disease: large global change, subclonal selection and no or minimal change possibly suggesting preprogrammed resistance. We conclude that targeted re-sequencing is a feasible and informative approach to characterize the molecular pattern of relapse and it creates novel insights into the role of dynamics of individual genes.

  19. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia.

    PubMed

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA-DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA-DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place.

  20. MHC variability supports dog domestication from a large number of wolves: high diversity in Asia

    PubMed Central

    Niskanen, A K; Hagström, E; Lohi, H; Ruokonen, M; Esparza-Salas, R; Aspi, J; Savolainen, P

    2013-01-01

    The process of dog domestication is still somewhat unresolved. Earlier studies indicate that domestic dogs from all over the world have a common origin in Asia. So far, major histocompatibility complex (MHC) diversity has not been studied in detail in Asian dogs, although high levels of genetic diversity are expected at the domestication locality. We sequenced the second exon of the canine MHC gene DLA–DRB1 from 128 Asian dogs and compared our data with a previously published large data set of MHC alleles, mostly from European dogs. Our results show that Asian dogs have a higher MHC diversity than European dogs. We also estimated that there is only a small probability that new alleles have arisen by mutation since domestication. Based on the assumption that all of the currently known 102 DLA–DRB1 alleles come from the founding wolf population, we simulated the number of founding wolf individuals. Our simulations indicate an effective population size of at least 500 founding wolves, suggesting that the founding wolf population was large or that backcrossing has taken place. PMID:23073392

  1. A novel approach for copy number variation analysis by combining multiplex PCR with matrix-assisted laser desorption ionization time-of-flight mass spectrometry.

    PubMed

    Gao, Yonghui; Chen, Xiaoli; Wang, Jianhua; Shangguan, Shaofang; Dai, Yaohua; Zhang, Ting; Liu, Junling

    2013-06-20

    With the increasing interest in copy number variation as it pertains to human genomic variation, common phenotypes, and disease susceptibility, there is a pressing need for methods to accurately identify copy number. In this study, we developed a simple approach that combines multiplex PCR with matrix-assisted laser desorption ionization time-of-flight mass spectrometry for submicroscopic copy number variation detection. Two pairs of primers were used to simultaneously amplify query and endogenous control regions in the same reaction. Using a base extension reaction, the two amplicons were then distinguished and quantified in a mass spectrometry map. The peak ratio between the test region and the endogenous control region was manually calculated. The relative copy number could be determined by comparing the peak ratio between the test and control samples. This method generated a copy number measurement comparable to those produced by two other commonly used methods - multiplex ligation-dependent probe amplification and quantitative real-time PCR. Furthermore, it can discriminate a wide range of copy numbers. With a typical 384-format SpectroCHIP, at least six loci on 384 samples can be analyzed simultaneously in a hexaplex assay, making this assay adaptable for high throughput, and potentially applicable for large-scale association studies. Copyright © 2013 Elsevier B.V. All rights reserved.
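    The manual calculation described, comparing the query/control peak ratio in a test sample against the same ratio in a reference (normal diploid) sample, can be sketched directly (hypothetical peak areas, not data from the study):

```python
def relative_copy_number(test_query, test_control, ref_query, ref_control,
                         reference_copies=2):
    """Copy number of the query region in the test sample, assuming the
    reference sample carries `reference_copies` (diploid) copies and that
    peak intensity is proportional to template amount."""
    test_ratio = test_query / test_control   # peak ratio in the test sample
    ref_ratio = ref_query / ref_control      # peak ratio in the reference sample
    return reference_copies * test_ratio / ref_ratio
```

    A doubled peak ratio relative to the reference thus reads out as four copies, and a halved ratio as a heterozygous deletion.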

  2. Asymptotic Distributions of Coalescence Times and Ancestral Lineage Numbers for Populations with Temporally Varying Size

    PubMed Central

    Chen, Hua; Chen, Kun

    2013-01-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939
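
The exact distributions discussed in this record can be checked against simulation in the constant-size baseline case (the Griffiths 1984 setting). A minimal sketch, assuming the standard coalescent with time in units of N generations, where the waiting time while k lineages remain is exponential with rate k(k−1)/2 and E[TMRCA] = 2(1 − 1/n):

```python
import random

def sample_tmrca(n, rng):
    """One draw of the TMRCA for a sample of size n under the standard
    constant-size coalescent (time in units of N generations): while k
    lineages remain, the waiting time is Exp(k*(k-1)/2)."""
    t = 0.0
    for k in range(n, 1, -1):
        t += rng.expovariate(k * (k - 1) / 2.0)
    return t

rng = random.Random(42)
n, reps = 20, 20000
mean_tmrca = sum(sample_tmrca(n, rng) for _ in range(reps)) / reps
# Theory: E[TMRCA] = 2 * (1 - 1/n) = 1.9 for n = 20; the simulated mean
# should land close to this value.
```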

  3. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    PubMed

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.

  4. Experimental Investigation of Free-Convection Heat Transfer in Vertical Tube at Large Grashof Numbers

    NASA Technical Reports Server (NTRS)

    Eckert, E R G; Diaguila, A J

    1955-01-01

    Report presents the results of an investigation conducted to study free-convection heat transfer in a stationary vertical tube closed at the bottom. The walls of the tube were heated, and heated air in the tube was continuously replaced by fresh cool air at the top. The tube was designed to provide a gravitational field with Grashof numbers of a magnitude comparable with those generated by the centrifugal field in rotating-blade coolant passages (10^8 to 10^13). Local heat-transfer coefficients in the turbulent-flow range and the temperature field within the fluid were obtained.
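
To give a feel for the Grashof-number range quoted in this record, here is the defining formula Gr = gβΔT·L³/ν² evaluated for illustrative conditions. The numerical values (air near room temperature, a 1 m tube, a 50 K temperature difference) are my assumptions, not values from the report.

```python
def grashof(g, beta, delta_t, length, nu):
    """Grashof number Gr = g * beta * dT * L^3 / nu^2 for free convection:
    ratio of buoyancy to viscous forces."""
    return g * beta * delta_t * length**3 / nu**2

# Illustrative values (assumed, not from the report): air near 300 K
# (beta ~ 1/T), a 1 m heated tube, a 50 K wall-to-air temperature
# difference, kinematic viscosity ~1.6e-5 m^2/s.
gr = grashof(g=9.81, beta=1 / 300.0, delta_t=50.0, length=1.0, nu=1.6e-5)
# Gr comes out in the 10^9 range, inside the report's 10^8 to 10^13 span.
```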

  5. HST hot-Jupiter transmission spectral survey: from clear to cloudy exoplanets

    NASA Astrophysics Data System (ADS)

    Sing, David K.; Fortney, Jonathan J.; Nikolov, Nikolay; Wakeford, Hannah; Kataria, Tiffany; Evans, Tom M.; Aigrain, Suzanne; Ballester, Gilda E.; Burrows, Adam Seth; Deming, Drake; Desert, Jean-Michel; Gibson, Neale; Henry, Gregory W.; Huitson, Catherine; Knutson, Heather; Lecavelier des Etangs, Alain; Pont, Frederic; Showman, Adam P.; Vidal-Madjar, Alfred; Williamson, Michael W.; Wilson, Paul A.

    2016-01-01

    The large number of transiting exoplanets has prompted a new era of atmospheric studies, with comparative exoplanetology now possible. Here we present the comprehensive results from a Large program with the Hubble Space Telescope, which has recently obtained optical and near-IR transmission spectra for eight hot-Jupiter exoplanets in conjunction with warm Spitzer transit photometry. The spectra show a wide range of spectral behavior, which indicates diverse cloud and haze properties in their atmospheres. We will discuss the overall findings from the survey, comment on common trends observed in the exoplanet spectra, and remark on their theoretical implications.

  6. A Comparative Study of Four Methods for the Detection of Nematode Eggs and Large Protozoan Cysts in Mandrill Faecal Material.

    PubMed

    Pouillevet, Hanae; Dibakou, Serge-Ely; Ngoubangoye, Barthélémy; Poirotte, Clémence; Charpentier, Marie J E

    2017-01-01

    Coproscopical methods like sedimentation and flotation techniques are widely used in the field for studying simian gastrointestinal parasites. Four parasites of known zoonotic potential were studied in a free-ranging, non-provisioned population of mandrills (Mandrillus sphinx): 2 nematodes (Necator americanus/Oesophagostomum sp. complex and Strongyloides sp.) and 2 protozoan species (Balantidium coli and Entamoeba coli). Different coproscopical techniques are available but they are rarely compared to evaluate their efficiency to retrieve parasites. In this study 4 different field-friendly methods were compared. A sedimentation method and 3 different McMaster methods (using sugar, salt, and zinc sulphate solutions) were performed on 47 faecal samples collected from different individuals of both sexes and all ages. First, we show that McMaster flotation methods are appropriate to detect and thus quantify large protozoan cysts. Second, zinc sulphate McMaster flotation allows the retrieval of a higher number of parasite taxa compared to the other 3 methods. This method further shows the highest probability to detect each of the studied parasite taxa. Altogether our results show that zinc sulphate McMaster flotation appears to be the best technique to use when studying nematodes and large protozoa. © 2017 S. Karger AG, Basel.

  7. Accuracy of the lattice Boltzmann method for describing the behavior of a gas in the continuum limit.

    PubMed

    Kataoka, Takeshi; Tsutahara, Michihisa

    2010-11-01

    The accuracy of the lattice Boltzmann method (LBM) for describing the behavior of a gas in the continuum limit is systematically investigated. The asymptotic analysis for small Knudsen numbers is carried out to derive the corresponding fluid-dynamics-type equations, and the errors of the LBM are estimated by comparing them with the correct fluid-dynamics-type equations. We discuss the following three important cases: (I) the Mach number of the flow is much smaller than the Knudsen number, (II) the Mach number is of the same order as the Knudsen number, and (III) the Mach number is finite. From the von Karman relation, the above three cases correspond to the flows of (I) small Reynolds number, (II) finite Reynolds number, and (III) large Reynolds number, respectively. The analysis is made with the information only of the fundamental properties of the lattice Boltzmann models without stepping into their detailed form. The results are therefore applicable to various lattice Boltzmann models that satisfy the fundamental properties used in the analysis.

  8. What is the publication rate for presentations given at the British Academic Conference in Otolaryngology (BACO)?

    PubMed

    Lau, A S; Adan, G H; Krishnan, M; Leong, S C

    2017-04-01

    The publication rate of some large academic meetings such as the American Academy of Otolaryngology-Head and Neck Surgery has been reported as 32%. We aimed to compare the rate of publication at the British Academic Conference in Otolaryngology (BACO) to allow surveillance of research activity in the United Kingdom (UK). The abstract records of both BACO 2009 and 2012 were examined. The MEDLINE database was searched using PubMed (http://www.ncbi.nlm.nih.gov/pubmed) and an iterative approach. We recorded time to publication as well as the authors' region and journal. The main outcome measures were publication rate by conference, region, and journal. Twice the number of presentations were made at BACO 2012 (n = 814) compared to BACO 2009 (n = 387). Absolute numbers of publications were 158 in 2012 and 92 in 2009. Overall, the publication rate dropped from 24% in 2009 to 19% in 2012. This difference in proportions was not significant (P = 0.08). The number of abstracts accepted for BACO 2012 doubled from BACO 2009 in nearly every subspecialty category, except the general/training category, which trebled. For both conferences, head and neck was the largest subspecialty abstract category, as well as the largest subspecialty publication category. This study showed that the majority of abstracts presented at BACO 2009 and 2012 did not progress to publication. The rate of publication was similar to that seen in other general ENT meetings but does not compare favourably to the 69% rate seen for presentations made at the Otorhinolaryngological Research Society (ORS). The large increase in accepted abstracts at BACO 2012 may reflect growing competition for entry to specialist training. © 2016 John Wiley & Sons Ltd.
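
The reported non-significant difference (92/387 vs 158/814, P = 0.08) is consistent with a standard two-sided two-proportion z-test. The abstract does not state which test the authors used, so the sketch below is an assumption about the method, using only the counts given in the record.

```python
from math import sqrt, erfc

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test on x1/n1 vs x2/n2.
    Returns (z, p_value) using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

# BACO 2009: 92 of 387 published (24%); BACO 2012: 158 of 814 (19%).
z, p = two_proportion_z_test(92, 387, 158, 814)
# p comes out near the reported 0.08.
```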

  9. In silico peptide-binding predictions of passerine MHC class I reveal similarities across distantly related species, suggesting convergence on the level of protein function.

    PubMed

    Follin, Elna; Karlsson, Maria; Lundegaard, Claus; Nielsen, Morten; Wallin, Stefan; Paulsson, Kajsa; Westerdahl, Helena

    2013-04-01

    The major histocompatibility complex (MHC) genes are the most polymorphic genes found in the vertebrate genome, and they encode proteins that play an essential role in the adaptive immune response. Many songbirds (passerines) have been shown to have a large number of transcribed MHC class I genes compared to most mammals. To elucidate the reason for this large number of genes, we compared 14 MHC class I alleles (α1-α3 domains), from great reed warbler, house sparrow and tree sparrow, via phylogenetic analysis, homology modelling and in silico peptide-binding predictions to investigate their functional and genetic relationships. We found more pronounced clustering of the MHC class I allomorphs (allele specific proteins) in regards to their function (peptide-binding specificities) compared to their genetic relationships (amino acid sequences), indicating that the high number of alleles is of functional significance. The MHC class I allomorphs from house sparrow and tree sparrow, species that diverged 10 million years ago (MYA), had overlapping peptide-binding specificities, and these similarities across species were also confirmed in phylogenetic analyses based on amino acid sequences. Notably, there were also overlapping peptide-binding specificities in the allomorphs from house sparrow and great reed warbler, although these species diverged 30 MYA. This overlap was not found in a tree based on amino acid sequences. Our interpretation is that convergent evolution on the level of the protein function, possibly driven by selection from shared pathogens, has resulted in allomorphs with similar peptide-binding repertoires, although trans-species evolution in combination with gene conversion cannot be ruled out.

  10. Fast Simulation of Dynamic Ultrasound Images Using the GPU.

    PubMed

    Storve, Sigurd; Torp, Hans

    2017-10-01

    Simulated ultrasound data is a valuable tool for development and validation of quantitative image analysis methods in echocardiography. Unfortunately, simulation time can become prohibitive for phantoms consisting of a large number of point scatterers. The COLE algorithm by Gao et al. is a fast convolution-based simulator that trades simulation accuracy for improved speed. We present highly efficient parallelized CPU and GPU implementations of the COLE algorithm with an emphasis on dynamic simulations involving moving point scatterers. We argue that it is crucial to minimize the amount of data transfers from the CPU to achieve good performance on the GPU. We achieve this by storing the complete trajectories of the dynamic point scatterers as spline curves in the GPU memory. This leads to good efficiency when simulating sequences consisting of a large number of frames, such as B-mode and tissue Doppler data for a full cardiac cycle. In addition, we propose a phase-based subsample delay technique that efficiently eliminates flickering artifacts seen in B-mode sequences when COLE is used without enough temporal oversampling. To assess the performance, we used a laptop computer and a desktop computer, each equipped with a multicore Intel CPU and an NVIDIA GPU. Running the simulator on a high-end TITAN X GPU, we observed two orders of magnitude speedup compared to the parallel CPU version, three orders of magnitude speedup compared to simulation times reported by Gao et al. in their paper on COLE, and a speedup of 27000 times compared to the multithreaded version of Field II, using numbers reported in a paper by Jensen. We hope that by releasing the simulator as an open-source project we will encourage its use and further development.
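
The phase-based subsample delay mentioned in this record can be illustrated for complex baseband data: an integer-sample shift plus a carrier phase rotation for the fractional remainder. This is a sketch of the general idea, not the authors' implementation; the numerical values are illustrative, and the approach is exact only for narrowband signals.

```python
import cmath

def fractional_delay_phase(x, f0, fs, tau):
    """Apply delay tau (seconds) to complex baseband samples x acquired
    at rate fs with demodulation/carrier frequency f0: shift by the
    nearest whole number of samples, then rotate the phase by
    exp(-i*2*pi*f0*tau_frac) for the fractional remainder."""
    n_int = int(round(tau * fs))
    tau_frac = tau - n_int / fs
    rot = cmath.exp(-2j * cmath.pi * f0 * tau_frac)
    shifted = [0j] * n_int + x[:len(x) - n_int] if n_int > 0 else x
    return [s * rot for s in shifted]

# For a pure carrier the phase method reproduces the exactly delayed
# signal; broadband pulses incur a small approximation error.
fs, f0, tau = 40e6, 5e6, 3.7e-8            # illustrative values
x = [cmath.exp(2j * cmath.pi * f0 * n / fs) for n in range(64)]
y = fractional_delay_phase(x, f0, fs, tau)
```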

  11. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.
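
A cost function with separate magnitude and angular terms, as described in this record, might look like the sketch below. The abstract does not give the exact form of τ, so the particular combination, the weights, and the names here are assumptions for illustration only.

```python
from math import acos, sqrt

def field_cost(b_model, b_obs, w_mag=1.0, w_ang=1.0):
    """Illustrative cost for comparing a modeled and a measured magnetic
    field 3-vector: a relative-magnitude error plus the angle (radians)
    between the two vectors. Not the paper's exact tau."""
    dot = sum(m * o for m, o in zip(b_model, b_obs))
    nm = sqrt(sum(m * m for m in b_model))
    no = sqrt(sum(o * o for o in b_obs))
    mag_err = abs(nm - no) / no
    ang_err = acos(max(-1.0, min(1.0, dot / (nm * no))))
    return w_mag * mag_err + w_ang * ang_err

# Identical vectors give zero cost; a pure rotation of the field
# contributes only through the angular term.
c0 = field_cost((0.0, 0.0, 100.0), (0.0, 0.0, 100.0))
# → 0.0
```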

  12. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, Thiago V.; Morley, Steven K.

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  13. Effects of the seasonal cycle on superrotation in planetary atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.

    2014-05-20

    The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop. In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter—the atmospheric Newtonian cooling time—to achieve a superrotating climatology.

  14. Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems

    NASA Astrophysics Data System (ADS)

    Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao

    2016-02-01

    A frequency offset estimation (FOE) algorithm based on fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples and the sign of the FO is determined by an FFT-based interpolation discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with the previous algorithms based on argument-FFT, the proposed one has low complexity and can still work effectively with a relatively small number of samples.
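
As background to the FFT-based search this record builds on, here is a generic DFT-peak frequency-offset estimator applied to an unmodulated tone. This is a stand-in illustrating the bin-resolution limitation that motivates interpolation; it is not the paper's argument-FFT algorithm, and all values are illustrative.

```python
import cmath

def fft_peak_foe(x, fs):
    """Coarse frequency-offset estimate: locate the DFT bin of maximum
    magnitude (O(N^2) direct DFT for clarity; a real implementation
    would use an FFT). Resolution is fs/N, which is why interpolation
    or more samples are needed for fine estimates."""
    n = len(x)
    best_k, best_mag = 0, -1.0
    for k in range(n):
        s = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    if best_k > n // 2:        # map bin index to signed frequency
        best_k -= n
    return best_k * fs / n

fs, f_offset = 1e6, 112e3                  # illustrative values
x = [cmath.exp(2j * cmath.pi * f_offset * m / fs) for m in range(64)]
est = fft_peak_foe(x, fs)
# est lands within one bin width (fs/64 ≈ 15.6 kHz) of the true offset.
```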

  15. Pressure Distributions for the GA(W)-2 Airfoil with 20% Aileron, 25% Slotted Flap and 30% Fowler Flap

    NASA Technical Reports Server (NTRS)

    Wentz, W. H., Jr.; Fiscko, K. A.

    1978-01-01

    Surface pressure distributions were measured for the 13% thick GA(W)-2 airfoil section fitted with 20% aileron, 25% slotted flap and 30% Fowler flap. All tests were conducted at a Reynolds number of 2.2 × 10^6 and a Mach number of 0.13. Pressure distribution and force and moment coefficient measurements are compared with theoretical results for a number of cases. Agreement between theory and experiment is generally good for low angles of attack and small flap deflections. For high angles and large flap deflections where regions of separation are present, the theory is inadequate. Theoretical drag predictions are poor for all flap-extended cases.

  16. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. 
    With only three segments, the average number of required iterations with the chain-method is 3.0 ± 0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.

  17. Dual-Level Method for Estimating Multistructural Partition Functions with Torsional Anharmonicity.

    PubMed

    Bao, Junwei Lucas; Xing, Lili; Truhlar, Donald G

    2017-06-13

    For molecules with multiple torsions, an accurate evaluation of the molecular partition function requires consideration of multiple structures and their torsional-potential anharmonicity. We previously developed a method called MS-T for this problem, and it requires an exhaustive conformational search with frequency calculations for all the distinguishable conformers; this can become expensive for molecules with a large number of torsions (and hence a large number of structures) if it is carried out with high-level methods. In the present work, we propose a cost-effective method to approximate the MS-T partition function when there are a large number of structures, and we test it on a transition state that has eight torsions. This new method is a dual-level method that combines an exhaustive conformer search carried out by a low-level electronic structure method (for instance, AM1, which is very inexpensive) and selected calculations with a higher-level electronic structure method (for example, density functional theory with a functional that is suitable for conformational analysis and thermochemistry). To provide a severe test of the new method, we consider a transition state structure that has 8 torsional degrees of freedom; this transition state structure is formed along one of the reaction pathways of the hydrogen abstraction reaction (at carbon-1) of ketohydroperoxide (KHP; its IUPAC name is 4-hydroperoxy-2-pentanone) by OH radical. We find that our proposed dual-level method is able to significantly reduce the computational cost for computing MS-T partition functions for this test case with a large number of torsions and with a large number of conformers because we carry out high-level calculations for only a fraction of the distinguishable conformers found by the low-level method. 
    In the example studied here, the dual-level method with 40 high-level optimizations (1.8% of the number of optimizations in a coarse-grained full search and 0.13% of the number of optimizations in a fine-grained full search) reproduces the full calculation of the high-level partition function within a factor of 1.0 to 2.0 from 200 to 1000 K. The error in the dual-level method can be further reduced to factors of 0.6 to 1.1 over the whole temperature interval from 200 to 2400 K by optimizing 128 structures (5.9% of the number of optimizations in a coarse-grained full search and 0.41% of the number of optimizations in a fine-grained full search). These factor-of-two or better errors are small compared to errors up to a factor of 1.0 × 10^3 if one neglects multistructural effects for the case under study.

  18. Application of the High Gradient hydrodynamics code to simulations of a two-dimensional zero-pressure-gradient turbulent boundary layer over a flat plate

    NASA Astrophysics Data System (ADS)

    Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.

    2013-11-01

    The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that better reproduce the reduced physics of a flat plate boundary layer to compare with complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers. High-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow and the Mach number for the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. A part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.

  19. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
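
The key statement in this record, that the scaled cumulant generating function (SCGF) of a current equals a maximal Floquet exponent, can be sketched numerically for a two-state process with time-periodic rates: tilt the generator with a counting field s on one transition, propagate over one period, and extract the leading growth rate by power iteration. The specific rate functions below are my own illustrative choice, not one of the paper's case studies.

```python
import math

def scgf(s, period=1.0, steps=2000, iters=50):
    """SCGF of the current through the 0->1 transition of a two-state
    Markov process with time-periodic rates: the maximal Floquet
    exponent of the s-tilted master-equation generator, computed as
    log(leading eigenvalue of the one-period propagator) / period,
    via explicit Euler stepping and power iteration."""
    dt = period / steps

    def tilted(t):
        w01 = 2.0 + math.sin(2 * math.pi * t / period)   # 0 -> 1 rate
        w10 = 1.5 + math.cos(2 * math.pi * t / period)   # 1 -> 0 rate
        # Rows of the tilted generator; e^s weights each counted jump.
        return ((-w01, w10), (w01 * math.exp(s), -w10))

    v = [1.0, 1.0]
    growth = 0.0
    for _ in range(iters):
        for k in range(steps):                 # one period of evolution
            a, b = tilted(k * dt)
            v = [v[0] + dt * (a[0] * v[0] + a[1] * v[1]),
                 v[1] + dt * (b[0] * v[0] + b[1] * v[1])]
        norm = abs(v[0]) + abs(v[1])
        growth = math.log(norm)                # log growth this period
        v = [v[0] / norm, v[1] / norm]         # renormalize (power iter.)
    return growth / period

# At s = 0 the tilted generator conserves probability, so the SCGF
# vanishes; for s > 0 it is positive here because the counted jump
# occurs at a positive mean rate.
```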

  20. Large- and small-scale constraints on power spectra in Omega = 1 universes

    NASA Technical Reports Server (NTRS)

    Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.

    1993-01-01

    The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.

  1. Heat transport in Rayleigh-Bénard convection and angular momentum transport in Taylor-Couette flow: a comparative study.

    PubMed

    Brauckmann, Hannes J; Eckhardt, Bruno; Schumacher, Jörg

    2017-03-13

    Rayleigh-Bénard convection and Taylor-Couette flow are two canonical flows that have many properties in common. We here compare the two flows in detail for parameter values where the Nusselt numbers, i.e. the thermal transport and the angular momentum transport normalized by the corresponding laminar values, coincide. We study turbulent Rayleigh-Bénard convection in air at Rayleigh number Ra = 10^7 and Taylor-Couette flow at shear Reynolds number Re_S = 2×10^4 for two different mean rotation rates but the same Nusselt numbers. For individual pairwise related fields and convective currents, we compare the probability density functions normalized by the corresponding root mean square values and taken at different distances from the wall. We find one rotation number for which there is very good agreement between the mean profiles of the two corresponding quantities temperature and angular momentum. Similarly, there is good agreement between the fluctuations in temperature and velocity components. For the heat and angular momentum currents, there are differences in the fluctuations outside the boundary layers that increase with overall rotation and can be related to differences in the flow structures in the boundary layer and in the bulk. The study extends the similarities between the two flows from global quantities to local quantities and reveals the effects of rotation on the transport. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  2. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.

    2011-04-10

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  3. Heat transport in Rayleigh–Bénard convection and angular momentum transport in Taylor–Couette flow: a comparative study

    PubMed Central

    Brauckmann, Hannes J.

    2017-01-01

    Rayleigh–Bénard convection and Taylor–Couette flow are two canonical flows that have many properties in common. We here compare the two flows in detail for parameter values where the Nusselt numbers, i.e. the thermal transport and the angular momentum transport normalized by the corresponding laminar values, coincide. We study turbulent Rayleigh–Bénard convection in air at Rayleigh number Ra = 10^7 and Taylor–Couette flow at shear Reynolds number Re_S = 2×10^4 for two different mean rotation rates but the same Nusselt numbers. For individual pairwise related fields and convective currents, we compare the probability density functions normalized by the corresponding root mean square values and taken at different distances from the wall. We find one rotation number for which there is very good agreement between the mean profiles of the two corresponding quantities temperature and angular momentum. Similarly, there is good agreement between the fluctuations in temperature and velocity components. For the heat and angular momentum currents, there are differences in the fluctuations outside the boundary layers that increase with overall rotation and can be related to differences in the flow structures in the boundary layer and in the bulk. The study extends the similarities between the two flows from global quantities to local quantities and reveals the effects of rotation on the transport. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167575

  4. Transition of the Laminar Boundary Layer on a Delta Wing with 74 degree Sweep in Free Flight at Mach Numbers from 2.8 to 5.3

    NASA Technical Reports Server (NTRS)

    Chapman, Gary T.

    1961-01-01

    The tests were conducted at Mach numbers from 2.8 to 5.3, with model surface temperatures small compared to the boundary-layer recovery temperature. The effects of Mach number, temperature ratio, unit Reynolds number, leading-edge diameter, and angle of attack were investigated in an exploratory fashion. The effects of heat-transfer condition (i.e., the ratio of wall temperature to total temperature) and Mach number cannot be separated explicitly in free-flight tests. However, the data of the present report, as well as those of NACA TN 3473, were found to be more consistent when plotted versus temperature ratio. Decreasing the temperature ratio increased the transition Reynolds number. The effect of unit Reynolds number was small, as was the effect of leading-edge diameter within the range tested. At small angles of attack, transition moved forward on the windward surface and rearward on the leeward surface. This trend was reversed at high angles of attack (6 deg to 18 deg). Possible reasons for this are the reduction of crossflow on the windward side and the influence of the lifting vortices on the leeward surface. When the transition results on the 74° delta wing were compared to data at similar test conditions for an unswept leading edge, the comparison bore out the results of earlier research at nearly zero heat transfer; namely, sweep causes a large reduction in the transition Reynolds number.

  5. Very large hail occurrence in Poland from 2007 to 2015

    NASA Astrophysics Data System (ADS)

    Pilorz, Wojciech

    2015-10-01

    Very large hail is defined as a hailstone with a diameter of 5 cm or more. The phenomenon is rare, but its significant consequences, not only for agriculture but also for automobiles, households and people outdoors, make it essential to examine. Hail occurrence is closely connected with storm frequency and storm type; the storm type most likely to produce hail is the supercell. The geographical distribution of hailstorms was compared with the geographical distribution of storms in Poland, and similarities were found. The area with the largest number of storms is southeastern Poland, and the analyzed European Severe Weather Database (ESWD) data showed that most very large hail reports occurred in this part of the country. The probable reason is that tropical air masses persist longest over southeastern Poland. The spatial distribution analysis also shows more hail incidents over the Upper Silesia, Lesser Poland, Subcarpathia and Świętokrzyskie regions. The information source on hail occurrence was the ESWD, an open database to which anyone can add reports and in which reports meeting given search criteria can be retrieved. In total, 69 hailstorms from the period 2007-2015 were examined; they produced 121 very large hail reports. A large disproportion in the number of hailstorms and hail reports between individual years was found. The very large hail season in Poland begins in May and ends in September, peaking in July. Most hail occurs between 12:00 and 17:00 UTC, but there were some cases of very large (one extremely large) hail at night and in the early morning hours. Although very large hail is a spectacular phenomenon, its local character implies a potentially high rate of unreported events, which is the most significant problem in hail research.

  6. COMPARISON OF TWO METHODS FOR THE ISOLATION OF SALMONELLAE FROM IMPORTED FOODS.

    PubMed

    TAYLOR, W I; HOBBS, B C; SMITH, M E

    1964-01-01

    Two methods for the detection of salmonellae in foods were compared in 179 imported meat and egg samples. The number of positive samples and replications, and the number of strains and kinds of serotypes were statistically comparable by both the direct enrichment method of the Food Hygiene Laboratory in England, and the pre-enrichment method devised for processed foods in the United States. Boneless frozen beef, veal, and horsemeat imported from five countries for consumption in England were found to have salmonellae present in 48 of 116 (41%) samples. Dried egg products imported from three countries were observed to have salmonellae in 10 of 63 (16%) samples. The high incidence of salmonellae isolated from imported foods illustrated the existence of an international health hazard resulting from the continuous introduction of exogenous strains of pathogenic microorganisms on a large scale.

  7. Large eddy simulation on buoyant gas diffusion near building

    NASA Astrophysics Data System (ADS)

    Tominaga, Yoshihide; Murakami, Shuzo; Mochida, Akashi

    1992-12-01

    Large eddy simulations of the turbulent diffusion of buoyant gases near a building model are carried out for three cases in which the densimetric Froude number (Frd) was specified as -8.6, zero and 8.6, respectively. The accuracy of these simulations is examined by comparing the numerically predicted results with wind tunnel experiments. Two types of subgrid-scale models are compared: the standard Smagorinsky model (type 1) and a modified Smagorinsky model (type 2). The former does not take into account the production of subgrid energy by the buoyancy force, whereas the latter incorporates this effect. The modified model (type 2) gives more accurate results than the standard Smagorinsky model (type 1) in terms of the distributions of k, <C> and <C'^2>.

  8. Design characteristics of acute care units in china.

    PubMed

    Lu, Yi; Wang, Yijia

    2014-01-01

    To describe the current state of design characteristics of acute care units in China's public hospitals and compare them with those of acute care units in the United States. The healthcare construction industry in China is one of the fastest growing sectors in China and, arguably, in the world. Understanding the physical design of acute care units in China is of great importance because it will influence a large population. A descriptive study was performed of unit configuration, size, patient visibility, distance to nursing station and supplies, and lighting conditions in 25 units in 19 public hospitals built after 2003. Data and information were collected based on spatial and visibility analysis. The study identified major design characteristics of recently built (2003 onward) acute care units in China, comparing them, where appropriate, with those in the U.S. It found three dominant types of unit layout: single-corridor (52%), triangular (36%), and double-corridor (12%). The number of private rooms is very low (11%) compared with two- or three-bed rooms. Centralized nursing stations are the only type of nurses' working area. China also has large units in terms of the number of patient beds: the average number of beds per unit is 40.6 in China versus 32.9 in the U.S. The care units in China have longer walking distances from the nursing station to the patient bedside, and the percentage of beds visible from a nursing station is lower in China than in the U.S. Access to natural light and direct sunlight in patient rooms is greater in China than in the U.S.: 100% of patient rooms in China have natural lighting, and a majority face south or southeast and thus receive direct sunlight (91.4%). Because of differences in economies and building codes, there are dramatic differences between the spatial characteristics of acute care units in China and the United States. © 2014 Vendome Group, LLC.

  9. Are periprosthetic tissue reactions observed after revision of total disc replacement comparable to the reactions observed after total hip or knee revision surgery?

    PubMed Central

    Punt, Ilona M.; Austen, Shennah; Cleutjens, Jack P.M.; Kurtz, Steven M.; ten Broeke, René H.M.; van Rhijn, Lodewijk W.; Willems, Paul C.; van Ooij, André

    2011-01-01

    Study design Comparative study. Objective To compare periprosthetic tissue reactions observed after total disc replacement (TDR), total hip arthroplasty (THA) and total knee arthroplasty (TKA) revision surgery. Summary of background data Prosthetic wear debris leading to particle disease, followed by osteolysis, is often observed after THA and TKA. Although the presence of polyethylene (PE) particles and periprosthetic inflammation after TDR has been proven recently, osteolysis is rarely observed. The clinical relevance of PE wear debris in the spine remains poorly understood. Methods The number, size and shape of PE particles, as well as the quantity and type of inflammatory cells in periprosthetic tissue retrieved during Charité TDR (n=22), THA (n=10) and TKA (n=4) revision surgery were compared. Tissue samples were stained with hematoxylin/eosin and examined by light microscopy with bright field and polarized light. Results After THA, large numbers of PE particles <6 µm were observed, mainly phagocytosed by macrophages. The TKA group had a broad size range with many larger PE particles and more giant cells. In TDR, the size range was similar to that observed in TKA; however, the smallest particles were the most prevalent, with 75% of the particles being <6 µm, as seen in revision THA. In TDR, both macrophages and giant cells were present, with a higher number of macrophages. Conclusions Both small and large PE particles are present after TDR revision surgery, consistent with both THA and TKA wear patterns. The similarities between periprosthetic tissue reactions in the different groups may give more insight into the clinical relevance of PE particles and inflammatory cells in the lumbar spine. The current findings may help to improve TDR design by applying technologies previously developed in THA and TKA, with the goal of longer TDR survival. PMID:21336235

  10. Synteny conservation between two distantly-related Rosaceae genomes: Prunus (the stone fruits) and Fragaria (the strawberry)

    PubMed Central

    Vilanova, Santiago; Sargent, Daniel J; Arús, Pere; Monfort, Amparo

    2008-01-01

    Background The Rosaceae encompass a large number of economically-important diploid and polyploid fruit and ornamental species in many different genera. The basic chromosome numbers of these genera are x = 7, 8 and 9 and all have compact and relatively similar genome sizes. Comparative mapping between distantly-related genera has been performed to a limited extent in the Rosaceae, including a comparison between Malus (subfamily Maloideae) and Prunus (subfamily Prunoideae); however, no data have been published to date comparing Malus or Prunus to a member of the subfamily Rosoideae. In this paper we compare the genome of Fragaria, a member of the Rosoideae, to Prunus, a member of the Prunoideae. Results The diploid genomes of Prunus (2n = 2x = 16) and Fragaria (2n = 2x = 14) were compared through the mapping of 71 anchor markers – 40 restriction fragment length polymorphisms (RFLPs), 29 indels or single nucleotide polymorphisms (SNPs) derived from expressed sequence tags (ESTs) and two simple-sequence repeats (SSRs) – on the reference maps of both genera. These markers provided good coverage of the Prunus (78%) and Fragaria (78%) genomes, with maximum gaps and average densities of 22 cM and 7.3 cM/marker in Prunus and 32 cM and 8.0 cM/marker in Fragaria. Conclusion Our results indicate a clear pattern of synteny, with most markers of each chromosome of one of these species mapping to one or two chromosomes of the other. A large number of rearrangements (36) could also be inferred, most of them produced by inversions (27) and the rest (9) by translocations or fission/fusion events. We have provided the first framework for the comparison of the position of genes or DNA sequences of these two economically valuable and yet distantly-related genera of the Rosaceae. PMID:18564412

  11. The Influence of Heat Flux Boundary Heterogeneity on Heat Transport in Earth's Core

    NASA Astrophysics Data System (ADS)

    Davies, C. J.; Mound, J. E.

    2017-12-01

    Rotating convection in planetary systems can be subjected to large lateral variations in heat flux from above; for example, due to the interaction between the metallic cores of terrestrial planets and their overlying silicate mantles. The boundary anomalies can significantly reorganise the pattern of convection and influence global diagnostics such as the Nusselt number. We have conducted a suite of numerical simulations of rotating convection in a spherical shell geometry comparing convection with homogeneous boundary conditions to that with two patterns of heat flux variation at the outer boundary: one hemispheric pattern, and one derived from seismic tomographic imaging of Earth's lower mantle. We consider Ekman numbers down to 10^-6 and flux-based Rayleigh numbers up to 800 times critical. The heterogeneous boundary conditions tend to increase the Nusselt number relative to the equivalent homogeneous case by altering both the flow and temperature fields, particularly near the top of the convecting region. The enhancement in Nusselt number tends to increase as the amplitude and wavelength of the boundary heterogeneity is increased and as the system becomes more supercritical. In our suite of models, the increase in Nusselt number can be as large as 25%. The slope of the Nusselt-Rayleigh scaling also changes when boundary heterogeneity is included, which has implications when extrapolating to planetary conditions. Additionally, regions of effective thermal stratification can develop when strongly heterogeneous heat flux conditions are applied at the outer boundary.

  12. Robust estimation of microbial diversity in theory and in practice

    PubMed Central

    Haegeman, Bart; Hamelin, Jérôme; Moriarty, John; Neal, Peter; Dushoff, Jonathan; Weitz, Joshua S

    2013-01-01

    Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics (‘Hill diversities’), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically-sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. PMID:23407313
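
    Chao's estimator and the Hill-diversity family referred to above are simple to state; here is a minimal Python sketch (an illustration, not the authors' code; `counts` is an assumed list of per-species abundances from a sample):

```python
import math
from collections import Counter

def chao1(counts):
    # Chao's lower-bound richness estimator: S_obs + f1^2 / (2*f2),
    # where f1 and f2 are the numbers of singleton and doubleton species.
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    s_obs = len(counts)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form when f2 = 0
    return s_obs + f1 * f1 / (2.0 * f2)

def hill_diversity(counts, q):
    # Hill number of order q: q=0 is species richness, q->1 is exp(Shannon
    # entropy), q=2 is the inverse Simpson index.
    n = sum(counts)
    p = [c / n for c in counts]
    if q == 1:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))
```

    For a perfectly even community all Hill numbers coincide with the species count, which makes a convenient sanity check; the estimator's dependence on f1 and f2 alone is exactly why the tail of rare species drives its behaviour.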

  13. Increased vitamin D receptor gene expression and rs11568820 and rs4516035 promoter polymorphisms in autistic disorder.

    PubMed

    Balta, Burhan; Gumus, Hakan; Bayramov, Ruslan; Korkmaz Bayramov, Keziban; Erdogan, Murat; Oztop, Didem Behice; Dogan, Muhammet Ensar; Taheri, Serpil; Dundar, Munis

    2018-05-18

    Although a large number of sequence variants of different genes and copy number variations at various loci have been identified in autistic disorder (AD) patients, the pathogenesis of AD has not been completely elucidated. Recently, a large number of expression array and transcriptome studies in AD patients have shown increased expression of genes related especially to the innate immune response. The antimicrobial effects of vitamin D and VDR are exerted through Toll-like receptors (TLRs), which have an important role in the innate immune response, are expressed by antigen-presenting cells and recognize foreign microorganisms. In this study, 30 age- and gender-matched patients diagnosed with AD and 30 healthy controls were included. Whole-blood VDR gene expression and the rs11568820 and rs4516035 SNP profiles of the promoter region of the VDR gene were comparatively investigated by real-time PCR. Whole-blood VDR gene expression was significantly higher in the AD group than in control subjects (p < 0.0001). There were no significant differences in the allele and genotype distributions of the rs11568820 and rs4516035 polymorphisms between AD patients and controls. The increase of VDR gene expression in patients with AD may be in accordance with an increase in the innate immune response in these patients. Furthermore, this study should stimulate new studies to clarify the relationship among AD, vitamin D, VDR, and innate immunity.

  14. Relativistic Electrons Produced by Foreshock Disturbances Observed Upstream of Earth's Bow Shock.

    PubMed

    Wilson, L B; Sibeck, D G; Turner, D L; Osmane, A; Caprioli, D; Angelopoulos, V

    2016-11-18

    Charged particles can be reflected and accelerated by strong (i.e., high Mach number) astrophysical collisionless shock waves, streaming away to form a foreshock region in communication with the shock. Foreshocks are primarily populated by suprathermal ions that can generate foreshock disturbances: large-scale (i.e., tens to thousands of thermal ion Larmor radii), transient (∼5-10 per day) structures. They have recently been found to accelerate ions to energies of several keV. Although electrons in Saturn's high Mach number (M>40) bow shock can be accelerated to relativistic energies (nearly 1000 keV), it has hitherto been thought impossible to accelerate electrons beyond a few tens of keV at Earth's low Mach number (1≤M<20) bow shock. Here we report observations of electrons energized by foreshock disturbances to energies of at least ∼300 keV. Although such energetic electrons have been previously observed, their presence has been attributed to escaping magnetospheric particles or solar events. These relativistic electrons are not associated with any solar or magnetospheric activity. Further, due to their relatively small Larmor radii (compared to magnetic gradient scale lengths) and large thermal speeds (compared to shock speeds), no known shock acceleration mechanism can energize thermal electrons up to relativistic energies. The discovery of relativistic electrons associated with foreshock structures commonly generated in astrophysical shocks could provide a new paradigm for electron injection and acceleration in collisionless plasmas.

  15. The role of colonic mast cells and myenteric plexitis in patients with diverticular disease.

    PubMed

    Bassotti, Gabrio; Villanacci, Vincenzo; Nascimbeni, Riccardo; Antonelli, Elisabetta; Cadei, Moris; Manenti, Stefania; Lorenzi, Luisa; Titi, Amin; Salerni, Bruno

    2013-02-01

    Gut mast cells represent an important cell population involved in intestinal homeostasis and inflammatory processes. However, their possible role in colonic diverticular disease has not been investigated to date. This study aims to evaluate colonic mast cells in patients undergoing surgery for diverticular disease. Surgical resection samples from 27 patients undergoing surgery for diverticular disease (12 emergency procedures for severe disease and 15 elective procedures) were evaluated. The number of mast cells was assessed in the various layers by means of a specific antibody (tryptase) and compared with that in ten controls. In patients with mast cell degranulation, double immunohistochemistry, also assessing nerve fibres, was carried out. In addition, the presence of myenteric plexitis was sought. Compared with controls, the number of mast cells in diverticular patients was significantly increased, both overall and in the various layers of the large bowel. In patients in whom mast cell degranulation was present, the degranulating cells were always close to nerve fibres. No differences were found between the two subgroups of patients with respect to the number and distribution of mast cells; however, all patients undergoing emergency surgery (but none of those undergoing elective procedures) had myenteric plexitis, represented by lymphocytic infiltration in 67 % and eosinophilic infiltration in 33 % of cases. Patients with diverticular disease display an increase of mast cells in the large bowel. The presence of myenteric plexitis in those with complicated, severe disease suggests that this could represent a histopathologic marker of more aggressive disease.

  16. Regional estimation of response routine parameters

    NASA Astrophysics Data System (ADS)

    Tøfte, Lena S.

    2015-04-01

    Reducing the number of calibration parameters is a considerable advantage when spatially distributed hydrological models are to be calibrated, both because of equifinality and over-parameterization of the model in general, and because it makes the calibration process more efficient. A simple non-threshold response model for drainage in natural catchments, based among others on Kirchner's article in WRR 2009, is implemented in the gridded hydrological model in the ENKI framework. This response model takes only the hydrograph into account; it has one state and two parameters, and is adapted to catchments dominated by terrain drainage. In former analyses of natural discharge series from a large number of catchments in different regions of Norway, we found that these response model parameters can be calculated from known catchment characteristics, such as catchment area and lake percentage, found in maps or databases, which means that the parameters can easily be obtained for ungauged catchments as well. In the presented work from the EU project COMPLEX, a large region in Mid-Norway containing 27 simulated catchments of different sizes and characteristics is calibrated. Results from two different calibration strategies are compared: (1) removing the response parameters from the calibration by calculating them in advance, based on the results from our former studies, and (2) including the response parameters in the calibration, both as maps with different values for each catchment and as a constant value for the whole region. The resulting simulation performances are compared and discussed.

  17. Performing monkeys of Bangladesh: characterizing their source and genetic variation.

    PubMed

    Hasan, M Kamrul; Feeroz, M Mostafa; Jones-Engel, Lisa; Engel, Gregory A; Akhtar, Sharmin; Kanthaswamy, Sree; Smith, David Glenn

    2016-04-01

    The acquisition and training of monkeys to perform is a centuries-old tradition in South Asia, resulting in a large number of rhesus macaques kept in captivity for this purpose. The performing monkeys are reportedly collected from free-ranging populations, and may escape from their owners or may be released into other populations. In order to determine whether this tradition involving the acquisition and movement of animals has influenced the population structure of free-ranging rhesus macaques in Bangladesh, we first characterized the source of these monkeys. Biological samples from 65 performing macaques collected between January 2010 and August 2013 were analyzed for genetic variation using 716 base pairs of mitochondrial DNA. Performing monkey sequences were compared with those of free-ranging rhesus macaque populations in Bangladesh, India and Myanmar. Forty-five haplotypes with 116 (16 %) polymorphic nucleotide sites were detected among the performing monkeys. As for the free-ranging rhesus population, most of the substitutions (89 %) were transitions, and no indels (insertion/deletion) were observed. The estimate of the mean number of pair-wise differences for the performing monkey population was 10.1264 ± 4.686, compared to 14.076 ± 6.363 for the free-ranging population. Fifteen free-ranging rhesus macaque populations were identified as the source of performing monkeys in Bangladesh; several of these populations were from areas where active provisioning has resulted in a large number of macaques. The collection of performing monkeys from India was also evident.
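
    The mean number of pairwise differences reported above is a standard summary of aligned sequences, computed by averaging per-site mismatches over all pairs; here is a minimal sketch (the toy sequences are hypothetical, not the study's mitochondrial data):

```python
from itertools import combinations

def mean_pairwise_differences(seqs):
    # Average number of nucleotide differences over all pairs of
    # equal-length aligned sequences.
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / len(diffs)
```

    With three toy sequences AAAA, AAAT and AATT the three pairs differ by 1, 2 and 1 sites, giving a mean of 4/3; the study's figures (10.13 for performing monkeys versus 14.08 for the free-ranging population) are this same statistic over 716 bp of mtDNA.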

  18. Performing monkeys of Bangladesh: characterizing their source and genetic variation

    PubMed Central

    Hasan, M Kamrul; Feeroz, M Mostafa; Jones-Engel, Lisa; Engel, Gregory A; Akhtar, Sharmin; Kanthaswamy, Sree; Smith, David Glenn

    2016-01-01

    The acquisition and training of monkeys to perform is a centuries-old tradition in South Asia, resulting in a large number of rhesus macaques kept in captivity for this purpose. The performing monkeys are reportedly collected from free-ranging populations and may escape from their owners or be released into other populations. In order to determine whether this tradition, which involves the acquisition and movement of animals, has influenced the population structure of free-ranging rhesus macaques in Bangladesh, we first characterized the source of these monkeys. Biological samples from 65 performing macaques, collected between January 2010 and August 2013, were analyzed for genetic variation using 716 base pairs of mitochondrial DNA. Performing monkey sequences were compared with those of free-ranging rhesus macaque populations in Bangladesh, India and Myanmar. Forty-five haplotypes with 116 (16%) polymorphic nucleotide sites were detected among the performing monkeys. As for the free-ranging rhesus population, most of the substitutions (89%) were transitions and no indels (insertions/deletions) were observed. The estimate of the mean number of pair-wise differences for the performing monkey population was 10.1264 ± 4.686, compared to 14.076 ± 6.363 for the free-ranging population. Fifteen free-ranging rhesus macaque populations were identified as the source of performing monkeys in Bangladesh; several of these populations were from areas where active provisioning has resulted in a large number of macaques. The collection of performing monkeys from India was also evident. PMID:26758818

  19. Incorporating Functional Annotations for Fine-Mapping Causal Variants in a Bayesian Framework Using Summary Statistics.

    PubMed

    Chen, Wenan; McDonnell, Shannon K; Thibodeau, Stephen N; Tillmans, Lori S; Schaid, Daniel J

    2016-11-01

    Functional annotations have been shown to improve both the discovery power and fine-mapping accuracy in genome-wide association studies. However, the optimal strategy to incorporate the large number of existing annotations is still not clear. In this study, we propose a Bayesian framework to incorporate functional annotations in a systematic manner. We compute the maximum a posteriori solution and use cross validation to find the optimal penalty parameters. By extending our previous fine-mapping method CAVIARBF into this framework, we require only summary statistics as input. We also derived an exact calculation of Bayes factors using summary statistics for quantitative traits, which is necessary when a large proportion of trait variance is explained by the variants of interest, such as in fine mapping expression quantitative trait loci (eQTL). We compared the proposed method with PAINTOR using different strategies to combine annotations. Simulation results show that the proposed method achieves the best accuracy in identifying causal variants among the different strategies and methods compared. We also find that for annotations with moderate effects from a large annotation pool, screening annotations individually and then combining the top annotations can produce overly optimistic results. We applied these methods on two real data sets: a meta-analysis result of lipid traits and a cis-eQTL study of normal prostate tissues. For the eQTL data, incorporating annotations significantly increased the number of potential causal variants with high probabilities. Copyright © 2016 by the Genetics Society of America.
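
    The exact Bayes factor calculation used by CAVIARBF is not reproduced in the abstract, but the general idea of computing Bayes factors from summary statistics alone can be illustrated with a Wakefield-style approximate Bayes factor. This is a sketch assuming a normal N(0, W) prior on the effect size, not the method of this study:

```python
import math

def approx_bayes_factor(z, se, prior_sd):
    """Wakefield-style approximate Bayes factor for H1 (nonzero effect)
    versus H0, computed from summary statistics only.

    z        : Wald statistic (beta_hat / se)
    se       : standard error of the effect estimate, so V = se**2
    prior_sd : prior standard deviation of the effect under H1, W = prior_sd**2
    """
    V = se ** 2
    W = prior_sd ** 2
    # BF10 = N(beta_hat; 0, V + W) / N(beta_hat; 0, V)
    return math.sqrt(V / (V + W)) * math.exp((z ** 2 / 2) * (W / (V + W)))

# A strongly associated variant gets a large Bayes factor...
print(approx_bayes_factor(z=5.0, se=0.1, prior_sd=0.2))
# ...while z = 0 gives a Bayes factor below 1 (evidence for the null).
print(approx_bayes_factor(z=0.0, se=0.1, prior_sd=0.2))
```

The marginal likelihood ratio follows because beta_hat is N(0, V) under H0 and N(0, V + W) under H1 once the prior is integrated out.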

  20. Drug allergies documented in electronic health records of a large healthcare system.

    PubMed

    Zhou, L; Dhopeshwarkar, N; Blumenthal, K G; Goss, F; Topaz, M; Slight, S P; Bates, D W

    2016-09-01

    The prevalence of drug allergies documented in electronic health records (EHRs) of large patient populations is understudied. We aimed to describe the prevalence of common drug allergies and patient characteristics documented in EHRs of a large healthcare network over the last two decades. Drug allergy data were obtained from EHRs of patients who visited two large tertiary care hospitals in Boston from 1990 to 2013. The prevalence of each drug and drug class was calculated and compared by sex and race/ethnicity. The number of allergies per patient was calculated and the frequency of patients having 1, 2, 3…, or 10+ drug allergies was reported. We also conducted a trend analysis by comparing the proportion of each allergy to the total number of drug allergies over time. Among 1 766 328 patients, 35.5% of patients had at least one reported drug allergy with an average of 1.95 drug allergies per patient. The most commonly reported drug allergies in this population were to penicillins (12.8%), sulfonamide antibiotics (7.4%), opiates (6.8%), and nonsteroidal anti-inflammatory drugs (NSAIDs) (3.5%). The relative proportions of allergies to angiotensin-converting enzyme (ACE) inhibitors and HMG CoA reductase inhibitors (statins) have more than doubled since the early 2000s. Drug allergies were most prevalent among females and white patients except for NSAIDs, ACE inhibitors, and thiazide diuretics, which were more prevalent in black patients. Females and white patients may be more likely to experience a reaction from common medications. An increase in reported allergies to ACE inhibitors and statins is noteworthy. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Error and bias in size estimates of whale sharks: implications for understanding demography.

    PubMed

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of detected larger individuals that were possibly mature (or close to maturity) increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.

  2. Efficient Maintenance and Update of Nonbonded Lists in Macromolecular Simulations.

    PubMed

    Chowdhury, Rezaul; Beglov, Dmitri; Moghadasi, Mohammad; Paschalidis, Ioannis Ch; Vakili, Pirooz; Vajda, Sandor; Bajaj, Chandrajit; Kozakov, Dima

    2014-10-14

    Molecular mechanics and dynamics simulations use distance-based cutoff approximations for faster computation of pairwise van der Waals and electrostatic energy terms. These approximations traditionally use a precalculated and periodically updated list of interacting atom pairs, known as the "nonbonded neighborhood lists" or nblists, in order to reduce the overhead of finding atom pairs that are within the distance cutoff. The size of nblists grows linearly with the number of atoms in the system and superlinearly with the distance cutoff, and as a result, they require a significant amount of memory for large molecular systems. The high space usage leads to poor cache performance, which slows computation for large distance cutoffs. Also, the high cost of updates means that one cannot afford to keep the data structure always synchronized with the configuration of the molecules when efficiency is at stake. We propose a dynamic octree data structure for implicit maintenance of nblists using space linear in the number of atoms but independent of the distance cutoff. The list can be updated very efficiently as the coordinates of atoms change during the simulation. Unlike explicit nblists, a single octree works for all distance cutoffs. In addition, the octree is a cache-friendly data structure, and hence, it is less prone to cache-miss slowdowns on modern memory hierarchies than nblists. Octrees use almost 2 orders of magnitude less memory, which is crucial for simulation of large systems, and while they are comparable in performance to nblists when the distance cutoff is small, they outperform nblists for larger systems and large cutoffs. Our tests show that the octree implementation is approximately 1.5 times faster than nblists in practical use cases.
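
    A full octree is beyond the scope of this summary, but the core idea of the paper (answering cutoff queries from a spatial index whose storage is linear in the number of atoms, rather than from a precomputed pair list) can be sketched with a uniform cell grid, a flat cousin of the octree. This is an illustration, not the authors' implementation:

```python
from collections import defaultdict
import itertools
import math

def build_cells(coords, cell_size):
    """Hash each atom into a uniform grid cell; storage is O(n_atoms),
    independent of how many distance cutoffs will later be queried."""
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(coords):
        cells[(math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))].append(i)
    return cells

def pairs_within(coords, cutoff, cells, cell_size):
    """Enumerate atom pairs within `cutoff` by scanning only the nearby
    cells of each atom, instead of storing an explicit nblist."""
    reach = math.ceil(cutoff / cell_size)
    out = set()
    for i, (x, y, z) in enumerate(coords):
        cx, cy, cz = (math.floor(x / cell_size),
                      math.floor(y / cell_size),
                      math.floor(z / cell_size))
        for dx, dy, dz in itertools.product(range(-reach, reach + 1), repeat=3):
            for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                if j > i and math.dist(coords[i], coords[j]) <= cutoff:
                    out.add((i, j))
    return out

coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
cells = build_cells(coords, cell_size=2.0)
print(pairs_within(coords, cutoff=1.5, cells=cells, cell_size=2.0))  # {(0, 1)}
```

Note that the same `cells` index serves any cutoff; only the query's `reach` changes, mirroring the paper's point that one octree works for all distance cutoffs.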

  3. Accelerated Reader and Its Effect on Fifth-Grade Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Nichols, Jan Shelton

    2013-01-01

    Schools in the United States have been using the Accelerated Reader (AR) program since the mid-1980s. A search of the literature related to the effectiveness of the AR program revealed that many of the studies were conducted more than a decade ago, and a large number of those studies failed to utilize a control group to provide comparative data.…

  4. Counting Microfiche: The Utilization of the Microform Section of the ANSI Standard Z39.7-1983 "Library Statistics"; Microfiche Curl; and "Poly" or "Cell"?

    ERIC Educational Resources Information Center

    Caldwell-Wood, Naomi; And Others

    1987-01-01

    The first of three articles describes procedures for using ANSI statistical methods for estimating the number of pieces in large homogeneous collections of microfiche. The second discusses causes of curl, its control, and measurement, and the third compares the advantages and disadvantages of cellulose acetate and polyester base for microforms.…

  5. Micro-buffy coats of whole blood: a method for the electron microscopic study of mononuclear cells.

    PubMed

    Nunes, J F; Soares, J O; Alves de Matos, A P

    1979-09-01

    A method for the electron microscopic study of human peripheral lymphocytes by which very small buffy coats are obtained through centrifugation of heparinized whole blood in glass or plastic microhematocrit tubes is presented. This method is time-saving and efficient, yielding well-preserved material and a comparatively large number of mononuclear cells (mainly lymphocytes) in each thin section.

  6. How Higher Education Shuts the Door on the Racial Poverty Gap.

    ERIC Educational Resources Information Center

    Journal of Blacks in Higher Education, 2003

    2003-01-01

    For the past 35 years, U.S. Blacks on average have been three times as likely as Whites to live below the poverty line. A large factor in the overall black poverty gap is the huge number of black children being raised in poverty. However, the poverty gap shrinks when college-educated blacks are compared to college-educated whites. (SM)

  7. A Flexible Foreign Language Curriculum for Departmental Growth and Program Excellence: A Report from an Exemplary Program.

    ERIC Educational Resources Information Center

    Harper, Jane

    The Northeast Campus of Tarrant County Junior College (TX) experienced a foreign language enrollment increase of 196% between 1973 and 1983, compared to a collegewide enrollment increase of only 91%. The program's success is due largely to the number and variety of curricular offerings. In addition to a 6-course sequence of 3- and 4-hour courses…

  8. Education for the Alleviation of Poverty: A Comparative Study of Conditional Cash Transfer Programs to Improve Educational Outcomes in Nicaragua and Colombia

    ERIC Educational Resources Information Center

    Stackhouse, Shannon Alexis

    2009-01-01

    The importance of education for individual well-being, social cohesion and economic growth is widely accepted by researchers and policymakers alike. Yet there exist vast numbers of people around the world, largely poor, who continue to lag behind wealthier people, often within their own nations. Conditional cash transfer programs were created to…

  9. Public/Private Cost-Sharing in Higher Education: An In-Depth Look at the German System Using a Comparative Study

    ERIC Educational Resources Information Center

    Gwosc, Christoph; Schwarzenberger, Astrid

    2009-01-01

    This article presents an empirical analysis of the public funding system for higher education in Germany and a comparison with five other European countries. The large number of separate student support items in Germany makes it an exception and makes the system obscure. The allocations of public expenditure to German institutions are below…

  10. Using Modern Statistical Methods to Analyze Demographics of Kansas ABE/GED Students Who Transition to Community or Technical College Programs

    ERIC Educational Resources Information Center

    Zacharakis, Jeff; Wang, Haiyan; Patterson, Margaret Becker; Andersen, Lori

    2015-01-01

    This research analyzed linked high-quality state data from K-12, adult education, and postsecondary state datasets in order to better understand the association between student demographics and successful completion of a postsecondary program. Due to the relatively small sample size compared to the large number of features, we analyzed the data…

  11. Busy Teachers: A Case of Comparing Online Teacher-Created Activities with the Ready-Made Activity Resource Books

    ERIC Educational Resources Information Center

    Khoshhal, Yasin

    2016-01-01

    With the ever-growing needs for more resources, the lack of concentration on preparing an exclusive activity for a particular classroom can be observed in a large number of educational contexts. The present study investigates the efficiency of ready-made activities for busy teachers. To this end, an activity from the ready-made resource book,…

  12. Mating competitiveness of sterile male Anopheles coluzzii in large cages.

    PubMed

    Maïga, Hamidou; Damiens, David; Niang, Abdoulaye; Sawadogo, Simon P; Fatherhaman, Omnia; Lees, Rosemary S; Roux, Olivier; Dabiré, Roch K; Ouédraogo, Georges A; Tripet, Fréderic; Diabaté, Abdoulaye; Gilles, Jeremie R L

    2014-11-26

    Understanding the factors that account for male mating competitiveness is critical to the development of the sterile insect technique (SIT). Here, the effects of partial sterilization with 90 Gy of radiation on the sexual competitiveness of Anopheles coluzzii allowed to mate at different ratios of sterile to untreated males were assessed. Moreover, competitiveness was compared between males allowed one versus two days of contact with females. Sterile and untreated males four to six days of age were released in large cages (~1.75 sq m) with females of similar age at the following sterile male:untreated male:untreated virgin female ratios: 100:100:100, 300:100:100 and 500:100:100 (three replicates of each), and left for two days. Competitiveness was determined by assessing the egg hatch rate and the insemination rate, the latter determined by dissecting recaptured females. An additional experiment was conducted with a ratio of 500:100:100 and a mating period of either one or two days. Two controls of 0:100:100 (untreated control) and 100:0:100 (sterile control) were used in each experiment. When males and females consorted for two days, a significant difference in insemination rate was observed between ratio treatments. The competitiveness index (C) of sterile males compared to controls was 0.53. The number of days of exposure to mates significantly increased the insemination rate, as did the increased number of males present in the ratio treatments, but the number of days of exposure had no effect on the hatch rate. The comparability of the hatch rates between experiments suggests that An. coluzzii mating competitiveness experiments in large cages could be run for one day instead of two, shortening the required length of the experiment. Sterilized males were half as competitive as untreated males, but an effective release ratio of at least five sterile males for one untreated male has the potential to impact the fertility of a wild female population. However, further trials in field conditions with wild males and females should be undertaken to estimate the ratio of sterile males to wild males required to produce an effect on wild populations.
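
    The abstract does not state how the competitiveness index C was computed; SIT studies commonly use Fried's (1971) index, sketched below under the assumption that the observed hatch rate is a competitiveness-weighted mixture of the two controls. This is an illustration with made-up rates, not the study's actual calculation:

```python
def fried_index(ha, hs, ho, n_untreated, n_sterile):
    """Fried's competitiveness index C.

    ha : hatch rate when only untreated males mate (untreated control)
    hs : hatch rate when only sterile males mate (sterile control)
    ho : observed hatch rate in the mixed cage

    The observed hatch rate is modelled as a C-weighted mixture,
        ho = (N*ha + C*S*hs) / (N + C*S),
    and solving for C gives the expression below.
    """
    return (n_untreated * (ha - ho)) / (n_sterile * (ho - hs))

# Fully competitive sterile males (C = 1) at a 5:1 release ratio pull the
# hatch rate most of the way toward the sterile control:
ha, hs = 0.9, 0.05
ho = (100 * ha + 1.0 * 500 * hs) / (100 + 1.0 * 500)
print(round(fried_index(ha, hs, ho, 100, 500), 3))  # 1.0
```

With C = 0.53 and a 5:1 ratio, the same mixture model predicts a substantially suppressed hatch rate, which is the abstract's argument for the release ratio.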

  13. Multilevel fast multipole method based on a potential formulation for 3D electromagnetic scattering problems.

    PubMed

    Fall, Mandiaye; Boutami, Salim; Glière, Alain; Stout, Brian; Hazart, Jerome

    2013-06-01

    A combination of the multilevel fast multipole method (MLFMM) and boundary element method (BEM) can solve large-scale photonics problems of arbitrary geometry. Here, an MLFMM-BEM algorithm based on a scalar and vector potential formulation, instead of the more conventional electric and magnetic field formulations, is described. The method can deal with multiple lossy or lossless dielectric objects of arbitrary geometry, be they nested, in contact, or dispersed. Several examples are used to demonstrate that this method is able to efficiently handle 3D photonic scatterers involving large numbers of unknowns. Absorption, scattering, and extinction efficiencies of gold nanoparticle spheres, calculated by the MLFMM, are compared with Mie theory. MLFMM calculations of the bistatic radar cross section (RCS) of a gold sphere near the plasmon resonance and of a silica-coated gold sphere are also compared with Mie theory predictions. Finally, the bistatic RCS of a nanoparticle gold-silver heterodimer calculated with MLFMM is compared with unmodified BEM calculations.

  14. Comparing GOSAT Observations of Localized CO2 Enhancements by Large Emitters with Inventory-Based Estimates

    NASA Technical Reports Server (NTRS)

    Janardanan, Rajesh; Maksyutov, Shamil; Oda, Tomohiro; Saito, Makoto; Kaiser, Johannes W.; Ganshin, Alexander; Stohl, Andreas; Matsunaga, Tsuneo; Yoshida, Yukio; Yokota, Tatsuya

    2016-01-01

    We employed an atmospheric transport model to attribute column-averaged CO2 mixing ratios (XCO2) observed by the Greenhouse gases Observing SATellite (GOSAT) to emissions due to large sources such as megacities and power plants. XCO2 enhancements estimated from observations were compared to model simulations implemented at the spatial resolution of the satellite observation footprint (0.1° × 0.1°). We found that the simulated XCO2 enhancements agree with the observed over several continental regions across the globe, for example, for North America with an observation to simulation ratio of 1.05 +/- 0.38 (p<0.1), but with a larger ratio over East Asia (1.22 +/- 0.32; p<0.05). The obtained observation-model discrepancy (22%) for East Asia is comparable to the uncertainties in Chinese emission inventories (approx. 15%) suggested by recent reports. Our results suggest that by increasing the number of observations around emission sources, satellite instruments like GOSAT can provide a tool for detecting biases in reported emission inventories.

  15. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    PubMed

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that are numerically fast to solve compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.
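
    The closed-form expressions themselves are not given in the abstract, but the dependence of early epidemic growth on low-order moments of the degree distribution can be illustrated with the mean excess degree of a configuration-model network. This is a standard quantity; the specific distributions below are made up for illustration:

```python
def moments(degree_probs, k_max=3):
    """First k_max raw moments E[D^k] of a degree distribution
    given as a {degree: probability} mapping."""
    return [sum(d ** k * p for d, p in degree_probs.items())
            for k in range(1, k_max + 1)]

def mean_excess_degree(degree_probs):
    """Mean number of onward contacts of a randomly reached individual,
    (E[D^2] - E[D]) / E[D]; this quantity drives early epidemic growth
    on a configuration-model network."""
    m1, m2, _ = moments(degree_probs)
    return (m2 - m1) / m1

# Two distributions with the same mean degree (3) but different spread:
regular = {3: 1.0}
spread = {1: 0.5, 5: 0.5}
print(mean_excess_degree(regular))  # 2.0
print(mean_excess_degree(spread))   # higher: more variance, faster growth
```

Equal first moments but a larger second moment give faster early growth, which is the "first two moments" dependence the paper confirms for the mean prevalence.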

  16. Application of the stepwise focusing method to optimize the cost-effectiveness of genome-wide association studies with limited research budgets for genotyping and phenotyping.

    PubMed

    Ohashi, J; Clark, A G

    2005-05-01

    The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with a two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the framework of optimization, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as variables to be considered. Our results showed that the stepwise focusing method achieves a distinct gain of power compared to a conventional method with the same research budget.
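
    The optimization in the paper trades genotyping effort between the two stages. The basic cost accounting can be sketched as follows, assuming that under the null roughly a fraction alpha1 of SNPs survive the first stage; the numbers below are illustrative, not the paper's:

```python
def expected_genotypes(n_snps, n1, n2, alpha1):
    """Expected number of genotyping reactions in a two-stage screen:
    all SNPs are typed on n1 samples in stage 1, and (approximately,
    under the null) a fraction alpha1 of SNPs is re-typed on n2
    further samples in stage 2."""
    return n_snps * n1 + n_snps * alpha1 * n2

one_stage = expected_genotypes(100_000, 1000, 0, 1.0)    # type everyone at once
two_stage = expected_genotypes(100_000, 300, 700, 0.01)  # screen, then focus
print(one_stage, two_stage)  # the two-stage design is far cheaper
```

The paper's optimization then chooses n1, n2 and the two significance levels to maximize power subject to a budget constraint of this kind.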

  17. Comparison of Methods for Xenomonitoring in Vectors of Lymphatic Filariasis in Northeastern Tanzania

    PubMed Central

    Irish, Seth R.; Stevens, William M. B.; Derua, Yahya A.; Walker, Thomas; Cameron, Mary M.

    2015-01-01

    Monitoring Wuchereria bancrofti infection in mosquitoes (xenomonitoring) can play an important role in determining when lymphatic filariasis has been eliminated, or in focusing control efforts. As mosquito infection rates can be low, a method for collecting large numbers of mosquitoes is necessary. Gravid traps collected large numbers of Culex quinquefasciatus in Tanzania, and a collection method that targets mosquitoes that have already fed could result in increased sensitivity in detecting W. bancrofti-infected mosquitoes. The aim of this experiment was to test this hypothesis by comparing U.S. Centers for Disease Control and Prevention (CDC) light traps with CDC gravid traps in northeastern Tanzania, where Cx. quinquefasciatus is a vector of lymphatic filariasis. After an initial study where small numbers of mosquitoes were collected, a second study collected 16,316 Cx. quinquefasciatus in 60 gravid trap-nights and 240 light trap-nights. Mosquitoes were pooled and tested for presence of W. bancrofti DNA. Light and gravid traps collected similar numbers of mosquitoes per trap-night, but the physiological status of the mosquitoes was different. The estimated infection rate in mosquitoes collected in light traps was considerably higher than in mosquitoes collected in gravid traps, so light traps can be a useful tool for xenomonitoring work in Tanzania. PMID:26350454
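
    Estimating an infection rate from pooled samples, as done for the trapped mosquitoes above, is a standard calculation: with equal pool sizes, a pool tests negative only if every mosquito in it is uninfected. A sketch of the maximum-likelihood back-calculation, using illustrative counts rather than the study's data:

```python
def pooled_infection_rate(n_positive_pools, n_pools, pool_size):
    """MLE of the per-mosquito infection probability p from pooled PCR
    results with equal pool sizes: a pool is negative iff all of its
    members are uninfected, so (1 - p)**pool_size equals the expected
    negative-pool fraction."""
    negative_fraction = 1 - n_positive_pools / n_pools
    return 1 - negative_fraction ** (1 / pool_size)

# 12 positive pools out of 200, with 25 mosquitoes per pool:
rate = pooled_infection_rate(12, 200, 25)
print(f"{rate:.5f}")  # about 0.00247
```

Because infection rates are low, the pooled estimate is far more precise per PCR reaction than testing mosquitoes individually, which is why large-catch methods like gravid traps matter for xenomonitoring.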

  18. The effect of environmental harshness on neurogenesis: a large-scale comparison.

    PubMed

    Chancellor, Leia V; Roth, Timothy C; LaDage, Lara D; Pravosudov, Vladimir V

    2011-03-01

    Harsh environmental conditions may produce strong selection pressure on traits, such as memory, that may enhance fitness. Enhanced memory may be crucial for survival in animals that use memory to find food and, thus, particularly important in environments where food sources may be unpredictable. For example, animals that cache and later retrieve their food may exhibit enhanced spatial memory in harsh environments compared with those in mild environments. One way that selection may enhance memory is via the hippocampus, a brain region involved in spatial memory. In a previous study, we established a positive relationship between environmental severity and hippocampal morphology in food-caching black-capped chickadees (Poecile atricapillus). Here, we expanded upon this previous work to investigate the relationship between environmental harshness and neurogenesis, a process that may support hippocampal cytoarchitecture. We report a significant and positive relationship between the degree of environmental harshness across several populations over a large geographic area and (1) the total number of immature hippocampal neurons, (2) the number of immature neurons relative to the hippocampal volume, and (3) the number of immature neurons relative to the total number of hippocampal neurons. Our results suggest that hippocampal neurogenesis may play an important role in environments where increased reliance on memory for cache recovery is critical. Copyright © 2010 Wiley Periodicals, Inc.

  19. An improved filter elution and cell culture assay procedure for evaluating public groundwater systems for culturable enteroviruses.

    PubMed

    Dahling, Daniel R

    2002-01-01

    Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.

  20. Effect of supplemented sericin on the development, cell number, cryosurvival and number of lipid droplets in cultured bovine embryos.

    PubMed

    Hosoe, Misa; Inaba, Yasushi; Hashiyada, Yutaka; Imai, Kei; Kajitani, Kenji; Hasegawa, Yuichi; Irie, Mamoru; Teramoto, Hidetoshi; Takahashi, Toru; Niimura, Sueo

    2017-02-01

    Sericin was investigated as an alternative to fetal bovine serum (FBS) for bovine embryo culture. In vitro matured oocytes were developed using 0.05%, 0.1% or 0.15% sericin. The developmental rate, cryosurvival rate and blastulation time of these embryos were compared with those of embryos developed using 5% FBS. The number of lipid droplets was compared among the blastocysts developed using 5% FBS, using 0.05% sericin and in vivo. The rate of cleavage and blastocyst formation was similar among all groups. Blastulation occurred significantly earlier in the embryos developed using 5% FBS than in those developed using sericin at any concentration (P < 0.05). At 72 h after thawing, the cryosurvival rates of the blastocysts developed using 5% FBS and 0.05% sericin were significantly higher than those of blastocysts developed using 0.1% and 0.15% sericin (P < 0.05). The blastocysts developed using 0.05% sericin and in vivo produced significantly fewer medium and large lipid droplets than those developed using 5% FBS. These results suggest that the blastocysts developed using 0.05% sericin show characteristics similar to those of blastocysts developed in vivo and that the use of sericin as an alternative to FBS is feasible. © 2016 Japanese Society of Animal Science.

  1. [Maternal and perinatal outcomes in Bolivian pregnant women in the city of São Paulo: a cross-sectional case-control study].

    PubMed

    Sass, Nelson; de Figueredo Junior, Alcides Rocha; Siqueira, José Martins; da Silva, Fabio Roberto Oliveira; Sato, Jussara Leiko; Nakamura, Mary Uchiyama; de Sousa, Eduardo

    2010-08-01

    To evaluate the characteristics of care for Bolivian pregnant women and their outcomes in Hospital Municipal Vereador José Storopolli. A cross-sectional retrospective case-control study comparing two groups of pregnant women from 2003 to 2007. The Study Group included 312 Bolivian pregnant women and the Control Group, 314 Brazilian women. The groups were compared with respect to demographic variables, the presence of maternal complications and perinatal outcomes. Statistical analysis was performed by the χ2 test and, when necessary, by applying Yates' correction. Compared to Brazilian mothers, a larger proportion of Bolivian women received no prenatal care (16.4 versus 5.1%, p<0.001) and, among those who did, the percentage with fewer than five visits was higher (50 versus 19.3%, p<0.001). Compared to the Brazilian group, the Bolivian group had fewer unwed mothers (12.1 versus 25.4%, p<0.001) and fewer nulliparous women (34.1 versus 43.6%, p=0.017). Congenital syphilis had a higher incidence in the Bolivian group (2.9 versus 0.5%, p<0.05), as did the number of newborns classified as large for gestational age (14.6 versus 5.8%, p<0.001). The failure to attend prenatal care, or attendance with an inadequate number of visits, together with the higher number of congenital syphilis cases observed among the Bolivian women, shows the great vulnerability of this ethnic minority group to health problems. Consequently, strategic planning by the sectors responsible for coordinating assistance in our country is necessary to reduce this disparity, whether through socio-economic improvements or through the implementation of health care tailored to the needs of this group.

  2. T cells in chronic lymphocytic leukemia display dysregulated expression of immune checkpoints and activation markers.

    PubMed

    Palma, Marzia; Gentilcore, Giusy; Heimersson, Kia; Mozaffari, Fariba; Näsman-Glaser, Barbro; Young, Emma; Rosenquist, Richard; Hansson, Lotta; Österborg, Anders; Mellstedt, Håkan

    2017-03-01

    Chronic lymphocytic leukemia is characterized by impaired immune functions largely due to profound T-cell defects. T-cell functions also depend on co-signaling receptors, inhibitory or stimulatory, known as immune checkpoints, including cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4) and programmed death-1 (PD-1). Here we analyzed the T-cell phenotype focusing on immune checkpoints and activation markers in chronic lymphocytic leukemia patients (n=80) with different clinical characteristics and compared them to healthy controls. In general, patients had higher absolute numbers of CD3+ cells and the CD8+ subset was particularly expanded in previously treated patients. Progressive patients had higher numbers of CD4+ and CD8+ cells expressing PD-1 compared to healthy controls, which was more pronounced in previously treated patients (P=0.0003 and P=0.001, respectively). A significant increase in antigen-experienced T cells was observed in patients within both the CD4+ and CD8+ subsets, with a significantly higher PD-1 expression. Higher numbers of CD4+ and CD8+ cells with intracellular CTLA-4 were observed in patients, as well as high numbers of proliferating (Ki67+) and activated (CD69+) CD4+ and CD8+ cells, more pronounced in patients with active disease. The numbers of Th1, Th2, Th17 and regulatory T cells were substantially increased in patients compared to controls (P<0.05), albeit decreasing to low levels in pre-treated patients. In conclusion, chronic lymphocytic leukemia T cells display increased expression of immune checkpoints, abnormal subset distribution, and a higher proportion of proliferating cells compared to healthy T cells. Disease activity and previous treatment shape the T-cell profile of chronic lymphocytic leukemia patients in different ways. Copyright © Ferrata Storti Foundation.

  3. Heart failure in numbers: Estimates for the 21st century in Portugal.

    PubMed

    Fonseca, Cândida; Brás, Daniel; Araújo, Inês; Ceia, Fátima

    2018-02-01

    Heart failure is a major public health problem that affects a large number of individuals and is associated with high mortality and morbidity. This study aims to estimate the probable scenario for heart failure prevalence and its consequences in the short, medium and long term in Portugal. This assessment is based on the EPICA (Epidemiology of Heart Failure and Learning) project, which was designed to estimate the prevalence of chronic heart failure in mainland Portugal in 1998. Estimates of heart failure prevalence were performed for individuals aged over 25 years, distributed by age group and gender, based on data from the 2011 Census by Statistics Portugal. The expected demographic changes, particularly the marked aging of the population, mean that a large number of Portuguese will likely be affected by this syndrome. Assuming that current clinical practices are maintained, the prevalence of heart failure in mainland Portugal will increase by 30% by 2035 and by 33% by 2060, compared to 2011, resulting in 479,921 and 494,191 affected individuals, respectively. In addition to the large number of heart failure patients expected, it is estimated that the hospitalizations and mortality associated with this syndrome will significantly increase its economic impact. Therefore, it is extremely important to raise awareness of this syndrome, as this will favor diagnosis and early referral of patients, facilitating better management of heart failure and helping to decrease the burden it imposes on Portugal. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Published by Elsevier España, S.L.U. All rights reserved.

  4. Comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering.

    PubMed

    Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan

    2017-03-01

    Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
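
The abstract describes a test for equality of two high-dimensional covariance matrices without structural assumptions. As a rough illustration (not the HDtest implementation itself; the package's statistic and its bootstrap calibration are more refined), a max-type statistic over entrywise standardized differences of the two sample covariance matrices can be sketched as:

```python
import numpy as np

def max_type_cov_test_stat(X, Y):
    """Max-type statistic for H0: cov(X) == cov(Y).

    Entrywise differences of the two sample covariance matrices are
    standardized by their estimated variances, and the largest squared
    standardized difference is returned. This is only a sketch of the
    general idea; the HDtest package uses a refined statistic with
    bootstrap calibration of the rejection threshold.
    """
    n1 = X.shape[0]
    n2 = Y.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    S1 = Xc.T @ Xc / n1
    S2 = Yc.T @ Yc / n2
    # Estimated variance of each entry of the sample covariance matrices
    v1 = ((Xc[:, :, None] * Xc[:, None, :] - S1) ** 2).mean(axis=0) / n1
    v2 = ((Yc[:, :, None] * Yc[:, None, :] - S2) ** 2).mean(axis=0) / n2
    return float(((S1 - S2) ** 2 / (v1 + v2)).max())
```

Large values of the statistic are evidence against equality of the covariance matrices; in practice the rejection threshold would be calibrated by a bootstrap rather than a fixed cutoff.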

  5. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    PubMed

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for each of 10 lung cancer patients: one on a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and one on a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery and thereby incorporate interplay effects, with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and sparing of organs at risk (OARs), were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects, compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors than a large-spot machine does, which gives the planning system more freedom to compensate for the higher sensitivity to uncertainties and interplay effects in lung cancer treatments. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Steady Secondary Flows Generated by Periodic Compression and Expansion of an Ideal Gas in a Pulse Tube

    NASA Technical Reports Server (NTRS)

    Lee, Jeffrey M.

    1999-01-01

    This study establishes a consistent set of differential equations for use in describing the steady secondary flows generated by periodic compression and expansion of an ideal gas in pulse tubes. Also considered is heat transfer between the gas and the tube wall of finite thickness. A small-amplitude series expansion solution in the inverse Strouhal number is proposed for the two-dimensional axisymmetric mass, momentum and energy equations. The anelastic approach applies when shock and acoustic energies are small compared with the energy needed to compress and expand the gas. An analytic solution to the ordered series is obtained in the strong temperature limit where the zeroth-order temperature is constant. The solution shows that steady velocities increase linearly for small Valensi number and can be of order 1 for large Valensi number. A conversion of steady work flow to heat flow occurs whenever temperature, velocity or phase angle gradients are present. Steady enthalpy flow is reduced by heat transfer and is scaled by the product of the Prandtl and Valensi numbers. Particle velocities from a smoke-wire experiment were compared with predictions for the basic and orifice pulse tube configurations. The theory accurately predicted the observed steady streaming.

  7. Enhanced Stellar Activity for Slow Antisolar Differential Rotation?

    NASA Astrophysics Data System (ADS)

    Brandenburg, Axel; Giampapa, Mark S.

    2018-03-01

    High-precision photometry of solar-like members of the open cluster M67 with Kepler/K2 data has recently revealed enhanced activity for stars with a large Rossby number, which is the ratio of rotation period to the convective turnover time. Contrary to the well established behavior for shorter rotation periods and smaller Rossby numbers, the chromospheric activity of the more slowly rotating stars of M67 was found to increase with increasing Rossby number. Such behavior has never been reported before, although it was theoretically predicted to emerge as a consequence of antisolar differential rotation (DR) for stars with Rossby numbers larger than that of the Sun, because in those models the absolute value of the DR was found to exceed that for solar-like DR. Using gyrochronological relations and an approximate age of 4 Gyr for the members of M67, we compute rotation rates from just the B − V color and compare them with the measured rotation rates. The resulting rotation–activity relation is found to be compatible with that obtained by employing the measured rotation rate. This provides additional support for the unconventional enhancement of activity at comparatively low rotation rates and the possible presence of antisolar differential rotation.

  8. Antinuclear Antibodies predict a higher number of Pregnancy Loss in Unexplained Recurrent Pregnancy Loss.

    PubMed

    Sakthiswary, R; Rajalingam, S; Norazman, M R; Hussein, H

    The etiology of recurrent pregnancy loss (RPL) is unknown in a significant proportion of patients. Autoimmune processes have been implicated in the pathogenesis. The role of antinuclear antibody (ANA) in this context is largely undetermined. In an attempt to address the lack of evidence in this area, we explored the clinical significance of ANA in unexplained RPL. We studied 68 patients with RPL and 60 healthy controls from September 2005 to May 2012. All subjects were tested for ANA by immunofluorescence testing, and a titre of 1:80 and above was considered positive. We compared the pregnancy outcome between the ANA positive and ANA negative RPL cases. The incidence of ANA positivity among the cases (35.3%) was significantly higher than among the controls (13.3%) (p=0.005). ANA positive cases showed a significantly higher number of RPL (p=0.006) and a lower number of successful pregnancies (p=0.013) compared to the ANA negative cases. The ANA titre had a significant association with the number of RPL (p<0.05, r=0.724) but not with the number of successful pregnancies (p=0.054). ANA positivity predicts a less favorable pregnancy outcome in RPL. Our findings suggest that the ANA titre is a useful positive predictor of the number of RPL. Hence, the ANA test is a potential prognostic tool for this condition, which merits further research.

  9. A Survey of Residents' Perceptions of the Effect of Large-Scale Economic Developments on Perceived Safety, Violence, and Economic Benefits.

    PubMed

    Fabio, Anthony; Geller, Ruth; Bazaco, Michael; Bear, Todd M; Foulds, Abigail L; Duell, Jessica; Sharma, Ravi

    2015-01-01

    Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents, and may influence rates of violence within communities. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence, safety, and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt the same. In contrast, crime actually decreased in both neighborhoods. Large-scale economic developments have a direct influence on the perception of violence, even where actual violence rates decline.

  10. Mitochondrial DNA copy number in peripheral blood cell and hypertension risk among mining workers: a case-control study in Chinese coal miners.

    PubMed

    Lei, L; Guo, J; Shi, X; Zhang, G; Kang, H; Sun, C; Huang, J; Wang, T

    2017-09-01

    Alteration of mitochondrial DNA (mtDNA) copy number, which reflects oxidant-induced cell damage, has been observed in a wide range of human diseases. However, whether it correlates with hypertension has not been elucidated. We aimed to explore the association between mtDNA copy number and the risk of hypertension in Chinese coal miners. A case-control study was performed with 378 hypertension patients and 325 healthy controls in a large coal mining group located in North China. Face-to-face interviews were conducted by trained staff with the necessary medical knowledge. The mtDNA copy number was measured by a quantitative real-time PCR assay using DNA extracted from peripheral blood. No significant differences in mtDNA copy number were observed between hypertension patients and healthy controls. However, in both the case and control groups, the mtDNA copy number was statistically significantly lower in the older participants (≥45 years old) than in the younger participants (<45 years old; 7.17 vs 6.64, P=0.005 and 7.21 vs 6.84, P=0.036). A significantly higher mtDNA copy number was found in hypertension patients who consumed alcohol regularly than in those who did not (7.09 vs 6.69); mtDNA copy number was also positively correlated with age and alcohol consumption. Hypertension was found to be significantly correlated with factors such as age, work duration, monthly family income and drinking status. Our results suggest that the mtDNA copy number is not associated with hypertension in coal miners.

  11. International Migration of Doctors, and Its Impact on Availability of Psychiatrists in Low and Middle Income Countries

    PubMed Central

    Jenkins, Rachel; Kydd, Robert; Mullen, Paul; Thomson, Kenneth; Sculley, James; Kuper, Susan; Carroll, Joanna; Gureje, Oye; Hatcher, Simon; Brownie, Sharon; Carroll, Christopher; Hollins, Sheila; Wong, Mai Luen

    2010-01-01

    Background Migration of health professionals from low and middle income countries to rich countries is a large scale and long-standing phenomenon, which is detrimental to the health systems in the donor countries. We sought to explore the extent of psychiatric migration. Methods In our study, we use the respective professional databases in each country to establish the numbers of psychiatrists currently registered in the UK, US, New Zealand, and Australia who originate from other countries. We also estimate the impact of this migration on the psychiatrist population ratios in the donor countries. Findings We document large numbers of psychiatrists currently registered in the UK, US, New Zealand and Australia originating from India (4687 psychiatrists), Pakistan (1158), Bangladesh (149), Nigeria (384), Egypt (484), Sri Lanka (142) and the Philippines (1593). For some countries of origin, the numbers of psychiatrists currently registered within high-income countries' professional databases are very small (e.g., 5 psychiatrists of Tanzanian origin registered in the 4 high-income countries we studied), but this number is very significant compared to the 15 psychiatrists currently registered in Tanzania. Without such emigration, many countries would have more than double the number of psychiatrists per 100,000 population (e.g. Bangladesh, Myanmar, Afghanistan, Egypt, Syria, Lebanon); and some countries would have five to eight times more psychiatrists per 100,000 (e.g. Philippines, Pakistan, Sri Lanka, Liberia, Nigeria and Zambia). Conclusions Large numbers of psychiatrists originating from key low and middle income countries are currently registered in the UK, US, New Zealand and Australia, with a concomitant impact on the psychiatrist/population ratio in the originating countries.
We suggest that creative international policy approaches are needed to ensure the individual migration rights of health professionals do not compromise societal population rights to health, and that there are public and fair agreements between countries within an internationally agreed framework. PMID:20140216

  12. Frequent loss of lineages and deficient duplications accounted for low copy number of disease resistance genes in Cucurbitaceae

    PubMed Central

    2013-01-01

    Background The sequenced genomes of cucumber, melon and watermelon have relatively few R-genes, with only 70, 75 and 55 copies, respectively. The mechanism for the low copy number of R-genes in Cucurbitaceae genomes remains unknown. Results Manual annotation of R-genes in the sequenced genomes of Cucurbitaceae species showed that approximately half of them are pseudogenes. Comparative analysis of R-genes showed frequent loss of R-gene loci in different Cucurbitaceae species. Phylogenetic analysis, data mining and PCR cloning using degenerate primers indicated that Cucurbitaceae has a limited number of R-gene lineages (subfamilies). Comparison between R-genes from Cucurbitaceae and those from poplar and soybean suggested frequent loss of R-gene lineages in Cucurbitaceae. Furthermore, the average number of R-genes per lineage in Cucurbitaceae species is approximately one-third of that in soybean or poplar. Therefore, both loss of lineages and deficient duplications in extant lineages accounted for the low copy number of R-genes in Cucurbitaceae. No extensive chimeras of R-genes were found in any of the sequenced Cucurbitaceae genomes. Nevertheless, one lineage of R-genes from Trichosanthes kirilowii, a wild Cucurbitaceae species, exhibits chimeric structures caused by gene conversions, and may contain a large number of distinct R-genes in natural populations. Conclusions Cucurbitaceae species have a limited number of R-gene lineages and each genome harbors relatively few R-genes. The scarcity of R-genes in Cucurbitaceae species was due to frequent loss of R-gene lineages and infrequent duplications in extant lineages. The evolutionary mechanisms for the large variation in copy number of R-genes in different plant species are discussed. PMID:23682795

  13. The Belgian repository of fundamental atomic data and stellar spectra (BRASS). I. Cross-matching atomic databases of astrophysical interest

    NASA Astrophysics Data System (ADS)

    Laverick, M.; Lobel, A.; Merle, T.; Royer, P.; Martayan, C.; David, M.; Hensberge, H.; Thienpont, E.

    2018-04-01

    Context. Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the Universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. Aims: The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions, from several atomic databases of astrophysical interest, and assessed their agreement. In a second step, synthetic spectra will be compared against extremely high-quality observed spectra, for a large number of BAFGK spectral type stars, in order to critically evaluate the atomic data of a large number of important stellar lines. Methods: Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a new non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition, within each retrieved repository, using the non-parametric cross-match. Results: We report on the number of cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that 2% of our line list and Vienna atomic line database retrievals are composed of duplicate transitions. Finally, we provide a number of examples of atomic spectral lines with different retrieved literature log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transition pairs are available to download at http://brass.sdf.org

  14. A microphysical parameterization of aqSOA and sulfate formation in clouds

    NASA Astrophysics Data System (ADS)

    McVay, Renee; Ervens, Barbara

    2017-07-01

    Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
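
The effective diameter used above is, up to a constant factor, the total volume-to-surface ratio of the droplet population, i.e. the ratio of the third to the second moment of the size distribution. A minimal sketch (the function name is our own):

```python
import numpy as np

def effective_diameter(diameters, counts):
    """Effective diameter of a droplet population: the ratio of the third
    to the second moment of the size distribution, i.e. proportional to
    the total volume-to-surface ratio of the droplets."""
    d = np.asarray(diameters, dtype=float)
    n = np.asarray(counts, dtype=float)
    return float((n * d**3).sum() / (n * d**2).sum())
```

For a monodisperse population the effective diameter reduces to the common droplet diameter, as expected, and collapsing a full size distribution to this single diameter is what allows the single-class approach to replace many droplet size classes.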

  15. Trophic status drives interannual variability in nesting numbers of marine turtles.

    PubMed

    Broderick, A C; Godley, B J; Hays, G C

    2001-07-22

    Large annual fluctuations are seen in breeding numbers in many populations of non-annual breeders. We examined the interannual variation in nesting numbers of populations of green (Chelonia mydas) (n = 16 populations), loggerhead (Caretta caretta) (n = 10 populations), leatherback (Dermochelys coriacea) (n = 9 populations) and hawksbill turtles (Eretmochelys imbricata) (n = 10 populations). Interannual variation was greatest in the green turtle. When comparing green and loggerhead turtles nesting in Cyprus we found that green turtles were more likely to change the interval between laying seasons and showed greater variation in the number of clutches laid in a season. We suggest that these differences are driven by the varying trophic statuses of the different species. Green turtles are herbivorous, feeding on sea grasses and macro-algae, and this primary production will be more tightly coupled with prevailing environmental conditions than the carnivorous diet of the loggerhead turtle.

  16. Evaluation of a Computer-Based Training Program for Enhancing Arithmetic Skills and Spatial Number Representation in Primary School Children.

    PubMed

    Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael

    2016-01-01

    Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.

  17. A resource-based game theoretical approach for the paradox of the plankton.

    PubMed

    Huang, Weini; de Araujo Campos, Paulo Roberto; Moraes de Oliveira, Viviane; Fagundes Ferreira, Fernando

    2016-01-01

    The maintenance of species diversity is a central focus in ecology. It is not rare to observe more species than the number of limiting resources, especially in plankton communities. However, such high species diversity is hard to achieve in theory under the competitive exclusion principles, known as the plankton paradox. Previous studies often focus on the coexistence of predefined species and ignore the fact that species can evolve. We model multi-resource competitions using evolutionary games, where the number of species fluctuates under extinction and the appearance of new species. The interspecific and intraspecific competitions are captured by a dynamical payoff matrix, which has a size of the number of species. The competition strength (payoff entries) is obtained from comparing the capability of species in consuming resources, which can change over time. This allows for the robust coexistence of a large number of species, providing a possible solution to the plankton paradox.

  18. A resource-based game theoretical approach for the paradox of the plankton

    PubMed Central

    de Araujo Campos, Paulo Roberto; Moraes de Oliveira, Viviane

    2016-01-01

    The maintenance of species diversity is a central focus in ecology. It is not rare to observe more species than the number of limiting resources, especially in plankton communities. However, such high species diversity is hard to achieve in theory under the competitive exclusion principles, known as the plankton paradox. Previous studies often focus on the coexistence of predefined species and ignore the fact that species can evolve. We model multi-resource competitions using evolutionary games, where the number of species fluctuates under extinction and the appearance of new species. The interspecific and intraspecific competitions are captured by a dynamical payoff matrix, which has a size of the number of species. The competition strength (payoff entries) is obtained from comparing the capability of species in consuming resources, which can change over time. This allows for the robust coexistence of a large number of species, providing a possible solution to the plankton paradox. PMID:27602293
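
As a toy illustration of the modelling idea in this abstract (not the authors' actual payoff construction, which is more elaborate and time-dependent), competition strengths can be derived by comparing hypothetical resource-consumption capabilities, and species frequencies then evolved by replicator dynamics:

```python
import numpy as np

def payoff_from_consumption(C):
    """Pairwise competition strengths from resource-consumption abilities.

    C[i, k] encodes species i's capability of consuming resource k (a
    hypothetical encoding). The payoff of species i against species j is
    taken here as the fraction of resources on which i out-consumes j.
    """
    n = C.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = (C[i] > C[j]).mean()
    return A

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: x_i' = x_i * (f_i - <f>)."""
    f = A @ x
    x = x + dt * x * (f - x @ f)
    return x / x.sum()
```

Note that the payoff matrix has a size equal to the current number of species, so it grows and shrinks as new species appear and others go extinct, which is the mechanism the abstract describes.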

  19. Determinant Computation on the GPU using the Condensation Method

    NASA Astrophysics Data System (ADS)

    Anisul Haque, Sardar; Moreno Maza, Marc

    2012-02-01

    We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has a large potential for improving those packages in terms of running time and numerical stability.
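
The condensation idea can be illustrated on the CPU with a Dodgson-style recurrence: each pass replaces a k × k matrix with the (k-1) × (k-1) matrix of its connected 2 × 2 minors, divided entrywise by the interior of the matrix from two passes earlier. This sketch assumes those interior entries are nonzero; the Salem-Kouachi variant used in the paper is designed to avoid that restriction:

```python
import numpy as np

def condensation_det(A):
    """Determinant via repeated 2x2 condensation (Dodgson style).

    Each pass maps a k x k matrix to the (k-1) x (k-1) matrix of its
    connected 2x2 minors, divided entrywise by the interior of the
    matrix from two passes earlier. Assumes those interior entries are
    nonzero; the GPU variant by Salem and Kouachi lifts this restriction.
    """
    M = np.asarray(A, dtype=float)
    prev_interior = np.ones_like(M[:-1, :-1])  # no division on the first pass
    while M.shape[0] > 1:
        minors = M[:-1, :-1] * M[1:, 1:] - M[:-1, 1:] * M[1:, :-1]
        M, prev_interior = minors / prev_interior, M[1:-1, 1:-1]
    return float(M[0, 0])
```

Each pass computes all 2 × 2 minors independently, which is why the method maps naturally onto the massively parallel execution model of a GPU.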

  20. Advances in fish vaccine delivery.

    PubMed

    Plant, Karen P; Lapatra, Scott E

    2011-12-01

    Disease prevention is essential to the continued development of aquaculture around the world. Vaccination is the most effective method of combating disease and currently there are a number of vaccines commercially available for use in fish. The majority of aquatic vaccines are delivered by injection, which is by far the most effective method when compared to oral or immersion deliveries. However it is labor intensive, costly and not feasible for large numbers of fish under 20 g. Attempts to develop novel oral and immersion delivery methods have resulted in varying degrees of success but may have great potential for the future. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Measurements of the Absorption by Auditorium Seating—A Model Study

    NASA Astrophysics Data System (ADS)

    Barron, M.; Coleman, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect of row-to-row spacing on absorption, as well as the absorption of small numbers of seating rows, was also investigated with model seats.

  2. The railway suicide death of a famous German football player: impact on the subsequent frequency of railway suicide acts in Germany.

    PubMed

    Ladwig, Karl-Heinz; Kunrath, Sabine; Lukaschek, Karoline; Baumert, Jens

    2012-01-01

    The railway suicide of Robert Enke, an internationally respected German football goalkeeper, sent shockwaves throughout the world of football. We analyzed its impact on the frequency of subsequent railway suicide acts (RS). Two analytic approaches were performed applying German Railway Event database Safety (EDS) data: first, an inter-year approach comparing the incidence of RS during a predefined "index period" with identical time windows in 2006 to 2008; second, an intra-year approach comparing the number of RS 28 days before and after the incident. To analyze a possible "compensatory deficit", the number of RS in the subsequent first quarter of 2010 was compared with the identical time windows in the preceding three years. Incidence ratios with 95% confidence intervals were estimated by Poisson regression. Findings were controlled for temperature. Compared to the preceding three years, the incidence ratio (IR) of RS in the index period was 1.81 (1.48-2.21; p<0.001), corresponding to an overall percentage change of 81% (48-121%; p<0.001). Comparing the number of suicides 28 days before and after the incident revealed an even more pronounced IR (2.2; 1.6-3.0). No modifications of these associations were observed by time of day, location of the suicide, or fatality. No compensatory deficit occurred in the post-acute period. The substantial increase of RS in the aftermath of the footballer's suicide death reflects copycat behavior of an unforeseen extent, even though the media reporting was largely sensitive and preventive measures were taken. Copyright © 2011 Elsevier B.V. All rights reserved.
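
For equal-length observation windows and no covariates, the incidence ratio and its 95% confidence interval reduce to a simple closed form. This is a deliberate simplification of the Poisson regression used in the study (the function name is our own):

```python
import math

def incidence_ratio(count_after, count_before):
    """Incidence ratio of two event counts over equal-length windows,
    with a 95% Wald confidence interval on the log scale. A deliberate
    simplification of the Poisson regression used in the study (no
    covariates, equal exposure in both windows)."""
    ir = count_after / count_before
    # Standard error of log(IR) for two independent Poisson counts
    se = math.sqrt(1.0 / count_after + 1.0 / count_before)
    lo = math.exp(math.log(ir) - 1.96 * se)
    hi = math.exp(math.log(ir) + 1.96 * se)
    return ir, lo, hi
```

With twice as many events after as before, this yields an IR of 2.0, in the spirit of the roughly twofold increase in the 28-day comparison reported above.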

  3. Adhesion of Mineral and Soot Aerosols can Strongly Affect their Scattering and Absorption Properties

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Jana M.

    2012-01-01

    We use the numerically exact superposition T-matrix method to compute the optical cross sections and the Stokes scattering matrix for polydisperse mineral aerosols (modeled as homogeneous spheres) covered with a large number of much smaller soot particles. These results are compared with the Lorenz-Mie results for a uniform external mixture of mineral and soot aerosols. We show that the effect of soot particles adhering to large mineral particles can be to change the extinction and scattering cross sections and the asymmetry parameter quite substantially. The effect on the phase function and degree of linear polarization can be equally significant.

  4. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed-scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments, and statistical information is extracted from the computer-generated data.

  5. In search of multipath interference using large molecules

    PubMed Central

    Cotter, Joseph P.; Brand, Christian; Knobloch, Christian; Lilach, Yigal; Cheshnovsky, Ori; Arndt, Markus

    2017-01-01

    The superposition principle is fundamental to the quantum description of both light and matter. Recently, a number of experiments have sought to test this principle directly using coherent light, single photons, and nuclear spin states. We extend these experiments to massive particles for the first time. We compare the interference patterns arising from a beam of large dye molecules diffracting at single-, double-, and triple-slit material masks to place limits on any high-order, or multipath, contributions. We place an upper bound of less than one particle in a hundred deviating from the expectations of quantum mechanics over a broad range of transverse momenta and de Broglie wavelengths. PMID:28819641

  6. Current superimposition variable flux reluctance motor with 8 salient poles

    NASA Astrophysics Data System (ADS)

    Takahara, Kazuaki; Hirata, Katsuhiro; Niguchi, Noboru; Kohara, Akira

    2017-12-01

    We propose a current superimposition variable flux reluctance motor for a traction motor of electric vehicles and hybrid electric vehicles, which consists of 10 salient poles in the rotor and 12 slots in the stator. However, the iron losses of this motor in high rotation speed ranges are large because the number of salient poles is large. In this paper, we propose a current superimposition variable flux reluctance motor that consists of 8 salient poles and 12 slots. The characteristics of the 10-pole-12-slot and 8-pole-12-slot current superimposition variable flux reluctance motors are compared using finite element analysis under vector control.

  7. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2002-08-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, and relational databases, as well as ACeDB. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.

  8. Small and Large Number Processing in Infants and Toddlers with Williams Syndrome

    ERIC Educational Resources Information Center

    Van Herwegen, Jo; Ansari, Daniel; Xu, Fei; Karmiloff-Smith, Annette

    2008-01-01

    Previous studies have suggested that typically developing 6-month-old infants are able to discriminate between small and large numerosities. However, discrimination between small numerosities in young infants is only possible when variables continuous with number (e.g. area or circumference) are confounded. In contrast, large number discrimination…

  9. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a storey building can be modelled into a system of second order ordinary differential equations. If the number of floors of a building is large, then the result is a large scale system of second order ordinary differential equations. The large scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek for accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
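As a toy illustration of the accuracy comparison above, the sketch below (not the authors' code) integrates the single-degree-of-freedom analogue x'' = -x of the storey-building model with an explicit Euler step and a Heun (second-order) step over one period; the step size and end time are arbitrary choices.

```python
import math

def simulate(step, t_end=2 * math.pi, dt=1e-3):
    """Integrate x'' = -x (a unit mass-spring, the 1-storey analogue of the
    building model) as the first-order system (x, v)' = (v, -x)."""
    x, v, t = 1.0, 0.0, 0.0
    while t < t_end - 1e-12:
        x, v = step(x, v, dt)
        t += dt
    return x  # exact solution returns to x = cos(2*pi) = 1 at t_end

def euler(x, v, dt):                 # first-order explicit Euler step
    return x + dt * v, v - dt * x

def heun(x, v, dt):                  # second-order Heun (predictor-corrector)
    xp, vp = x + dt * v, v - dt * x  # Euler predictor
    return x + 0.5 * dt * (v + vp), v - 0.5 * dt * (x + xp)

err_euler = abs(simulate(euler) - 1.0)
err_heun = abs(simulate(heun) - 1.0)
print(err_euler, err_heun)  # the Heun error is far smaller at the same dt
```

The same ranking the paper reports for large systems already shows up in this scalar case: the Euler amplitude drifts by O(dt) per period, while Heun's drift is higher order.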

  10. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system that are subjected to selection rules favoring the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can make their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.

  11. The large-amplitude combustion oscillation in a single-side expansion scramjet combustor

    NASA Astrophysics Data System (ADS)

    Ouyang, Hao; Liu, Weidong; Sun, Mingbo

    2015-12-01

    Combustion oscillation in scramjet combustors was long believed not to exist and was ignored. Compared with flame pulsation, large-amplitude combustion oscillation in a scramjet combustor is indeed unfamiliar and difficult to observe. In this study, specifically designed experiments were carried out to investigate this unusual phenomenon in a single-side expansion scramjet combustor. The combustor entrance condition corresponds to a scramjet flight Mach number of 4.0 with a total temperature of 947 K. The results show that large-amplitude combustion oscillation can exist in a scramjet combustor; it is not occasional and can be reproduced. Under the conditions of this study, moreover, the large-amplitude combustion oscillation is regular and periodic, with a principal frequency of about 126 Hz. The combustion oscillation is accompanied by transformation of the flame-holding pattern and combustion mode transitions between scramjet-mode and ramjet-mode combustion.

  12. Comparison of birds detected from roadside and off-road point counts in the Shenandoah National Park

    USGS Publications Warehouse

    Keller, C.M.E.; Fuller, M.R.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Roadside point counts are generally used for large surveys to increase the number of samples. We examined differences in species detected from roadside versus off-road (200-m and 400-m) point counts in the Shenandoah National Park. We also compared the list of species detected in the first 3 minutes to those detected in 10 minutes for potential species biases. Results from 81 paired roadside and off-road counts indicated that roadside counts had higher numbers of several edge species but did not have lower numbers of nonedge forest species. More individuals and species were detected from roadside points because of this increase in edge species. Sixty-five percent of the species detected in 10 minutes were recorded in the first 3 minutes.

  13. High-Fidelity PIV of a Naturally Grown High Reynolds Number Turbulent Boundary Layer

    NASA Astrophysics Data System (ADS)

    Biles, Drummond; White, Chris; Klewicki, Joseph

    2017-11-01

    High-fidelity particle image velocimetry data acquired in the Flow Physics Facility (FPF) at the University of New Hampshire is presented. Having a test section length of 72m, the FPF employs the ``big and slow'' approach to obtain well-resolved turbulent boundary layer measurements at high Reynolds number. We report on PIV measurements acquired in the streamwise-wall-normal plane at a downstream position 59m from the test-section inlet over the friction Reynolds number range 7000 < Reτ < 15000 . Local flow tracer seeding is employed through a wall-mounted slot fed by a large volume plenum located 13.4m upstream of the PIV measurement station. Both time-independent and time-dependent turbulent flow statistics are presented and compared to existing data.

  14. Linear reduction method for predictive and informative tag SNP selection.

    PubMed

    He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander

    2005-01-01

    Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large, and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that must be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear-algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that for sufficiently long haplotypes, knowing only 0.4% of all SNPs, the proposed linear reduction method predicts an unknown haplotype with an error rate below 2%, based on 10% of the population.
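The idea of selecting linearly independent columns can be sketched with a toy GF(2) version (an illustration only; the paper's actual arithmetic and prediction step may differ): each SNP column is reduced against an XOR basis, and a column becomes a tag only if it adds rank.

```python
def select_tag_snps(haplotypes):
    """Pick a maximal linearly independent set of SNP columns over GF(2).
    haplotypes: list of rows (individuals), each a list of 0/1 alleles.
    Every non-tag column is an XOR combination of the returned tags,
    so it can be reconstructed from them."""
    n = len(haplotypes)
    basis = {}   # highest set bit -> column bitmask already in the basis
    tags = []
    for j in range(len(haplotypes[0])):
        v = sum(haplotypes[i][j] << i for i in range(n))  # column as int
        while v:
            t = v.bit_length() - 1
            if t not in basis:
                basis[t] = v     # column adds rank: keep it as a tag
                tags.append(j)
                break
            v ^= basis[t]        # reduce against the existing basis
    return tags

# Toy panel of 4 individuals x 5 SNPs, built so that column 3 equals
# column 0 XOR column 1 and column 4 duplicates column 2.
panel = [
    [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 1],
]
print(select_tag_snps(panel))  # columns 0, 1, 2 suffice as tags
```

Representing each column as one big integer makes the GF(2) elimination a handful of XORs, which is why rank-based selection scales to long haplotypes.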

  15. Some observations of tip-vortex cavitation

    NASA Astrophysics Data System (ADS)

    Arndt, R. E. A.; Arakeri, V. H.; Higuchi, H.

    1991-08-01

    Cavitation has been observed in the trailing vortex system of an elliptic platform hydrofoil. A complex dependence on Reynolds number and gas content is noted at inception. Some of the observations can be related to tension effects associated with the lack of sufficiently large-sized nuclei. Inception measurements are compared with estimates of pressure in the vortex obtained from LDV measurements of velocity within the vortex. It is concluded that a complete correlation is not possible without knowledge of the fluctuating levels of pressure in tip-vortex flows. When cavitation is fully developed, the observed tip-vortex trajectory shows a surprising lack of dependence on any of the physical parameters varied, such as angle of attack, Reynolds number, cavitation number, and dissolved gas content.

  16. Comparison of visualized turbine endwall secondary flows and measured heat transfer patterns

    NASA Technical Reports Server (NTRS)

    Gaugler, R. E.; Russell, L. M.

    1983-01-01

    Various flow visualization techniques were used to define the secondary flows near the endwall in a large turbine cascade for which heat transfer data were available. A comparison of the visualized flow patterns and the measured Stanton number distribution was made for cases where the inlet Reynolds number and exit Mach number were matched. Flows were visualized by using neutrally buoyant helium-filled soap bubbles, by using smoke from oil-soaked cigars, and by a few techniques using permanent marker pen ink dots and synthetic wintergreen oil. Details of the horseshoe vortex and secondary flows can be directly compared with the heat transfer distribution. Near the cascade entrance there is an obvious correlation between the two sets of data, but well into the passage the effect of secondary flow is not as obvious.

  17. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication, the core of the procedure, which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with other fast, high-quality random number generators. An additional feature is fast skipping of generator states, leading to a seeding scheme that guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler.
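For reference, the subtract-with-borrow recurrence underlying RANLUX (Marsaglia-Zaman form, with RANLUX's parameters r = 24, s = 10, b = 2^24) can be sketched in a few lines. This is a plain generator with simplified seeding and without RANLUX's luxury-level decimation; the paper's reimplementation exploits the known equivalence of this recurrence to an LCG with a large modulus of the form b^r - b^s + 1.

```python
def swb_stream(seed_words, r=24, s=10, b=1 << 24):
    """Subtract-with-borrow generator:
    x_n = (x_{n-s} - x_{n-r} - c) mod b, with c the borrow bit.
    seed_words must supply r initial values in [0, b)."""
    state = list(seed_words[:r])   # last r outputs, oldest first
    carry = 0
    while True:
        t = state[-s] - state[-r] - carry
        if t < 0:
            t += b
            carry = 1              # borrow propagates to the next step
        else:
            carry = 0
        state.append(t)
        del state[0]
        yield t

# Toy seed (a real implementation would seed more carefully).
gen = swb_stream(list(range(1, 25)))
draws = [next(gen) for _ in range(5)]
print(draws)
```

Each step costs two subtractions, which is why the SWB form is attractive; the LCG view trades this for one wide modular multiplication but enables the fast state skipping described above.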

  18. Physicochemical and histological changes in the arterial wall of nonhuman primates during progression and regression of atherosclerosis.

    PubMed Central

    Small, D M; Bond, M G; Waugh, D; Prack, M; Sawyer, J K

    1984-01-01

    To identify the temporal changes occurring during progression and regression of atherosclerosis in nonhuman primates, we have studied the physicochemical and histological characteristics of arterial wall lesions during a 30-mo progression period of diet-induced hypercholesterolemia and during a 12-mo period of regression. Three groups of cynomolgus monkeys (Macaca fascicularis) were studied. Control groups were fed a basal chow diet for 18, 24, and 30 mo and were compared with progression groups that were fed a high-cholesterol-containing diet for up to 30 mo. Regression groups were fed a high-cholesterol diet for 18 mo to induce atherosclerosis and then fed monkey chow for up to 12 mo. The progression group monkeys were killed at 6, 12, 18, 24, and 30 mo, and the regression animals were killed at 24 and 30 mo (i.e., after 6 and 12 mo of being fed a noncholesterol-containing chow diet). Histology and morphometry, physical microscopy for cholesterol monohydrate crystals, foam cell and droplet melting points, and chemical composition studies were completed on a large number of individual arterial lesions. Control animals had very little cholesterol ester, rare foam cells, and no extracellular cholesterol ester droplets or cholesterol crystals. During progression, the arteries first increased cholesterol ester content to produce high-melting (approximately 45 degrees C) foam cell-rich lesions essentially devoid of cholesterol crystals. With time, the number of cholesterol crystals increased so that by 30 mo large numbers were present. Foam cells decreased with time, but their melting temperature remained high while that of extracellular droplets fell to approximately 38 degrees C. Between 18 and 30 mo necrosis appeared and worsened. After 6-mo regression, unexpected changes occurred in the lesions.
Compared with 24-mo progression, the chemical composition showed a relative increase in free cholesterol and a decrease in cholesterol ester, and microscopy revealed large numbers of cholesterol crystals. Concomitantly, foam cells decreased and the melting temperature of both intra- and extracellular cholesterol ester markedly decreased. After 12-mo regression, cholesterol decreased, cholesterol crystals and necrosis diminished, and collagen appeared increased. Thus, during progression there is initially an increase in the number of foam cells containing very high-melting intracellular cholesterol ester droplets. By 30 mo, cholesterol crystals and necrosis dominate and high-melting foam cells appear only at lesion margins, suggesting that the initial process continues at the lesion edge. The lower melting point of extracellular esters indicates a lipid composition different from intracellular droplets. Thus, the changes observed in these animals generally reflect those predicted for progression of human atherosclerosis. During the initial 6 mo of regression, necrosis remains, the number of foam cells decreases, and cholesterol ester content decreases; however, the relative proportion of free cholesterol increases, and large numbers of cholesterol crystals are formed. Thus, large and rapid decreases in serum cholesterol concentration to produce regression in fact may result in the precipitation of cholesterol monohydrate and an apparent worsening of the lesions. More prolonged regression (12-mo) tends to return the lipid composition of the artery wall towards normal, partially reduces cholesterol crystals, and results in an improved but scarred intima. PMID:6725553

  19. Analogue Evaluation of the Effects of Opportunities to Respond and Ratios of Known Items within Drill Rehearsal of Esperanto Words

    ERIC Educational Resources Information Center

    Szadokierski, Isadora; Burns, Matthew K.

    2008-01-01

    Drill procedures have been used to increase the retention of various types of information, but little is known about the causal mechanisms of these techniques. The current study compared the effect of two key features of drill procedures, a large number of opportunities to respond (OTR) and a drill ratio that maintains a high percentage of known…

  20. Comparing Perception of Stroop Stimuli in Focused versus Divided Attention Paradigms: Evidence for Dramatic Processing Differences

    ERIC Educational Resources Information Center

    Eidels, Ami; Townsend, James T.; Algom, Daniel

    2010-01-01

    A huge set of focused attention experiments show that when presented with color words printed in color, observers report the ink color faster if the carrier word is the name of the color rather than the name of an alternative color, the Stroop effect. There is also a large number (although not so numerous as the Stroop task) of so-called…

  1. Nondestructive ultrasonic testing of materials

    DOEpatents

    Hildebrand, Bernard P.

    1994-01-01

    Reflection wave forms obtained from aged and unaged material samples can be compared in order to indicate trends toward age-related flaws. Statistical comparison of a large number of data points from such wave forms can indicate changes in the microstructure of the material due to aging. The process is useful for predicting when flaws may occur in structural elements of high risk structures such as nuclear power plants, airplanes, and bridges.

  2. Nondestructive ultrasonic testing of materials

    DOEpatents

    Hildebrand, B.P.

    1994-08-02

    Reflection wave forms obtained from aged and unaged material samples can be compared in order to indicate trends toward age-related flaws. Statistical comparison of a large number of data points from such wave forms can indicate changes in the microstructure of the material due to aging. The process is useful for predicting when flaws may occur in structural elements of high risk structures such as nuclear power plants, airplanes, and bridges. 4 figs.

  3. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

    In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking future-generation quantum annealing machines as well as classical and quantum optimization algorithms.

  4. Comparison of Birds Detected from Roadside and Off-Road Point Counts in the Shenandoah National Park

    Treesearch

    Cherry M.E. Keller; Mark R. Fuller

    1995-01-01

    Roadside point counts are generally used for large surveys to increase the number of samples. We examined differences in species detected from roadside versus off-road (200-m and 400-m) point counts in the Shenandoah National Park. We also compared the list of species detected in the first 3 minutes to those detected in 10 minutes for potential species biases. Results...

  5. Posttransplant oxygen inhalation improves the outcome of subcutaneous islet transplantation: A promising clinical alternative to the conventional intrahepatic site.

    PubMed

    Komatsu, H; Rawson, J; Barriga, A; Gonzalez, N; Mendez, D; Li, J; Omori, K; Kandeel, F; Mullen, Y

    2018-04-01

    Subcutaneous tissue is a promising site for islet transplantation, due to its large area and accessibility, which allows minimally invasive procedures for transplantation, graft monitoring, and removal of malignancies as needed. However, relative to the conventional intrahepatic transplantation site, the subcutaneous site requires a large number of islets to achieve engraftment success and diabetes reversal, due to hypoxia and low vascularity. We report that the efficiency of subcutaneous islet transplantation in a Lewis rat model is significantly improved by treating recipients with inhaled 50% oxygen, in conjunction with prevascularization of the graft bed by agarose-basic fibroblast growth factor. Administration of 50% oxygen increased oxygen tension in the subcutaneous site to 140 mm Hg, compared to 45 mm Hg under ambient air. In vitro, islets cultured under 140 mm Hg oxygen showed reduced central necrosis and increased insulin release, compared to those maintained in 45 mm Hg oxygen. Six hundred syngeneic islets subcutaneously transplanted into the prevascularized graft bed reversed diabetes when combined with postoperative 50% oxygen inhalation for 3 days, a number comparable to that required for intrahepatic transplantation; in the absence of oxygen treatment, diabetes was not reversed. Thus, we show oxygen inhalation to be a simple and promising approach to successfully establishing subcutaneous islet transplantation. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.

  6. Comparison of Submental Blood Collection with the Retroorbital and Submandibular Methods in Mice (Mus musculus)

    PubMed Central

    Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L

    2016-01-01

    Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood currently are typically collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis and clotting and increased the overall number of high-quality samples; however, the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712

  7. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The method was developed for systems with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies are performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.

  8. Comparing spatial regression to random forests for large ...

    EPA Pesticide Factsheets

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
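The Box-Cox step described above can be illustrated with a minimal grid search (a sketch only, not the authors' procedure, which also accounts for spatial autocorrelation): pick the lambda whose transform makes a covariate most linearly related to the response.

```python
import math

def boxcox(x, lam):
    """Box-Cox power transform of a positive value; lam = 0 is the log limit."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def pearson(u, v):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def best_lambda(x, y, grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Grid-search lambda maximizing |r| between transformed x and y."""
    return max(grid,
               key=lambda lam: abs(pearson([boxcox(v, lam) for v in x], y)))

# Synthetic covariate whose true relationship to the response is
# logarithmic, so the log transform (lambda = 0) should win.
x = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
y = [math.log(v) for v in x]
print(best_lambda(x, y))
```

In practice the transform would be estimated jointly with the spatial covariance rather than by marginal correlation, but the linearization goal is the same.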

  9. Effects of a temperature-dependent rheology on large scale continental extension

    NASA Technical Reports Server (NTRS)

    Sonder, Leslie J.; England, Philip C.

    1988-01-01

    The effects of a temperature-dependent rheology on large-scale continental extension are investigated using a thin viscous sheet model. A vertically-averaged rheology is used that is consistent with laboratory experiments on power-law creep of olivine and that depends exponentially on temperature. Results of the calculations depend principally on two parameters: the Peclet number, which describes the relative rates of advection and diffusion of heat, and a dimensionless activation energy, which controls the temperature dependence of the rheology. At short times following the beginning of extension, deformation occurs with negligible change in temperature, so that only small changes in lithospheric strength occur due to attenuation of the lithosphere. However, after a certain critical time interval, thermal diffusion lowers temperatures in the lithosphere, strongly increasing lithospheric strength and slowing the rate of extension. This critical time depends principally on the Peclet number and is short compared with the thermal time constant of the lithosphere. The strength changes cause the locus of high extensional strain rates to shift with time from regions of high strain to regions of low strain. Results of the calculations are compared with observations from the Aegean, where maximum extensional strains are found in the south, near Crete, but maximum present-day strain rates are largest about 300 km further north.

  10. Small-Angle and Ultrasmall-Angle Neutron Scattering (SANS/USANS) Study of New Albany Shale: A Treatise on Microporosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahadur, Jitendra; Radlinski, Andrzej P.; Melnichenko, Yuri B.

    We applied small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) techniques to study the microstructure of several New Albany shales of different maturity. It has been established that the total porosity decreases with maturity and increases somewhat for post-mature samples. A new method of SANS data analysis was developed, which allows the extraction of information about the size range and number density of micropores from the relatively flat scattering intensity observed in the limit of the large scattering vector Q. Macropores and a significant number of mesopores are surface fractals, and their structure can be described in terms of the polydisperse spheres (PDSP) model. The model-independent Porod invariant method was employed to estimate total porosity, and the results were compared with the PDSP model results. It has been demonstrated that independent evaluation of the incoherent background is crucial for accurate interpretation of the scattering data in the limit of large Q-values. Moreover, pore volumes estimated by the N 2 and CO 2 adsorption, as well as via the mercury intrusion technique, have been compared with those measured by SANS/USANS, and possible reasons for the observed discrepancies are discussed.

  11. Small-Angle and Ultrasmall-Angle Neutron Scattering (SANS/USANS) Study of New Albany Shale: A Treatise on Microporosity

    DOE PAGES

    Bahadur, Jitendra; Radlinski, Andrzej P.; Melnichenko, Yuri B.; ...

    2014-12-17

    We applied small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) techniques to study the microstructure of several New Albany shales of different maturity. It has been established that the total porosity decreases with maturity and increases somewhat for post-mature samples. A new method of SANS data analysis was developed, which allows the extraction of information about the size range and number density of micropores from the relatively flat scattering intensity observed in the limit of the large scattering vector Q. Macropores and a significant number of mesopores are surface fractals, and their structure can be described in terms of the polydisperse spheres (PDSP) model. The model-independent Porod invariant method was employed to estimate total porosity, and the results were compared with the PDSP model results. It has been demonstrated that independent evaluation of the incoherent background is crucial for accurate interpretation of the scattering data in the limit of large Q-values. Moreover, pore volumes estimated by the N 2 and CO 2 adsorption, as well as via the mercury intrusion technique, have been compared with those measured by SANS/USANS, and possible reasons for the observed discrepancies are discussed.

  12. Decay of grid turbulence in superfluid helium-4: Mesh dependence

    NASA Astrophysics Data System (ADS)

    Yang, J.; Ihas, G. G.

    2018-03-01

    Temporal decay of grid turbulence is experimentally studied in superfluid 4He in a large square channel. The second sound attenuation method is used to measure the turbulent vortex line density (L) with a phase-locked tracking technique to minimize frequency-shift effects induced by temperature fluctuations. Two different grids (0.8 mm and 3.0 mm mesh) are pulled to generate turbulence. Theory predicts different power laws for the decay: L should decay as t^(-11/10) while the length scale of the energy-containing eddies grows from the grid mesh size to the size of the channel; at later times, after the energy-containing eddy size becomes comparable to the channel, L should follow t^(-3/2). Our recent experimental data exhibit evidence for t^(-11/10) at early times but t^(-2) instead of t^(-3/2) at later times. Moreover, a consistent bump/plateau feature is prominent between the two decay regimes for the smaller (0.8 mm) grid mesh but absent for the 3.0 mm grid mesh. This implies that in the large channel different types of turbulence are generated, depending on the mesh hole size (mesh Reynolds number) compared to the channel Reynolds number.
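Decay exponents like those above are typically extracted from vortex-line-density measurements by a least-squares fit in log-log space, where a power law L = A·t^p becomes a straight line of slope p. A minimal sketch with synthetic data (the values are illustrative, not the experiment's):

```python
import math

def fit_power_law(ts, ls):
    """Least-squares fit of L = A * t**p in log-log coordinates;
    returns the decay exponent p (the slope of log L vs. log t)."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(li) for li in ls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic late-time decay following L ~ t^(-3/2)
ts = [1.0, 2.0, 4.0, 8.0, 16.0]
ls = [5.0 * t ** -1.5 for t in ts]
p = fit_power_law(ts, ls)  # close to -1.5
```

On real data the fit would be applied separately to the early-time and late-time windows, with the bump/plateau region between them excluded.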

  13. The single chest tube versus double chest tube application after pulmonary lobectomy: A systematic review and meta-analysis.

    PubMed

    Zhang, Xuefei; Lv, Desheng; Li, Mo; Sun, Ge; Liu, Changhong

    2016-12-01

    Draining the chest cavity with two chest tubes after pulmonary lobectomy is a common practice. The objective of this study was to evaluate whether using two tubes after a pulmonary lobectomy is more effective than using a single tube. We performed a meta-analysis of five randomized studies that compared single with double chest tube application after pulmonary lobectomy. The primary end-points were the amount of drainage and the duration of chest tube drainage. The secondary end-points were the number of new drain insertions after operation, hospital stay after operation, the number of patients with subcutaneous emphysema after operation, the number of patients with residual pleural air space, pain score, the number of patients who needed thoracentesis, and cost. Five randomized controlled trials totaling 502 patients were included. The meta-analysis results are as follows: there were statistically significant differences in the amount of drainage (risk ratio [RR] = -0.15; 95% confidence interval [CI] = -3.17, -0.12, P = 0.03), duration of chest tube drainage (RR = -0.43; 95% CI = -0.57, -0.19, P = 0.02), and pain score (P < 0.05). There were no statistically significant differences between the two groups with regard to the number of new drain insertions after operation. Compared with the double chest tube, the single chest tube significantly decreases the amount of drainage, duration of chest tube drainage, pain score, number of patients who need thoracentesis, and cost. Although the evidence supporting these results is convincing, they still need to be confirmed by large-sample, multicenter, randomized controlled trials.

  14. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
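The Tikhonov-regularized minimum norm estimate has the standard closed form x = Aᵀ(AAᵀ + λI)⁻¹b for a lead-field matrix A and measurement vector b; with λ = 0 it reduces to the exact minimum-norm solution of the underdetermined system. A toy sketch for a 2-sensor, 4-voxel problem (the matrices are illustrative, not an MEG lead field; the 2×2 inverse is written out to stay dependency-free):

```python
def min_norm_tikhonov(A, b, lam):
    """Regularized minimum-norm solution x = A^T (A A^T + lam*I)^{-1} b
    for an underdetermined 2 x n system (2x2 Gram-matrix inverse written out)."""
    n = len(A[0])
    # G = A A^T + lam * I  (2 x 2)
    g = [[sum(A[i][k] * A[j][k] for k in range(n)) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]
    w = [sum(ginv[i][j] * b[j] for j in range(2)) for i in range(2)]
    return [sum(A[i][k] * w[i] for i in range(2)) for k in range(n)]

# 2 sensors, 4 source voxels: infinitely many exact fits exist;
# this picks the one with the smallest Euclidean norm.
A = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
b = [2.0, 4.0]
x = min_norm_tikhonov(A, b, 0.0)  # [1.0, 2.0, 1.0, 2.0]
```

Increasing λ trades fit accuracy for stability against measurement noise, which is the instability the abstract refers to.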

  15. Analysis of the Giacobini-Zinner bow wave

    NASA Technical Reports Server (NTRS)

    Smith, E. J.; Slavin, J. A.; Bame, S. J.; Thomsen, M. F.; Cowley, S. W. H.; Richardson, I. G.; Hovestadt, D.; Ipavich, F. M.; Ogilvie, K. W.; Coplan, M. A.

    1986-01-01

    The cometary bow wave of P/Giacobini-Zinner has been analyzed using the complete set of ICE field and particle observations to determine if it is a shock. Changes in the magnetic field and plasma flow velocities from upstream to downstream have been analyzed to determine the direction of the normal and the propagation velocity of the bow wave. The velocity has then been compared with the fast magnetosonic wave speed upstream to derive the Mach number and establish whether it is supersonic, i.e., a shock, or subsonic, i.e., a large amplitude wave. The various measurements have also been compared with values derived from a Rankine-Hugoniot analysis. The results indicate that, inbound, the bow wave is a shock with M = 1.5. Outbound, a subsonic Mach number is obtained, however, arguments are presented that the bow wave is also likely to be a shock at this location.

  16. Comparison of Two Methods for the Isolation of Salmonellae From Imported Foods

    PubMed Central

    Taylor, Welton I.; Hobbs, Betty C.; Smith, Muriel E.

    1964-01-01

    Two methods for the detection of salmonellae in foods were compared in 179 imported meat and egg samples. The number of positive samples and replications, and the number of strains and kinds of serotypes were statistically comparable by both the direct enrichment method of the Food Hygiene Laboratory in England, and the pre-enrichment method devised for processed foods in the United States. Boneless frozen beef, veal, and horsemeat imported from five countries for consumption in England were found to have salmonellae present in 48 of 116 (41%) samples. Dried egg products imported from three countries were observed to have salmonellae in 10 of 63 (16%) samples. The high incidence of salmonellae isolated from imported foods illustrated the existence of an international health hazard resulting from the continuous introduction of exogenous strains of pathogenic microorganisms on a large scale. PMID:14106941

  17. Morgenröthe or business as usual: a personal account of the 2nd Annual EULAR Congress, Prague

    PubMed Central

    Wollheim, Frank A

    2001-01-01

    The 2nd Annual European League Against Rheumatism (EULAR) Congress, held in Prague, 13–16 June 2001, was an impressive event with a record turnout of 8300 delegates. It offered a large variety of first-class, state-of-the-art lectures by some 180 invited speakers from around the world. Several new and ongoing therapeutic developments were discussed. The aim of attracting the young scientific community was only partly achieved, and the dependence on industry posed some problems. The organization, however, was a big improvement compared with the previous congress in this series. The number of submitted abstracts (1200) was relatively low compared with the number of delegates. Accommodation of satellite symposia and organization of poster sessions remain problem areas of this meeting. The Annual EULAR Congress emerges as one of the two most important annual congresses of rheumatology, the other being the American College of Rheumatology meeting.

  18. An Energy Efficient Cooperative Hierarchical MIMO Clustering Scheme for Wireless Sensor Networks

    PubMed Central

    Nasim, Mehwish; Qaisar, Saad; Lee, Sungyoung

    2012-01-01

    In this work, we present an energy efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of network hierarchy ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with cooperative multiple-input multiple-output (CMIMO) clustering scheme and traditional multihop Single-Input-Single-Output (SISO) routing approach. Performance is evaluated on the basis of number of clusters, number of hops, energy consumption and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes. PMID:22368459

  19. Preconditioning for Numerical Simulation of Low Mach Number Three-Dimensional Viscous Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli

    1997-01-01

    A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.

  20. Attracting underrepresented minority students to the sciences with an interest and utility value intervention: Catching and holding interest in recruitment materials

    NASA Astrophysics Data System (ADS)

    Stribling, Tracy M.

    In order to explore recruitment methods for attracting undergraduate underrepresented minority (URM) students to the sciences, an applied intervention involving the manipulation of the construct of interest was implemented. Using Bridges to the Baccalaureate--a scientific research program available to community college URM students--as the context for the intervention, I redesigned the original recruitment brochure into two new brochures: one designed to catch interest and one designed to catch interest as well as hold it. Largely because of the inherent limitations of applied research, no differences were found between the number of applications submitted in the year the intervention was implemented and in the previous baseline year, nor between the number of applications submitted by students who received the interest brochure and those who received the utility value brochure.

  1. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample.

    PubMed

    Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J

    2017-05-11

    Hepatitis C virus (HCV) is a common and highly morbid illness. New medications with much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item electronic survey was developed and sent to all VA sites treating HCV to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medications in 2014. VA national data on the number of Veterans starting the new HCV medications at each site were collected. The associations between treatment starts and the number and type of implementation strategies were assessed. A total of 80 (62%) sites responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43, p < 0.001). Quartile of treatment starts was significantly associated with the number of strategies endorsed (p < 0.01), with the top quartile endorsing a median of 33 strategies, compared to 15 strategies in the lowest quartile. There were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles of treatment starts. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts, compared to only 1 of the 10 top strategies at the bottom-quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation strategies were able to deliver more evidence-based treatment for HCV. The current assessment also demonstrates the feasibility of electronic self-reporting for evaluating ERIC strategies on a large scale. These results provide initial evidence for the clinical relevance of the ERIC strategies in a real-world implementation setting. This is an initial step in identifying which strategies are associated with the uptake of evidence-based practices in nationwide healthcare systems.
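The headline association in this study (r = 0.43) is a Pearson correlation between the number of endorsed strategies and treatment starts per site. A minimal sketch of that computation on made-up site data (the numbers below are illustrative, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sites: strategies endorsed vs. treatment starts
strategies = [10, 15, 22, 30, 33, 40]
starts = [5, 12, 18, 35, 30, 55]
r = pearson_r(strategies, starts)  # strongly positive for this toy data
```

A positive r here, as in the study, only establishes association; it does not show that adding strategies causes more treatment starts.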

  2. Simulation of a large size inductively coupled plasma generator and comparison with experimental data

    NASA Astrophysics Data System (ADS)

    Lei, Fan; Li, Xiaoping; Liu, Yanming; Liu, Donglin; Yang, Min; Yu, Yuanyuan

    2018-01-01

    A two-dimensional axisymmetric inductively coupled plasma (ICP) model and its implementation in the COMSOL multiphysics simulation platform are described. Specifically, a large size ICP generator filled with argon is simulated in this study. Distributions of the number density and temperature of electrons are obtained and compared for various input power and pressure settings. In addition, the electron trajectory distribution is obtained in simulation. Finally, the simulation results are compared with experimental data to assess the validity of the two-dimensional fluid model. Approximate agreement was found (the variation trends are the same). The main reasons for the discrepancies in numerical magnitude are the assumption of Maxwellian and Druyvesteyn distributions for the electron energy and the lack of cross-section data for collision frequencies and reaction rates in argon plasma.

  3. Rapidly Measuring the Speed of Unconscious Learning: Amnesics Learn Quickly and Happy People Slowly

    PubMed Central

    Dienes, Zoltan; Baddeley, Roland J.; Jansari, Ashok

    2012-01-01

    Background We introduce a method for quickly determining the rate of implicit learning. Methodology/Principal Findings The task involves making a binary prediction for a probabilistic sequence over 10 minutes; from this it is possible to determine the influence of events a given number of trials in the past on the current decision. This profile directly reflects the learning rate parameter of a large class of learning algorithms, including the delta and Rescorla-Wagner rules. To illustrate the use of the method, we compare a person with amnesia with normal controls, and we compare people with induced happy and sad moods. Conclusions/Significance Learning on the task is likely both associative and implicit. We argue theoretically and demonstrate empirically that both amnesia and transient negative moods can be associated with an especially large learning rate: people with amnesia can learn quickly, and happy people slowly. PMID:22457759
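The learning-rate parameter referred to above controls how strongly recent trials dominate the current prediction in delta-rule/Rescorla-Wagner learning: v ← v + α(outcome − v). A toy sketch (the binary sequence and rates are illustrative, not the paper's task):

```python
def delta_rule(outcomes, alpha):
    """Rescorla-Wagner / delta-rule prediction: v += alpha * (outcome - v).
    A larger learning rate alpha makes the prediction depend more heavily
    on recent trials, i.e. a shorter memory profile."""
    v = 0.5  # initial prediction for a binary sequence
    for o in outcomes:
        v += alpha * (o - v)
    return v

seq = [1, 1, 1, 1, 0]  # mostly 1s, but the most recent trial was 0
slow = delta_rule(seq, 0.1)  # stays near the overall base rate of 1s
fast = delta_rule(seq, 0.9)  # dominated by the final 0 trial
```

The profile of trial influence that the paper measures is exactly the geometric weighting α(1−α)^k that this update rule assigns to an outcome k trials in the past.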

  4. Drug Resistance and R Factors in the Bowel Bacteria of London Patients before and after Admission to Hospital

    PubMed Central

    Datta, Naomi

    1969-01-01

    The content of drug-resistant coliform bacteria in faecal specimens collected before admission from patients awaiting non-urgent surgery was compared with that in specimens collected in hospital. Resistant strains of Escherichia coli were isolated from 52% of preadmission specimens and were present in large numbers in 28%. Tetracycline, sulphonamide, and streptomycin resistance were commonest: 60% of resistant strains carried transmissible R factors, and multiple resistance was commoner than single. No characteristically resistant intestinal bacteria of any genera were found in hospital specimens as compared with those from outside. PMID:4976456

  5. Prediction of Turbulent Temperature Fluctuations in Hot Jets

    NASA Technical Reports Server (NTRS)

    Debonis, James R.

    2017-01-01

    Large-eddy simulations were used to investigate turbulent temperature fluctuations and turbulent heat flux in hot jets. A high-resolution finite-difference Navier-Stokes solver, WRLES, was used to compute the flow from a 2-inch round nozzle. Several different flow conditions, consisting of different jet Mach numbers and temperature ratios, were examined. Predictions of mean and fluctuating velocities were compared to previously obtained particle image velocimetry data. Predictions of mean and fluctuating temperature were compared to new data obtained using Raman spectroscopy. Based on the good agreement with experimental data for the individual quantities, the combined quantity turbulent heat flux was examined.

  6. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.

  7. Development and evaluation of deep intra-uterine artificial insemination using cryopreserved sexed spermatozoa in bottlenose dolphins (Tursiops truncatus).

    PubMed

    Robeck, Todd R; Montano, G A; Steinman, K J; Smolensky, P; Sweeney, J; Osborn, S; O'Brien, J K

    2013-06-01

    Since its development in bottlenose dolphins, widespread application of AI with sex-selected, frozen-thawed (FT) spermatozoa has been limited by the significant expense of the sorting process. Reducing the total number of progressively motile sperm (PMS) required for an AI would reduce the sorting cost. As such, this research compared the efficacy of small-dose deep uterine AI with sexed FT spermatozoa (SEXED-SMALL; ~50×10(6)PMS, n=20), to a moderate dose deposited mid-horn (SEXED-STD, ~200×10(6)PMS; n=20), and a large dose of FT non-sexed spermatozoa deposited in the uterine body (NONSEXED-LARGE, 660×10(6)PMS, n=9). Ten of the 11 calves resulting from use of sexed spermatozoa were of the predetermined sex. Similar rates of conception (NONSEXED-LARGE: 78%, SEXED-STD: 60%, SEXED-SMALL: 57%) and total pregnancy loss (TPL: NONSEXED-LARGE: 28.6%; SEXED-STD: 41.0%; SEXED-SMALL: 63.6%) were observed across groups, but early pregnancy loss (EPL,

  8. Comparable Analysis of the Distribution Functions of Runup Heights of the 1896, 1933 and 2011 Japanese Tsunamis in the Sanriku Area

    NASA Astrophysics Data System (ADS)

    Choi, B. H.; Min, B. I.; Yoshinobu, T.; Kim, K. O.; Pelinovsky, E.

    2012-04-01

    Data from a field survey of the 2011 tsunami in the Sanriku area of Japan is presented and used to plot the distribution function of runup heights along the coast. It is shown that the distribution function can be approximated using a theoretical log-normal curve [Choi et al, 2002]. The characteristics of the distribution functions derived from the runup-heights data obtained during the 2011 event are compared with data from two previous gigantic tsunamis (1896 and 1933) that occurred in almost the same region. The number of observations during the last tsunami is very large (more than 5,247), which provides an opportunity to revise the conception of the distribution of tsunami wave heights and the relationship between statistical characteristics and number of observations suggested by Kajiura [1983]. The distribution function of the 2011 event demonstrates the sensitivity to the number of observation points (many of them cannot be considered independent measurements) and can be used to determine the characteristic scale of the coast, which corresponds to the statistical independence of observed wave heights.
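Fitting the theoretical log-normal curve mentioned above starts from estimating (μ, σ) as the mean and standard deviation of the logarithms of the runup heights; exp(μ) is then the distribution's median. A minimal sketch with made-up heights (not the survey data):

```python
import math

def lognormal_fit(heights):
    """Moment estimates of log-normal parameters (mu, sigma):
    the mean and (population) standard deviation of log heights."""
    logs = [math.log(h) for h in heights]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return mu, sigma

# Illustrative runup heights in metres (hypothetical, not survey data)
heights = [2.1, 3.5, 4.0, 5.2, 8.8, 12.5, 20.1]
mu, sigma = lognormal_fit(heights)
median_height = math.exp(mu)  # log-normal median in metres
```

With thousands of observations, as in the 2011 survey, the fitted curve can also be compared against the empirical distribution to test the log-normal hypothesis, subject to the caveat in the abstract that nearby measurements are not independent.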

  9. Vorticity, backscatter and counter-gradient transport predictions using two-level simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ranjan, R.; Menon, S.

    2018-04-01

    The two-level simulation (TLS) method evolves both the large- and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high Reynolds number (Re) turbulent flows in the past. Sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high Re turbulent flows is assessed here by using two direct numerical simulation (DNS) datasets corresponding to a forced isotropic turbulence at Taylor's microscale-based Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost when compared to the estimated DNS or wall-resolved LES cost.

  10. Analogue evaluation of the effects of opportunities to respond and ratios of known items within drill rehearsal of Esperanto words.

    PubMed

    Szadokierski, Isadora; Burns, Matthew K

    2008-10-01

    Drill procedures have been used to increase the retention of various types of information, but little is known about the causal mechanisms of these techniques. The current study compared the effect of two key features of drill procedures, a large number of opportunities to respond (OTR) and a drill ratio that maintains a high percentage of known to unknown items (90% known). Using a factorial design, 27 4th graders were taught the pronunciation and meaning of Esperanto words using four versions of incremental rehearsal that varied on two factors, percentage of known words (high - 90% vs. moderate - 50%) and the number of OTR (high vs. low). A within-subject ANOVA revealed a significant main effect for OTR and non-significant effects for drill ratio and the interaction between the two variables. Moreover, it was found that increasing OTR from low to high yielded a large effect size (d=2.46), but increasing the percentage of known material from moderate (50%) to high (90%) yielded a small effect (d=0.16). These results suggest that a high number of OTR may be a key feature of flashcard drill techniques in promoting learning and retention.
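The reported effect sizes (d = 2.46 for OTR, d = 0.16 for drill ratio) are Cohen's d values: the difference in group means divided by the pooled standard deviation. A minimal sketch on made-up retention scores (not the study's data):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical words-retained scores for high- vs. low-OTR conditions
high_otr = [18, 20, 22, 21, 19]
low_otr = [10, 12, 11, 13, 9]
d = cohens_d(high_otr, low_otr)  # a very large effect for this toy data
```

By the usual conventions, d around 0.2 is a small effect and d above 0.8 a large one, which is why the study reads its d = 2.46 for OTR as substantial and its d = 0.16 for drill ratio as negligible.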

  11. An improved methodology of asymmetric flow field flow fractionation hyphenated with inductively coupled mass spectrometry for the determination of size distribution of gold nanoparticles in dietary supplements.

    PubMed

    Mudalige, Thilak K; Qu, Haiou; Linder, Sean W

    2015-11-13

    Engineered nanoparticles are present in a large number of commercial products claiming various health benefits. Nanoparticle absorption, distribution, metabolism, excretion, and toxicity in a biological system depend on particle size, so the determination of size and size distribution is essential for full characterization. Number-based average size and size distribution are major parameters for full characterization of a nanoparticle sample, and for polydispersed samples, large numbers of particles are needed to obtain accurate size distribution data. Herein, we report a rapid methodology, demonstrating improved nanoparticle recovery and excellent size resolution, for the characterization of gold nanoparticles in dietary supplements using asymmetric flow field flow fractionation coupled with visible absorption spectrometry and inductively coupled plasma mass spectrometry. A linear relationship between gold nanoparticle size and retention time was observed and used for characterization of unknown samples. The particle size results from unknown samples were compared to results from traditional size analysis by transmission electron microscopy and found to deviate by less than 5% over the size range from 7 to 30 nm. Published by Elsevier B.V.

  12. [Distribution of lymphocyte subpopulations and plasma cells in the colonic mucosa of children with ulcerative colitis].

    PubMed

    Arató, A; Savilahti, E; Tainio, V M

    1990-09-02

    The distribution of lymphocyte subpopulations and plasma cells in the colonic and rectal mucosae was studied in eight children with ulcerative colitis and 12 healthy controls. In four patients the examinations were also carried out 3 months after the beginning of treatment. No difference in the number of intraepithelial lymphocytes was found between the patients and controls. The majority of these cells were T-cells, and among them suppressor/cytotoxic cells were preponderant. In the lamina propria of both untreated and treated patients the numbers of T-cells, helper T-cells, and B-cells were elevated compared to controls. In the patients the number of IgG-containing cells was three times that of the controls; the number of IgE-positive cells was also elevated. The numbers of IgA- and IgM-containing cells were not different from those of the controls. The results suggest that in ulcerative colitis the primary immunological processes within the large bowel mucosa take place in the lamina propria.

  13. Cross-species correlation between queen mating numbers and worker ovary sizes suggests kin conflict may influence ovary size evolution in honeybees

    NASA Astrophysics Data System (ADS)

    Rueppell, Olav; Phaincharoen, Mananya; Kuster, Ryan; Tingek, Salim

    2011-09-01

    During social evolution, the ovary size of reproductively specialized honey bee queens has dramatically increased while their workers have evolved much smaller ovaries. However, worker division of labor and reproductive competition under queenless conditions are influenced by worker ovary size. Little comparative information on ovary size exists in the different honey bee species. Here, we report ovariole numbers of freshly dissected workers from six Apis species from two locations in Southeast Asia. The average number of worker ovarioles differs significantly among species. It is strongly correlated with the average mating number of queens, irrespective of body size. Apis dorsata, in particular, is characterized by numerous matings and very large worker ovaries. The relation between queen mating number and ovary size across the six species suggests that individual selection via reproductive competition plays a role in worker ovary size evolution. This indicates that genetic diversity, generated by multiple mating, may bear a fitness cost at the colony level.

  14. CNV-seq, a new method to detect copy number variation using high-throughput sequencing.

    PubMed

    Xie, Chao; Tammi, Martti T

    2009-03-06

    DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not the length of the reads, is the key factor determining the resolution of detection. This favors the next-generation sequencing methods that rapidly produce large amounts of short reads. Simulation of various sequencing methods with coverage between 0.1x and 8x shows overall specificity between 91.7% and 99.9%, and sensitivity between 72.2% and 96.5%. We also show the results for assessment of CNV between two individual human genomes.
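The basic read-depth signal behind this kind of CNV detection can be sketched as a per-window log2 ratio of normalized read counts between two genomes; the published method builds its statistical model and confidence values on top of a quantity of this kind. The window counts below are illustrative, not real sequencing data:

```python
import math

def cnv_log2_ratios(case_counts, ref_counts):
    """Per-window log2 ratio of read counts between two genomes,
    each normalized by its total library size."""
    ct, rt = sum(case_counts), sum(ref_counts)
    return [math.log2((c / ct) / (r / rt))
            for c, r in zip(case_counts, ref_counts)]

# Equal windows except window 1, which has twice the reads in the case genome
case = [100, 200, 100, 100]
ref = [100, 100, 100, 100]
ratios = cnv_log2_ratios(case, ref)
# ratios[1] is the clear positive outlier, flagging a candidate duplication
```

Because resolution is set by reads per window, halving the window size requires roughly twice the total read count for the same statistical power, which is why read number rather than read length drives resolution.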

  15. Auxin Synthesis-Encoding Transgene Enhances Grape Fecundity

    PubMed Central

    Costantini, Elisa; Landi, Lucia; Silvestroni, Oriana; Pandolfini, Tiziana; Spena, Angelo; Mezzetti, Bruno

    2007-01-01

    Grape (Vitis vinifera) yield is largely dependent on the fecundity of the cultivar. The average number of inflorescences per shoot (i.e., shoot fruitfulness) is a trait related to the fecundity of each grapevine. Berry number and weight per bunch are other features affecting grape yield. An ovule-specific auxin-synthesizing (DefH9-iaaM) transgene that increases the indole-3-acetic acid content of transgenic grape berries was transformed into cultivars Silcora and Thompson Seedless, which differ in the average number of inflorescences per shoot. Thompson Seedless naturally has very low shoot fruitfulness, whereas Silcora has medium shoot fruitfulness. The average number of inflorescences per shoot in DefH9-iaaM Thompson Seedless was doubled compared to its wild-type control. Berry number per bunch was increased in both transgenic cultivars. The quality and nutritional value of transgenic berries were substantially equivalent to those of their control fruits. The data presented indicate that auxin enhances fecundity in grapes, making it possible to increase yield at lower production cost. PMID:17337528

  16. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
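The Good-Toulmin estimator underlying this work predicts the number of previously unseen species from the frequency counts n_k (the number of species observed exactly k times): Δ(t) = Σ_k (−1)^(k+1) t^k n_k for a t-fold increase in effort. The raw alternating series is only reliable up to roughly doubling the sample (t ≤ 1), which is the limitation the paper's rational-function approximations address. A minimal sketch of the raw estimator:

```python
from collections import Counter

def good_toulmin(obs_counts, t):
    """Good-Toulmin estimate of the expected number of new species if
    sampling effort increases by a factor (1 + t).
    obs_counts: how many times each observed species was seen.
    The alternating series diverges in practice for t > 1."""
    n = Counter(obs_counts)  # n[k] = number of species seen exactly k times
    return sum((-1) ** (k + 1) * (t ** k) * nk for k, nk in n.items())

# Five species seen once, two species seen twice; doubling the effort (t = 1)
obs = [1, 1, 1, 1, 2, 2, 1]
new_species = good_toulmin(obs, 1.0)  # n1 - n2 = 5 - 2 = 3 expected new species
```

For t = 1 the estimate reduces to n1 − n2 + n3 − …, so it is driven mainly by the rare species, the singletons and doubletons.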

  17. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  18. Impact of compressibility on heat transport characteristics of large terrestrial planets

    NASA Astrophysics Data System (ADS)

    Čížková, Hana; van den Berg, Arie; Jacobs, Michel

    2017-07-01

    We present heat transport characteristics for mantle convection in large terrestrial exoplanets (M ⩽ 8M⊕). Our thermal convection model is based on a truncated anelastic liquid approximation (TALA) for compressible fluids and takes into account a self-consistent thermodynamic description of material properties derived from mineral physics based on a multi-Einstein vibrational approach. We compare heat transport characteristics in compressible models with those obtained with incompressible models based on the classical and extended Boussinesq approximations (BA and EBA, respectively). Our scaling analysis shows that heat flux scales with the effective dissipation number as Nu ∼ Di_eff^(-0.71) and with the Rayleigh number as Nu ∼ Ra_eff^(0.27). The surface heat flux of the BA models strongly overestimates the values from the corresponding compressible models, whereas the EBA models systematically underestimate the heat flux by ∼10%-15% with respect to the corresponding compressible case. Compressible models are also systematically warmer than the EBA models. Compressibility effects are therefore important for mantle dynamic processes, especially for large rocky exoplanets, and consequently also for the formation of planetary atmospheres, through outgassing, and the existence of a magnetic field, through thermal coupling of the mantle and core dynamic systems.
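
The reported power-law scalings can be illustrated numerically. The prefactor and the multiplicative combination of the two power laws below are assumptions for illustration only, not fits from the paper:

```python
def nusselt(ra_eff, di_eff, c=1.0):
    """Heat-flux scaling quoted in the abstract: Nu ~ Ra_eff^0.27 and
    Nu ~ Di_eff^-0.71. Combining them multiplicatively with prefactor c
    is an illustrative assumption, not the paper's fitted law."""
    return c * ra_eff ** 0.27 * di_eff ** (-0.71)

# Raising Ra_eff by a factor of 10 raises Nu by 10^0.27 ≈ 1.86.
ratio = nusselt(1e7, 0.5) / nusselt(1e6, 0.5)
print(round(ratio, 2))  # → 1.86
```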

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, H; UT Southwestern Medical Center, Dallas, TX; Hilts, M

    Purpose: To commission a multislice computed tomography (CT) scanner for fast and reliable readout of radiation therapy (RT) dose distributions using CT polymer gel dosimetry (PGD). Methods: Commissioning was performed for a 16-slice CT scanner using images acquired through a 1L cylinder filled with water. Additional images were collected using a single slice machine for comparison purposes. The variability in CT number associated with the anode heel effect was evaluated and used to define a new slice-by-slice background image subtraction technique. Image quality was assessed for the multislice system by comparing image noise and uniformity to that of the single slice machine. The consistency in CT number across slices acquired simultaneously using the multislice detector array was also evaluated. Finally, the variability in CT number due to increasing x-ray tube load was measured for the multislice scanner and compared to the tube load effects observed on the single slice machine. Results: Slice-by-slice background subtraction effectively removes the variability in CT number across images acquired simultaneously using the multislice scanner and is the recommended background subtraction method when using a multislice CT system. Image quality for the multislice machine was found to be comparable to that of the single slice scanner. Further study showed CT number was consistent across image slices acquired simultaneously using the multislice detector array for each detector configuration and slice thickness examined. In addition, the multislice system was found to eliminate variations in CT number due to increasing x-ray tube load and to reduce scanning time by a factor of 4 when compared to imaging a large volume using a single slice scanner. Conclusion: A multislice CT scanner has been commissioned for CT PGD, allowing images of an entire dose distribution to be acquired in a matter of minutes.
    Funding support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC).

  20. Microbial community analysis using MEGAN.

    PubMed

    Huson, Daniel H; Weber, Nico

    2013-01-01

    Metagenomics, the study of microbes in the environment using DNA sequencing, depends upon dedicated software tools for processing and analyzing very large sequencing datasets. One such tool is MEGAN (MEtaGenome ANalyzer), which can be used to interactively analyze and compare metagenomic and metatranscriptomic data, both taxonomically and functionally. To perform a taxonomic analysis, the program places the reads onto the NCBI taxonomy, while functional analysis is performed by mapping reads to the SEED, COG, and KEGG classifications. Samples can be compared taxonomically and functionally, using a wide range of different charting and visualization techniques. PCoA analysis and clustering methods allow high-level comparison of large numbers of samples. Different attributes of the samples can be captured and used within analysis. The program supports various input formats for loading data and can export analysis results in different text-based and graphical formats. The program is designed to work with very large samples containing many millions of reads. It is written in Java and installers for the three major computer operating systems are available from http://www-ab.informatik.uni-tuebingen.de. © 2013 Elsevier Inc. All rights reserved.

  1. Large-scale dynamos in rapidly rotating plane layer convection

    NASA Astrophysics Data System (ADS)

    Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.

    2018-05-01

    Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.

  2. Characteristics and incidence of large eggs in Trichuris muris.

    PubMed

    Koyama, Koichi

    2013-05-01

    The production of small numbers of large eggs among the standard-sized eggs of Trichuris trichiura is well known. Large eggs have also been observed in Trichuris muris, but they have not been studied previously. This paper compares the characteristics of the large eggs (LEs, ≥74.5 μm long) and standard-sized eggs (SEs, <74.5 μm long) in cultures of T. muris. Among 112,554 cultured eggs, LEs occurred at very low frequency (0.03 %, i.e., about three large eggs per 10⁴ cultured eggs). Embryonated eggs represented 93.72 % of SEs, but only 25.00 % of LEs were embryonated. Embryonated LEs and SEs contained fully matured larvae. An atypical category of unembryonated egg, which contained an incompletely developed larva, an abnormal larva, or granular components, was common among the LEs. However, similar atypical unembryonated SEs were rarely observed. These observations suggest that the LEs that occur very infrequently in T. muris result from an abnormality of embryonation (larval development).

  3. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  4. The Intuitiveness of the Law of Large Numbers

    ERIC Educational Resources Information Center

    Lem, Stephanie

    2015-01-01

    In this paper two studies are reported in which two contrasting claims concerning the intuitiveness of the law of large numbers are investigated. While Sedlmeier and Gigerenzer ("J Behav Decis Mak" 10:33-51, 1997) claim that people have an intuition that conforms to the law of large numbers, but that they can only employ this intuition…

  5. Demographics, clinical disease characteristics, and quality of life in a large cohort of psoriasis patients with and without psoriatic arthritis

    PubMed Central

    Truong, B; Rich-Garg, N; Ehst, BD; Deodhar, AA; Ku, JH; Vakil-Gilani, K; Danve, A; Blauvelt, A

    2015-01-01

    Innovation What is already known about the topic: psoriasis (PsO) is a common skin disease with major impact on quality of life (QoL). Patient-reported data on QoL from a large number of PsO patients with and without psoriatic arthritis (PsA) are limited. What this study adds: In a large cohort referred to a university psoriasis center, patients with PsO and concomitant PsA (~30% in this group) had greater degrees of skin and nail involvement and experienced greater negative impacts on QoL. Despite large numbers of patients with moderate-to-severe disease, use of systemic therapy by community practitioners was uncommon. Background PsO and PsA are common diseases that have marked adverse impacts on QoL. The disease features and patient-reported QoL data comparing PsO and PsA patients are limited. Objective To identify and compare demographics, clinical disease characteristics, and QoL scores in a large cohort of PsO patients with and without PsA. Methods All PsO patients seen in a psoriasis specialty clinic, named the Center of Excellence for Psoriasis and Psoriatic Arthritis, were enrolled in an observational cohort. Demographic, QoL, and clinical data were collected from patient-reported questionnaires and from physical examinations performed by Center of Excellence for Psoriasis and Psoriatic Arthritis dermatologists and a rheumatologist. Cross-sectional descriptive data were collected and comparisons between patients with PsO alone and those with concomitant PsA are presented. Results A total of 568 patients were enrolled in the database. Mean age of PsO onset was 28 years and mean disease duration was 18 years. Those with family history had an earlier onset of PsO by ~7 years. Mean body surface area involvement with PsO was 14%. Mean body mass index was 30.7. Prevalence of PsA was 29.8%. 
PsA patients had a higher mean body surface area compared to patients with PsO alone (16.7% vs 13.4%, P<0.05), higher prevalence of psoriatic nail changes (54.4% vs 36%, P<0.0002), and worse QoL scores as assessed by the Short Form-12 (67 vs 52, P<0.00001), Psoriasis Quality of Life-12 questionnaire (62 vs 71, P<0.01), and Routine Assessment of Patient Index Data 3 (2.3 vs 4.7, P<0.01). Strikingly, 49% of patients with PsO had never received any systemic therapy. Conclusion These data highlight that PsO has marked negative impacts on QoL, while those patients with concomitant PsA are affected to a much greater degree. Despite large numbers of patients presenting with moderate-to-severe disease, use of systemic therapy for both PsO and PsA was uncommon. PMID:26622188

  6. An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Wallace, James M.; Ong, L.; Balint, J.-L.

    1993-01-01

    The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the necessary separation of scales and thus to test unambiguously for local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.

  7. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
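
A minimal sketch of the kind of genetic-algorithm hill-climbing search described above. The fitness landscape, operators, and parameter values are illustrative choices, not the authors' model problem:

```python
import math
import random

random.seed(0)

def fitness(x):
    # Smooth multi-gene "hills" landscape: each gene peaks at x_i = 0.5,
    # so the global optimum fitness equals the number of genes.
    return sum(math.sin(math.pi * xi) ** 2 for xi in x)

def evolve(n_genes=4, pop_size=40, generations=60, mut=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of two random individuals.
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        # Uniform crossover plus Gaussian mutation, clipped to [0, 1].
        pop = []
        for a, b in zip(parents[::2], parents[1::2]):
            for _ in range(2):
                child = [random.choice(g) for g in zip(a, b)]
                child = [min(1.0, max(0.0, g + random.gauss(0, mut))) for g in child]
                pop.append(child)
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))  # close to the optimum of 4.0
```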

  8. Automated design of paralogue ratio test assays for the accurate and rapid typing of copy number variation

    PubMed Central

    Veal, Colin D.; Xu, Hang; Reekie, Katherine; Free, Robert; Hardwick, Robert J.; McVey, David; Brookes, Anthony J.; Hollox, Edward J.; Talbot, Christopher J.

    2013-01-01

    Motivation: Genomic copy number variation (CNV) can influence susceptibility to common diseases. High-throughput measurement of gene copy number on large numbers of samples is a challenging, yet critical, stage in confirming observations from sequencing or array Comparative Genome Hybridization (CGH). The paralogue ratio test (PRT) is a simple, cost-effective method of accurately determining copy number by quantifying the amplification ratio between a target and reference amplicon. PRT has been successfully applied to several studies analyzing common CNV. However, its use has not been widespread because of difficulties in assay design. Results: We present PRTPrimer (www.prtprimer.org) software for automated PRT assay design. In addition to stand-alone software, the web site includes a database of pre-designed assays for the human genome at an average spacing of 6 kb and a web interface for custom assay design. Other reference genomes can also be analyzed through local installation of the software. The usefulness of PRTPrimer was tested within known CNV, and showed reproducible quantification. This software and database provide assays that can rapidly genotype CNV, cost-effectively, on a large number of samples and will enable the widespread adoption of PRT. Availability: PRTPrimer is available in two forms: a Perl script (version 5.14 and higher) that can be run from the command line on Linux systems and as a service on the PRTPrimer web site (www.prtprimer.org). Contact: cjt14@le.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:23742985
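
The core PRT calculation can be sketched as a ratio normalization. The function name, inputs, and calibration step below are an illustrative sketch of the principle only, not the PRTPrimer implementation:

```python
def prt_copy_number(target_peak, reference_peak, calibrator_ratio, ref_copies=2):
    """Estimate an integer copy number from a paralogue ratio test.

    target_peak / reference_peak is the measured amplification ratio between
    the target and reference amplicons; calibrator_ratio is the same ratio
    measured on a sample of known (ref_copies) copy number. Variable names
    and the normalization are illustrative assumptions.
    """
    raw = (target_peak / reference_peak) / calibrator_ratio
    return round(ref_copies * raw)

# A sample whose calibrated ratio is 1.5x the diploid reference is called
# as carrying 3 copies.
print(prt_copy_number(1800, 1000, 1.2))  # → 3
```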

  9. Free-ranging dogs assess the quantity of opponents in intergroup conflicts.

    PubMed

    Bonanni, Roberto; Natoli, Eugenia; Cafazzo, Simona; Valsecchi, Paola

    2011-01-01

    In conflicts between social groups, the decision of competitors whether to attack/retreat should be based on the assessment of the quantity of individuals in their own and the opposing group. Experimental studies on numerical cognition in animals suggest that they may represent both large and small numbers as noisy mental magnitudes subject to scalar variability, and small numbers (≤4) also as discrete object-files. Consequently, discriminating between large quantities, but not between smaller ones, should become easier as the asymmetry between quantities increases. Here, we tested these hypotheses by recording naturally occurring conflicts in a population of free-ranging dogs, Canis lupus familiaris, living in a suburban environment. The overall probability of at least one pack member approaching opponents aggressively increased with a decreasing ratio of the number of rivals to that of companions. Moreover, the probability that more than half of the pack members withdrew from a conflict increased when this ratio increased. The skill of dogs in correctly assessing relative group size appeared to improve with increasing the asymmetry in size when at least one pack comprised more than four individuals, and appeared affected to a lesser extent by group size asymmetries when dogs had to compare only small numbers. These results provide the first indications that a representation of quantity based on noisy mental magnitudes may be involved in the assessment of opponents in intergroup conflicts and leave open the possibility that an additional, more precise mechanism may operate with small numbers.

  10. Efficient collective influence maximization in cascading processes with first-order transitions

    PubMed Central

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-01-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
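
A generic threshold cascade of the kind analyzed above can be sketched as a deterministic linear-threshold model on a toy graph. This sketch is not the paper's collective-influence algorithm, only the underlying spreading process:

```python
def threshold_cascade(adj, seeds, theta):
    """Simulate a deterministic threshold cascade: an inactive node
    activates once at least `theta` of its neighbors are active.
    `adj` maps each node to its list of neighbors."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in active and sum(n in active for n in nbrs) >= theta:
                active.add(node)
                changed = True
    return active

# A 6-node chain: seeding node 0 cascades down the whole line when theta = 1.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(len(threshold_cascade(chain, {0}, theta=1)))  # → 6
```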

  11. Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator

    NASA Astrophysics Data System (ADS)

    Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.

    2018-02-01

    Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.

  12. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, a significant challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  13. Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.

    2014-01-01

    The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
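
The Maxwell-Garnett mixing rule mentioned above has a simple closed form for small inclusions in a host medium. A minimal sketch; the example refractive indices and volume fraction are illustrative assumptions:

```python
import cmath

def maxwell_garnett(m_incl, m_host, f):
    """Maxwell-Garnett effective refractive index for a host of refractive
    index m_host filled with a volume fraction f of small inclusions of
    refractive index m_incl (standard mixing rule, written in terms of the
    permittivities e = m^2)."""
    ei, em = m_incl ** 2, m_host ** 2
    e_eff = em * (ei * (1 + 2 * f) + 2 * em * (1 - f)) / (ei * (1 - f) + em * (2 + f))
    return cmath.sqrt(e_eff)

# 10% ice-like inclusions (m = 1.31) in a silicate-like host (m = 1.55):
# the effective index falls between the two constituents.
print(round(maxwell_garnett(1.31, 1.55, 0.10).real, 3))
```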

  14. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experimental data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  15. A comparison of spacecraft penetration hazards due to meteoroids and manmade earth-orbiting objects

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.

    1976-01-01

    The ability of a typical double-walled spacecraft structure to protect against penetration by high-velocity incident objects is reviewed. The hazards presented by meteoroids are compared to the current and potential hazards due to manmade orbiting objects. It is shown that the nature of the meteoroid number-mass relationship makes adequate protection for large space facilities a conceptually straightforward structural problem. The present level of manmade orbiting objects (an estimated 10,000 in early 1975) does not pose an unacceptable risk to manned space operations proposed for the near future, but it does produce penetration probabilities in the range of 1-10 percent for a 100-m diameter sphere in orbit for 1,000 days. The number-size distribution of manmade objects is such that adequate protection is difficult to achieve for large permanent space facilities, to the extent that future restrictions on such facilities may result if the growth of orbiting objects continues at its historical rate.

  16. The Linear Bias in the Zeldovich Approximation and a Relation between the Number Density and the Linear Bias of Dark Halos

    NASA Astrophysics Data System (ADS)

    Fan, Zuhui

    2000-01-01

    The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
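
For comparison, the Press-Schechter-based linear bias referred to above is commonly written (in the Mo & White form) as b = 1 + (ν² − 1)/δ_c with ν = δ_c/σ(M) and δ_c ≈ 1.686. A minimal sketch of that formula, not of the paper's Zeldovich-approximation model:

```python
DELTA_C = 1.686  # spherical-collapse overdensity threshold

def ps_linear_bias(sigma_m):
    """Press-Schechter linear halo bias, b = 1 + (nu^2 - 1)/delta_c with
    nu = delta_c / sigma(M) (the Mo & White form); shown here for
    comparison with the Zeldovich-approximation bias in the abstract."""
    nu = DELTA_C / sigma_m
    return 1.0 + (nu ** 2 - 1.0) / DELTA_C

# Rare, massive halos (small sigma) are strongly biased...
print(round(ps_linear_bias(0.8), 2))
# ...while halos with sigma(M) = delta_c are unbiased in this formula.
print(round(ps_linear_bias(DELTA_C), 2))  # → 1.0
```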

  17. System engineering analysis of derelict collision prevention options

    NASA Astrophysics Data System (ADS)

    McKnight, Darren S.; Di Pentino, Frank; Kaczmarek, Adam; Dingman, Patrick

    2013-08-01

    Sensitivities to the future growth of orbital debris and the resulting hazard to operational satellites due to collisional breakups of large derelict objects are being studied extensively. However, little work has been done to quantify the technical and operational tradeoffs between options for minimizing future derelict fragmentations that act as the primary source for future debris hazard growth. The two general categories of debris mitigation examined for prevention of collisions involving large derelict objects (rocket bodies and payloads) are active debris removal (ADR) and just-in-time collision avoidance (JCA). Timing, cost, and effectiveness are compared for ADR and JCA solutions highlighting the required enhancements in uncooperative element set accuracy, rapid ballistic launch, despin/grappling systems, removal technologies, and remote impulsive devices. The primary metrics are (1) the number of derelict objects moved/removed per the number of catastrophic collisions prevented and (2) cost per collision event prevented. A response strategy that contains five different activities, including selective JCA and ADR, is proposed as the best approach going forward.

  18. A Weight Comparison of Several Attitude Controls for Satellites

    NASA Technical Reports Server (NTRS)

    Adams, James J.; Chilton, Robert G.

    1959-01-01

    A brief theoretical study has been made for the purpose of estimating and comparing the weights of three different types of controls that can be used to change the attitude of a satellite. The three types of controls are jet reaction, inertia wheel, and a magnetic bar which interacts with the magnetic field of the earth. An idealized task which imposed severe requirements on the angular motion of the satellite was used as the basis for comparison. The results showed that a control for one axis can be devised which will weigh less than 1 percent of the total weight of the satellite. The inertia-wheel system offers weight-saving possibilities if a large number of cycles of operation are required, whereas the jet system would be preferred if a limited number of cycles are required. The magnetic-bar control requires such a large magnet that it is impractical for the example application but might be of value for supplying small trimming moments about certain axes.

  19. Single snapshot DOA estimation

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
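
For a single source and a single snapshot, the deterministic maximum likelihood estimator reduces to a matched-filter (beamformer) grid search over candidate angles. A minimal sketch under that simplifying assumption; the array size, angles, and noise-free snapshot are illustrative:

```python
import cmath
import math

M = 8            # sensors in a half-wavelength uniform linear array
TRUE_DOA = 20.0  # source direction in degrees (broadside = 0)

def steering(theta_deg):
    """Steering vector of the half-wavelength ULA for direction theta."""
    phase = math.pi * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * m * phase) for m in range(M)]

# A single, noise-free snapshot from one unit-amplitude source.
snapshot = steering(TRUE_DOA)

def dml_single_source(x, grid):
    """Deterministic ML for one source and one snapshot: with a constant-norm
    steering vector this is equivalent to maximizing the matched-filter
    power |a(theta)^H x|^2 over the angular grid."""
    def power(theta):
        a = steering(theta)
        return abs(sum(ai.conjugate() * xi for ai, xi in zip(a, x))) ** 2
    return max(grid, key=power)

grid = [g / 10 for g in range(-900, 901)]  # -90..90 deg in 0.1 deg steps
print(dml_single_source(snapshot, grid))  # → 20.0
```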

  20. Walking in Two French Neighborhoods: A Study of How Park Numbers and Locations Relate to Everyday Walking

    PubMed Central

    Rioux, Liliane; Werner, Carol M.; Mokounkolo, Rene; Brown, Barbara B.

    2017-01-01

    Research indicates that people are drawn to green spaces with attractive amenities. This study extends that finding by comparing walking patterns in two neighborhoods with different numbers of parks; parks did not differ in rated attractiveness nor did neighborhoods differ substantially in rated walkability. Adults, aged 32–86 years (n = 90), drew their 3 most recent walking routes on maps of their neighborhood. Analyses showed that participants’ round trips were longer by 265.5 meters (.16 mile) in the neighborhood with a single, large, centrally located park (p < .02). However, participants in the neighborhood with multiple, small, more distributed parks, visited more streets, p < .001, more streets with green spaces, p < .038, and used more varied routes, p < .012. Results suggest there are potential benefits to both layouts. Large centralized parks may invite longer walks; smaller, well-distributed parks may invite more varied routes suggestive of appropriation and motivation processes. Both layouts might be combined in a single neighborhood to attract more walkers. PMID:28579664

  1. Applicability of a Conservative Margin Approach for Assessing NDE Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2007-01-01

    Nondestructive Evaluation (NDE) procedures are required to detect flaws in structures with a high probability of detection and high confidence. Conventional Probability of Detection (POD) methods are statistical in nature and require detection data from a relatively large number of flaw specimens. In many circumstances, due to the high cost and long lead time, it is impractical to build the large set of flaw specimens that is required by the conventional POD methodology. Therefore, in such situations it is desirable to have a flaw detectability estimation approach that allows for a reduced number of flaw specimens but provides a high degree of confidence in establishing the flaw detectability size. This paper presents an alternative approach called the conservative margin approach (CMA). To investigate the applicability of the CMA approach, flaw detectability sizes determined by the CMA and POD approaches have been compared on actual datasets. The results of these comparisons are presented and the applicability of the CMA approach is discussed.

  2. Efficient collective influence maximization in cascading processes with first-order transitions

    NASA Astrophysics Data System (ADS)

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-03-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading process. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.
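A threshold cascade of the kind analyzed above can be simulated in a few lines. This toy sketch uses a deterministic fractional-threshold rule on a hypothetical adjacency dict; it illustrates the cascading process itself, not the paper's subcritical-path ranking algorithm:

```python
def threshold_cascade(adj, seeds, theta=0.5):
    """Deterministic fractional-threshold cascade: a node activates once at
    least a fraction theta of its neighbors are active. Returns the final
    active set. adj maps node -> list of neighbors (toy example)."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in active and nbrs:
                if sum(n in active for n in nbrs) / len(nbrs) >= theta:
                    active.add(node)
                    changed = True
    return active
```

Ranking candidate seed sets by the final cascade size they produce is the brute-force baseline that scalable heuristics such as the paper's collective-influence method aim to approximate cheaply.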

  3. Adoption of endovenous laser treatment as the primary treatment modality for varicose veins: the Auckland City Hospital experience.

    PubMed

    Fernando, Ruchira S W; Muthu, Carl

    2014-08-01

    To assess the effectiveness of adopting endovenous laser treatment (EVLT) as the primary treatment modality for varicose veins at Auckland City Hospital (Auckland, New Zealand). The outcomes of 354 consecutive EVLT procedures performed between 2007 and 2013 were reviewed. Data were collected from a prospectively maintained procedural database and by retrospective chart review. Of the 319 patients who had an ultrasound at 1 month post-procedure, there was a saphenous vein occlusion rate of 96%. Side effects were minimal, with no cases of DVT or skin burns and one case of self-limiting neuralgia. The procedure was well tolerated, with a median pain score of 3. Since the adoption of EVLT there has been a large increase in the number of patients treated for varicose veins (28 in 2007 compared to 176 in 2013). EVLT is a safe and effective treatment for varicose veins and its adoption has allowed a large increase in the number of varicose vein patients treated at Auckland City Hospital.

  4. Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations

    NASA Astrophysics Data System (ADS)

    Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane

    2018-04-01

    Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.

  5. CLO-PLA: a database of clonal and bud-bank traits of the Central European flora.

    PubMed

    Klimešová, Jitka; Danihelka, Jiří; Chrtek, Jindřich; de Bello, Francesco; Herben, Tomáš

    2017-04-01

    This dataset presents comprehensive and easy-to-use information on 29 functional traits of clonal growth, bud banks, and lifespan of members of the Central European flora. The source data were compiled from a number of published sources (see the reference file) and the authors' own observations or studies. In total, 2,909 species are included (2,745 herbs and 164 woody species), out of which 1,532 (i.e., 52.7% of total) are classified as possessing clonal growth organs (1,480, i.e., 53.9%, if woody plants are excluded). This provides a unique, and largely unexplored, set of traits of clonal growth that can be used in studies on comparative plant ecology, plant evolution, community assembly, and ecosystem functioning across the large flora of Central Europe. It can be directly imported into a number of programs and packages that perform trait-based and phylogenetic analyses aimed to answer a variety of open and pressing ecological questions. © 2017 by the Ecological Society of America.

  6. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: first, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  7. Large Scale Application of Vibration Sensors for Fan Monitoring at Commercial Layer Hen Houses

    PubMed Central

    Chen, Yan; Ni, Ji-Qin; Diehl, Claude A.; Heber, Albert J.; Bogan, Bill W.; Chai, Li-Long

    2010-01-01

    Continuously monitoring the operation of each individual fan can significantly improve the measurement quality of aerial pollutant emissions from animal buildings that have a large number of fans. Monitoring fan operation by detecting fan vibration is a relatively new technique. A low-cost electronic vibration sensor was developed and commercialized; however, its large-scale application had not yet been evaluated. This paper presents long-term performance results of this vibration sensor at two large commercial layer houses. Vibration sensors were installed on 164 fans of 130 cm diameter to continuously monitor the fan on/off status for two years. The performance of the vibration sensors was compared with fan rotational speed (FRS) sensors. The vibration sensors exhibited quick response and high sensitivity to fan operations and therefore satisfied the general requirements of air quality research. The study proved that detecting fan vibration was an effective method to monitor the on/off status of a large number of single-speed fans. The vibration sensor itself was $2 more expensive than a magnetic proximity FRS sensor, but the overall cost including installation and data acquisition hardware was $77 less than that of the FRS sensor. A total of nine vibration sensors failed during the study, and the failure rate was related to the batches of product. A few sensors also exhibited unsteady sensitivity. As a new product, the quality of the sensor should be improved to make it more reliable and acceptable. PMID:22163544

  8. Reproduction and caste ratios under stress in trematode colonies with a division of labour.

    PubMed

    Lloyd, Melanie M; Poulin, Robert

    2013-06-01

    Trematodes form clonal colonies in their first intermediate host. Individuals are, depending on species, rediae or sporocysts (which asexually reproduce) and cercariae (which develop within rediae or sporocysts and infect the next host). Some species use a division of labour within colonies, with 2 distinct redial morphs: small rediae (non-reproducing) and large rediae (individuals which produce cercariae). The theory of optimal caste ratio predicts that the ratio of caste members (small to large rediae) responds to environmental variability. This was tested in Philophthalmus sp. colonies exposed to host starvation and competition with the trematode, Maritrema novaezealandensis. Philophthalmus sp. infected snails, with and without M. novaezealandensis, were subjected to food treatments. Reproductive output, number of rediae, and the ratio of small to large rediae were compared among treatments. Philophthalmus sp. colonies responded to host starvation and competition; reproductive output was higher in well-fed snails of both infection types compared with snails in lower food treatments and well-fed, single infected snails compared with well-fed double infected snails. Furthermore, the caste ratio in Philophthalmus sp. colonies was altered in response to competition. This is the first study showing caste ratio responses to environmental pressures in trematodes with a division of labour.

  9. Meta-Storms: efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data.

    PubMed

    Su, Xiaoquan; Xu, Jian; Ning, Kang

    2012-10-01

    Effectively comparing different microbial communities (also referred to as 'metagenomic samples' here) at a large scale has long intrigued scientists: given a set of unknown samples, find similar metagenomic samples from a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest. Any metagenomic sample could then be searched against this database to find the most similar sample(s). However, on one hand, current databases with a large number of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods that measure the similarity of metagenomic data work well only for small sets of samples by pairwise comparison. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we have proposed a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creating a database of metagenomic samples based on their taxonomical annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database by a fast scoring function based on quantitative phylogeny and (iv) managing the database by index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities, and tested the Meta-Storms method on them. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and it achieves accuracies similar to the current popular significance testing-based methods. The Meta-Storms method would serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. Contact: ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
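The idea of scoring sample similarity over a hierarchical taxonomy can be sketched as follows. This is an illustrative reconstruction, not the actual Meta-Storms scoring function: samples are assumed to be dicts mapping pipe-separated lineages to relative abundances, and the per-level weights are made-up values:

```python
from collections import defaultdict

def aggregate(sample, depth):
    """Sum relative abundances at a given taxonomy depth (lineage prefix)."""
    out = defaultdict(float)
    for lineage, abundance in sample.items():
        out["|".join(lineage.split("|")[:depth])] += abundance
    return out

def taxonomy_similarity(a, b, weights=(0.5, 0.3, 0.2)):
    """Weighted overlap of abundances aggregated at successive taxonomy
    levels; identical samples with abundances summing to 1 score 1.0."""
    score = 0.0
    for depth, w in enumerate(weights, start=1):
        agg_a, agg_b = aggregate(a, depth), aggregate(b, depth)
        shared = sum(min(agg_a[t], agg_b[t]) for t in agg_a if t in agg_b)
        score += w * shared
    return score
```

Precomputing the per-level aggregates for every database sample is what makes an index-based search over many samples cheap compared with repeated pairwise comparison of raw profiles.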

  10. Hybrid maize breeding with doubled haploids: I. One-stage versus two-stage selection for testcross performance.

    PubMed

    Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E

    2006-03-01

    Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain deltaG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for deltaG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria.
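The Monte Carlo approach to allocation described above can be sketched for the one-stage case. The variance components, budget, and selection fraction below are illustrative assumptions, not the study's settings:

```python
import random

def one_stage_gain(n_lines, n_locs, n_sel, var_g=1.0, var_e=4.0, reps=500):
    """Monte Carlo estimate of one-stage selection gain: each line's
    phenotypic mean over n_locs locations is its true genotypic value plus
    error with variance var_e / n_locs; the top n_sel lines are kept and
    the gain is their mean true genotypic value."""
    total = 0.0
    for _ in range(reps):
        lines = []
        for _ in range(n_lines):
            g = random.gauss(0.0, var_g ** 0.5)
            mean_pheno = g + random.gauss(0.0, (var_e / n_locs) ** 0.5)
            lines.append((mean_pheno, g))
        lines.sort(reverse=True)  # select on phenotype, score on genotype
        total += sum(g for _, g in lines[:n_sel]) / n_sel
    return total / reps
```

Scanning (n_lines, n_locs) pairs under a fixed plot budget n_lines * n_locs reproduces the basic trade-off the paper optimizes: many lines at few locations raise selection intensity but lower heritability, and vice versa.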

  11. Maternal and neonatal outcomes after bariatric surgery; a systematic review and meta-analysis: do the benefits outweigh the risks?

    PubMed

    Kwong, Wilson; Tomlinson, George; Feig, Denice S

    2018-02-15

    Obesity during pregnancy is associated with a number of adverse obstetric outcomes that include gestational diabetes mellitus, macrosomia, and preeclampsia. Increasing evidence shows that bariatric surgery may decrease the risk of these outcomes. Our aim was to evaluate the benefits and risks of bariatric surgery in obese women according to obstetric outcomes. We performed a systematic literature search using MEDLINE, Embase, Cochrane, Web of Science, and PubMed from inception up to December 12, 2016. Studies were included if they evaluated patients who underwent bariatric surgery, reported subsequent pregnancy outcomes, and compared these outcomes with a control group. Two reviewers extracted study outcomes independently, and risk of bias was assessed with the use of the Newcastle-Ottawa Quality Assessment Scale. Pooled odds ratios for each outcome were estimated with the DerSimonian and Laird random effects model. After a review of 2616 abstracts, 20 cohort studies and approximately 2.8 million subjects (8364 of whom had bariatric surgery) were included in the meta-analysis. In our primary analysis, patients who underwent bariatric surgery showed reduced rates of gestational diabetes mellitus (odds ratio, 0.20; 95% confidence interval, 0.11-0.37; number needed to benefit, 5), large-for-gestational-age infants (odds ratio, 0.31; 95% confidence interval, 0.17-0.59; number needed to benefit, 6), gestational hypertension (odds ratio, 0.38; 95% confidence interval, 0.19-0.76; number needed to benefit, 11), all hypertensive disorders (odds ratio, 0.38; 95% confidence interval, 0.27-0.53; number needed to benefit, 8), postpartum hemorrhage (odds ratio, 0.32; 95% confidence interval, 0.08-1.37; number needed to benefit, 21), and caesarean delivery (odds ratio, 0.50; 95% confidence interval, 0.38-0.67; number needed to benefit, 9). However, this group showed an increase in small-for-gestational-age infants (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 21), intrauterine growth restriction (odds ratio, 2.16; 95% confidence interval, 1.34-3.48; number needed to harm, 66), and preterm deliveries (odds ratio, 1.35; 95% confidence interval, 1.02-1.79; number needed to harm, 35) when compared with control subjects who were matched for presurgery body mass index. There were no differences in rates of preeclampsia, neonatal intensive care unit admissions, stillbirths, malformations, and neonatal death. Malabsorptive surgeries resulted in a greater increase in small-for-gestational-age infants (P = .0466) and a greater decrease in large-for-gestational-age infants (P < .0001) compared with restrictive surgeries. There were no differences in outcomes when we used administrative databases vs clinical charts. Although bariatric surgery is associated with a reduction in the risk of several adverse obstetric outcomes, there is a potential for an increased risk of other important outcomes that should be considered when bariatric surgery is discussed with reproductive-age women. Copyright © 2018 Elsevier Inc. All rights reserved.
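The "number needed to benefit/harm" figures above follow from an odds ratio combined with a control-group event rate. A sketch of that arithmetic (the 25% baseline gestational-diabetes rate in the test below is an assumed illustrative value, not taken from the study):

```python
def nnt_from_odds_ratio(odds_ratio, baseline_risk):
    """Convert an odds ratio plus a control-group event rate into the
    number needed to treat (benefit or harm), via the implied treated-group
    risk. Assumes odds_ratio != 1 and 0 < baseline_risk < 1."""
    p0 = baseline_risk
    odds_treated = odds_ratio * p0 / (1.0 - p0)   # treated-group odds
    p1 = odds_treated / (1.0 + odds_treated)      # back to a probability
    return 1.0 / abs(p1 - p0)                     # inverse absolute risk difference
```

With the pooled odds ratio of 0.20 for gestational diabetes and an assumed 25% control-group rate, this yields an NNT of roughly 5, consistent in magnitude with the abstract's number needed to benefit of 5.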

  12. Aligning Metabolic Pathways Exploiting Binary Relation of Reactions.

    PubMed

    Huang, Yiran; Zhong, Cheng; Lin, Hai Xiang; Huang, Jing

    2016-01-01

    Metabolic pathway alignment has been widely used to find one-to-one and/or one-to-many reaction mappings to identify alternative pathways that perform similar functions through different sets of reactions, which has important applications in reconstructing phylogeny and understanding metabolic functions. Existing alignment methods exhaustively search reaction sets, which may become infeasible for large pathways. To address this problem, we present an effective alignment method for accurately extracting reaction mappings between two metabolic pathways. We show that the connectedness relation between reactions in metabolic pathways can be formalized as a binary relation, and that the multiplications of the zero-one matrices representing these binary relations converge in a finite number of steps. By utilizing these matrix multiplications, we efficiently obtain reaction sets in a small number of steps without exhaustive search, and accurately uncover biologically relevant reaction mappings. Furthermore, we introduce a measure of the topological similarity of nodes (reactions) by comparing the structural similarity of the k-neighborhood subgraphs of the nodes in the aligned metabolic pathways. We employ this similarity metric to improve the accuracy of the alignments. The experimental results on the KEGG database show that, compared with other state-of-the-art methods, our method in most cases obtains better performance in node correctness and edge correctness, in the number of edges of the largest common connected subgraph for one-to-one reaction mappings, and in the number of correct one-to-many reaction mappings. Our method is scalable, finding more reaction mappings with better biological relevance in large metabolic pathways.
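The zero-one matrix idea can be sketched directly. In this toy example (a hypothetical four-reaction chain, not a KEGG pathway), the transitive closure of the reaction relation is computed by repeated 0/1 matrix products, which stabilize in at most n-1 steps:

```python
import numpy as np

# Hypothetical toy pathway: entry (i, j) = 1 if a product of reaction i
# feeds reaction j. Here reactions form the chain 0 -> 1 -> 2 -> 3.
adj = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

def binary_relation_closure(adj):
    """Transitive closure of a binary relation via repeated 0/1 matrix
    products; the iteration converges in a finite number of steps
    (at most n-1), so no exhaustive path search is needed."""
    n = adj.shape[0]
    reach = (adj > 0).astype(int)
    for _ in range(n - 1):
        nxt = ((reach + reach @ adj) > 0).astype(int)
        if np.array_equal(nxt, reach):
            break
        reach = nxt
    return reach
```

The closure matrix answers "can reaction i reach reaction j" in O(1) per query, which is the kind of precomputation that lets an aligner avoid exhaustively enumerating reaction sets.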

  13. User Statistics for an Online Health Game Targeted at Children.

    PubMed

    Alblas, Eva E; Folkvord, Frans; Anschütz, Doeschka J; Ketelaar, Paul E; Granic, Isabela; Mensink, Fréderike; Buijzen, Moniek; van 't Riet, Jonathan P

    2017-10-01

    Given that many households in western countries nowadays have home access to the Internet, health-promoting online interventions have the potential to reach large audiences. Studies assessing usage data of online health interventions are important and relevant but, as of yet, scarce. The present study reviewed usage data from Monkey Do, an existing online health game developed specifically for children from 4 to 8 years old. In addition, the effect of advertising on usage was examined. In an observational study, a web-based analysis program was used to examine usage data of all visits to the online health game for the first 31 months following the launch. We report descriptive statistics for the usage data. We analyzed the relationship between advertising and usage with a Mann-Whitney U test, and used a Pearson's chi-square test to investigate the association between advertising and the number of first-time visitors. In the period of data analysis, there were 224,859 sessions. Around 34% of the visitors played the game more than once. Compared with first-time visitors, the average session time of returning visitors was twice as long. The game was most frequently accessed via search engine query, on a desktop computer (compared to mobile devices). Advertising was found to be positively related to the number of sessions and the number of first-time visitors. Placing a game online can reach a large audience, but it is important to also consider how to stimulate retention. Furthermore, repeated advertisement for an online game appears to be necessary to maintain visitors over time.

  14. Sequence Polymorphisms and Structural Variations among Four Grapevine (Vitis vinifera L.) Cultivars Representing Sardinian Agriculture

    PubMed Central

    Mercenaro, Luca; Nieddu, Giovanni; Porceddu, Andrea; Pezzotti, Mario; Camiolo, Salvatore

    2017-01-01

    The genetic diversity among grapevine (Vitis vinifera L.) cultivars that underlies differences in agronomic performance and wine quality reflects the accumulation of single nucleotide polymorphisms (SNPs) and small indels as well as larger genomic variations. A combination of high throughput sequencing and mapping against the grapevine reference genome allows the creation of comprehensive sequence variation maps. We used next generation sequencing and bioinformatics to generate an inventory of SNPs and small indels in four widely cultivated Sardinian grape cultivars (Bovale sardo, Cannonau, Carignano and Vermentino). More than 3,200,000 SNPs were identified with high statistical confidence. Some of the SNPs caused the appearance of premature stop codons and thus identified putative pseudogenes. The analysis of SNP distribution along chromosomes led to the identification of large genomic regions with uninterrupted series of homozygous SNPs. We used a digital comparative genomic hybridization approach to identify 6526 genomic regions with significant differences in copy number among the four cultivars compared to the reference sequence, including 81 regions shared between all four cultivars and 4953 specific to single cultivars (representing 1.2 and 75.9% of total copy number variation, respectively). Reads mapping at a distance that was not compatible with the insert size were used to identify a dataset of putative large deletions with cultivar Cannonau revealing the highest number. The analysis of genes mapping to these regions provided a list of candidates that may explain some of the phenotypic differences among the Bovale sardo, Cannonau, Carignano and Vermentino cultivars. PMID:28775732

  15. THE SPECTRAL AMPLITUDE OF STELLAR CONVECTION AND ITS SCALING IN THE HIGH-RAYLEIGH-NUMBER REGIME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Featherstone, Nicholas A.; Hindman, Bradley W., E-mail: feathern@colorado.edu

    2016-02-10

    Convection plays a central role in the dynamics of any stellar interior, and yet its operation remains largely hidden from direct observation. As a result, much of our understanding concerning stellar convection necessarily derives from theoretical and computational models. The Sun is, however, exceptional in that regard. The wealth of observational data afforded by its proximity provides a unique test bed for comparing convection models against observations. When such comparisons are carried out, surprising inconsistencies between those models and observations become apparent. Both photospheric and helioseismic measurements suggest that convection simulations may overestimate convective flow speeds on large spatial scales. Moreover, many solar convection simulations have difficulty reproducing the observed solar differential rotation owing to this apparent overestimation. We present a series of three-dimensional stellar convection simulations designed to examine how the amplitude and spectral distribution of convective flows are established within a star’s interior. While these simulations are nonmagnetic and nonrotating in nature, they demonstrate two robust phenomena. When run with sufficiently high Rayleigh number, the integrated kinetic energy of the convection becomes effectively independent of thermal diffusion, but the spectral distribution of that kinetic energy remains sensitive to both of these quantities. A simulation that has converged to a diffusion-independent value of kinetic energy will divide that energy between spatial scales such that low-wavenumber power is overestimated and high-wavenumber power is underestimated relative to a comparable system possessing higher Rayleigh number. We discuss the implications of these results in light of the current inconsistencies between models and observations.

  16. Clinical course in women undergoing termination of pregnancy within the legal time limit in French-speaking Switzerland.

    PubMed

    Perrin, Eliane; Berthoud, Marianne; Pott, Murielle; Toledo Vera, Anna G; Perrenoud, David; Bianchi-Demicheli, Francesco

    2011-10-19

    In 2002, Swiss citizens voted to accept new laws legalising the termination of pregnancy (TOP) up to the 12th week of pregnancy. As a result, the cantons formulated rules of implementation, and health institutions then had to modify their procedures and practices. One of the objectives of these changes was to simplify the clinical course for women who decide to terminate a pregnancy. Have the various health institutions in French-speaking Switzerland attained this goal? Are there differences between cantons? Are there any other differences, and if so, which ones? Comparative study of cantonal rules of implementation. Study by questionnaire of what happened to 281 women having undergone a TOP in French-speaking Switzerland. Quantitative and qualitative method. The comparative legal study of the six cantonal rules of implementation showed differences between cantons. The clinical course for women is defined by four quantifiable measures: 1) the number of days of delay between the woman's decision (first step) and the TOP; 2) the number of appointments attended before the TOP; 3) the method of TOP; 4) the cost of the TOP. On average, the waiting time was 12 days and the number of appointments was 3. The average cost of a TOP was 1360 CHF. The differences, sometimes quite large, are explained by the size of the institutions (large university hospitals; average-sized, non-university hospitals; private doctors' offices). The cantonal rules of implementation and the size of the health care institutions play an important role in these clinical courses for women in French-speaking Switzerland.

  17. Diffusion Kurtosis Imaging of Acute Infarction: Comparison with Routine Diffusion and Follow-up MR Imaging.

    PubMed

    Yin, Jianzhong; Sun, Haizhen; Wang, Zhiyun; Ni, Hongyan; Shen, Wen; Sun, Phillip Zhe

    2018-05-01

    Purpose To determine the relationship between diffusion-weighted imaging (DWI) and diffusion kurtosis imaging (DKI) in patients with acute stroke at admission and the tissue outcome 1 month after onset of stroke. Materials and Methods Patients with stroke underwent DWI (b values = 0, 1000 sec/mm² along three directions) and DKI (b values = 0, 1000, 2000 sec/mm² along 20 directions) within 24 hours after symptom onset and 1 month after symptom onset. For large lesions (diameter ≥ 1 cm), acute lesion volumes at DWI and DKI were compared with those at follow-up T2-weighted imaging by using Spearman correlation analysis. For small lesions (diameter < 1 cm), the number of acute lesions at DWI and DKI and follow-up T2-weighted imaging was counted and compared by using the McNemar test. Results Thirty-seven patients (mean age, 58 years; range, 35-82 years) were included. There were 32 large lesions and 138 small lesions. For large lesions, the volumes of acute lesions on kurtosis maps showed no difference from those on 1-month follow-up T2-weighted images (P = .532), with a higher correlation coefficient than those on the apparent diffusion coefficient and mean diffusivity maps (R² = 0.730 vs 0.479 and 0.429). For small lesions, the number of acute lesions on DKI, but not on DWI, images was consistent with that on the follow-up T2-weighted images (P = .125). Conclusion DKI complements DWI for improved prediction of outcome of acute ischemic stroke. © RSNA, 2018.
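The kurtosis maps compared above come from fitting the standard DKI signal model, ln S(b) = ln S0 - bD + b²D²K/6, per voxel. A minimal fitting sketch, assuming b-values rescaled to ms/µm² (so b = 1000 s/mm² becomes 1), where three b-values determine the quadratic exactly:

```python
import numpy as np

def fit_dki_voxel(bvals_ms_um2, signals):
    """Fit ln S(b) = ln S0 - b*D + (b^2 * D^2 * K) / 6 with a quadratic
    polynomial in b, then recover the diffusivity D and kurtosis K from
    the linear and quadratic coefficients."""
    c2, c1, c0 = np.polyfit(np.asarray(bvals_ms_um2, float), np.log(signals), 2)
    D = -c1                 # linear term is -D
    K = 6.0 * c2 / D ** 2   # quadratic term is D^2 * K / 6
    return D, K
```

On a synthetic signal generated with D = 1.0 and K = 1.2 at b = 0, 1, 2, the fit recovers both parameters exactly, since a degree-2 polynomial through three points is unique.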

  18. Tumoural specimens for forensic purposes: comparison of genetic alterations in frozen and formalin-fixed paraffin-embedded tissues.

    PubMed

    Ananian, Viviana; Tozzo, Pamela; Ponzano, Elena; Nitti, Donato; Rodriguez, Daniele; Caenazzo, Luciana

    2011-05-01

    In certain circumstances, tumour tissue specimens are the only DNA resource available for forensic DNA analysis. However, cancer tissues can show microsatellite instability and loss of heterozygosity which, if affecting the short tandem repeats (STRs) used in the forensic field, can cause misinterpretation of the results. Moreover, though formalin-fixed paraffin-embedded tissues (FFPET) represent a large resource for these analyses, the quality of the DNA obtained from this kind of specimen can be an important limitation. In this study, we evaluated the use of tumoural tissue as biological material for the determination of genetic profiles in the forensic field, highlighting which STR polymorphisms are more susceptible to tumour genetic alterations and which of the analysed tumours show higher genetic variability. The analyses were conducted on samples of the same tissues conserved in different storage conditions, to compare the genetic profiles obtained from frozen tissues and formalin-fixed paraffin-embedded tissues. The importance of this study lies in the large number of specimens analysed (122), the large number of polymorphisms analysed for each specimen (39), and the possibility to compare, many years after storage, the same tissue frozen and formalin-fixed paraffin-embedded. In the comparison between the genetic profiles of frozen tumour tissues and FFPET, the same genetic alterations were found in both kinds of specimens; however, FFPET showed additional alterations. We conclude that the use of FFPET requires greater care than frozen tissues in the interpretation of results, and great care in both the pre-extraction and extraction processes.

  19. Comparison of the applicability domain of a quantitative structure-activity relationship for estrogenicity with a large chemical inventory.

    PubMed

    Netzeva, Tatiana I; Gallegos Saliner, Ana; Worth, Andrew P

    2006-05-01

    The aim of the present study was to illustrate that it is possible and relatively straightforward to compare the domain of applicability of a quantitative structure-activity relationship (QSAR) model in terms of its physicochemical descriptors with a large inventory of chemicals. A training set of 105 chemicals with data for relative estrogenic gene activation, obtained in a recombinant yeast assay, was used to develop the QSAR. A binary classification model for predicting active versus inactive chemicals was developed using classification tree analysis and two descriptors with a clear physicochemical meaning (octanol-water partition coefficient, or log Kow, and the number of hydrogen bond donors, or n(Hdon)). The model demonstrated a high overall accuracy (90.5%), with a sensitivity of 95.9% and a specificity of 78.1%. The robustness of the model was evaluated using the leave-many-out cross-validation technique, whereas the predictivity was assessed using an artificial external test set composed of 12 compounds. The domain of the QSAR training set was compared with the chemical space covered by the European Inventory of Existing Commercial Chemical Substances (EINECS), as incorporated in the CDB-EC software, in the log Kow / n(Hdon) plane. The results showed that the training set and, therefore, the applicability domain of the QSAR model covers a small part of the physicochemical domain of the inventory, even though a simple method for defining the applicability domain (ranges in the descriptor space) was used. However, a large number of compounds are located within the narrow descriptor window.
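
    A classification tree on two physicochemical descriptors reduces to nested threshold rules. The sketch below is purely illustrative: the split values are invented, not the thresholds fitted in the study.

```python
def predict_estrogenic(log_kow, n_hdon):
    """Toy two-descriptor classification tree for active/inactive.

    log_kow -- octanol-water partition coefficient
    n_hdon  -- number of hydrogen bond donors
    Thresholds are hypothetical, for illustration only.
    """
    if n_hdon == 0:
        return "inactive"   # no H-bond donor: assumed inactive
    if log_kow < 2.0:
        return "inactive"   # too hydrophilic (hypothetical cutoff)
    return "active"

print(predict_estrogenic(3.5, 1))  # active
```

Checking whether a screening inventory falls inside the model's applicability domain then amounts to testing each chemical's (log Kow, n(Hdon)) pair against the ranges spanned by the training set.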

  20. Systematic analysis of copy number variation associated with congenital diaphragmatic hernia.

    PubMed

    Zhu, Qihui; High, Frances A; Zhang, Chengsheng; Cerveira, Eliza; Russell, Meaghan K; Longoni, Mauro; Joy, Maliackal P; Ryan, Mallory; Mil-Homens, Adam; Bellfy, Lauren; Coletti, Caroline M; Bhayani, Pooja; Hila, Regis; Wilson, Jay M; Donahoe, Patricia K; Lee, Charles

    2018-05-15

    Congenital diaphragmatic hernia (CDH), characterized by malformation of the diaphragm and hypoplasia of the lungs, is one of the most common and severe birth defects, and is associated with high morbidity and mortality rates. There is growing evidence demonstrating that genetic factors contribute to CDH, although the pathogenesis remains largely elusive. Single-nucleotide polymorphisms have been studied in recent whole-exome sequencing efforts, but larger copy number variants (CNVs) have not yet been studied on a large scale in a case control study. To capture CNVs within CDH candidate regions, we developed and tested a targeted array comparative genomic hybridization platform to identify CNVs within 140 regions in 196 patients and 987 healthy controls, and identified six significant CNVs that were either unique to patients or enriched in patients compared with controls. These CDH-associated CNVs reveal high-priority candidate genes including HLX, LHX1, and HNF1B. We also discuss CNVs that are present in only one patient in the cohort but have additional evidence of pathogenicity, including extremely rare large and/or de novo CNVs. The candidate genes within these predicted disease-causing CNVs form functional networks with other known CDH genes and play putative roles in DNA binding/transcription regulation and embryonic development. These data substantiate the importance of CNVs in the etiology of CDH, identify CDH candidate genes and pathways, and highlight the importance of ongoing analysis of CNVs in the study of CDH and other structural birth defects. Copyright © 2018 the Author(s). Published by PNAS.

  1. Developmental Changes in the Effect of Active Left and Right Head Rotation on Random Number Generation.

    PubMed

    Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine

    2018-01-01

    Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children's access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes.
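
    The first-order-difference measures described above are straightforward to compute from a generated sequence. A minimal sketch (function and variable names are ours, not the authors'):

```python
def fod_summary(seq):
    """Summarize first order differences (FODs) of a random number
    generation sequence: mean FOD, counts of negative/positive FODs,
    and the mean absolute size of each sign group."""
    fods = [b - a for a, b in zip(seq, seq[1:])]
    neg = [d for d in fods if d < 0]
    pos = [d for d in fods if d > 0]
    return {
        "mean_fod": sum(fods) / len(fods),
        "n_neg": len(neg),
        "n_pos": len(pos),
        "mean_abs_neg": sum(-d for d in neg) / len(neg) if neg else 0.0,
        "mean_abs_pos": sum(pos) / len(pos) if pos else 0.0,
    }

print(fod_summary([3, 1, 6, 2, 9]))
```

A leftward attention shift would show up as a lower mean FOD and more negative FODs; the absolute values separately index the size of the jumps along the mental number line.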

  2. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel

    2016-01-15

    Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors’ report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.

  3. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging.

    PubMed

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Halaweish, Ahmed; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
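
    The stability metric reported above (average deviation in CT number across phantom sizes) can be sketched as follows. The reference value and readings are invented for illustration; they are not data from the study:

```python
def avg_pct_deviation(measured, reference):
    """Mean absolute percent deviation of measured CT numbers
    (one reading per phantom size) from a reference CT number."""
    mean_abs = sum(abs(m - reference) for m in measured) / len(measured)
    return mean_abs / abs(reference) * 100

# hypothetical iodine test object: reference 300 HU, one reading per size
print(avg_pct_deviation([297, 306, 291, 300], 300))  # 1.5
```

A smaller value means the technique holds the CT number steadier as patient (phantom) size changes, which is the comparison made between SE, mono, and mono+ above.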

  4. Incremental terrain processing for large digital elevation models

    NASA Astrophysics Data System (ADS)

    Ye, Z.

    2012-12-01

    Incremental terrain processing for large digital elevation models Zichuan Ye, Dean Djokic, Lori Armstrong Esri, 380 New York Street, Redlands, CA 92373, USA (E-mail: zye@esri.com, ddjokic@esri.com, larmstrong@esri.com) Efficient analyses of large digital elevation models (DEM) require generation of additional DEM artifacts such as flow direction, flow accumulation and other DEM derivatives. When the DEMs to analyze have a large number of grid cells (usually > 1,000,000,000) the generation of these DEM derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach where the overall DEM is partitioned in smaller processing units that can be efficiently processed. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are to be derived and the order in which they are to be processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis (i.e., the flow accumulation operation) to each of the processing units at the full-resolution DEM. (5) As each processing unit is processed based on the processing order defined in (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
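
    The flow accumulation operation applied in step (4) can be illustrated with a minimal steepest-descent accumulation on a small grid. This is a sketch of the operation itself, not Esri's implementation:

```python
def flow_accumulation(elev):
    """Steepest-descent (D8-style) flow accumulation on a 2D grid.

    Each cell drains to its lowest strictly-lower neighbor; cells are
    processed from highest to lowest so upstream counts are final
    before they are passed downstream. Returns the number of cells
    (including itself) draining through each cell.
    """
    rows, cols = len(elev), len(elev[0])
    acc = [[1] * cols for _ in range(rows)]  # each cell contributes itself
    order = sorted(((elev[r][c], r, c)
                    for r in range(rows) for c in range(cols)), reverse=True)
    for _, r, c in order:
        best = None  # lowest strictly-lower neighbor, if any
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and elev[nr][nc] < elev[r][c]:
                    if best is None or elev[nr][nc] < elev[best[0]][best[1]]:
                        best = (nr, nc)
        if best:
            acc[best[0]][best[1]] += acc[r][c]
    return acc

acc = flow_accumulation([[3, 2], [2, 1]])
print(acc[1][1])  # 4: all four cells drain to the low corner
```

On a billion-cell DEM this per-cell work is exactly what becomes infeasible as a single pass, motivating the partition-and-order procedure above.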

  5. Relationships between number and space processing in adults with and without dyscalculia.

    PubMed

    Mussolin, Christophe; Martin, Romain; Schiltz, Christine

    2011-09-01

    A large body of evidence indicates clear relationships between number and space processing in healthy and brain-damaged adults, as well as in children. The present paper addressed this issue regarding atypical math development. Adults with a diagnosis of dyscalculia (DYS) during childhood were compared to adults with average or high abilities in mathematics across two bisection tasks. Participants were presented with Arabic number triplets and had to judge either the number magnitude or the spatial location of the middle number relative to the two outer numbers. For the numerical judgment, adults with DYS were slower than both groups of control peers. They were also more strongly affected by the factors related to number magnitude such as the range of the triplets or the distance between the middle number and the real arithmetical mean. By contrast, adults with DYS were as accurate and fast as adults who never experienced math disability when they had to make a spatial judgment. Moreover, number-space congruency affected performance similarly in the three experimental groups. These findings support the hypothesis of a deficit of number magnitude representation in DYS with a relative preservation of some spatial mechanisms in DYS. Results are discussed in terms of direct and indirect number-space interactions. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Developing Young Children's Multidigit Number Sense.

    ERIC Educational Resources Information Center

    Diezmann, Carmel M.; English, Lyn D.

    2001-01-01

    This article describes a series of enrichment experiences designed to develop young (ages 5 to 8) gifted children's understanding of large numbers, central to their investigation of space travel. It describes activities designed to teach reading of large numbers and exploring numbers to a thousand and then a million. (Contains ten references.) (DB)

  7. No Winglets: What a Drag...Argument for Adding Winglets to Large Air Force Aircraft

    DTIC Science & Technology

    2008-01-01

    Master of Military Studies research paper (2008) arguing for adding winglets to large Air Force aircraft. Only report documentation page text (form fields, dates covered, and contract/grant number headings) is recoverable for this record; no abstract is available.

  8. Factors contributing to airborne particle dispersal in the operating room.

    PubMed

    Noguchi, Chieko; Koseki, Hironobu; Horiuchi, Hidehiko; Yonekura, Akihiko; Tomita, Masato; Higuchi, Takashi; Sunagawa, Shinya; Osaki, Makoto

    2017-07-06

    Surgical-site infections due to intraoperative contamination are chiefly ascribable to airborne particles carrying microorganisms. The purpose of this study is to identify the actions that increase the number of airborne particles in the operating room. Two surgeons and two surgical nurses performed three patterns of physical movements to mimic intraoperative actions, such as preparing the instrument table, gowning and donning/doffing gloves, and preparing for total knee arthroplasty. The generation and behavior of airborne particles were filmed using a fine particle visualization system, and the number of airborne particles in 2.83 m³ of air was counted using a laser particle counter. Each action was repeated five times, and the particle measurements were evaluated through one-way analysis of variance, followed by Tukey-Kramer and Bonferroni-Dunn multiple comparison tests for post hoc analysis. Statistical significance was defined as a P value ≤ .01. A large number of airborne particles were observed while unfolding the surgical gown, removing gloves, and putting the arms through the sleeves of the gown. Although numerous airborne particles were observed while applying the stockinet and putting on large drapes for preparation of total knee arthroplasty, fewer particles (0.3-2.0 μm in size) were detected at the level of the operating table under laminar airflow compared to actions performed in a non-ventilated preoperative room (P < .01). The results of this study suggest that surgical staff should avoid unnecessary actions that produce a large number of airborne particles near a sterile area and that laminar airflow has the potential to reduce the incidence of bacterial contamination.

  9. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.
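
    For reference, the vertical CFL restriction discussed in this abstract takes the standard form (our notation, not taken from the talk):

```latex
C_z \;=\; \frac{|w|\,\Delta t}{\Delta z} \;\le\; C_{\max}
```

    where \(w\) is the vertical velocity, \(\Delta t\) the time step, and \(\Delta z\) the vertical grid spacing. An unconditionally stable scheme removes the upper bound on \(\Delta t\); the difficulty the talk addresses is then preserving accuracy when \(C_z\) is large at the few hot spots.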

  10. Retail supply of malaria-related drugs in rural Tanzania: risks and opportunities.

    PubMed

    Goodman, Catherine; Kachur, S Patrick; Abdulla, Salim; Mwageni, Eleuther; Nyoni, Joyce; Schellenberg, Joanna A; Mills, Anne; Bloland, Peter

    2004-06-01

    To characterize availability of fever and malaria medicines within the retail sector in rural Tanzania, assess the likely public health implications, and identify opportunities for policy interventions to increase the coverage of effective treatment. A census of retailers selling drugs was undertaken in the areas under demographic surveillance in four Tanzanian districts, using a structured questionnaire. Drugs were stocked by two types of retailer: a large number of general retailers (n = 675) and a relatively small number of drug shops (n = 43). Almost all outlets stocked antipyretics/painkillers. One-third of general retailers stocking drugs had antimalarials, usually chloroquine alone. Almost all drug shops stocked antimalarials (98%): nearly all had chloroquine, 42% stocked quinine, 37% sulphadoxine-pyrimethamine and 30% amodiaquine. A large number of antimalarial brands were available. Population ratios indicate the relative accessibility of retail drug providers compared with health facilities. Drug shop staff generally travelled long distances to buy from drugs wholesalers or pharmacies. General retailers bought mainly from local general wholesalers, with a few general wholesalers accounting for a high proportion of all sources cited. Drugs were widely available from a large number of retail outlets. Potential negative implications include provision of ineffective drugs, confusion over brand names, uncontrolled use of antimalarials, and the availability of components of potential combination therapy regimens as monotherapies. On the other hand, this active and highly accessible retail market provides opportunities for improving the coverage of effective antimalarial treatment. Interventions targeted at all drug retailers are likely to be costly to deliver and difficult to sustain, but two promising points for targeted intervention are drug shops and selected general wholesalers. Retail quality may also be improved through consumer education, and modification of the chemical quality, packaging and price of products entering the retail distribution chain.

  11. Dynamical-statistical seasonal prediction for western North Pacific typhoons based on APCC multi-models

    NASA Astrophysics Data System (ADS)

    Kim, Ok-Yeon; Kim, Hye-Mi; Lee, Myong-In; Min, Young-Mi

    2017-01-01

    This study aims at predicting the seasonal number of typhoons (TY) over the western North Pacific with an Asia-Pacific Climate Center (APCC) multi-model ensemble (MME)-based dynamical-statistical hybrid model. The hybrid model uses the statistical relationship between the number of TY during the typhoon season (July-October) and the large-scale key predictors forecasted by the APCC MME for the same season. The cross validation result from the MME hybrid model demonstrates high prediction skill, with a correlation of 0.67 between the hindcasts and observations for 1982-2008. The cross validation from the hybrid model with the individual models participating in the MME indicates that no single model consistently outperforms the others in predicting typhoon number. Although the forecast skill of the MME is not always the highest compared with that of each individual model, the MME shows higher averaged correlations and a smaller variance of correlations. Given the large set of ensemble members from multiple models, a relative operating characteristic score reveals an 82 % (above-normal) and 78 % (below-normal) improvement for the probabilistic prediction of the number of TY, implying an 82 % (78 %) probability that the forecasts successfully discriminate above-normal (below-normal) years from other years. The forecast skill of the hybrid model for the past 7 years (2002-2008) is higher than that of the forecast from the Tropical Storm Risk consortium. Using the large set of ensemble members from multiple models, the APCC MME could provide useful deterministic and probabilistic seasonal typhoon forecasts to end-users, in particular the residents of tropical cyclone-prone areas in the Asia-Pacific region.
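
    The cross-validated hindcast skill described above can be sketched with leave-one-out cross validation of a simple one-predictor regression. This is a stand-in for the hybrid statistical step (the real model uses several large-scale MME predictors), with toy data:

```python
def loo_hindcasts(x, y):
    """Leave-one-out cross validation of y ~ a + b*x.

    For each year i, the regression is refit on all other years and
    used to hindcast year i, so each hindcast is out-of-sample.
    """
    preds = []
    for i in range(len(x)):
        xs = [v for j, v in enumerate(x) if j != i]
        ys = [v for j, v in enumerate(y) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = (sum((a - mx) * (c - my) for a, c in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
        a0 = my - b * mx
        preds.append(a0 + b * x[i])
    return preds

toy_predictor = [1, 2, 3, 4]   # e.g., a large-scale circulation index
toy_counts = [3, 5, 7, 9]      # seasonal typhoon counts (invented)
print(loo_hindcasts(toy_predictor, toy_counts))
```

Correlating these out-of-sample hindcasts with observations gives the skill score (the 0.67 reported above), without letting any year predict itself.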

  12. Experimental study of a constrained vapor bubble fin heat exchanger in the absence of external natural convection.

    PubMed

    Basu, Sumita; Plawsky, Joel L; Wayner, Peter C

    2004-11-01

    In preparation for a microgravity flight experiment on the International Space Station, a constrained vapor bubble fin heat exchanger (CVB) was operated both in a vacuum chamber and in air on Earth to evaluate the effect of the absence of external natural convection. The long-term objective is a general study of a high heat flux, low capillary pressure system with small viscous effects due to the relatively large 3 x 3 x 40 mm dimensions. The current CVB can be viewed as a large-scale version of a micro heat pipe with a large Bond number in the Earth environment but a small Bond number in microgravity. The walls of the CVB are quartz, to allow for image analysis of naturally occurring interference fringes that give the pressure field for liquid flow. The research is synergistic in that the study requires a microgravity environment to obtain a low Bond number and the space program needs thermal control systems, like the CVB, with a large characteristic dimension. In the absence of natural convection, operation of the CVB may be dominated by external radiative losses from its quartz surface. Therefore, an understanding of radiation from the quartz cell is required. All radiative exchange with the surroundings occurs from the outer surface of the CVB when the temperature range renders the quartz walls of the CVB optically thick (lambda > 4 microns). However, for electromagnetic radiation where lambda < 2 microns, the walls are transparent. Experimental results obtained for a cell charged with pentane are compared with those obtained for a dry cell. A numerical model was developed that successfully simulated the behavior and performance of the device observed experimentally.

  13. RACORO continental boundary layer cloud investigations. 2. Large-eddy simulations of cumulus clouds and evaluation with in-situ and ground-based observations

    DOE PAGES

    Endo, Satoshi; Fridlind, Ann M.; Lin, Wuyin; ...

    2015-06-19

    A 60-hour case study of continental boundary layer cumulus clouds is examined using two large-eddy simulation (LES) models. The case is based on observations obtained during the RACORO Campaign (Routine Atmospheric Radiation Measurement [ARM] Aerial Facility [AAF] Clouds with Low Optical Water Depths [CLOWD] Optical Radiative Observations) at the ARM Climate Research Facility's Southern Great Plains site. The LES models are driven by continuous large-scale and surface forcings, and are constrained by multi-modal and temporally varying aerosol number size distribution profiles derived from aircraft observations. We compare simulated cloud macrophysical and microphysical properties with ground-based remote sensing and aircraft observations. The LES simulations capture the observed transitions of the evolving cumulus-topped boundary layers during the three daytime periods, and generally reproduce variations of droplet number concentration with liquid water content (LWC), corresponding to the gradient between the cloud centers and cloud edges at given heights. The observed LWC values fall within the range of simulated values; the observed droplet number concentrations are commonly higher than simulated, but differences remain on par with potential estimation errors in the aircraft measurements. Sensitivity studies examine the influences of bin microphysics versus bulk microphysics, aerosol advection, supersaturation treatment, and aerosol hygroscopicity. Simulated macrophysical cloud properties are found to be insensitive in this non-precipitating case, but microphysical properties are especially sensitive to bulk microphysics supersaturation treatment and aerosol hygroscopicity.

  14. How does epistasis influence the response to selection?

    PubMed Central

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the ‘infinitesimal model’, under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4N_e by the ‘drift load’, and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large N_e s, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects. PMID:27901509

  15. How does epistasis influence the response to selection?

    PubMed

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the 'infinitesimal model', under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4N_e by the 'drift load', and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large N_e s, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects.

  16. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets achieving compression ratios between 87 % and 99 % without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL highlighting the performance and effectiveness of the proposed method.
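
    The difference-coding step described above (storing deltas between adjacent values, which are small and therefore compress well under arithmetic coding) can be illustrated as a lossless round trip. This is a generic sketch of delta coding, not the GM-Algorithm itself:

```python
def delta_encode(values):
    """Store the first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Invert delta_encode by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

indices = [100, 102, 101, 105]      # e.g., vertex indices of adjacent faces
enc = delta_encode(indices)
print(enc)                          # [100, 2, -1, 4]
assert delta_decode(enc) == indices
```

Because neighboring mesh elements reference nearby vertices, the deltas cluster around zero, giving the skewed symbol distribution that an entropy coder exploits.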

  17. Social and natural sciences differ in their research strategies, adapted to work for different knowledge landscapes.

    PubMed

    Jaffe, Klaus

    2014-01-01

    Do different fields of knowledge require different research strategies? A numerical model exploring different virtual knowledge landscapes revealed two diverging optimal search strategies. Trend following is maximized when the popularity of new discoveries determines the number of individuals researching it. This strategy works best when many researchers explore few large areas of knowledge. In contrast, individuals or small groups of researchers are better at discovering small bits of information in dispersed knowledge landscapes. Bibliometric data of scientific publications showed a continuous bipolar distribution of these strategies, ranging from the natural sciences, with highly cited publications in journals containing a large number of articles, to the social sciences, with rarely cited publications in many journals containing a small number of articles. The natural sciences seem to adapt their research strategies to landscapes with large concentrated knowledge clusters, whereas the social sciences seem to have adapted to search in landscapes with many small isolated knowledge clusters. Similar bipolar distributions were obtained when comparing levels of insularity estimated by indicators of international collaboration and levels of country self-citations: researchers in academic areas with many journals, such as the social sciences, arts and humanities, were the most isolated, and that was true in different regions of the world. The work shows that quantitative measures estimating differences between academic disciplines improve our understanding of different research strategies, eventually helping interdisciplinary research, and may also help improve science policies worldwide.

  18. Use of SMS text messaging to improve outpatient attendance.

    PubMed

    Downer, Sean R; Meara, John G; Da Costa, Annette C

    2005-10-03

    To evaluate the effect of appointment reminders sent as short message service (SMS) text messages to patients' mobile telephones on attendance at outpatient clinics. Cohort study with historical control. Royal Children's Hospital, Melbourne, Victoria. Patients who gave a mobile telephone contact number and were scheduled to attend any of five outpatient clinics (dermatology, gastroenterology, general medicine, paediatric dentistry and plastic surgery) in September (trial group) or August (control group), 2004. Failure to attend (FTA) rate compared between the group sent a reminder and those who were not. 2151 patients were scheduled to attend a clinic in September; 1382 of these (64.2%) gave a mobile telephone contact number and were sent an SMS reminder (trial group). Corresponding numbers in the control group were 2276 scheduled to attend and 1482 (65.1%) who gave a mobile telephone number. The FTA rate for individual clinics was 12%-16% for the trial group, and 19%-39% for the control group. Overall FTA rate was significantly lower in the trial group than in the control group (14.2% v 23.4%; P < 0.001). The observed reduction in failure to attend rate was in line with that found using traditional reminder methods. The ease with which large numbers of messages can be customised and sent by SMS text messaging, along with its availability and comparatively low cost, suggest it may be a suitable means of improving patient attendance.
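
    The significance claim above (14.2% v 23.4%; P < 0.001) can be checked with a standard pooled two-proportion z-test. The failure counts below are reconstructed from the reported rates and denominators (14.2% of 1382 ≈ 196; 23.4% of 1482 ≈ 347), so they are approximations, not figures quoted from the paper.

    ```python
    from math import sqrt

    def two_proportion_z(x1, n1, x2, n2):
        """Pooled two-proportion z statistic for comparing x1/n1 with x2/n2."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
        se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p2 - p1) / se

    # Trial group: ~196 failures of 1382; control group: ~347 failures of 1482.
    z = two_proportion_z(196, 1382, 347, 1482)
    # |z| exceeds 3.29, the two-sided critical value for P < 0.001,
    # consistent with the reported significance.
    ```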

  19. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
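
    The sub-clustering step the abstract recommends is straightforward to sketch: split each large cluster into groups of 5, then apply the one-way ANOVA estimator of the ICC to the sub-clusters. The sketch below implements only the point estimator on assumed example data; Smith's standard-error approximation, which the interval itself requires, is not reproduced here.

    ```python
    def split_into_subclusters(cluster, size=5):
        """Divide one cluster's binary outcomes into sub-clusters of `size`."""
        return [cluster[i:i + size] for i in range(0, len(cluster), size)]

    def anova_icc(clusters):
        """One-way ANOVA (mean-squares) estimator of the ICC for binary data."""
        k = len(clusters)
        ns = [len(c) for c in clusters]
        N = sum(ns)
        grand = sum(sum(c) for c in clusters) / N
        means = [sum(c) / len(c) for c in clusters]
        msb = sum(n * (m - grand) ** 2 for n, m in zip(ns, means)) / (k - 1)
        msw = sum(sum((y - m) ** 2 for y in c)
                  for c, m in zip(clusters, means)) / (N - k)
        # Effective cluster size for unequal sub-cluster sizes.
        n0 = (N - sum(n * n for n in ns) / N) / (k - 1)
        return (msb - msw) / (msb + (n0 - 1) * msw)

    # Two hypothetical large clusters of 20 binary outcomes each,
    # split into sub-clusters of 5 before estimation.
    big = [[1] * 3 + [0] * 17, [1] * 8 + [0] * 12]
    subs = [s for c in big for s in split_into_subclusters(c)]
    icc = anova_icc(subs)
    ```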

  20. Bayesian analysis of biogeography when the number of areas is large.

    PubMed

    Landis, Michael J; Matzke, Nicholas J; Moore, Brian R; Huelsenbeck, John P

    2013-11-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a "data-augmentation" approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program BayArea.
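
    The "history of biogeographic events" that data augmentation proposes can be pictured as a continuous-time Markov chain over presence/absence bits, one per area, with exponential waiting times between events. This is not BayArea's code; it is a minimal Gillespie-style sketch with assumed per-area dispersal and extinction rates, showing what one augmented history along a single branch looks like.

    ```python
    import random

    def simulate_history(n_areas, t_end, d_rate, e_rate, seed=7):
        """Simulate a species range on one branch of length t_end: each
        absent area becomes occupied at rate d_rate (dispersal) and each
        occupied area is lost at rate e_rate (extinction). Returns the
        final range and the list of (time, area, new_state) events."""
        random.seed(seed)
        state = [1] + [0] * (n_areas - 1)  # start occupying a single area
        t, events = 0.0, []
        while True:
            rates = [e_rate if s else d_rate for s in state]
            total = sum(rates)
            t += random.expovariate(total)  # exponential waiting time
            if t >= t_end:
                return state, events
            # Choose which area flips, proportional to its rate.
            r, acc = random.uniform(0, total), 0.0
            for i, rate in enumerate(rates):
                acc += rate
                if r <= acc:
                    state[i] = 1 - state[i]
                    events.append((t, i, state[i]))
                    break

    final_range, history = simulate_history(5, 1.0, d_rate=0.5, e_rate=0.3)
    ```

    In the MCMC scheme the abstract describes, proposed histories of this form are accepted or rejected by their likelihood under the instantaneous-rate matrix, so only histories consistent with the observed tip ranges contribute.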
