Sample records for small average number

  1. Powerful Voter Selection for Making Multistep Delegate Ballot Fair

    NASA Astrophysics Data System (ADS)

    Yamakawa, Hiroshi

In majority-rule decision making, voters often exercise their right by delegating it to other, trusted voters. A multi-step delegation rule allows indirect delegation through more than one voter, which helps each voter find suitable delegates. In this paper, we propose a powerful-voter selection method based on the multi-step delegation rule. The method sequentially selects the voters who receive the most indirect delegations. Multi-agent simulations demonstrate that the proposed method achieves highly fair poll results from a small number of votes; here, fairness is the accuracy with which the sum of all voters' preferences over the choices is predicted. In the simulations, each voter selects choices arranged on a one-dimensional preference axis. Acquaintance relationships among voters were generated as a random network, and each voter delegates to those of his acquaintances with similar preferences. We ran simulations over various acquaintance networks and averaged the results. First, if each voter has enough acquaintances on average, the proposed method helps predict the sum of all voters' preferences over the choices from a small number of votes. Second, if the number of each voter's acquaintances grows with the number of voters, the prediction accuracy (fairness) obtained from a small number of votes can be kept at an appropriate level.
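The counting step behind the selection method described above can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: it assumes each voter delegates to at most one acquaintance, and the names `reach_counts` and `delegate` are ours.

```python
def reach_counts(delegate):
    """For each voter, count how many other voters' ballots reach them
    through chains of (possibly multi-step) delegation.
    `delegate[v]` is the voter v delegates to, or None if v votes directly."""
    counts = {v: 0 for v in delegate}
    for v in delegate:
        seen = {v}               # guard against delegation cycles
        u = delegate[v]
        while u is not None and u not in seen:
            counts[u] += 1
            seen.add(u)
            u = delegate[u]
    return counts

# a delegates to b, b to c, d to c; c votes directly
votes = {"a": "b", "b": "c", "c": None, "d": "c"}
print(reach_counts(votes))  # c receives the most (indirect) delegations
```

Selecting the voter with the maximum count, removing them, and repeating would give a sequential most-delegated-first selection of the kind the abstract describes.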

  2. Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.

    PubMed

    Baars, W J; Hutchins, N; Marusic, I

    2017-03-13

    Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re τ ≈2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. 
With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  3. 40 CFR 80.1142 - What are the provisions for small refiners under the RFS program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employed an average of no more than 1,500 people, based on the average number of employees for all pay.... (2) For overnight or courier services: U.S. EPA, Attn: RFS Program, 6406J, 1310 L Street, NW., 6th...

  4. Seasonal And Intra-seasonal Hydrological Responses To Change In Climate Pattern And Small Dams of the Faga Watershed In Burkina-Faso

    NASA Astrophysics Data System (ADS)

    Mamounata, K.

    2015-12-01

In response to the increasing demand for food linked to the substantial population growth in Burkina Faso, irrigation has been widely used by the farming community to support agricultural production. A promising option for water resources development in this context is to increase the number of small dams, and it is assumed that a large number of small dams may affect the hydrological dynamics of sub-basins. This study assesses seasonal and intra-seasonal changes in river-basin hydrology using the Water Simulation Model (WaSiM), with the Faga River sub-basin in Burkina Faso, West Africa, as a case study. The number of small dams in this watershed is considerable (more than 60), and their impact on watershed runoff was estimated together with the change in climate pattern. The coefficient of variation of rainfall in this sub-basin from 1982 to 2010 is 0.097, and the streamflow shows a seasonal average of 25.58 km³ per month over the same period. The intra-seasonal climate variation for the same period is estimated at 0.087 in the scenario without any dams. Simulations including the five most important dams over the sub-basin show that the overall effect of small dams is, on average, a 20.76% reduction in runoff. Projections using the Representative Concentration Pathway (RCP) 4.5 and 8.5 climate scenarios, with a 25% increase in the number of dams, show a probable decrease of about 29.54% and 35.25%, respectively, in average runoff over the next fifty years. The findings show that small dams significantly reduce runoff from their watershed, and uncertainties related to the sustainability of the resource appear to increase over the same period. Therefore, despite the very large number of water-storage infrastructures, reservoir operating strategies need to be developed for water sustainability within the Faga sub-basin.

5. The heterogeneity statistic I² can be biased in small meta-analyses.

    PubMed

    von Hippel, Paul T

    2015-04-14

Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I². We calculate the bias of I², focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median number of studies per meta-analysis is 7 or fewer. We use Mathematica software to calculate the expectation and bias of I². I² has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I² will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I² can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I² estimate is 21 percent. The point estimate I² should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I².
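The positive bias under homogeneity can be reproduced with a small simulation (a sketch, not the paper's Mathematica derivation): with no true heterogeneity, the Cochran Q statistic follows a chi-square distribution with k-1 degrees of freedom, and truncating I² = (Q - (k-1))/Q at zero forces its average above zero.

```python
import random
import statistics

def i_squared(y, v):
    """I^2 from study effects y and known within-study variances v
    (inverse-variance weights, Higgins-Thompson definition)."""
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

random.seed(0)
k, reps = 7, 20000
# No true heterogeneity: all studies estimate the same effect (0), variance 1.
est = [i_squared([random.gauss(0, 1) for _ in range(k)], [1.0] * k)
       for _ in range(reps)]
print(round(statistics.mean(est), 3))  # positive, despite zero true heterogeneity
```

The mean estimate comes out on the order of ten percentage points, consistent with the 12-point overestimate the abstract reports for 7 studies.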

  6. Detection of the presence of Chlamydia trachomatis bacteria using diffusing wave spectroscopy with a small number of scatterers

    NASA Astrophysics Data System (ADS)

    Ulyanov, Sergey; Ulianova, Onega; Filonova, Nadezhda; Moiseeva, Yulia; Zaitsev, Sergey; Saltykov, Yury; Polyanina, Tatiana; Lyapina, Anna; Kalduzova, Irina; Larionova, Olga; Utz, Sergey; Feodorova, Valentina

    2018-04-01

The theory of diffusing-wave spectroscopy is adapted, for the first time, to the problem of rapid detection of Chlamydia trachomatis bacteria in blood samples from Chlamydia patients. A formula for the correlation function of temporal speckle-intensity fluctuations is derived for the case of a small number of scattering events, and the dependence of the spectral bandwidth on the average number of scatterers is analyzed. A set-up for detecting the presence of C. trachomatis cells in aqueous suspension is designed, and good agreement between theoretical results and experimental data is shown. The possibility of detecting C. trachomatis cells in the probing volume using diffusing-wave spectroscopy with a small number of scatterers is successfully demonstrated for the first time.

  7. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternative approach, average of delta, which combines these concepts, using the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of the average-of-delta and average-of-normals functions and the effect of assay bias for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average-of-delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality-control tool, which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
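The core idea can be sketched in a few lines (an illustrative toy, not the paper's spreadsheet model; the variance parameters and helper names are invented): an added assay bias shifts every patient's current-minus-previous delta by the same amount, so the mean of a batch of deltas tracks the bias while each patient's stable set point cancels out.

```python
import random
import statistics

def average_of_delta(pairs):
    """Mean of (current - previous) over a batch of repeat-tested patients."""
    return statistics.mean(cur - prev for prev, cur in pairs)

random.seed(1)
within_sd, between_sd, added_bias = 1.0, 5.0, 2.0

def patient_pair(assay_bias):
    true = random.gauss(100, between_sd)                  # patient set point
    prev = true + random.gauss(0, within_sd)              # earlier result, unbiased assay
    cur = true + random.gauss(0, within_sd) + assay_bias  # later result
    return prev, cur

no_bias = average_of_delta([patient_pair(0.0) for _ in range(20)])
shifted = average_of_delta([patient_pair(added_bias) for _ in range(20)])
print(round(no_bias, 2), round(shifted, 2))  # near 0 vs near the added bias
```

Note how the between-subject variation (sd 5.0) never enters the delta, which is why the abstract finds average of delta most useful when between-subject variation is wide relative to within-subject variation.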

  8. Fine and ultrafine particles in small cities. A case study in the south of Europe.

    PubMed

    Aranda, A; Díaz-de-Mera, Y; Notario, A; Rodríguez, D; Rodríguez, A

    2015-12-01

Ultrafine particles, PM2.5 and PM10 mass concentrations, NOx, ozone, SO2, back-trajectories of air masses and meteorological parameters were studied in a small city over the period February 2013 to June 2014. The profiles of PM2.5 and PM10 particles are provided, showing averaged values of 16.6 and 21.6 μg m⁻³, respectively. The average number concentration of particles in the diameter range 5.6-560 nm was 1.2 × 10⁴ cm⁻³, with contributions of 42, 51 and 7% from the nucleation, Aitken, and accumulation modes, respectively. The average number concentration of ultrafine particles was 1.1 × 10⁴ cm⁻³. The results are evidence of differences in particulate air pollution between the studied town and bigger cities. Nucleation events due to emissions from the city were not observed, and traffic emissions make only a small contribution to PM2.5 and PM10 particles, which are mainly of crustal origin from the arid surroundings and long-range transport from the Sahara Desert.

  9. Determining average path length and average trapping time on generalized dual dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong

    2015-03-01

Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, known as a Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. We then determine the average trapping time (ATT) of the trapping process in two cases: the trap placed on a central node, and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. We also discuss the influence of the coordination number on trapping efficiency.
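For reference, the average path length used here is the mean shortest-path length over all node pairs, computable by breadth-first search; a generic sketch on a toy graph (not specific to Husimi cacti):

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs of a connected,
    unweighted graph, via one BFS per source node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Toy example: the path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(average_path_length(adj))  # 20/12 = 5/3
```

For a small-world family, computing this quantity across growing network generations would show the logarithmic growth the abstract reports.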

  10. Energy Distributions in Small Populations: Pascal versus Boltzmann

    ERIC Educational Resources Information Center

    Kugel, Roger W.; Weiner, Paul A.

    2010-01-01

    The theoretical distributions of a limited amount of energy among small numbers of particles with discrete, evenly-spaced quantum levels are examined systematically. The average populations of energy states reveal the pattern of Pascal's triangle. An exact formula for the probability that a particle will be in any given energy state is derived.…
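The counting behind the Pascal pattern can be reproduced directly: distributing q indistinguishable quanta among n distinguishable particles, the number of microstates with a tagged particle holding j quanta is the binomial coefficient C(q-j+n-2, n-2), an entry of Pascal's triangle. A short sketch under these standard assumptions (equal a priori microstate probabilities):

```python
from math import comb

def p_level(n, q, j):
    """Probability that a tagged particle holds j quanta when q quanta are
    shared among n distinguishable particles, all microstates equally likely:
    C(q-j+n-2, n-2) / C(q+n-1, n-1)."""
    return comb(q - j + n - 2, n - 2) / comb(q + n - 1, n - 1)

n, q = 3, 4
probs = [p_level(n, q, j) for j in range(q + 1)]
print(probs, sum(probs))  # microstate counts 5,4,3,2,1 out of C(6,2)=15
```

The numerators for successive j trace a diagonal of Pascal's triangle, which is the pattern the article develops before comparing it with the Boltzmann limit for large populations.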

  11. Targeted Approach to Overcoming Treatment Resistance in Advanced Prostate Cancer

    DTIC Science & Technology

    2014-09-01

...identify analogs with higher efficacy and less toxicity. Initial results identified a small number of compounds, some of them entirely novel, that should be followed up on in subsequent...

  12. Anomaly detection in reconstructed quantum states using a machine-learning technique

    NASA Astrophysics Data System (ADS)

    Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki

    2014-02-01

    The accurate detection of small deviations in given density matrices is important for quantum information processing. Here we propose a method based on the concept of data mining. We demonstrate that the proposed method can more accurately detect small erroneous deviations in reconstructed density matrices, which contain intrinsic fluctuations due to the limited number of samples, than a naive method of checking the trace distance from the average of the given density matrices. This method has the potential to be a key tool in broad areas of physics where the detection of small deviations of quantum states reconstructed using a limited number of samples is essential.
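The naive baseline mentioned above, the trace distance from an average density matrix, reduces for 2x2 Hermitian matrices to half the sum of the absolute eigenvalues of the difference. A self-contained sketch (real matrix entries only, for brevity; the function name is ours):

```python
from math import sqrt

def trace_distance_2x2(rho, sigma):
    """Trace distance 0.5 * ||rho - sigma||_1 for 2x2 Hermitian matrices
    with real entries, via the closed-form eigenvalues of the difference."""
    a = rho[0][0] - sigma[0][0]
    b = rho[0][1] - sigma[0][1]
    d = rho[1][1] - sigma[1][1]
    disc = sqrt((a - d) ** 2 + 4 * b * b)
    lam1 = ((a + d) + disc) / 2
    lam2 = ((a + d) - disc) / 2
    return 0.5 * (abs(lam1) + abs(lam2))

pure0 = [[1.0, 0.0], [0.0, 0.0]]            # |0><0|
slightly_mixed = [[0.97, 0.0], [0.0, 0.03]]  # small deviation from |0><0|
td = trace_distance_2x2(pure0, slightly_mixed)
print(td)  # ≈ 0.03
```

The abstract's point is that statistical fluctuations from a limited number of samples can swamp such a distance check, which is what motivates the data-mining approach.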

  13. Communal Sharing and the Provision of Low-Volume High-Cost Health Services: Results of a Survey.

    PubMed

    Richardson, Jeff; Iezzi, Angelo; Chen, Gang; Maxwell, Aimee

    2017-03-01

    This paper suggests and tests a reason why the public might support the funding of services for rare diseases (SRDs) when the services are effective but not cost effective, i.e. when more health could be produced by allocating funds to other services. It is postulated that the fairness of funding a service is influenced by a comparison of the average patient benefit with the average cost to those who share the cost. Survey respondents were asked to allocate a budget between cost-effective services that had a small effect upon a large number of relatively well patients and SRDs that benefited a small number of severely ill patients but were not cost effective because of their high cost. Part of the budget was always allocated to the SRDs. The budget share rose with the number sharing the cost. Sharing per se appears to characterise preferences. This has been obscured in studies that focus upon cost per patient rather than cost per person sharing the cost.

  14. Development of the Statocyst in the Freshwater Snail Biomphalaria Glabrata (Pulmonata, Basommatophora)

    NASA Technical Reports Server (NTRS)

    Gao, Wenyuan; Wiederhold, Michael; Hejl, Robert

    1997-01-01

The development of the statocyst of the freshwater snail Biomphalaria glabrata has been examined from embryo to adult, with special emphasis on the growth of the statoconia in the statocysts. In the statocysts of embryonic snails (90-120 h after oviposition) there is not a single statolith but an average of 40-50 statoconia per statocyst. The number of statoconia increases to 385-400 when the snails reach a shell diameter of 4 mm and remains relatively constant thereafter, irrespective of shell size. Small statoconia are found in supporting cells, which suggests that the statoconia are produced within these cells. The average diameter of statoconia and the total mass of statoconia increase with increasing shell diameter. The average number of large statoconia (diameter greater than 7 micrometers) per statocyst continues to increase in animals from 2 to 10 mm, while the number of small ones (diameter less than 4 micrometers) initially rises and then decreases after 4 mm. These results demonstrate continuous growth of the statoconia in the cyst lumen of Biomphalaria. Individual statoconia vibrate in a regular pattern in vivo, indicating beating of the statocyst cilia. The statoconia sink under the influence of gravity to load and stimulate the receptor cells at the bottom of the cyst. The length of the cilia and the size of the statocyst gradually increase as the animal grows; however, the increase in the volume of the statocyst is relatively small compared with the increase in body weight during normal development.

  15. Limitations of signal averaging due to temporal correlation in laser remote-sensing measurements.

    PubMed

    Menyuk, N; Killinger, D K; Menyuk, C R

    1982-09-15

    Laser remote sensing involves the measurement of laser-beam transmission through the atmosphere and is subject to uncertainties caused by strong fluctuations due primarily to speckle, glint, and atmospheric-turbulence effects. These uncertainties are generally reduced by taking average values of increasing numbers of measurements. An experiment was carried out to directly measure the effect of signal averaging on back-scattered laser return signals from a diffusely reflecting target using a direct-detection differential-absorption lidar (DIAL) system. The improvement in accuracy obtained by averaging over increasing numbers of data points was found to be smaller than that predicted for independent measurements. The experimental results are shown to be in excellent agreement with a theoretical analysis which considers the effect of temporal correlation. The analysis indicates that small but long-term temporal correlation severely limits the improvement available through signal averaging.
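The limitation can be reproduced with a toy AR(1) process: when successive measurements are temporally correlated, the standard deviation of an N-point average shrinks far more slowly than the 1/√N expected for independent samples. This is an illustrative simulation under invented parameters, not the authors' lidar analysis.

```python
import random
import statistics

def ar1_series(n, rho):
    """n samples of an AR(1) process with lag-1 correlation rho; the
    innovation sd is chosen so the marginal variance is 1."""
    x, out = 0.0, []
    innov_sd = (1 - rho * rho) ** 0.5
    for _ in range(n):
        x = rho * x + random.gauss(0, innov_sd)
        out.append(x)
    return out

random.seed(2)
reps, n = 2000, 100
sd_of_mean = {}
for rho in (0.0, 0.9):
    means = [statistics.mean(ar1_series(n, rho)) for _ in range(reps)]
    sd_of_mean[rho] = statistics.stdev(means)
print(sd_of_mean)  # near 1/sqrt(100)=0.1 when independent; much larger when correlated
```

For rho = 0.9 the variance of the mean is inflated by roughly (1+rho)/(1-rho) ≈ 19, so averaging 100 correlated shots buys far less accuracy than 100 independent ones, the effect the abstract reports.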

  16. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

In this paper, different approaches are considered for calculating the cosine factor that is used in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using both instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected; for the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factors are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, and it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields, containing 350, 1460, and 3450 mirrors, are 66.14%, 60.87%, and 54.04%, respectively.

  17. [Travel time and distances to Norwegian out-of-hours casualty clinics].

    PubMed

    Raknes, Guttorm; Morken, Tone; Hunskår, Steinar

    2014-11-01

Geographical factors have an impact on the utilisation of out-of-hours services. In this study we investigated the travel distance to out-of-hours casualty clinics in Norwegian municipalities in 2011 and the number of municipalities covered by the proposed recommendations for secondary on-call arrangements due to long distances. We estimated the average maximum travel times and distances in Norwegian municipalities using a postcode-based method. Separate analyses were performed for municipalities with a single, permanently located casualty clinic. Altogether 417 out of 430 municipalities were included. We present the median value of the maximum travel times and distances for the included municipalities. The median maximum average travel distance for the municipalities was 19 km, and the median maximum average travel time was 22 minutes. In 40 of the municipalities (10 %) the median maximum average travel time exceeded 60 minutes, and in 97 municipalities (23 %) it exceeded 40 minutes; these groups comprised 2 % and 5 % of the country's total population, respectively. For municipalities with a single, permanently located casualty clinic (n = 316), the median average travel time was 16 minutes and the median average travel distance 13 km. In many municipalities, the inhabitants have a long average journey to out-of-hours emergency health services, but seen as a whole, the inhabitants of these municipalities account for a very small proportion of the Norwegian population. The results indicate that the proposed recommendations for secondary on-call duty based on long distances apply to only a small number of inhabitants. The recommendations should therefore be adjusted and reformulated to become more relevant.

  18. Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.

    2005-01-01

    Recent advancement in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on the Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.
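For reference, the two averages compared in this study differ only in density weighting: the Favre (mass-weighted) average is ⟨ρu⟩/⟨ρ⟩, while the Reynolds average is the plain mean of u. A toy sketch with invented sample values in which density and velocity fluctuate together:

```python
def reynolds_average(u):
    """Plain time average of the velocity samples."""
    return sum(u) / len(u)

def favre_average(rho, u):
    """Density-weighted (Favre) average: <rho*u>/<rho>."""
    return sum(r * ui for r, ui in zip(rho, u)) / sum(rho)

# Invented samples with positively correlated density and velocity fluctuations
rho = [1.00, 1.10, 0.90, 1.05, 0.95]
u = [10.0, 10.5, 9.5, 10.2, 9.8]
rey = reynolds_average(u)
fav = favre_average(rho, u)
print(rey, fav)  # the two differ only slightly when fluctuations are small
```

The gap between the two grows with the density-velocity correlation, which is why the abstract's finding of a small difference in unheated jets is a statement about weak density fluctuations.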

  19. Radiation exposure from consumer products and miscellaneous sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-01-01

This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure of the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be grouped as those that involve many people where the dose equivalent is relatively large, those that involve many people but where the dose equivalent is relatively small, and those where the dose equivalent is relatively large but the number of people involved is small.

  20. Effects of reproductive condition, roost microclimate, and weather patterns on summer torpor use by a vespertilionid bat

    PubMed Central

    Johnson, Joseph S; Lacki, Michael J

    2014-01-01

    A growing number of mammal species are recognized as heterothermic, capable of maintaining a high-core body temperature or entering a state of metabolic suppression known as torpor. Small mammals can achieve large energetic savings when torpid, but they are also subject to ecological costs. Studying torpor use in an ecological and physiological context can help elucidate relative costs and benefits of torpor to different groups within a population. We measured skin temperatures of 46 adult Rafinesque's big-eared bats (Corynorhinus rafinesquii) to evaluate thermoregulatory strategies of a heterothermic small mammal during the reproductive season. We compared daily average and minimum skin temperatures as well as the frequency, duration, and depth of torpor bouts of sex and reproductive classes of bats inhabiting day-roosts with different thermal characteristics. We evaluated roosts with microclimates colder (caves) and warmer (buildings) than ambient air temperatures, as well as roosts with intermediate conditions (trees and rock crevices). Using Akaike's information criterion (AIC), we found that different statistical models best predicted various characteristics of torpor bouts. While the type of day-roost best predicted the average number of torpor bouts that bats used each day, current weather variables best predicted daily average and minimum skin temperatures of bats, and reproductive condition best predicted average torpor bout depth and the average amount of time spent torpid each day by bats. Finding that different models best explain varying aspects of heterothermy illustrates the importance of torpor to both reproductive and nonreproductive small mammals and emphasizes the multifaceted nature of heterothermy and the need to collect data on numerous heterothermic response variables within an ecophysiological context. PMID:24558571

  1. Measurements of the Time-Averaged and Instantaneous Induced Velocities in the Wake of a Helicopter Rotor Hovering at High Tip Speeds

    NASA Technical Reports Server (NTRS)

    Heyson, Harry H.

    1960-01-01

    Measurements of the time-averaged induced velocities were obtained for rotor tip speeds as great as 1,100 feet per second (tip Mach number of 0.98) and measurements of the instantaneous induced velocities were obtained for rotor tip speeds as great as 900 feet per second. The results indicate that the small effects on the wake with increasing Mach number are primarily due to the changes in rotor-load distribution resulting from changes in Mach number rather than to compressibility effects on the wake itself. No effect of tip Mach number on the instantaneous velocities was observed. Under conditions for which the blade tip was operated at negative pitch angles, an erratic circulatory flow was observed.

  2. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher the rate. This results in the correspondingly higher apparent equilibrium constant. Quite the opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, and they decrease as the average number of reacting molecules per droplet decreases. The random distribution of reactant molecules corresponds to ideal dispersion (equal droplet sizes) of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium, and no changes in the proportions of droplets differing in reactant composition are observed upon prolongation of the reaction time.
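The uniform-distribution effect can be illustrated with an exact stationary distribution for a single droplet (a simplified sketch, not the authors' simulations): starting from n molecules each of A and B, detailed balance for the birth-death chain in the C count gives π(c+1)/π(c) = K(n-c)²/(c+1), where K is a dimensionless equilibrium constant in molecule units. Forming the apparent constant from averaged counts, ⟨C⟩/(⟨A⟩⟨B⟩), then exceeds K because ⟨AB⟩ > ⟨A⟩⟨B⟩ when counts fluctuate.

```python
def droplet_equilibrium(n, K):
    """Apparent equilibrium constant <C>/(<A><B>) for A + B <-> C in one
    droplet started from n A and n B molecules (so A = B = n - C), using the
    exact stationary distribution of the C count."""
    # detailed balance: pi(c+1)/pi(c) = K * (n-c)^2 / (c+1)
    weights = [1.0]
    for c in range(n):
        weights.append(weights[-1] * K * (n - c) ** 2 / (c + 1))
    z = sum(weights)
    pi = [w / z for w in weights]
    mean_c = sum(c * p for c, p in enumerate(pi))
    mean_a = sum((n - c) * p for c, p in enumerate(pi))
    return mean_c / (mean_a * mean_a)

for n in (2, 5, 50):
    print(n, round(droplet_equilibrium(n, 1.0), 3))  # exceeds K=1, less so as n grows
```

The excess over the chemical constant K shrinks as the molecule count per droplet grows, matching the abstract's trend for uniformly filled droplets.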

  3. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

The goal of the present study was to describe fraction-calculation errors among fourth-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole-number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on different types of fraction understanding: core instruction focused on part-whole understanding vs. small-group tutoring focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole-number bias on problems with unlike denominators. Students with low or average achievement (who participated only in core instruction) were more likely to operate with whole-number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.
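The whole-number-bias error pattern discussed here has a concrete form: students add numerators and denominators separately, treating each as an independent whole number. A small illustration using Python's fractions module (the helper name is ours):

```python
from fractions import Fraction

def whole_number_bias_sum(a, b):
    """The classic error pattern: add numerators and denominators separately,
    as if each were an independent whole number."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

a, b = Fraction(1, 2), Fraction(1, 3)
print(whole_number_bias_sum(a, b), a + b)  # 2/5 (biased) vs 5/6 (correct)
```

The biased answer 2/5 is smaller than either addend, which is why unlike-denominator problems, where the shortcut is most tempting, expose the bias so clearly.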

  4. Progressive calibration and averaging for tandem mass spectrometry statistical confidence estimation: Why settle for a single decoy?

    PubMed Central

    Keich, Uri; Noble, William Stafford

    2017-01-01

    Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications that calibration yields, and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values, as well as its variability. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration, we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and, importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
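    A minimal sketch of plain target-decoy competition, the single-decoy baseline that the paper improves on (not its a-TDC or Progressive Calibration procedures); the scoring data and threshold search below are illustrative:

```python
def tdc_fdr_threshold(target_scores, decoy_scores, fdr_level=0.05):
    """Naive TDC-style FDR estimate: decoy wins above a score threshold
    estimate the number of false target wins at that threshold.

    Returns the lowest threshold whose estimated FDR is within fdr_level,
    with the discovery count and the estimate itself.
    """
    for t in sorted(set(target_scores)):          # lowest threshold first
        n_target = sum(s >= t for s in target_scores)
        n_decoy = sum(s >= t for s in decoy_scores)
        fdr = (n_decoy + 1) / max(n_target, 1)    # +1 for a conservative estimate
        if fdr <= fdr_level:
            return t, n_target, fdr
    return None

# Toy usage: 20 well-separated target wins, decoy wins all scoring low.
targets = [float(i) for i in range(1, 21)]
decoys = [0.5] * 20
result = tdc_fdr_threshold(targets, decoys, fdr_level=0.05)
```

    The paper's methods average over many decoy draws precisely because a single-decoy estimate like this one is liberally biased and variable at small FDR levels.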

  5. [Distribution of atmospheric ultrafine particles during haze weather in Hangzhou].

    PubMed

    Chen, Qiu-Fang; Sun, Zai; Xie, Xiao-Fang

    2014-08-01

    Atmospheric ultrafine particles (UFPs) were monitored with a fast mobility particle sizer (FMPS) during continuous haze weather and the subsequent haze-fading process from December 6 to 11, 2013 in Hangzhou. Particle concentration and size distribution were studied in relation to meteorological factors. The results showed that number concentrations were highest at night and began to decrease in the morning, with small peaks at 8:00 and 18:00. These peaks coincide with rush-hour traffic, indicating that traffic emissions played a major role in the atmospheric pollution. During haze weather, the number concentration of UFPs peaked at 8 × 10^4 cm^(-3). The particle size spectrum was bimodal, with peaks at 15 nm and 100 nm. Most UFPs were in the Aitken and accumulation modes, with sizes concentrated near 100 nm; the average count median diameter (CMD) was 85.89 nm. During the haze-fading process, the number concentration and the population of particles around 100 nm began to decrease, and the peak size shifted toward smaller sizes; nucleation-mode particles increased and outnumbered the accumulation mode, and the average CMD was 58.64 nm. Meteorological factors such as visibility and wind were negatively correlated with the particle number concentration (correlation coefficients R = -0.225 and -0.229, respectively), while humidity was positively correlated with it (R = 0.271). The atmosphere was stable in winter, and temperature showed only a small correlation with number concentration. The distribution of atmospheric ultrafine particles during haze weather is therefore significant for understanding the formation mechanism and control of haze.

  6. Mobility based multicast routing in wireless mesh networks

    NASA Astrophysics Data System (ADS)

    Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan

    2013-01-01

    There exist two fundamental approaches to multicast routing: minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects receivers and sources with a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluated multicast performance in small and large wireless mesh networks. In small networks, we found that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. In large networks, the MSTs provide the minimum total edge cost and minimum number of transmissions. We also found a drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.

  7. Home range characteristics of Mexican Spotted Owls in the Rincon Mountains, Arizona

    USGS Publications Warehouse

    Willey, David W.; van Riper, Charles

    2014-01-01

    We studied a small isolated population of Mexican Spotted Owls (Strix occidentalis lucida) from 1996–1997 in the Rincon Mountains of Saguaro National Park, southeastern Arizona, USA. All mixed-conifer and pine-oak forest patches in the park were surveyed for Spotted Owls, and we located, captured, and radio-tagged 10 adult birds representing five mated pairs. Using radio-telemetry, we examined owl home range characteristics, roost habitat, and monitored reproduction within these five territories. Breeding season (Mar–Sep) home range size for 10 adult owls (95% adaptive kernel isopleths) averaged 267 ha (±207 SD), and varied widely among owls (range 34–652 ha). Mean home range size for owl pairs was 478 ha (±417 ha SD), and ranged from 70–1,160 ha. Owls that produced young used smaller home ranges than owls that had no young. Six habitat variables differed significantly between roost and random sites, including: percent canopy cover, number of trees, number of vegetation layers, average height of trees, average diameter of trees, and tree basal area. Radio-marked owls remained in their territories following small prescribed management fires within those territories, exhibiting no proximate effects to the presence of prescribed fire.

  8. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-04

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  9. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  10. 40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...,500 people, based on the average number of employees for all pay periods for calendar year 2006 for... services: U.S. EPA, Attn: RFS Program, 6406J, 1310 L Street, NW., 6th floor, Washington, DC 20005. (202...

  11. The SPINDLE Disruption-Tolerant Networking System

    DTIC Science & Technology

    2007-11-01

    average availability (AA). The AA metric attempts to measure the average fraction of time in the near future that the link will be available for use...Each link's AA is epidemically disseminated to all nodes. Path costs are computed using the topology learned through this dissemination, with cost of a...link l set to (1 − AA(l)) + c (a small constant factor that makes routing favor a smaller number of hops when all links have AA of 1). Additional details

  12. Number and size of nucleoli in the spermatocytes of chicken and Japanese quail.

    PubMed

    Andraszek, Katarzyna; Gryzińska, Magdalena; Knaga, Sebastian; Wójcik, Ewa; Smalec, Elzbieta

    2012-01-01

    Nucleoli are the product of the nucleolus organizing region (NOR) activity of specific chromosomes. Their basic function is to synthesize ribosomal RNA precursors and promote the maturation and assembly of preribosomal RNP molecules. Information on rRNA-coding gene activity can be provided by analysis of the number and size of nucleoli in the prophase of the first meiotic division. The morphology and ultrastructure of a nucleolus depend, among other factors, on the species and the cell growth cycle as well as the physiological and pathological state of an organism. The purpose of this research was to determine the number and size of nucleoli in the spermatocytes of the domestic chicken and the Japanese quail. Diverse numbers and sizes of nucleoli in the cells of the analysed birds were observed: 1-4 nucleoli were identified in chicken cells (1.91 +/- 0.63 on average) and 1-2 in quail cells (1.13 +/- 0.33 on average). Of the 957 nucleoli observed in Gallus cells, 329 were classified as large and 628 as small. In Coturnix cells, 563 nucleoli were identified (66 large and 497 small). An analysis of the numbers and sizes of nucleoli can be performed at the cytogenetic level and serve as an alternative source of information on rRNA-coding gene and nucleolus organizing region (NOR) activity.
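    The reported counts can be cross-checked with trivial arithmetic (all figures taken from the abstract):

```python
# Consistency check of the reported nucleolus counts.
chicken_total = 329 + 628   # large + small nucleoli in Gallus cells
quail_total = 66 + 497      # large + small nucleoli in Coturnix cells
```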

  13. Spatial prisoner's dilemma game with volunteering in Newman-Watts small-world networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-Xi; Xu, Xin-Jian; Chen, Yong; Wang, Ying-Hai

    2005-03-01

    A modified spatial prisoner’s dilemma game with voluntary participation in Newman-Watts small-world networks is studied. Some reasonable ingredients are introduced to the game's evolutionary dynamics: each agent in the network is a pure strategist and can only take one of three strategies (cooperator, defector, and loner); its strategy transformation is associated with both the number of strategy states and the magnitude of average profits, which are adopted and acquired by its coplayers in the previous round of play; and a stochastic strategy mutation is applied when the agent gets into the trouble of local commons, i.e., when the agent and its neighbors are in the same state and get the same average payoffs. In the case of very low temptation to defect, it is found that agents are willing to participate in the game in the typical small-world region, and intensive collective oscillations arise in the more random region.

  14. Robust small area prediction for counts.

    PubMed

    Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray

    2015-06-01

    A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches.

  15. Cancellation exponent and multifractal structure in two-dimensional magnetohydrodynamics: direct numerical simulations and Lagrangian averaged modeling.

    PubMed

    Graham, Jonathan Pietarila; Mininni, Pablo D; Pouquet, Annick

    2005-10-01

    We present direct numerical simulations and Lagrangian averaged (also known as alpha-model) simulations of forced and freely decaying magnetohydrodynamic turbulence in two dimensions. The statistics of sign cancellations of the current at small scales are studied using both the cancellation exponent and the fractal dimension of the structures. The alpha model is found to have the same scaling behavior between positive and negative contributions as the direct numerical simulations, and is also able to reproduce the time evolution of these quantities in freely decaying turbulence. At large Reynolds numbers, the cancellation exponent is observed to be independent of the Reynolds number.

  16. An Early Look At SHOP Marketplaces: Low Premiums, Adequate Plan Choice In Many, But Not All, States.

    PubMed

    Gabel, Jon R; Stromberg, Sam T; Green, Matthew; Lischko, Amy; Whitmore, Heidi

    2015-05-01

    The Affordable Care Act created the Small Business Health Options Program (SHOP) Marketplaces to help small businesses provide health insurance to their employees. To attract the participation of substantial numbers of small employers, SHOP Marketplaces must demonstrate value-added features unavailable in the traditional small-group market. Such features could include lower premiums than those for plans offered outside the Marketplace and more extensive choices of carriers and plans. More choices are necessary for SHOP Marketplaces to offer the "employee choice model," in which employees may choose from many carriers and plans. This study compared the numbers of carriers and plans and premium levels in 2014 for plans offered through SHOP Marketplaces with those of plans offered only outside of the Marketplaces. An average of 4.3 carriers participated in each state's Marketplace, offering a total of forty-seven plans. Premiums for plans offered through SHOP Marketplaces were, on average, 7 percent less than those in the same metal tier offered only outside of the Marketplaces. Lower premiums and the participation of multiple carriers in most states are a source of optimism for future enrollment growth in SHOP Marketplaces. Lack of broker buy-in in many states and burdensome enrollment processes are major impediments to success.

  17. GENERAL: Teleportation of a Bipartite Entangled Coherent State via a Four-Partite Cluster-Type Entangled State

    NASA Astrophysics Data System (ADS)

    Chen, Hui-Na; Liu, Jin-Ming

    2009-10-01

    We present an optical scheme to almost completely teleport a bipartite entangled coherent state using a four-partite cluster-type entangled coherent state as quantum channel. The scheme is based on optical elements such as beam splitters, phase shifters, and photon detectors. We also obtain the average fidelity of the teleportation process. It is shown that the average fidelity is quite close to unity if the mean photon number of the coherent state is not too small.

  18. Reynolds number of transition and self-organized criticality of strong turbulence.

    PubMed

    Yakhot, Victor

    2014-10-01

    A turbulent flow is characterized by velocity fluctuations excited in an extremely broad interval of wave numbers k>Λf, where Λf is a relatively small set of the wave vectors where energy is pumped into fluid by external forces. Iterative averaging over small-scale velocity fluctuations from the interval Λf

  19. Reynolds number of transition and self-organized criticality of strong turbulence

    NASA Astrophysics Data System (ADS)

    Yakhot, Victor

    2014-10-01

    A turbulent flow is characterized by velocity fluctuations excited in an extremely broad interval of wave numbers k >Λf , where Λf is a relatively small set of the wave vectors where energy is pumped into fluid by external forces. Iterative averaging over small-scale velocity fluctuations from the interval Λf

  20. 18 CFR 367.59 - Additions and retirements of property.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... relatively large number or related small cost, an appropriate average book cost of the units, with due... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Additions and retirements of property. 367.59 Section 367.59 Conservation of Power and Water Resources FEDERAL ENERGY...

  1. 18 CFR 367.59 - Additions and retirements of property.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... relatively large number or related small cost, an appropriate average book cost of the units, with due... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Additions and retirements of property. 367.59 Section 367.59 Conservation of Power and Water Resources FEDERAL ENERGY...

  2. 18 CFR 367.59 - Additions and retirements of property.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... relatively large number or related small cost, an appropriate average book cost of the units, with due... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Additions and retirements of property. 367.59 Section 367.59 Conservation of Power and Water Resources FEDERAL ENERGY...

  3. 18 CFR 367.59 - Additions and retirements of property.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... relatively large number or related small cost, an appropriate average book cost of the units, with due... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Additions and retirements of property. 367.59 Section 367.59 Conservation of Power and Water Resources FEDERAL ENERGY...

  4. 18 CFR 367.59 - Additions and retirements of property.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... relatively large number or related small cost, an appropriate average book cost of the units, with due... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Additions and retirements of property. 367.59 Section 367.59 Conservation of Power and Water Resources FEDERAL ENERGY...

  5. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
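    The error estimate described above can be sketched with the standard interleaved-RB formula, assuming the reference and interleaved exponential decay parameters have already been fitted from the benchmarking data (the input decays below are illustrative, not the paper's fitted values):

```python
def interleaved_rb_error(p_ref, p_int, d=2):
    """Standard interleaved-RB point estimate of a gate's average error,
    given the reference decay p_ref and the interleaved decay p_int
    (d = 2 for a single qubit)."""
    return (d - 1) * (1 - p_int / p_ref) / d

# Illustrative decays: interleaving the gate lowers the decay from 0.99
# to 0.9801, implying a gate error of 0.005.
err = interleaved_rb_error(p_ref=0.99, p_int=0.9801)
```

    The bracketed intervals quoted in the abstract (e.g. 0.003 [0, 0.016]) come from the protocol's theoretical bounds, which this point estimate alone does not reproduce.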

  6. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of polygons with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot, and study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots, where we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  7. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of polygons with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot, and study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots, where we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced by the knot complexity in the average size. The additivity suggests the local knot picture.

  8. The problem of seeing hospital staff without G.P. referral--the otolaryngological experience.

    PubMed

    Walshe, P; McGrain, S; Colgan, G; McShane, D

    2001-01-01

    Over a three-month period, a record was kept of the number of hospital staff who approached the E.N.T. team requesting help for a medical problem. Staff members included doctors, nurses, clerical staff, paramedical staff and porters. The total number of employees in the hospital was recorded, as was the average General Practitioner public patient list (General Medical Service cardholders) for South Dublin (our hospital is in south-west Dublin). The total number of hospital staff seen by the E.N.T. service in three months was seventy-seven; the total seen by other surgical specialties was approximately one hundred and sixty-seven. Extrapolating the number seen by the E.N.T. service in three months to a one-year period gives 308 patients. The number seen by the E.N.T. service in three months corresponds to 11.7% of the average South Dublin General Practitioner medical card list. It has been estimated that approximately 20% of all problems the average General Practitioner sees in a week are E.N.T. related; practices with a smaller paediatric population would have approximately 15% of the total practice concerned with E.N.T. problems. Therefore, as 15% of 2,400 (the total hospital staff) is 360, there is potentially a small general practice 'hidden' within the hospital.
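    The extrapolations in the abstract reduce to two lines of arithmetic (all figures taken from the text):

```python
# Arithmetic behind the abstract's extrapolations.
seen_per_quarter = 77
seen_per_year = seen_per_quarter * 4     # three months scaled to one year
hidden_practice = 2400 * 15 // 100       # 15% of the 2,400 hospital staff
```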

  9. The first effects of fluid inertia on flows in ordered and random arrays of spheres

    NASA Astrophysics Data System (ADS)

    Hill, Reghan J.; Koch, Donald L.; Ladd, Anthony J. C.

    2001-12-01

    Theory and lattice-Boltzmann simulations are used to examine the effects of fluid inertia, at small Reynolds numbers, on flows in simple cubic, face-centred cubic and random arrays of spheres. The drag force on the spheres, and hence the permeability of the arrays, is determined at small but finite Reynolds numbers, at solid volume fractions up to the close-packed limits of the arrays. For small solid volume fraction, the simulations are compared to theory, showing that the first inertial contribution to the drag force, when scaled with the Stokes drag force on a single sphere in an unbounded fluid, is proportional to the square of the Reynolds number. The simulations show that this scaling persists at solid volume fractions up to the close-packed limits of the arrays, and that the first inertial contribution to the drag force relative to the Stokes-flow drag force decreases with increasing solid volume fraction. The temporal evolution of the spatially averaged velocity and the drag force is examined when the fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. Theory for the short- and long-time behaviour is in good agreement with simulations, showing that the unsteady force is dominated by quasi-steady drag and added-mass forces. The short- and long-time added-mass coefficients are obtained from potential-flow and quasi-steady viscous-flow approximations, respectively.

  10. Corona graphs as a model of small-world networks

    NASA Astrophysics Data System (ADS)

    Lv, Qian; Yi, Yuhao; Zhang, Zhongzhi

    2015-11-01

    We introduce recursive corona graphs as a model of small-world networks. We investigate analytically the critical characteristics of the model, including order and size, degree distribution, average path length, clustering coefficient, and the number of spanning trees, as well as Kirchhoff index. Furthermore, we study the spectra for the adjacency matrix and the Laplacian matrix for the model. We obtain explicit results for all the quantities of the recursive corona graphs, which are similar to those observed in real-life networks.

  11. Relationships between Perron-Frobenius eigenvalue and measurements of loops in networks

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Chang, Yizhe

    2018-07-01

    The Perron-Frobenius eigenvalue (PFE) is widely used as a measurement of the number of loops in networks, but the exact relationship between the PFE and the number of loops has not yet been researched: is it strictly monotonically increasing? And what are the relationships between the PFE and other measurements of loops in networks, such as the average loop degree of nodes and the distribution of loop ranks? We investigate these questions using samples of ER random networks, NW small-world networks and BA scale-free networks. The results confirm that both the number of loops in a network and the average loop degree of its nodes increase with the PFE in general trend, but neither is strictly monotonically increasing, so the PFE can be used as a rough estimate of the number of loops in a network and of the average loop degree of nodes. Furthermore, we find that the loop ranks of a majority of the samples obey a Weibull distribution, whose scale parameter A and shape parameter B have approximate power-law relationships with the PFE of the samples.
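    A power-law relationship like the one reported between the Weibull scale parameter A and the PFE is typically checked with a log-log least-squares fit; the data points below are synthetic, not the paper's network samples:

```python
import math

# Synthetic data obeying A = c * PFE**alpha exactly, with c = 3, alpha = 1.5.
pfe = [2.0, 4.0, 8.0, 16.0]
scale_a = [3.0 * x ** 1.5 for x in pfe]

# Ordinary least squares on (log PFE, log A) recovers alpha and c.
xs = [math.log(x) for x in pfe]
ys = [math.log(y) for y in scale_a]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - alpha * mx)
```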

  12. Vulnerability of complex networks

    NASA Astrophysics Data System (ADS)

    Mishkovski, Igor; Biey, Mario; Kocarev, Ljupco

    2011-01-01

    We consider the normalized average edge betweenness of a network as a metric of network vulnerability. We suggest that the normalized average edge betweenness, together with its relative difference when a certain number of nodes and/or edges are removed from the network, is a measure of network vulnerability, called the vulnerability index. The vulnerability index is calculated for four synthetic networks: Erdős-Rényi (ER) random networks, the Barabási-Albert (BA) model of scale-free networks, the Watts-Strogatz (WS) model of small-world networks, and geometric random networks. Real-world networks for which the vulnerability index is calculated include two human brain networks, three urban networks, one collaboration network, and two power grid networks. We find that the WS model of small-world networks and the biological networks (human brain networks) are the most robust networks among all networks studied in the paper.
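    Average edge betweenness, the core ingredient of the vulnerability index above, can be computed by brute force on a toy graph; the normalization and removal protocol used in the paper may differ, and this stdlib-only sketch is only suitable for very small graphs:

```python
from collections import deque
from itertools import combinations

def edge_betweenness(adj):
    """Brute-force edge betweenness for a small undirected graph given as
    an adjacency dict. Each s-t pair contributes 1, split evenly over the
    shortest s-t paths. For real networks use Brandes's algorithm."""
    eb = {}
    for s, t in combinations(adj, 2):
        paths, best = [], None
        queue = deque([[s]])                       # BFS over simple paths
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                continue                           # longer than a shortest path
            u = path[-1]
            if u == t:
                best = len(path)
                paths.append(path)
                continue
            for v in adj[u]:
                if v not in path:
                    queue.append(path + [v])
        for path in paths:
            for u, v in zip(path, path[1:]):
                e = frozenset((u, v))
                eb[e] = eb.get(e, 0.0) + 1.0 / len(paths)
    return eb

# Toy example: a 3-node path graph 0 - 1 - 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
eb = edge_betweenness(adj)
avg_eb = sum(eb.values()) / len(eb)   # average edge betweenness
```

    The vulnerability probe described in the abstract would then compare avg_eb before and after removing selected nodes or edges.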

  13. Water relations and plant size aspects of flowering for Agave deserti

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobel, P.S.

    1987-03-01

    The percentage of rosettes of the monocarpic perennial Agave deserti that flowered annually in the north-western Sonoran Desert varied ca. 50-fold over the 8 yr considered. The number of days when the soil water potential in the root zone was above -0.5 MPa, enabling A. deserti to take up water, was approximately linearly related to the annual precipitation, which also varied considerably year to year. The percentage of flowering in a particular year could not be predicted from the number of wet days in that year, the year before, or 3 yr before (r² ≤ .10, P > .5), but there was a positive correlation between percentage of flowering and the number of wet days 2 yr previously (r² = .33, P = .1). Also, years with much flowering tended to alternate with those of little flowering (r² = .64, P = .05). Indeed, 95% of the annual variation in the percentage of the rosettes that flowered could be accounted for by the year-to-year alternations together with the number of wet days 2 yr before the flowering (P = .01). Although over 90% of the rosettes with inflorescences were large, averaging 66 leaves with inflorescences 4 m tall, flowering also occurred in a few small rosettes, averaging only nine leaves with inflorescences only 0.8 m tall. The small flowering rosettes were all attached to large flowering rosettes by rhizomes with living cortical cells, suggesting that a hormone or other chemical signal/condition could be passed to the small rosettes, leading to their precocious flowering.

  14. Fluctuations of the gluon distribution from the small- x effective action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

    The computation of observables in high-energy QCD involves an average over stochastic semiclassical small-x gluon fields. The weight of various configurations is determined by the effective action. We introduce a method to study fluctuations of observables, functionals of the small-x fields, which does not explicitly involve dipoles. We integrate out those fluctuations of the semiclassical gluon field under which a given observable is invariant. Thereby we obtain the effective potential for that observable describing its fluctuations about the average. Here, we determine explicitly the effective potential for the covariant gauge gluon distribution both for the McLerran-Venugopalan (MV) model and for a (nonlocal) Gaussian approximation for the small-x effective action. This provides insight into the correlation of fluctuations of the number of hard gluons versus their typical transverse momentum. We find that the spectral shape of the fluctuations of the gluon distribution is fundamentally different in the MV model, where there is a pileup of gluons near the saturation scale, versus the solution of the small-x JIMWLK renormalization group, which generates essentially scale-invariant fluctuations above the absorptive boundary set by the saturation scale.

  15. Fluctuations of the gluon distribution from the small- x effective action

    DOE PAGES

    Dumitru, Adrian; Skokov, Vladimir

    2017-09-29

    The computation of observables in high-energy QCD involves an average over stochastic semiclassical small-x gluon fields. The weight of various configurations is determined by the effective action. We introduce a method to study fluctuations of observables, functionals of the small-x fields, which does not explicitly involve dipoles. We integrate out those fluctuations of the semiclassical gluon field under which a given observable is invariant. Thereby we obtain the effective potential for that observable describing its fluctuations about the average. Here, we determine explicitly the effective potential for the covariant gauge gluon distribution both for the McLerran-Venugopalan (MV) model and for a (nonlocal) Gaussian approximation for the small-x effective action. This provides insight into the correlation of fluctuations of the number of hard gluons versus their typical transverse momentum. We find that the spectral shape of the fluctuations of the gluon distribution is fundamentally different in the MV model, where there is a pileup of gluons near the saturation scale, versus the solution of the small-x JIMWLK renormalization group, which generates essentially scale-invariant fluctuations above the absorptive boundary set by the saturation scale.

  16. Fabrication of Aluminum Foams with Small Pore Size by Melt Foaming Method

    NASA Astrophysics Data System (ADS)

    Cheng, Ying; Li, Yanxiang; Chen, Xiang; Shi, Tong; Liu, Zhiyong; Wang, Ningzhen

    2017-04-01

    This article introduces an improvement to the fabrication of aluminum foams with small pore size by the melt foaming method. Before being added to the melt, the foaming agent (titanium hydride) was pretreated in two steps. First, it underwent the traditional pre-oxidation treatment, which delays the decomposition of titanium hydride and keeps the dispersion stage controllable. The pre-oxidized titanium hydride powder was then mixed with copper powder in a planetary ball mill. This treatment not only increases the number of foaming agent particles and makes them easier to disperse in the melt, which helps to increase the number of pores, but also reduces the amount of hydrogen released in the foaming stage. Therefore, the pore size could be decreased. Using such a ball-milled foaming agent in the melt foaming method, aluminum foams with small pore size (average size of 1.6 mm) were successfully fabricated.

  17. Diffusion of test particles in stochastic magnetic fields for small Kubo numbers.

    PubMed

    Neuer, Marcus; Spatschek, Karl H

    2006-02-01

    Motion of charged particles in a collisional plasma with stochastic magnetic field lines is investigated on the basis of the so-called A-Langevin equation. Compared to the previously used A-Langevin model, here finite Larmor radius effects are taken into account. The A-Langevin equation is solved under the assumption that the Lagrangian correlation function for the magnetic field fluctuations is related to the Eulerian correlation function (in Gaussian form) via the Corrsin approximation. The latter is justified for small Kubo numbers. The velocity correlation function, being averaged with respect to the stochastic variables including collisions, leads to an implicit differential equation for the mean square displacement. From the latter, different transport regimes, including the well-known Rechester-Rosenbluth diffusion coefficient, are derived. Finite Larmor radius contributions show a decrease of the diffusion coefficient compared to the guiding center limit. The case of small (or vanishing) mean fields is also discussed.
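    The quantity at the heart of this abstract, the mean square displacement and the diffusion coefficient derived from it, can be illustrated with a toy unmagnetized random walk. This is a minimal sketch of our own (function name and parameters are illustrative), not the A-Langevin model of the paper:

    ```python
    import random

    def mean_square_displacement(n_walkers=4000, n_steps=400, seed=5):
        """Mean squared displacement of unbiased unit-step random walkers,
        recorded after every step."""
        rng = random.Random(seed)
        positions = [0] * n_walkers
        msd = []
        for _ in range(n_steps):
            for i in range(n_walkers):
                positions[i] += 1 if rng.random() < 0.5 else -1
            msd.append(sum(p * p for p in positions) / n_walkers)
        return msd

    msd = mean_square_displacement()
    # In the diffusive regime MSD(t) = 2 D t; for unit +/-1 steps D = 1/2,
    # so the MSD after t steps should be close to t.
    D_est = msd[-1] / (2 * len(msd))
    print(round(D_est, 2))
    ```

    The estimated diffusion coefficient comes out near 0.5, the analytic value for this walk; in the paper's setting the analogous fit to the mean square displacement yields the transport coefficients discussed above.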

  18. Evolution of the Orszag-Tang vortex system in a compressible medium. I - Initial average subsonic flow

    NASA Technical Reports Server (NTRS)

    Dahlburg, R. B.; Picone, J. M.

    1989-01-01

    The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  19. Evolution of the Orszag--Tang vortex system in a compressible medium. I. Initial average subsonic flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R.B.; Picone, J.M.

    In this paper the results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  20. Optimal marker-assisted selection to increase the effective size of small populations.

    PubMed

    Wang, J

    2001-02-01

    An approach to the optimal utilization of marker and pedigree information for minimizing the rates of inbreeding and genetic drift at the average locus of the genome (not just the marked loci) in a small diploid population is proposed, and its efficiency is investigated by stochastic simulations. The approach is based on estimating the expected pedigree of each chromosome using marker and individual pedigree information and minimizing the average coancestry of selected chromosomes by quadratic integer programming. It is shown that the approach is much more effective and much less computationally demanding to implement than previous ones. For pigs with 10 offspring per mother genotyped for two markers (each with four alleles at equal initial frequency) per chromosome of 100 cM, the approach can increase the average effective size for the whole genome by approximately 40 and 55% if mating ratios (the number of females mated with a male) are 3 and 12, respectively, compared with the corresponding values obtained by optimizing between-family selection using pedigree information only. The efficiency of the marker-assisted selection method increases with increasing amount of marker information (number of markers per chromosome, heterozygosity per marker) and family size, but decreases with increasing genome size. For less prolific species, the approach is still effective if the mating ratio is large, so that a high marker-assisted selection pressure on the rarer sex can be maintained.

  1. Sharing the Pain: Cutting Faculty Salaries across the Board

    ERIC Educational Resources Information Center

    June, Audrey Williams

    2009-01-01

    Greensboro College has many of the intimate hallmarks of a small, private, liberal-arts college. Professors give their cellphone numbers to students and routinely provide extra help to those who need it. Classes at the North Carolina institution average 14 people. One of the students featured on the college Web site is a biology major who plays on…

  2. The power of a single trajectory

    NASA Astrophysics Data System (ADS)

    Schnellbächer, Nikolas D.; Schwarz, Ulrich S.

    2018-03-01

    Random walks are often evaluated in terms of their mean squared displacements, either for a large number of trajectories or for one very long trajectory. An alternative evaluation is based on the power spectral density, but here it is less clear which information can be extracted from a single trajectory. For continuous-time Brownian motion, Krapf et al have now mathematically proven that the one property that can be reliably extracted from a single trajectory is the frequency dependence of the ensemble-averaged power spectral density (Krapf et al 2018 New J. Phys. 20 023029). Their mathematical analysis also identifies the appropriate frequency window for this procedure and shows that the diffusion coefficient can be extracted by averaging over a small number of trajectories. The authors have verified their analytical results both by computer simulations and experiments.
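    The claim that the frequency dependence survives in a single trajectory can be sketched numerically. This is our own illustration (function name, window, and parameters are assumptions, not the analysis of Krapf et al): simulate one Brownian trajectory, take its periodogram, and fit the log-log slope, which should land near the ensemble value of -2.

    ```python
    import numpy as np

    def brownian_psd(n_steps=2**14, dt=1.0, D=0.5, seed=0):
        """One Brownian trajectory and its periodogram estimate of the PSD."""
        rng = np.random.default_rng(seed)
        steps = rng.normal(0.0, np.sqrt(2 * D * dt), n_steps)
        x = np.cumsum(steps)                    # single trajectory
        T = n_steps * dt
        xf = np.fft.rfft(x) * dt
        freqs = np.fft.rfftfreq(n_steps, dt)
        psd = np.abs(xf) ** 2 / T
        return freqs[1:], psd[1:]               # drop the zero-frequency bin

    freqs, psd = brownian_psd()
    # The ensemble-averaged PSD of Brownian motion scales as f^-2; even one
    # noisy periodogram recovers the exponent when fit over many bins.
    lo, hi = 10, 1000
    slope = np.polyfit(np.log(freqs[lo:hi]), np.log(psd[lo:hi]), 1)[0]
    print(round(slope, 2))
    ```

    A single periodogram fluctuates wildly bin to bin, yet the fitted slope comes out close to -2, which is the point of the abstract: the frequency dependence is the reliably extractable property.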

  3. Adding small differences can increase similarity and choice.

    PubMed

    Kim, Jongmin; Novemsky, Nathan; Dhar, Ravi

    2013-02-01

    Similarity plays a critical role in many judgments and choices. Traditional models of similarity posit that increasing the number of differences between objects cannot increase judged similarity between them. In contrast to these previous models, the present research shows that introducing a small difference in an attribute that previously was identical across objects can increase perceived similarity between those objects. We propose an explanation based on the idea that small differences draw more attention than identical attributes do and that people's perceptions of similarity involve averaging attributes that are salient. We provide evidence that introducing small differences between objects increases perceived similarity. We also show that an increase in similarity decreases the difficulty of choice and the likelihood that a choice will be deferred.

  4. Identifying high-risk small business industries for occupational safety and health interventions.

    PubMed

    Okun, A; Lentz, T J; Schulte, P; Stayner, L

    2001-03-01

    Approximately one-third (32%) of U.S. workers are employed in small business industries (those with 80% of workers in establishments with fewer than 100 employees), and approximately 53 million persons in private industry work in small business establishments. This study was performed to identify small business industries at high risk for occupational injuries, illnesses, and fatalities. Small business industries were identified from among all three- and four-digit Standard Industrial Classification (SIC) codes and ranked using Bureau of Labor Statistics (BLS) data by rates and numbers of occupational injuries, illnesses, and fatalities. Both incidence rates and numbers of injury, illness, and fatality cases were evaluated. The 253 small business industries identified accounted for 1,568 work-related fatalities (34% of all private industry). Transportation incidents and violent acts were the leading causes of these fatalities. Detailed injury and illness data were available for 105 small business industries, which accounted for 1,476,400 work-related injuries and 55,850 occupational illnesses. Many of the small business industries had morbidity and mortality rates exceeding the average rates for all private industry. The highest-risk small business industries, based on a combined morbidity and mortality index, included logging; cut stone and stone products; truck terminals; and roofing, siding, and sheet metal work. Identification of high-risk small business industries indicates priorities for those interested in developing targeted prevention programs.

  5. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.
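    As the simplest discrete analogue of such peak-density calculations (our illustration, not from the paper): for an i.i.d. random sequence, each of the 3! orderings of a triple of neighbours is equally likely and 2 of the 6 put the middle point on top, so the average density of strict local maxima is exactly 1/3. A short simulation reproduces this:

    ```python
    import random

    def peak_density(n=200_000, seed=7):
        """Fraction of interior points of an i.i.d. random sequence that are
        strict local maxima (higher than both neighbours)."""
        rng = random.Random(seed)
        x = [rng.random() for _ in range(n)]
        peaks = sum(1 for i in range(1, n - 1) if x[i - 1] < x[i] > x[i + 1])
        return peaks / (n - 2)

    # Counting argument predicts a peak density of 1/3 per interior point.
    print(round(peak_density(), 3))
    ```

    The printed value sits very near 0.333. Correlated (smoothed) Gaussian fields of the kind treated in the abstract have lower peak densities, set by their spectral moments rather than this combinatorial limit.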

  6. Quantitative Measures of Chaotic Charged Particle Dynamics in the Magnetotail

    NASA Astrophysics Data System (ADS)

    Holland, D. L.; Martin, R. F., Jr.; Burris, C.

    2017-12-01

    It has long been noted that the motion of charged particles in magnetotail-like magnetic fields is chaotic; however, efforts to quantify the degree of chaos have reached conflicting conclusions. In this paper we re-examine the question by focusing on quantitative measures of chaos. We first examine the percentage of orbits that enter the chaotic region of phase space and the average trapping time of those particles. We then examine the average exponential divergence rate (AEDR) of the chaotic particles between their first and last crossing of the mid-plane. We show that at resonant energies, where the underlying phase space has a high degree of symmetry, only a small number of particles enter the chaotic region, but they are trapped for long periods of time and the time-asymptotic value of the AEDR is very close to the average value of the AEDR. At off-resonant energies, where the phase space is highly asymmetric, the majority of the particles enter the chaotic region for fairly short periods of time and the time-asymptotic value of the AEDR is much smaller than the average value. The root cause is that in the resonant case the longest-lived orbits tend to interact with the current sheet many times and sample the entire chaotic region, whereas in the non-resonant case the longest-lived orbits interact with the current sheet only a small number of times but have very long mirrorings during which the motion is nearly regular. Additionally, we use an ad hoc model in which the current sheet is treated as a Lorentz scattering system, with each interaction with the current sheet considered a "collision". We find that the average kick per collision is greatest at off-resonant energies. Finally, we propose a chaos parameter as the product of the AEDR, the average chaotic particle trapping time, and the percentage of orbits that are chaotic. We find that this takes on peak values at the resonant energies.

  7. Electrical transmission between mammalian neurons is supported by a small fraction of gap junction channels.

    PubMed

    Curti, Sebastian; Hoge, Gregory; Nagy, James I; Pereda, Alberto E

    2012-06-01

    Electrical synapses formed by gap junctions between neurons create networks of electrically coupled neurons in the mammalian brain, where these networks have been found to play important functional roles. In most cases, interneuronal gap junctions occur at remote dendro-dendritic contacts, which makes it difficult to characterize their physiological properties accurately and to correlate these properties with the anatomical and morphological features of the gap junctions. In the mesencephalic trigeminal (MesV) nucleus, where neurons are readily accessible for paired electrophysiological recordings in brain stem slices, our recent data indicate that electrical transmission between MesV neurons is mediated by connexin36 (Cx36)-containing gap junctions located at somato-somatic contacts. Here we review evidence indicating that electrical transmission between these neurons is supported by a very small fraction of the gap junction channels present at cell-cell contacts. Acquisition of this evidence was enabled by the unprecedented experimental access of electrical synapses between MesV neurons, which allowed estimation of the average number of open channels mediating electrical coupling relative to the average number of gap junction channels present at these contacts. Our results indicate that only a small proportion of channels (~0.1%) appear to be conductive. On the basis of similarities with other preparations, we postulate that this phenomenon might constitute a general property of vertebrate electrical synapses, reflecting essential aspects of gap junction function and maintenance.
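    The ~0.1% figure is, in essence, a ratio: the number of conductive channels (junctional conductance divided by the unitary channel conductance) over the number of channels anatomically present. A sketch of that arithmetic with hypothetical round numbers (none of these values is taken from the paper):

    ```python
    # Hypothetical illustrative values, chosen only for the arithmetic:
    junctional_conductance_nS = 0.5     # measured electrical coupling
    unitary_conductance_pS = 15.0       # single gap junction channel
    channels_at_contact = 30_000        # channels counted in the plaque

    # Number of channels that must be open to carry the measured conductance.
    open_channels = junctional_conductance_nS * 1000 / unitary_conductance_pS
    fraction_open = open_channels / channels_at_contact
    print(round(open_channels), round(100 * fraction_open, 2), "%")  # 33 0.11 %
    ```

    With these stand-in numbers only a few tens of channels out of tens of thousands need to conduct, on the order of 0.1%, which is the kind of estimate the abstract reports.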

  8. Real-Time Ultrafine Aerosol Measurements from Wastewater Treatment Facilities.

    PubMed

    Piqueras, P; Li, F; Castelluccio, V; Matsumoto, M; Asa-Awuku, A

    2016-10-18

    Airborne particle emissions from wastewater treatment plants (WWTP) have been associated with health repercussions, but particulate quantification studies are scarce. In this study, particulate matter (PM) number concentrations and size distributions in the ultrafine range (7-300 nm) were measured from two different sources: a laboratory-scale aerobic bioreactor and the activated sludge aeration basins at Orange County Sanitation District (OCSD). The relationships between wastewater parameters (total organic carbon (TOC), chemical oxygen demand (COD), and total suspended solids (TSS)), aeration flow rate, and particle concentrations were also explored. A significant positive relationship was found between particle concentration and WWTP variables (COD: r(10) = 0.876, p < .001; TOC: r(10) = 0.664, p < .05; TSS: r(10) = 0.707, p < .05; aeration flow rate: r(8) = 0.988, p < .0001). A theoretical model was also developed from empirical data to compare real-world WWTP aerosol number emission fluxes with laboratory data. Aerosol number fluxes at the OCSD aerated basins (9.8 × 10⁴ particles/min·cm²) and the bioreactor (7.95 × 10⁴ particles/min·cm²) were calculated and showed a relatively small difference (19%). The ultrafine size distributions from both systems were consistent, with a mode of ∼48 nm. The average mass concentration (7.03 μg/m³) from OCSD was relatively small compared to other urban sources. However, the in-tank average number concentration of airborne particles (14,480 particles/cm³) was higher than background ambient concentrations.

  9. Plant Proteins Are Smaller Because They Are Encoded by Fewer Exons than Animal Proteins.

    PubMed

    Ramírez-Sánchez, Obed; Pérez-Rodríguez, Paulino; Delaye, Luis; Tiessen, Axel

    2016-12-01

    Protein size is an important biochemical feature since longer proteins can harbor more domains and therefore can display more biological functionalities than shorter proteins. We found remarkable differences in protein length, exon structure, and domain count among different phylogenetic lineages. While eukaryotic proteins have an average size of 472 amino acid residues (aa), average protein sizes in plant genomes are smaller than those of animals and fungi. Proteins unique to plants are ∼81 aa shorter than plant proteins conserved among other eukaryotic lineages. The smaller average size of plant proteins could be explained neither by endosymbiosis nor subcellular compartmentation nor exon size, but rather by exon number. Metazoan proteins are encoded on average by ∼10 exons of small size [∼176 nucleotides (nt)]. Streptophyta have on average only ∼5.7 exons of medium size (∼230 nt). Multicellular species code for large proteins by increasing the exon number, while most unicellular organisms employ rather larger exons (>400 nt). Among subcellular compartments, membrane proteins are the largest (∼520 aa), whereas the smallest proteins correspond to the gene ontology group of ribosome (∼240 aa). Plant genes are encoded by half the number of exons and also contain fewer domains than animal proteins on average. Interestingly, endosymbiotic proteins that migrated to the plant nucleus became larger than their cyanobacterial orthologs. We thus conclude that plants have proteins larger than bacteria but smaller than animals or fungi. Compared to the average of eukaryotic species, plants have ∼34% more but ∼20% smaller proteins. This suggests that photosynthetic organisms are unique and therefore deserve special attention with regard to the evolutionary forces acting on their genomes and proteomes. Copyright © 2016 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.

  10. Effectiveness of backpack electrofishing for removal of non-native fishes from a small warm-water stream

    USGS Publications Warehouse

    Ward, David L.; O'neill, Matthew W.; Ka'apu-Lyons, Cassie

    2015-01-01

    Electrofishing is commonly used when renovating small streams to remove nuisance fishes, but the likelihood of complete eradication of unwanted species, particularly warm-water fishes, is unknown. In October 2008, we electrofished Bonita Creek, a small stream with base flows (<0.56 m³/s) in southern Arizona, and then treated the stream with rotenone to kill all of the remaining fish and quantify the effectiveness of single- and multiple-pass electrofishing. Six 100-m transects were electrofished on three consecutive days, followed by a single treatment with rotenone. Fish caught by electrofishing were identified, counted, and removed from each transect daily and then compared to the numbers of dead fish collected during the subsequent rotenone application. Electrofishing effectiveness was highly variable among transects. Single-pass electrofishing caught an average of 23% (95% CI = 5 to 40%) of the fish present, and three-pass electrofishing on consecutive days caught on average 55% (95% CI = 28 to 83%) of the fish in each transect. Native Arizona fishes were more susceptible to electrofishing (77% captured) than non-native species (54% captured), though native fish were rare. Transects in Bonita Creek averaged 3.6 ± 1.5 m wide and 0.25 ± 0.20 m deep (max depth 1.2 m). Bonita Creek is a small first-order stream that exhibits ideal conditions for backpack electrofishing, yet we captured a relatively small percentage of the fish present. This suggests that complete removal of non-native warm-water fishes using backpack electrofishing is not likely to be successful, especially in larger, more complex streams.
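    Under the simplifying assumption of an equal, independent capture probability per pass (our assumption for illustration, not the authors' analysis), the reported single-pass and three-pass figures are mutually consistent:

    ```python
    def cumulative_capture(q, passes):
        """Fraction of fish removed after `passes` sweeps, assuming each sweep
        independently captures a fraction q of the fish still present."""
        return 1 - (1 - q) ** passes

    single_pass = 0.23                         # reported single-pass fraction
    predicted_three = cumulative_capture(single_pass, 3)
    print(round(predicted_three, 3))           # 0.543 — close to the reported 55%
    ```

    The same formula shows why complete removal is unrealistic here: even ten passes at q = 0.23 would leave roughly 7% of the fish.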

  11. Numerical simulation of the compressible Orszag-Tang vortex. Interim report, June 1988-February 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R.B.; Picone, J.M.

    Results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. Initial conditions consist of a nonrandom, periodic field in which the magnetic and velocity fields contain X-points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In the numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. Compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as mass density and the nonsolenoidal flow field. These effects include (1) retardation of growth of correlation between the magnetic field and the velocity field, (2) emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  12. Blackmail propagation on small-world networks

    NASA Astrophysics Data System (ADS)

    Shao, Zhi-Gang; Jian-Ping Sang; Zou, Xian-Wu; Tan, Zhi-Jie; Jin, Zhun-Zhi

    2005-06-01

    The dynamics of the blackmail propagation model based on small-world networks is investigated. It is found that for a given transmitting probability λ the dynamical behavior of blackmail propagation transits from linear growth to logistic growth as the network randomness p increases. The transition takes place at the critical network randomness pc = 1/N, where N is the total number of nodes in the network. For a given network randomness p, the dynamical behavior of blackmail propagation transits from exponential decrease to logistic growth as the transmitting probability λ increases. The transition occurs at the critical transmitting probability λc = 1/⟨k⟩, where ⟨k⟩ is the average number of nearest neighbors. The present work will be useful for understanding computer virus epidemics and other spreading phenomena on communication and social networks.
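    The two critical values quoted above can be computed for a toy Watts-Strogatz-style network. This is a sketch under our own construction (function names, sizes, and the rewiring rule are ours; the paper's model details may differ):

    ```python
    import random

    def ring_lattice(n, k):
        """Ring of n nodes, each linked to its k nearest neighbours per side."""
        adj = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(1, k + 1):
                adj[i].add((i + j) % n)
                adj[(i + j) % n].add(i)
        return adj

    def rewire(adj, p, seed=0):
        """Rewire each edge with probability p (Watts-Strogatz style)."""
        rng = random.Random(seed)
        n = len(adj)
        edges = [(i, j) for i in adj for j in adj[i] if i < j]
        for i, j in edges:
            if rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
        return adj

    n, k = 1000, 3
    adj = rewire(ring_lattice(n, k), p=0.01)
    avg_degree = sum(len(v) for v in adj.values()) / n   # <k> = 2k = 6 here
    p_c = 1 / n                # critical network randomness
    lambda_c = 1 / avg_degree  # critical transmitting probability
    print(round(avg_degree, 2), p_c, round(lambda_c, 3))
    ```

    Rewiring preserves the edge count, so ⟨k⟩ stays at 6 and λc = 1/6 ≈ 0.167, while pc = 1/N = 0.001 for this 1000-node network.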

  13. Urban-Small Building Complex Environment: W07US Stability Analysis and Inter-Study Comparison, Volume AS-2

    DTIC Science & Technology

    2008-05-01

    [Fragment; figure and table residue.] The excerpt describes the tower instrumentation: solar radiation measured with a pyranometer (Kipp/Zonen CM3) and net solar radiation with a net radiometer (Kipp/Zonen NR-LITE), both in watts per square metre, with a Campbell CR23X micro-logger recording the standard meteorological parameters as 1-min averages (Figure 2; Table 4, W07US tower configuration).

  14. Genome Sequences of Three Cluster AU Arthrobacter Phages, Caterpillar, Nightmare, and Teacup

    PubMed Central

    Adair, Tamarah L.; Stowe, Emily; Pizzorno, Marie C.; Krukonis, Gregory; Harrison, Melinda; Garlena, Rebecca A.; Russell, Daniel A.; Jacobs-Sera, Deborah

    2017-01-01

    ABSTRACT Caterpillar, Nightmare, and Teacup are cluster AU siphoviral phages isolated from enriched soil on Arthrobacter sp. strain ATCC 21022. These genomes are 58 kbp long with an average G+C content of 50%. Sequence analysis predicts 86 to 92 protein-coding genes, including a large number of small proteins with predicted transmembrane domains. PMID:29122860

  15. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  16. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ∼80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
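    A short Monte Carlo check of the uniform-distribution case (our sketch; the function name and trial count are illustrative): the expected sample range of n uniform draws is (n - 1)/(n - 1 + 2) = (n - 1)/(n + 1) of the true interval, so multiplying the observed range by (n + 1)/(n - 1) recovers the full length on average.

    ```python
    import random

    def mean_sample_range(n, trials=100_000, seed=1):
        """Average (max - min) of n uniform(0, 1) draws, over many trials."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            xs = [rng.random() for _ in range(n)]
            total += max(xs) - min(xs)
        return total / trials

    for n in (5, 10):
        observed = mean_sample_range(n)
        corrected = observed * (n + 1) / (n - 1)   # should recover ~1.0
        print(n, round(observed, 3), round(corrected, 3))
    ```

    For n = 5 the observed fraction is close to 2/3 (the correction factor is 1.5, i.e., the true interval is 50% longer than the average sample range), and for n = 10 it is close to 9/11 ≈ 0.82, matching the ∼80% figure in the abstract.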

  17. Improvement of small dairy producers in the central coast of Peru.

    PubMed

    Gomez, C; Fernandez, M; Salazar, I; Saldaña, I; Heredia, H

    2007-12-01

    The objective of this study was to characterize a group of small dairy producers in the central coast of Peru and determine the most important limiting factors affecting animal productivity, using Participatory Rural Appraisal methodologies. The information collected over a year included biological and socio-economic characteristics. Inappropriate nutrition, management, health, and reproduction practices were observed on most farms. The average number of animals per family was 17.3, of which 40% were bulls and growing calves. Milk production and net cash income were reduced in the summer. The feeding programme for lactating and growing females, consisting of forage exchanged for labour plus purchased concentrates, met the cows' needs neither theoretically nor in practice. All water was carried to the site on donkeys from one kilometre away. All cows were negative for a number of potential infectious diseases. By California Mastitis Test of routine samples, 7.1% of cows had clinical mastitis and 66.1% subclinical mastitis. Taking notional labour and forage costs into account, milk production ran at a loss, but the farmers considered neither cost and were satisfied with a net cash income from milk sales three times the average agricultural wage.

  18. Current control of time-averaged magnetization in superparamagnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Bapna, Mukund; Majetich, Sara A.

    2017-12-01

    This work investigates spin transfer torque control of the time-averaged magnetization in a small 20 nm × 60 nm nanomagnet with a low thermal stability factor, Δ ≈ 11. Here, the nanomagnet is part of a magnetic tunnel junction and fluctuates between parallel and anti-parallel magnetization states with respect to the magnetization of the reference layer, generating a telegraph signal in the current-versus-time measurements. The response of the nanomagnet to an external field is first analyzed to characterize the magnetic properties. We then show that the time-averaged magnetization in the telegraph signal can be fully controlled between +1 and -1 by voltage over a small range of 0.25 V. NIST Statistical Test Suite analysis is performed to test the true randomness of the telegraph signal that the device generates when operated near the critical current for spin transfer torque. Utilizing the probabilistic nature of the telegraph signal generated at two different voltages, a prototype demonstration is shown for multiplication of two numbers using an artificial AND logic gate.
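    A toy two-state telegraph process (our own sketch, not the device physics) shows how biasing the flip probabilities, here a stand-in for the junction voltage, tunes the time-averaged state continuously between -1 and +1:

    ```python
    import random

    def telegraph_average(w_up, w_down, n_steps=200_000, seed=3):
        """Two-state telegraph signal: per step, flip -1 -> +1 with probability
        w_up and +1 -> -1 with probability w_down; return time-averaged state."""
        rng = random.Random(seed)
        state, total = 1, 0
        for _ in range(n_steps):
            if state == 1 and rng.random() < w_down:
                state = -1
            elif state == -1 and rng.random() < w_up:
                state = 1
            total += state
        return total / n_steps

    # The stationary average is (w_up - w_down) / (w_up + w_down); tilting
    # the rates (the role the voltage plays in the device) spans -1 to +1.
    for w_up, w_down in [(0.01, 0.09), (0.05, 0.05), (0.09, 0.01)]:
        print(w_up, w_down, round(telegraph_average(w_up, w_down), 2))
    ```

    The three cases print averages near -0.8, 0.0, and +0.8; the balanced case (equal rates) is also the operating point where the signal is most useful as a random bit source.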

  19. Gonorrhoea and Syphilis Epidemiology in Flemish General Practice 2009–2013: Results from a Registry-based Retrospective Cohort Study Compared with Mandatory Notification

    PubMed Central

    Schweikardt, Christoph; Goderis, Geert; Elli, Steven; Coppieters, Yves

    2016-01-01

    Background The number of newly diagnosed gonorrhoea and syphilis cases has increased in Flanders in recent years. Our aim was to investigate to what extent these diagnoses were registered by general practitioners (GPs) and to examine the opportunities and limits of the Intego database in this regard. Methods Data from a retrospective cohort study based on the Flemish Intego general practice database were analyzed for the years 2009–2013. Case definitions were applied. Because of the small case numbers obtained, cases were pooled and averaged over the observation period. Frequencies were compared with those calculated from mandatory notification figures. Results A total of 91 gonorrhoea and 23 syphilis cases were registered. The average annual Intego frequency was 11.9 (95% Poisson confidence interval (CI) 9.6; 14.7) per 100,000 population for gonorrhoea and 3.0 (CI 1.9; 4.5) for syphilis, while mandatory notification was calculated at 14.0 (CI 13.6; 14.4) and 7.0 (CI 6.7; 7.3), respectively. Conclusion In spite of limitations such as small numbers and different case definitions, comparison with mandatory notification suggests that a GP was involved in the large majority of gonorrhoea cases, while the majority of new syphilis cases did not come to the knowledge of the GP. PMID:29546196
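
    Exact Poisson confidence intervals of the kind quoted above can be computed by numerically inverting the Poisson CDF. A minimal sketch using bisection (the count 91 is taken from the abstract; the bisection approach is a generic method, not necessarily the software the authors used):

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed term by term."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood) two-sided CI for a Poisson mean, by bisection."""
    def solve(f, lo, hi):
        # f is decreasing with f(lo) > 0 > f(hi); bisect to the root
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower bound: P(X >= k; mu) = alpha/2, i.e. 1 - cdf(k-1, mu) = alpha/2
    lower = 0.0 if k == 0 else solve(
        lambda mu: alpha / 2 - (1 - poisson_cdf(k - 1, mu)), 0.0, float(k))
    # upper bound: P(X <= k; mu) = alpha/2
    upper = solve(lambda mu: poisson_cdf(k, mu) - alpha / 2, float(k), 3.0 * k + 10)
    return lower, upper

lo, hi = poisson_ci(91)
print(lo, hi)                       # roughly 73.3 and 111.7 expected events
print(11.9 * lo / 91, 11.9 * hi / 91)  # close to the reported interval of 9.6-14.7 per 100,000
```

    Scaling the count interval by the observed rate per 100,000 reproduces the reported Poisson CI up to rounding.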

  20. Maximum one-shot dissipated work from Rényi divergences

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  1. Maximum one-shot dissipated work from Rényi divergences.

    PubMed

    Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  2. Development and characterization of 16 polymorphic microsatellite loci for the Alaska blackfish (Esociformes: Dallia pectoralis)

    USGS Publications Warehouse

    Campbell, Matthew A.; Sage, George K.; DeWilde, Rachel L.; López, J. Andres; Talbot, Sandra L.

    2014-01-01

    Blackfishes (Esociformes: Esocidae: Dallia), small fishes with relictual distributions, are unique in being the only primary freshwater fish genus endemic to Beringia. Although the number of species of Dallia is debated, disjunct populations and distinct mitochondrial divisions that predate the end of the last glacial maximum are apparent. We developed sixteen polymorphic microsatellites from the Alaska blackfish (Dallia pectoralis) to study genetic diversity in Dallia. Genotypes from two populations, Denali (n = 31) and Bethel (n = 35), demonstrated the usefulness of the loci for population-level investigation. Observed and expected heterozygosity averaged 18.6 and 19.8% in Denali and 61.1 and 63.7% in Bethel. The number of alleles per locus averaged 3.50 in Denali and 9.63 in Bethel. The observed signature of variability and structuring between populations is consistent with mitochondrial data.
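
    Expected heterozygosity of the kind reported above is conventionally computed from allele frequencies as He = n/(n-1) · (1 − Σ pᵢ²), Nei's unbiased estimator. A minimal sketch with made-up allele counts (the counts below are illustrative, not data from the study):

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Nei's unbiased expected heterozygosity for one locus.

    `alleles` is the list of sampled allele copies (2 per diploid individual).
    """
    n = len(alleles)
    freqs = [c / n for c in Counter(alleles).values()]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

# Hypothetical locus: 35 diploid individuals -> 70 allele copies
alleles = ["A"] * 30 + ["B"] * 25 + ["C"] * 15
print(round(expected_heterozygosity(alleles), 3))   # -> 0.652
```

    Averaging this quantity across the sixteen loci gives the per-population figures quoted in the abstract.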

  3. Conversion of Low Quality Waste Heat to Electric Power with Small-Scale Organic Rankine Cycle (ORC) Engine/Generator Technology

    DTIC Science & Technology

    2016-08-01


  4. Flame balls dynamics in divergent channel

    NASA Astrophysics Data System (ADS)

    Fursenko, R.; Minaev, S.

    2011-12-01

    A three-dimensional reaction-diffusion model of lean, low-Lewis-number premixed flames with radiative heat losses propagating in a divergent channel is studied numerically. The effects of inlet gas velocity and heat-loss intensity on flame structure at low Lewis numbers are investigated. It is found that a continuous flame front exists at small heat losses, while at large heat losses separate flame balls settle within a restricted domain inside the divergent channel. It is shown that the time-averaged flame ball coordinate may be considered an important characteristic, analogous to the coordinate of a continuous flame stabilized in a divergent channel.

  5. Small female rib cage fracture in frontal sled tests.

    PubMed

    Shaw, Greg; Lessley, David; Ash, Joseph; Poplin, Jerry; McMurry, Tim; Sochor, Mark; Crandall, Jeff

    2017-01-02

    The 2 objectives of this study are to (1) examine the rib and sternal fractures sustained by small stature elderly females in simulated frontal crashes and (2) determine how the findings are characterized by prior knowledge and field data. A test series was conducted to evaluate the response of 5 elderly (average age 76 years) female postmortem human subjects (PMHS), similar in mass and size to a 5th percentile female, in 30 km/h frontal sled tests. The subjects were restrained on a rigid planar seat by bilateral rigid knee bolsters, pelvic blocks, and a custom force-limited 3-point shoulder and lap belt. Posttest subject injury assessment included identifying rib cage fractures by means of a radiologist read of a posttest computed tomography (CT) and an autopsy. The data from a motion capture camera system were processed to provide chest deflection, defined as the movement of the sternum relative to the spine at the level of T8.  A complementary field data investigation involved querying the NASS-CDS database over the years 1997-2012. The targeted cases involved belted front seat small female passenger vehicle occupants over 40 years old who were injured in 25 to 35 km/h delta-V frontal crashes (11 to 1 o'clock). Peak upper shoulder belt tension averaged 1,970 N (SD = 140 N) in the sled tests. For all subjects, the peak x-axis deflection was recorded at the sternum with an average of -44.5 mm or 25% of chest depth. The thoracic injury severity based on the number and distribution of rib fractures yielded 4 subjects coded as Abbreviated Injury Scale (AIS) 3 (serious) and one as AIS 5 (critical). The NASS-CDS field data investigation of small females identified 205 occupants who met the search criteria. Rib fractures were reported for 2.7% of the female occupants. The small elderly test subjects sustained a higher number of rib cage fractures than expected in what was intended to be a minimally injurious frontal crash test condition. 
Neither field studies nor prior laboratory frontal sled tests conducted with 50th percentile male PMHS predicted the injury severity observed. Although this was a limited study, the results justify further exploration of the risk of rib cage injury for small elderly female occupants.

  6. Changes of the phytoplankton community as symptoms of deterioration of water quality in a shallow lake.

    PubMed

    Dembowska, Ewa Anna; Mieszczankin, Tomasz; Napiórkowski, Paweł

    2018-01-25

    Covering more than 60% of the lake surface, macrophytes determined the taxonomic composition of phytoplankton. We found numerous indications of ecological deterioration and an increasing trophic level from year to year: an increased total number of taxa; a significantly increased number of species of Chlorophyta, Bacillariophyceae and Cyanoprokaryota; a decreased number of Chrysophyceae; an increased Nygaard index; and high diversity and variability of phytoplankton functional groups. Within 2 years (2002 and 2003) algal biomass doubled, from 3.616 to 7.968 mg l⁻¹. An increased contribution of Chlorococcales and Cyanoprokaryota indicates progressive eutrophication of the lake. The average size of planktonic algae increased, particularly among the Cyanoprokaryota, where small-celled forms decreased dramatically and were replaced by large colonies. Cyanoprokaryota remained the dominant group of phytoplankton after 10 years, and the ecosystem of the lake remained in the turbid state. This group of algae had an average biomass of 9.734 mg l⁻¹, which constituted almost 92% of the total biomass.

  7. Turbulent thermal superstructures in Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Stevens, Richard J. A. M.; Blass, Alexander; Zhu, Xiaojue; Verzicco, Roberto; Lohse, Detlef

    2018-04-01

    We report the observation of superstructures, i.e., very large-scale and long-lived coherent structures, in highly turbulent Rayleigh-Bénard convection up to Rayleigh Ra = 10⁹. We perform direct numerical simulations in horizontally periodic domains with aspect ratios up to Γ = 128. In the considered Ra number regime the thermal superstructures have a horizontal extent of six to seven times the height of the domain, and their size is independent of Ra. Many laboratory experiments and numerical simulations have focused on small-aspect-ratio cells in order to achieve the highest possible Ra. However, here we show that for very high Ra, integral quantities such as the Nusselt number and volume-averaged Reynolds number only converge to the large-aspect-ratio limit around Γ ≈ 4, while horizontally averaged statistics such as the standard deviation and kurtosis converge around Γ ≈ 8, the integral scale converges around Γ ≈ 32, and the peak positions of the temperature variance and turbulent kinetic energy spectra only converge around Γ ≈ 64.

  8. A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Junghyun; Hayward, Chris; Zeiler, Cleat

    Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacings. The four-hour time sequence contained a number of easily identified signals under noise conditions whose average RMS amplitudes varied from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, the large aperture, the small aperture combined with the large aperture, and the full array. The full and combined arrays performed best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced results similar to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both the automated detectors and the analysts produced fewer detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD had the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. The detection probability was impacted most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.

  9. A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth

    NASA Astrophysics Data System (ADS)

    Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.

    2018-04-01

    A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).

  10. Analysis of radiation-induced small Cu particle cluster formation in aqueous CuCl2

    USGS Publications Warehouse

    Jayanetti, Sumedha; Mayanovic, Robert A.; Anderson, Alan J.; Bassett, William A.; Chou, I.-Ming

    2001-01-01

    Radiation-induced small Cu particle cluster formation in aqueous CuCl2 was analyzed. The nearest-neighbor distance increased with increasing irradiation time, showing that the clusters approached the lattice dimension of bulk copper. As the average cluster size approached its bulk dimensions, an increase in the nearest-neighbor coordination number was found with the decrease in the surface-to-volume ratio. Radiolysis of water by the incident x-ray beam led to the reduction of copper ions in the solution to the metallic state.

  11. Balancing Work and Academics in College: Why Do Students Working 10 to 19 Hours Per Week Excel?

    ERIC Educational Resources Information Center

    Dundes, Lauren; Marx, Jeff

    2007-01-01

    Given that 74% of undergraduates work an average of 25.5 hours per week while going to school, we know surprisingly little about how off-campus employment affects undergraduates and to what extent its impact varies by the number of hours worked. Our survey of undergraduates at a small liberal arts college found that the academic performance of…

  12. Resolution of molecular weight distributions in slightly pyrolyzed cellulose using the weibull function

    Treesearch

    A. Broido; Hsiukang Yow

    1977-01-01

    Even before weight loss in the low-temperature pyrolysis of cellulose becomes significant, the average degree of polymerization of the partially pyrolyzed samples drops sharply. The gel permeation chromatograms of nitrated derivatives of the samples can be described in terms of a small number of mixed size populations—each component fitted within reasonable limits by a...

  13. Polymerization of anionic wormlike micelles.

    PubMed

    Zhu, Zhiyuan; González, Yamaira I; Xu, Hangxun; Kaler, Eric W; Liu, Shiyong

    2006-01-31

    Polymerizable anionic wormlike micelles are obtained upon mixing the hydrotropic salt p-toluidine hydrochloride (PTHC) with the reactive anionic surfactant sodium 4-(8-methacryloyloxyoctyl)oxybenzene sulfonate (MOBS). Polymerization captures the cross-sectional radius of the micelles (approximately 2 nm), induces micellar growth, and leads to the formation of a stable single-phase dispersion of wormlike micellar polymers. The unpolymerized and polymerized micelles were characterized using static and dynamic laser light scattering, small-angle neutron scattering, 1H NMR, and stopped-flow light scattering. Stopped-flow light scattering was also used to measure the average lifetime of the unpolymerized wormlike micelles. A comparison of the average lifetime of unpolymerized wormlike micelles with the surfactant monomer propagation rate was used to elucidate the mechanism of polymerization. There is a significant correlation between the ratio of the average lifetime to the monomer propagation rate and the average aggregation number of the polymerized wormlike micelles.

  14. Discriminative analysis with a limited number of MEG trials in depression.

    PubMed

    Lu, Qing; Jiang, Haiteng; Bi, Kun; Liu, Chu; Yao, Zhijian

    2014-01-01

    In studies exploring distinct patterns of functional abnormalities inherent in depression, experiments are generally repeated over many trials, and the data are then averaged across those trials in order to improve the signal-to-noise ratio. Repeated stimuli lead to unpredictable impairment of the signals, due to material familiarity or subjects' fatigue. For this reason, signal processing tools that are powerful on small numbers of trials are expected to alleviate the workload on subjects, especially in mental disease studies. Forty-four subjects, half depressed patients and half healthy subjects, were recruited for MEG scanning in response to sad facial stimuli. Multichannel matching pursuit (MMP) was implemented to manage the limited number of trials. The post-MMP MEG signals were used to calculate the power topography over the whole brain, as inputs for a Support Vector Machine (SVM) classifier. Standard ICA and conventional ensemble averaging plus Butterworth filtering were employed as benchmark studies for performance comparison. Only a limited number of trials were required via MMP to discriminate the depressed patients. Post-MMP discriminative analysis revealed a deficit theta pattern and an excessive alpha/beta pattern. The small sample size may impair the stability of the reported findings, and the transient tiny variance of the signal was excluded from exploration. The deficit theta pattern together with the excessive alpha/beta pattern in depression may indicate dysfunction of the limbic-cortical circuit in a 'top-down' process. The post-MMP discrimination helps alleviate the scanning burden, facilitating the possibility of neuroimaging support for the clinical diagnosis of affective disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Influence of Blurred Ways on Pattern Recognition of a Scale-Free Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Li

    2010-01-01

    We investigate the influence of blurred ways on pattern recognition of a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small amount of errors. Pattern recognition is an important function of information processing in the brain. Due to the heterogeneous degree distribution of a scale-free network, different blurred ways have different influences on pattern recognition with the same errors. Simulations show that, among partial recognition, the larger the loading ratio (the number of patterns to the average degree, P/⟨k⟩) is, the smaller the overlap of the SFHN is. The influence of the directed (large) way is largest and that of the directed (small) way is smallest, while the random way is intermediate between them. Under the condition that the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random and directed (large) blurred ways of patterns, and these curves are not associated with the size of the network or the number of patterns. This phenomenon only occurs in the SFHN. These conclusions are beneficial for understanding the relation between neural network structure and brain function.
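
    The overlap measure used above, m = (1/N) Σᵢ sᵢξᵢ, can be demonstrated on a textbook Hopfield network. A minimal sketch, assuming a standard fully connected Hebbian network as a simplification (the paper's network is scale-free, and the blurring here is the "random way"):

```python
import numpy as np

rng = np.random.default_rng(7)
N, P = 100, 2                      # network size and stored patterns; loading P/N = 0.02
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights with zero self-coupling (fully connected simplification)
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Blur the first pattern by flipping 10 randomly chosen spins ("random way")
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1

# Synchronous recall dynamics: s <- sign(W s)
for _ in range(10):
    state = np.where(W @ state >= 0, 1, -1)

overlap = float(state @ patterns[0]) / N   # overlap m with the stored pattern
print(overlap)                              # expected to be near 1.0 at this low loading
```

    At higher loading ratios, or with more aggressive blurring of high-degree nodes, the recalled overlap drops, which is the trend the paper quantifies on scale-free topologies.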

  16. Effects of Inlet Modification and Rocket-Rack Extension on the Longitudinal Trim and Low-Lift Drag of the Douglas F5D-1 Airplane as Obtained with a 0.125-Scale Rocket-Boosted Model between Mach Numbers of 0.81 and 1.64, TED No. NACA AD 399

    NASA Technical Reports Server (NTRS)

    Hastings, Earl C., Jr.; Dickens, Waldo L.

    1957-01-01

    A flight investigation was conducted to determine the effects of an inlet modification and rocket-rack extension on the longitudinal trim and low-lift drag of the Douglas F5D-1 airplane. The investigation was conducted with a 0.125-scale rocket-boosted model which was flight tested at the Langley Pilotless Aircraft Research Station at Wallops Island, Va. Results indicate that the combined effects of the modified inlet and fully extended rocket racks on the trim lift coefficient and trim angle of attack were small between Mach numbers of 0.94 and 1.57. Between Mach numbers of 1.10 and 1.57 there was an average increase in drag coefficient of about 0.005 for the model with the modified inlet and extended rocket racks. The change in drag coefficient due to the inlet modification alone is small between Mach numbers of 1.59 and 1.64.

  17. The effect of normative context variability on recognition memory.

    PubMed

    Steyvers, Mark; Malmberg, Kenneth J

    2003-09-01

    According to some theories of recognition memory (e.g., S. Dennis & M. S. Humphreys, 2001), the number of different contexts in which words appear determines how memorable individual occurrences of words will be: A word that occurs in a small number of different contexts should be better recognized than a word that appears in a larger number of different contexts. To empirically test this prediction, a normative measure is developed, referred to here as context variability, that estimates the number of different contexts in which words appear in everyday life. These findings confirm the prediction that words low in context variability are better recognized (on average) than words that are high in context variability. (c) 2003 APA, all rights reserved

  18. 2017 in review: FDA approvals of new molecular entities.

    PubMed

    Kinch, Michael S; Griesenauer, Rebekah H

    2018-05-08

    An overview of drugs approved by the FDA in 2017 reflected a reversion to the mean after a low number of NME approvals in 2016. This reversal was largely driven by the largest number of biologics-based NMEs recorded to date, which offset an average number of small-molecule approvals. Oncology indications continued to dominate followed by novel treatments for infectious, immunologic and neurologic diseases. From a mechanistic standpoint, the industry has continued a trend of target diversification, reflecting advances in scientific understanding of disease processes. Finally, 2017 continued a period of relatively few mergers and acquisitions, which broke a more-than-a-decade-long decline in the number of organizations contributing to research and development. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Non-Crop Host Sampling Yields Insights into Small-Scale Population Dynamics of Drosophila suzukii (Matsumura)

    PubMed Central

    Loeb, Gregory M.

    2018-01-01

    Invasive, polyphagous crop pests subsist on a number of crop and non-crop resources. While knowing the full range of host species is important, a seasonal investigation into the use of non-crop plants adjacent to cropping systems provides key insights into some of the factors determining local population dynamics. This study investigated the infestation of non-crop plants by the invasive Drosophila suzukii (Matsumura), a pest of numerous economically important stone and small fruit crops, by sampling fruit-producing non-crop hosts adjacent to commercial plantings weekly from June through November in central New York over a two-year period. We found that D. suzukii infestation rates (number of flies emerged/kg fruit) peaked mid-August through early September, with Rubus allegheniensis Porter and Lonicera morrowii Asa Gray showing the highest average infestation in both years. Interannual infestation patterns were similar despite a lower number of adults caught in monitoring traps in the second year, suggesting D. suzukii host use may be density independent. PMID:29301358

  20. Modeling of stochastic motion of bacteria propelled spherical microbeads

    NASA Astrophysics Data System (ADS)

    Arabagi, Veaceslav; Behkam, Bahareh; Cheung, Eugene; Sitti, Metin

    2011-06-01

    This work proposes a stochastic dynamic model of bacteria-propelled spherical microbeads as potential swimming microrobotic bodies. Small numbers of S. marcescens bacteria are attached by their bodies to the surfaces of spherical microbeads. Average-behavior stochastic models, which are normally adopted when studying such biological systems, are generally not effective for cases in which a small number of agents interact in a complex manner; hence a stochastic model is proposed to simulate the behavior of 8-41 bacteria assembled on a curved surface. Flexibility of the flagellar hook is studied by comparing simulated and experimental results for scenarios of increasing bead size and increasing number of attached bacteria per bead. Although more experimental data are required to yield an exact flagellar hook stiffness value, the examined results favor a stiffer flagellar hook. The stochastic model is intended to be used as a design and simulation tool for future potential targeted drug delivery and disease diagnosis applications of bacteria-propelled microrobots.

  1. Structure-activity relationships of pyrethroid insecticides. Part 2. The use of molecular dynamics for conformation searching and average parameter calculation

    NASA Astrophysics Data System (ADS)

    Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.

    1992-04-01

    Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinuesa, Ricardo; Fick, Lambert; Negi, Prabal

    In the present document we describe a toolbox for the spectral-element code Nek5000, aimed at computing turbulence statistics. The toolbox is presented for a small test case, namely a square duct with Lx = 2h, Ly = 2h and Lz = 4h, where x, y and z are the horizontal, vertical and streamwise directions, respectively. The number of elements in the xy-plane is 16 × 16 = 256, and the number of elements in z is 4, leading to a total of 1,024 spectral elements. A polynomial order of N = 5 is chosen, and the mesh is generated using the Nek5000 tool genbox. The toolbox presented here makes it possible to compute mean-velocity components, the Reynolds-stress tensor, as well as turbulent kinetic energy (TKE) and Reynolds-stress budgets. Note that the present toolbox can compute turbulence statistics in turbulent flows with one homogeneous direction (where the statistics are based on time-averaging as well as averaging in the homogeneous direction), and in fully three-dimensional flows (with no periodic directions, where only time-averaging is considered).
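
    The averaging strategy described above, over time plus the one homogeneous direction, reduces to a pair of array reductions. A minimal NumPy sketch on a synthetic snapshot array (the array shape and noise level are made up for illustration; the actual toolbox operates on Nek5000 fields):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic streamwise-velocity snapshots with shape (time, z, y, x),
# where z is taken as the homogeneous streamwise direction
u = 1.0 + 0.1 * rng.standard_normal((200, 4, 8, 8))

# Mean velocity: average over time (axis 0) and the homogeneous direction (axis 1)
U = u.mean(axis=(0, 1))                    # shape (ny, nx), one value per duct cross-section point

# Reynolds normal stress <u'u'> from the same averaging
uu = (u ** 2).mean(axis=(0, 1)) - U ** 2

print(U.shape, float(U.mean()), float(uu.mean()))
```

    For a fully three-dimensional flow with no periodic direction, only the time axis would be reduced, leaving a full 3-D field of statistics.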

  3. An analysis of effect of land use change on river flow variability

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Liu, Yuting; Yang, Xinyue; Wang, Xiang

    2018-02-01

    Land use scenario analysis, the SWAT model, flow characteristic indices and flow variability techniques were used to analyze the effect of changes in land use quantity and location on river flow. Results showed that the river flow variation caused by land use change from forest to crop was larger than that caused by land use change from forest to grass. Land use location change, whether from upstream to downstream or from downstream to upstream, had little effect on the annual average discharge and the maximum annual average discharge, but it had an obvious effect on the maximum daily discharge. Land use change occurring upstream could more easily lead to larger-magnitude floods. Land use change from forest to crop or grass could increase the number of large-magnitude floods and their total duration; it could also increase the number of small-magnitude floods but decrease their duration.

  4. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and of model selection bias, the bias introduced when using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
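
    Freedman's paradox is easy to reproduce in simulation: regress pure noise on many unrelated predictors and several of them will look "significant". A minimal sketch, assuming ordinary least squares with a |t| > 2 screening rule (the sample sizes are illustrative, and this is the paradox itself, not the paper's model averaging estimator):

```python
import numpy as np

def count_spurious(n=100, p=50, seed=0):
    """Fit OLS of pure-noise y on p unrelated predictors; count |t| > 2."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
    y = rng.standard_normal(n)            # y is independent of every predictor

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p - 1)  # unbiased residual variance
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    t = beta / se
    return int(np.sum(np.abs(t[1:]) > 2.0))  # skip the intercept

# Averaged over repetitions, roughly 5% of the 50 pure-noise predictors
# pass the |t| > 2 screen even though none is related to y.
counts = [count_spurious(seed=s) for s in range(20)]
print(sum(counts) / len(counts))
```

    Stepwise selection compounds the problem by refitting on only the survivors, which is the inferential failure the model averaging estimator is designed to mitigate.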

  5. Small intestine histomorphometry of beef cattle with divergent feed efficiency

    PubMed Central

    2013-01-01

    Background The provision of feed is a major cost in beef production. Therefore, the improvement of feed efficiency is warranted. The direct assessment of feed efficiency has limitations and alternatives are needed. Small intestine micro-architecture is associated with function and may be related to feed efficiency. The objective was to verify the potential histomorphological differences in the small intestine of animals with divergent feed efficiency. Methods From a population of 45 feedlot steers, 12 were selected with low-RFI (superior feed efficiency) and 12 with high-RFI (inferior feed efficiency) at the end of the finishing period. The animals were processed at 13.79 ± 1.21 months of age. Within 1.5 h of slaughter the gastrointestinal tract was collected and segments from duodenum and ileum were harvested. Tissue fragments were processed, sectioned and stained with hematoxylin and eosin. Photomicroscopy images were taken under 1000x magnification. For each animal 100 intestinal crypts were imaged, in a cross section view, from each of the two intestinal segments. Images were analyzed using the software ImageJ®. The measurements taken were: crypt area, crypt perimeter, crypt lumen area, nuclei number and the cell size was indirectly calculated. Data were analyzed using general linear model and correlation procedures of SAS®. Results Efficient beef steers (low-RFI) have a greater cellularity (indicated by nuclei number) in the small intestinal crypts, both in duodenum and ileum, than less efficient beef steers (high-RFI) (P < 0.05). The mean values for the nuclei number of the low-RFI and high-RFI groups were 33.16 and 30.30 in the duodenum and 37.21 and 33.65 in the ileum, respectively. The average size of the cells did not differ between feed efficiency groups in both segments (P ≥ 0.10). A trend was observed (P ≤ 0.10) for greater crypt area and crypt perimeter in the ileum for cattle with improved feed efficiency. 
Conclusion Improved feed efficiency is associated with greater cellularity, with no difference in average cell size, in the crypts of the bovine small intestine. These observations suggest an increased energy demand by the small intestine despite the more desirable feed efficiency. PMID:23379622

  6. Implications for the missing low-mass galaxies (satellites) problem from cosmic shear

    NASA Astrophysics Data System (ADS)

    Jimenez, Raul; Verde, Licia; Kitching, Thomas D.

    2018-06-01

The number of observed dwarf galaxies, with dark matter mass ≲ 10^11 M⊙, in the Milky Way or the Andromeda galaxy does not agree with predictions from the successful ΛCDM paradigm. To alleviate this problem, a suppression of dark matter clustering power on very small scales has been conjectured. However, the abundance of dark matter halos outside our immediate neighbourhood (the Local Group) seems to agree with the ΛCDM-expected abundance. Here we connect these problems to observations of weak lensing cosmic shear, pointing out that cosmic shear can make significant statements about the missing satellites problem in a statistical way. As an example and pedagogical application we use recent constraints on small-scale power suppression from measurements of the CFHTLenS data. We find that, on average, in a region of ~Gpc^3 there is no significant small-scale power suppression. This implies that suppression of small-scale power is not a viable solution to the `missing satellites problem' or, alternatively, that on average in this volume there is no `missing satellites problem' for dark matter masses ≳ 5 × 10^9 M⊙. Further analysis of current and future weak lensing surveys will probe much smaller scales, k > 10 h Mpc^-1, corresponding roughly to masses M < 10^9 M⊙.

  7. Experimental Study of Heat Transfer to Small Cylinders in a Subsonic, High-temperature Gas Stream

    NASA Technical Reports Server (NTRS)

    Glawe, George E; Johnson, Robert C

    1957-01-01

A Nusselt-Reynolds number relation for cylindrical thermocouple wires in crossflow was obtained from the experimental determination of time constants. Tests were conducted in exhaust gas over a temperature range of 2000 to 3400 R, a Mach number range of 0.3 to 0.8, and a static-pressure range from 2/3 to 1-1/3 atmospheres, yielding a Reynolds number range of 450 to 3000. The correlation obtained is Nu = (0.428 ± 0.003)√Re*, with an average deviation of a single observation of 8.5 percent. This relation is the same as one previously reported for room-temperature conditions.
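The reported correlation is straightforward to evaluate numerically; a minimal sketch (the function name and the choice of sample Reynolds numbers are illustrative, taken from the endpoints of the tested range):

```python
import math

def nusselt(re: float, coeff: float = 0.428) -> float:
    """Nusselt number from the reported correlation Nu = 0.428 * sqrt(Re*)."""
    return coeff * math.sqrt(re)

# Evaluate at the endpoints of the tested Reynolds number range (450 to 3000)
print(round(nusselt(450), 2))   # -> 9.08
print(round(nusselt(3000), 2))  # -> 23.44
```

The reported ±0.003 uncertainty on the coefficient propagates to under 1% of Nu, smaller than the 8.5 percent scatter of a single observation.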

  8. Anticorrelation of X-ray bright points with sunspot number, 1970-1978

    NASA Technical Reports Server (NTRS)

    Golub, L.; Davis, J. M.; Krieger, A. S.

    1979-01-01

    Soft X-ray observations of the solar corona over the period 1970-1978 show that the number of small short-lived bipolar magnetic features (X-ray bright points) varies inversely with the sunspot index. During the entire period from 1973 to 1978 most of the magnetic flux emerging at the solar surface appeared in the form of bright points. In 1970, near the peak of solar cycle 20, the contributions from bright points and from active regions appear to be approximately equal. These observations strongly support an earlier suggestion that the solar cycle may be characterized as an oscillator in wave-number space with relatively little variation in the average total rate of flux emergence.

  9. STUDY OF HOME DEMONSTRATION UNITS IN A SAMPLE OF 27 COUNTIES IN NEW YORK STATE, NUMBER 3.

    ERIC Educational Resources Information Center

    ALEXANDER, FRANK D.; HARSHAW, JEAN

    AN EXPLORATORY STUDY EXAMINED CHARACTERISTICS OF 1,128 HOME DEMONSTRATION UNITS TO SUGGEST HYPOTHESES AND SCOPE FOR A MORE INTENSIVE STUDY OF A SMALL SAMPLE OF UNITS, AND TO PROVIDE GUIDANCE IN SAMPLING. DATA WERE OBTAINED FROM A SPECIALLY DESIGNED MEMBERSHIP CARD USED IN 1962. UNIT SIZE AVERAGED 23.6 MEMBERS BUT THE RANGE WAS FAIRLY GREAT. A NEED…

  10. Shape Shifting Satellites in Binary Near-Earth Asteroids: Do Meteoroid Impacts Play a Role in BYORP Orbital Evolution?

    NASA Technical Reports Server (NTRS)

    Rubincam, David Parry

    2012-01-01

Less-than-catastrophic meteoroid impacts over 10^5 years may change the shape of small rubble-pile satellites in binary NEAs, lengthening the average BYORP (binary Yarkovsky-Radzievskii-Paddack) timescale of orbital evolution. An estimate of shape-shifting meteoroid fluxes gives values close enough to causing random walks in the semimajor axes of binary systems to warrant further investigation.

  11. Fish Habitat and Fish Populations in a Southern Appalachian Watershed before and after Hurricane Hugo

    Treesearch

    C. Andrew Dolloff; Patricia A. Flebbe; Michael D. Owen

    1994-01-01

    Habitat features and relative abundance of all fish species were estimated in 8.4 km of a small mountain stream system before and 11 months after Hurricane Hugo crossed the southern Appalachians in September 1989. There was no change in the total amount (area) of each habitat type but the total number of habitat units decreased and average size and depth of habitat...

  12. Frequency of Flooding Along U.S. Coasts, 2010-2015 Versus 1950-1959

    EPA Pesticide Factsheets

    This map shows the average number of days per year in which coastal waters rose above the local threshold for minor flooding at 27 sites along the U.S. coast. Each small bar graph compares the first decade of widespread measurements (the 1950s in orange) with the most recent decade (the 2010s in purple). For more information: www.epa.gov/climatechange/science/indicators

  13. Nucleation of Crystals From Solution in Microgravity (USML-1 Glovebox (GBX) Investigation)

    NASA Technical Reports Server (NTRS)

    Kroes, Roger L.; Reiss, Donald A.; Lehoczky, Sandor L.

    1994-01-01

A new method for initiating nucleation from solutions in microgravity which avoids nucleation on container walls and other surfaces is described. This method consists of injecting a small quantity of highly concentrated, heated solution into the interior of a lightly supersaturated, cooler host growth solution. It was tested successfully on USML-1, producing a large number of LAP crystals whose longest dimension averaged 1 mm.

  14. Improving consensus structure by eliminating averaging artifacts

    PubMed Central

    KC, Dukka B

    2009-01-01

Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. 
Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which could also benefit from our approach. PMID:19267905
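The kind of artifact this abstract targets is easy to reproduce in miniature: naively averaging the coordinates of two conformations contracts bond lengths. A minimal two-atom sketch (not the authors' code; the 90-degree rotation is an arbitrary illustrative choice):

```python
import math

# Two conformations of the same unit-length "bond", rotated 90 degrees
# apart about the first atom.
conf1 = [(0.0, 0.0), (1.0, 0.0)]
conf2 = [(0.0, 0.0), (0.0, 1.0)]

# Naive coordinate averaging over the ensemble...
avg = [((x1 + x2) / 2, (y1 + y2) / 2)
       for (x1, y1), (x2, y2) in zip(conf1, conf2)]

# ...shrinks the bond from 1.0 to sqrt(0.5) ~ 0.707: an unphysical geometry.
print(math.dist(avg[0], avg[1]))
```

This is exactly the local-geometry distortion that the harmonic pseudo-energy refinement described above is designed to remove.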

  15. Regulated and unregulated emissions from highway heavy-duty diesel engines complying with U.S. Environmental Protection Agency 2007 emissions standards.

    PubMed

    Khalek, Imad A; Bougher, Thomas L; Merritt, Patrick M; Zielinska, Barbara

    2011-04-01

As part of the Advanced Collaborative Emissions Study (ACES), regulated and unregulated exhaust emissions from four different 2007 model year U.S. Environmental Protection Agency (EPA)-compliant heavy-duty highway diesel engines were measured on an engine dynamometer. The engines were equipped with exhaust high-efficiency catalyzed diesel particle filters (C-DPFs) that are actively regenerated or cleaned using the engine control module. Regulated emissions of carbon monoxide, nonmethane hydrocarbons, and particulate matter (PM) were on average 97, 89, and 86% lower than the 2007 EPA standard, respectively, and oxides of nitrogen (NOx) were on average 9% lower. Unregulated exhaust emissions of nitrogen dioxide (NO2) were, on average, 1.3 and 2.8 times higher than the NO2 emissions reported in previous work using 1998- and 2004-technology engines, respectively. However, compared with other work performed on 1994- to 2004-technology engines, average emission reductions in the range of 71-99% were observed for a very comprehensive list of unregulated engine exhaust pollutants and air toxic contaminants that included metals and other elements, elemental carbon (EC), inorganic ions, and gas- and particle-phase volatile and semi-volatile organic carbon (OC) compounds. The low PM mass emitted from the 2007 technology ACES engines was composed mainly of sulfate (53%) and OC (30%), with a small fraction of EC (13%) and metals and other elements (4%). The fraction of EC is expected to remain small, regardless of engine operation, because of the presence of the high-efficiency C-DPF in the exhaust. This is different from the typical PM composition of pre-2007 engines, with EC in the range of 10-90% depending on engine operation. Most of the particles emitted from the 2007 engines were mainly volatile nuclei mode in the sub-30-nm size range. 
An increase in volatile nanoparticles was observed during C-DPF active regeneration, during which the observed particle number was similar to that observed in emissions of pre-2007 engines. However, on average, when combining engine operation with and without active regeneration events, particle number emissions with the 2007 engines were 90% lower than the particle number emitted from a 2004-technology engine tested in an earlier program.

  16. Modeling level of urban taxi services using neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Wong, S.C.; Tong, C.O.

    1999-05-01

This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables including number of licensed taxis, incremental charge of taxi fare, average occupied taxi journey time, average disposable income, population, and customer price index; the output consists of a set of endogenous variables including daily taxi passenger demand, passenger waiting time, vacant taxi headway, average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of the estimation accuracy are made between the neural network model and the simultaneous equations model. The results show that the neural network-based macro taxi model can obtain much more accurate information on the taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by the regulator to assist with decisions concerning the restriction on the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.

  17. High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.

    PubMed

    Haxton, Thomas K

    2015-03-10

    We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.

  18. Intestinal coccidia (Apicomplexa: Eimeriidae) of Brazilian lizards. Eimeria carmelinoi n.sp., from Kentropyx calcarata and Acroeimeria paraensis n.sp. from Cnemidophorus lemniscatus lemniscatus (Lacertilia: Teiidae).

    PubMed

    Lainson, Ralph

    2002-03-01

Eimeria carmelinoi n.sp. is described in the teiid lizard Kentropyx calcarata Spix, 1825 from north Brazil. Oocysts subspherical to spherical, averaging 21.25 x 20.15 μm. Oocyst wall smooth, colourless and devoid of striae or micropyle. No polar body or conspicuous oocystic residuum, but frequently a small number of fine granules in Brownian movement. Sporocysts, averaging 10.1 x 9 μm, are without a Stieda body. Endogenous stages characteristic of the genus: intra-cytoplasmic, within the epithelial cells of the ileum and above the host cell nucleus. A re-description is given of a parasite previously described as Eimeria cnemidophori, in the teiid lizard Cnemidophorus lemniscatus lemniscatus. A study of the endogenous stages in the ileum necessitates renaming this coccidian as Acroeimeria cnemidophori (Carini, 1941) nov.comb., and suggests that Acroeimeria pintoi Lainson & Paperna, 1999 in the teiid Ameiva ameiva is a synonym of A. cnemidophori. A further intestinal coccidian, Acroeimeria paraensis n.sp., is described in C. l. lemniscatus, frequently as a mixed infection with A. cnemidophori. Mature oocysts, averaging 24.4 x 21.8 μm, have a single-layered, smooth, colourless wall with no micropyle or striae. No polar body, but the frequent presence of a small number of fine granules exhibiting Brownian movements. Sporocysts average 9 x 8 μm, without a Stieda body. Endogenous stages epicytoplasmic, characteristic of the genus, in the upper ileum. The importance of a study of the endogenous stages of eimeriid coccidia is discussed.

  19. Detecting Small Amounts of Gene Flow from Phylogenies of Alleles

    PubMed Central

    Slatkin, M.

    1989-01-01

    The method of coalescents is used to find the probability that none of the ancestors of alleles sampled from a population are immigrants. If that is the case for samples from two or more populations, then there would be concordance between the phylogenies of those alleles and the geographic locations from which they are drawn. This type of concordance has been found in several studies of mitochondrial DNA from natural populations. It is shown that if the number of sequences sampled from each population is reasonably large (10 or more), then this type of concordance suggests that the average number of individuals migrating between populations is likely to be relatively small (Nm < 1) but the possibility of occasional migrants cannot be excluded. The method is applied to the data of E. Bermingham and J. C. Avise on mtDNA from the bowfin, Amia calva. PMID:2714639

  20. Deterministic versus stochastic model of reprogramming: new evidence from cellular barcoding technique

    PubMed Central

    Yunusova, Anastasia M.; Fishman, Veniamin S.; Vasiliev, Gennady V.

    2017-01-01

Factor-mediated reprogramming of somatic cells towards pluripotency is a low-efficiency process during which only small subsets of cells are successfully reprogrammed. Previous analyses of the determinants of the reprogramming potential are based on average measurements across a large population of cells or on monitoring a relatively small number of single cells with live imaging. Here, we applied lentiviral genetic barcoding, a powerful tool enabling the identification of familial relationships in thousands of cells. High-throughput sequencing of barcodes from successfully reprogrammed cells revealed a significant number of barcodes from related cells. We developed a computer model, according to which the probability of synchronous reprogramming of sister cells is 10–30%. We conclude that reprogramming success is pre-established in some particular cells and, being a heritable trait, can be maintained through cell division. Thus, reprogramming progresses in a deterministic manner, at least at the level of cell lineages. PMID:28446707

  1. Kinematic dynamo action in square and hexagonal patterns.

    PubMed

    Favier, B; Proctor, M R E

    2013-11-01

    We consider kinematic dynamo action in rapidly rotating Boussinesq convection just above onset. The velocity is constrained to have either a square or a hexagonal pattern. For the square pattern, large-scale dynamo action is observed at onset, with most of the magnetic energy being contained in the horizontally averaged component. As the magnetic Reynolds number increases, small-scale dynamo action becomes possible, reducing the overall growth rate of the dynamo. For the hexagonal pattern, the breaking of symmetry between up and down flows results in an effective pumping velocity. For intermediate rotation rates, this additional effect can prevent the growth of any mean-field dynamo, so that only a small-scale dynamo is eventually possible at large enough magnetic Reynolds number. For very large rotation rates, this pumping term becomes negligible, and the dynamo properties of square and hexagonal patterns are qualitatively similar. These results hold for both perfectly conducting and infinite magnetic permeability boundary conditions.

  2. Floristic summary of 22 National Parks in the Midwestern United States

    USGS Publications Warehouse

    Bennett, J.P.

    1996-01-01

Biological diversity is studied at many geographical scales, but specimen collecting is invariably done at a local level. Collecting of animal and plant specimens leads to the compilation of checklists for multiple small areas, which are sometimes merged to produce larger, regional checklists. Such an approach was employed to study the regional vascular flora of 22 national parks of the midwestern United States. Total number of plant taxa (species level and below) ranged from 86 at Hopewell Culture National Historical Park to 1,399 at Indiana Dunes National Lakeshore and averaged 520 per park. Infraspecific taxa were 12% or less of all taxa at all parks and averaged 7%. Genera per park ranged from 70 to 562, and families ranged from 41 to 145. Non-native species averaged 95 per park, or, on average, about 27% of the total number of taxa per park. The aggregated regional flora contained just over 2,900 taxa, 828 genera and 160 families. Eleven percent of the taxa were below the species level. Almost 17% of the taxa were non-native, a relatively large percentage, but not out of the range of percentages reported in the literature. The observed and estimated numbers of taxa for this region were in good agreement with other estimates for these latitudes and for a standard regional size. However, the parks do not represent their respective state floras very well when they are aggregated at that scale. Indiana was the best represented state, with 65% of the state flora found in the parks, while only 25% of each state's flora was represented by parks in Iowa, Kansas, and Nebraska; the average representation was only 42%.

  3. Occurrence and sources of Escherichia coli in metropolitan St. Louis streams, October 2004 through September 2007

    USGS Publications Warehouse

    Wilkison, Donald H.; Davis, Jerri V.

    2010-01-01

The occurrence and sources of Escherichia coli (E. coli), one of several fecal indicator bacteria, in metropolitan St. Louis streams known to receive nonpoint source runoff, occasional discharges from combined and sanitary sewers, and treated wastewater effluent were investigated from October 2004 through September 2007. Three Missouri River sites, five Mississippi River sites, and six small basin tributary stream sites were sampled during base flow and storm events for the presence of E. coli and their sources. E. coli host-source determinations were conducted using local library-based genotypic methods. Human fecal contamination in stream samples was additionally confirmed by the presence of Bacteroides thetaiotaomicron, an anaerobic, enteric bacterium with a high occurrence in, and specificity to, humans. Missouri River E. coli densities and loads during base flow were approximately 10 times greater than those in the Mississippi River above its confluence with the Missouri River. Although substantial amounts of E. coli originated from within the study area during base flow and storm events, considerable amounts of E. coli in the Missouri River, as well as in the middle Mississippi River sections downstream from its confluence with the Missouri River, originated in Missouri River reaches upstream from the study area. In lower Mississippi River reaches, bacteria contributions from the numerous combined and sanitary sewer overflows within the study area, as well as contributions from nonpoint source runoff, greatly increased instream E. coli densities. Although other urban factors cannot be discounted, average E. coli densities in streams were strongly correlated with the number of upstream combined and sanitary sewer overflow points and the percentage of upstream impervious cover. Small basin sites with the greatest number of combined and sanitary sewer overflows (Maline Creek and the River des Peres) had larger E. coli densities, larger loads, and a greater percentage of E. coli attributable to humans than other small basin sites; however, even though small basin E. coli densities typically were much larger than in large river receiving streams, small basins contributed, on average, only a small part (a maximum of 16 percent) of the total E. coli load to larger rivers. On average, approximately one-third of E. coli in metropolitan St. Louis streams was identified as originating from humans. Another one-third of the E. coli was determined to have originated from unidentified sources; dogs and geese contributed lesser amounts, 10 and 20 percent, of the total instream bacteria. Sources of E. coli were largely independent of hydrologic conditions, an indication that sources remained relatively consistent with time.

  4. A new measure based on degree distribution that links information theory and network graph analysis

    PubMed Central

    2012-01-01

    Background Detailed connection maps of human and nonhuman brains are being generated with new technologies, and graph metrics have been instrumental in understanding the general organizational features of these structures. Neural networks appear to have small world properties: they have clustered regions, while maintaining integrative features such as short average pathlengths. Results We captured the structural characteristics of clustered networks with short average pathlengths through our own variable, System Difference (SD), which is computationally simple and calculable for larger graph systems. SD is a Jaccardian measure generated by averaging all of the differences in the connection patterns between any two nodes of a system. We calculated SD over large random samples of matrices and found that high SD matrices have a low average pathlength and a larger number of clustered structures. SD is a measure of degree distribution with high SD matrices maximizing entropic properties. Phi (Φ), an information theory metric that assesses a system’s capacity to integrate information, correlated well with SD - with SD explaining over 90% of the variance in systems above 11 nodes (tested for 4 to 13 nodes). However, newer versions of Φ do not correlate well with the SD metric. Conclusions The new network measure, SD, provides a link between high entropic structures and degree distributions as related to small world properties. PMID:22726594
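The abstract does not give the exact formula for SD, but a Jaccard-style "average difference between the connection patterns of any two nodes" can be sketched as follows; the normalization by the union of the two patterns, and the toy adjacency matrix, are assumptions for illustration:

```python
from itertools import combinations

def system_difference(adj):
    """Assumed Jaccardian sketch of SD: for every node pair, the fraction
    of positions where the two rows' connection patterns differ (relative
    to the union of their connections), averaged over all pairs."""
    pair_diffs = []
    for i, j in combinations(range(len(adj)), 2):
        differing = sum(a != b for a, b in zip(adj[i], adj[j]))
        union = sum(1 for a, b in zip(adj[i], adj[j]) if a or b)
        pair_diffs.append(differing / union if union else 0.0)
    return sum(pair_diffs) / len(pair_diffs)

# Toy undirected 4-node adjacency matrix (no self-loops)
adj = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
]
print(round(system_difference(adj), 3))  # -> 0.833
```

As the abstract notes, the appeal of a measure in this form is that it needs only pairwise row comparisons, so it stays computable for much larger systems than Φ.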

  5. Sensitivity of Noncontrast Computed Tomography for Small Renal Calculi With Endoscopy as the Gold Standard.

    PubMed

    Bhojani, Naeem; Paonessa, Jessica E; El Tayeb, Marawan M; Williams, James C; Hameed, Tariq A; Lingeman, James E

    2018-04-03

    To compare the sensitivity of noncontrast computed tomography (CT) with endoscopy for detection of renal calculi. Imaging modalities for detection of nephrolithiasis have centered on abdominal x-ray, ultrasound, and noncontrast CT. Sensitivities of 58%-62% (abdominal x-ray), 45% (ultrasound), and 95%-100% (CT) have been previously reported. However, these results have never been correlated with endoscopic findings. Idiopathic calcium oxalate stone formers with symptomatic calculi requiring ureteroscopy were studied. At the time of surgery, the number and the location of all calculi within the kidney were recorded followed by basket retrieval. Each calculus was measured and sent for micro-CT and infrared spectrophotometry. All CT scans were reviewed by the same genitourinary radiologist who was blinded to the endoscopic findings. The radiologist reported on the number, location, and size of each calculus. Eighteen renal units were studied in 11 patients. Average time from CT scan to ureteroscopy was 28.6 days. The mean number of calculi identified per kidney was 9.2 ± 6.1 for endoscopy and 5.9 ± 4.1 for CT (P <.004). The mean size of total renal calculi (sum of the longest stone diameters) per kidney was 22.4 ± 17.1 mm and 18.2 ± 13.2 mm for endoscopy and CT, respectively (P = .06). CT scan underreports the number of renal calculi, probably missing some small stones and being unable to distinguish those lying in close proximity to one another. However, the total stone burden seen by CT is, on average, accurate when compared with that found on endoscopic examination. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Accounting for disagreements on average cone loss rates in retinitis pigmentosa with a new kinetic model: Its relevance for clinical trials.

    PubMed

    Baumgartner, W A; Baumgartner, A M

    2016-04-01

Since 1985, at least nine studies of the average rate of cone loss in retinitis pigmentosa (RP) populations have yielded conflicting average rate constant values (-k), differing by 90-160%. This is surprising, since, except for the first two investigations, the Harvard or Johns Hopkins protocols used in these studies were identical with respect to: use of the same exponential decline model, calculation of average -k from individual patient k values, monitoring patients over similarly large time frames, and excluding data exhibiting floor and ceiling effects. A detailed analysis of Harvard's and Hopkins' protocols and data revealed two subtle differences: (i) Hopkins' use of half-life t0.5 (or t(1/e)) for expressing patient cone-loss rates rather than k as used by Harvard; (ii) Harvard obtaining substantially more +k from improving fields due to dormant-cone recovery effects and "small -k" values than Hopkins ("small -k" is defined as less than -0.040 year(-1)), e.g., 16% +k, 31% small -k, vs. Hopkins' 3% and 6% respectively. Since t0.5 = 0.693/k, it follows that when k = 0, or is very small, t0.5 (or t(1/e)) is respectively infinity or a very large number. This unfortunate mathematical property (which also prevents construction of t0.5 (or t(1/e)) histograms spanning -k to +k) caused Hopkins to delete all "small -k" and all +k due to "strong leverage". Naturally this contributed to Hopkins' larger average -k. Difference (ii) led us to re-evaluate the Harvard/Hopkins exponential unchanging -k model. In its place we propose a model of increasing biochemical stresses from dying rods on cones during RP progression: increasing oxidative stresses and trophic factor deficiencies (e.g., RdCVF), and RPE malfunction. Our kinetic analysis showed rod loss to follow exponential kinetics with unchanging -k due to constant genetic stresses, thereby providing a theoretical basis for Clarke et al.'s empirical observation of such kinetics with eleven animal models of RP. 
In contrast to this, we show that cone loss occurs in patients with increasing -k values during RP progression. And as the Hopkins' protocol selects more advanced RP cases than Harvard's to assure avoidance of ceiling effects (Harvard does this by kinetic monitoring), we show increasing -k kinetics to be the reason Harvard obtains more +k and small -k values. Thus the combined effects of (i) and (ii) produce Harvard's smaller average -k value. The relevance of the increasing biochemical stress model for optimizing clinical trials is discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
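The mathematical property at issue follows directly from t0.5 = 0.693/k: as k shrinks toward zero, the half-life diverges. A minimal numeric illustration (the sample k values are hypothetical, chosen only to contrast a typical -k with a "small -k" below 0.040 year^-1):

```python
import math

def half_life(k: float) -> float:
    """Half-life of an exponential decline with rate constant k (per year)."""
    return math.log(2) / k  # ln(2) = 0.693...

print(round(half_life(0.10), 1))    # a typical -k: ~6.9 years
print(round(half_life(0.005), 1))   # a "small -k": ~138.6 years
# half_life(0.0) would divide by zero: t0.5 is undefined (infinite) at k = 0,
# which is why half-life-based protocols must discard small and zero k values.
```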

  7. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with counts so small that they fall outside the applicability range of the normal distribution. The method is demonstrated on numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.
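The abstract does not spell out the estimator, but for small-count (Poisson-like) events a time-dependent rate can be sketched with a windowed maximum-likelihood estimate, which for a Poisson model is simply the window mean. This is a generic sketch, not the authors' formulation; the window length and data are illustrative:

```python
def sliding_poisson_rate(counts, window=5):
    """Windowed MLE of a time-varying Poisson rate: the sample mean of the
    counts in each window (the MLE of lambda for i.i.d. Poisson data)."""
    return [sum(counts[t:t + window]) / window
            for t in range(len(counts) - window + 1)]

# Small-count series whose underlying rate steps up halfway through
counts = [0, 1, 0, 2, 1, 1, 3, 2, 4, 3]
print(sliding_poisson_rate(counts))  # -> [0.8, 1.0, 1.4, 1.8, 2.2, 2.6]
```

With such small counts, a Gaussian approximation to the per-window uncertainty would be poor, which is the regime the abstract's method targets.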

  8. A New Correlation of Large Earthquakes Along the Southern San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Scharer, K. M.; Weldon, R. J.; Biasi, G. P.

    2010-12-01

There are now three sites on the southern San Andreas fault (SSAF) with records of 10 or more dated ground rupturing earthquakes (Frazier Mountain, Wrightwood and Pallett Creek) and at least seven other sites with 3-5 dated events. Numerous sites have related information including geomorphic offsets caused by 1 to a few earthquakes, a known amount of slip spanning a specific interval of time or number of earthquakes, or the number (but not necessarily the exact ages) of earthquakes in an interval of time. We use this information to construct a record of recent large earthquakes on the SSAF. Strongly overlapping C-14 age ranges, especially between closely spaced sites like Pallett Creek and Wrightwood on the Mojave segment and Thousand Palms, Indio, Coachella and Salt Creek on the southernmost 100 km of the fault, and overlap between the more distant Frazier Mountain and Bidart Fan sites on the northernmost part of the fault, suggest that the paleoseismic data are robust and can be explained by a relatively small number of events that span substantial portions of the fault. This is consistent with the extent of rupture of the two historic events (1857 was ~300 km long and 1812 was 100-200 km long); slip per event data that average 3-5 m per event at most sites; and the long historical hiatus since 1857. While some sites have smaller offsets for individual events, correlation between sites suggests that many small offsets are near the end of long ruptures. While the long event series on the Mojave are quasi-periodic, individual intervals range over about an order of magnitude, from a few decades up to ~200 years. This wide range of intervals and the apparent anti-slip-predictable behavior of ruptures (small intervals are not followed by small events) suggest weak clustering, or periods of time spanning multiple intervals when strain release is higher or lower than average. 
These properties defy the application of simple hazard analysis but need to be understood to properly forecast hazard along the fault.

  9. Cluster size dependence of high-order harmonic generation

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Hagmeijer, R.; Bastiaens, H. M. J.; Goh, S. J.; van der Slot, P. J. M.; Biedron, S. G.; Milton, S. V.; Boller, K.-J.

    2017-08-01

    We investigate high-order harmonic generation (HHG) from noble gas clusters in a supersonic gas jet. To identify the contribution of harmonic generation from clusters versus that from gas monomers, we measure the high-order harmonic output over a broad range of the total atomic number density in the jet (from 3 × 10¹⁶ to 3 × 10¹⁸ cm⁻³) at two different reservoir temperatures (303 and 363 K). For the first time in the evaluation of the harmonic yield in such measurements, the variation of the liquid mass fraction, g, versus pressure and temperature is taken into consideration; we determine it, reliably and consistently, to be below 20% within our range of experimental parameters. By comparing the measured harmonic yield from a thin jet with the calculated corresponding yield from monomers alone, we find an increased emission of the harmonics when the average cluster size is less than 3000. Using g, under the assumption that the emission from monomers and clusters adds up coherently, we calculate the ratio of the average single-atom response of an atom within a cluster to that of a monomer and find an enhancement of around 100 for very small average cluster sizes (∼200). We do not find any dependence of the cut-off frequency on the composition of the cluster jet. This implies that HHG in clusters is based on electrons that return to their parent ions and not to neighboring ions in the cluster. To fully exploit the enhanced average single-atom response found for small average cluster sizes (∼200), the nozzle producing the cluster jet must provide a large liquid mass fraction at these small cluster sizes to increase the harmonic yield. Moreover, cluster jets may allow for quasi-phase matching, as the higher mass of clusters allows for a higher density contrast in spatially structuring the nonlinear medium.

  10. Reducing duplex examinations in patients with iatrogenic pseudoaneurysms.

    PubMed

    Stone, Patrick A; Aburahma, Ali F; Flaherty, Sarah K

    2006-06-01

    Ultrasound-guided thrombin injection has become the initial treatment of choice for femoral access-related pseudoaneurysms. Patients typically undergo serial duplex examinations to assess for spontaneous resolution of small iatrogenic pseudoaneurysms (IPSAs) (<2.5 cm), or may require repeated diagnostic, therapeutic, and follow-up studies for larger IPSAs (>2.5 cm). We evaluated the impact of a revised treatment algorithm that includes primary treatment of both small (<2.5 cm) and larger pseudoaneurysms (>2.5 cm), rather than observation of smaller ones, and attempts to establish a single duplex examination via a point-of-care treatment strategy. We reviewed 105 consecutive patients treated with ultrasound-guided thrombin injection from July 2001 through September 2004. Patient and IPSA characteristics and treatment methods were examined. The number of duplex examinations per patient was evaluated over the treatment interval. Also, published cost data were used to compare primary treatment of small IPSAs vs observation with serial duplex examinations. Successful thrombosis occurred in 103 (98.1%) of 105 treated pseudoaneurysms. No minor or major complications occurred after thrombin injection in either small or large IPSAs, and both failures requiring operation were in the large aneurysm group. The recurrence rate for the series was 1.9% (2/105), and both recurrences were successfully treated with an additional thrombin injection. A single injection was successful in treating 43 (97.7%) of 44 small (<2.5 cm) IPSAs, and one required a second injection. Patients had an average of 3.3 duplex examinations in our first year of treatment experience, which declined to 1.5 by our third year with the institution of a point-of-care service model for all pseudoaneurysms. 
    Based on this decreased use of duplex examination and an average treatment cohort of 35 IPSA patients per year at our institution, we determined that this results in a reduction of 35 hours of laboratory time and nearly 70 ultrasounds per year. Similarly, for small pseudoaneurysms, a point-of-care primary treatment program rather than observation results in an estimated cost savings of $12,000, based on treating 15 small IPSAs per year. Ultrasound-guided thrombin injection is safe and effective for the treatment of nearly all iatrogenic pseudoaneurysms. We recommend primary treatment of small pseudoaneurysms by ultrasound-guided thrombin injection rather than observation with serial duplex scans. A point-of-care treatment algorithm can result in cost savings by reducing the number of necessary duplex examinations.

  11. An Investigation of the Differences and Similarities between Generated Small-World Networks for Right- and Left-Hand Motor Imageries.

    PubMed

    Zhang, Jiang; Li, Yuyao; Chen, Huafu; Ding, Jurong; Yuan, Zhen

    2016-11-04

    In this study, small-world network analysis was performed to identify the similarities and differences between functional brain networks for right- and left-hand motor imageries (MIs). First, Pearson correlation coefficients among the nodes within the functional brain networks from healthy subjects were calculated. Then, small-world network indicators, including the clustering coefficient, the average path length, the global efficiency, the local efficiency, the average node degree, and the small-world index, were generated for the functional brain networks during both right- and left-hand MIs. We identified large differences in the small-world network indicators between the functional networks during MI and the corresponding random networks. More importantly, the functional brain networks underlying the right- and left-hand MIs exhibited similar small-world properties in terms of the clustering coefficient, the average path length, the global efficiency, and the local efficiency. By contrast, the right- and left-hand MI brain networks showed differences in small-world characteristics, including indicators such as the average node degree and the small-world index. Interestingly, our findings also suggested that the differences in the activity intensity and range, the average node degree, and the small-world index of brain networks between the right- and left-hand MIs were associated with the asymmetry of brain functions.
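
The small-world indicators named above can be computed directly. Below is a minimal pure-Python sketch on a toy ring lattice with shortcut edges, using the common random-graph baselines C_rand ≈ ⟨k⟩/n and L_rand ≈ ln n / ln⟨k⟩ for the small-world index; the paper's exact estimators and networks may differ.

```python
import math
from collections import deque

def clustering(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nbrs in adj.items():
        nb = list(nbrs)
        k = len(nb)
        if k < 2:
            continue  # nodes with fewer than 2 neighbours contribute 0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Average shortest path length over ordered node pairs (BFS per node)."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

# toy network: ring lattice (each node linked to its 4 nearest
# neighbours) plus two shortcut edges
n = 12
adj = {v: set() for v in range(n)}
for v in range(n):
    for d in (1, 2):
        adj[v].add((v + d) % n)
        adj[(v + d) % n].add(v)
for a, b in [(0, 6), (3, 9)]:
    adj[a].add(b)
    adj[b].add(a)

C = clustering(adj)
L = avg_path_length(adj)
k_avg = sum(len(nb) for nb in adj.values()) / n
C_rand = k_avg / n                      # random-graph baselines
L_rand = math.log(n) / math.log(k_avg)
sigma = (C / C_rand) / (L / L_rand)     # small-world index
```

A graph qualifies as small-world when sigma > 1, i.e. it is much more clustered than a random graph while keeping comparably short paths.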

  12. Alignment error envelopes for single particle analysis.

    PubMed

    Jensen, G J

    2001-01-01

    To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 Å) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.
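
The product-of-envelopes idea can be sketched numerically. The Gaussian form exp(-2π²σ²s²) below is a standard amplitude envelope for Gaussian-distributed translational misalignment; the σ values are hypothetical and stand in for the paper's five derived envelopes.

```python
import math

def gaussian_envelope(s, sigma):
    # amplitude attenuation at spatial frequency s (1/Angstrom) caused by
    # Gaussian-distributed translational misalignment with std sigma (Angstrom)
    return math.exp(-2 * math.pi ** 2 * sigma ** 2 * s ** 2)

s = 1 / 3.5                   # spatial frequency at the 3.5 Angstrom target
sigmas = [0.3, 0.4, 0.25]     # hypothetical per-parameter error scales
envelopes = [gaussian_envelope(s, sg) for sg in sigmas]
total = math.prod(envelopes)  # compound effect = product of envelopes
```

Because each envelope is below 1, the compound attenuation is always worse than the single worst error source, which is why each alignment tolerance must be tight for near-atomic resolution.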

  13. Number and impact of published scholarly works by pharmacy practice faculty members at accredited US colleges and schools of pharmacy (2001-2003).

    PubMed

    Coleman, Craig I; Schlesselman, Lauren S; Lao, Eang; White, C Michael

    2007-06-15

    To evaluate the quantity and quality of published literature produced by pharmacy practice faculty members in US colleges and schools of pharmacy for the years 2001-2003. The Web of Science bibliographic database was used to identify publication citations for the years 2001-2003, which were then evaluated in a number of different ways. Faculty members were identified using American Association of Colleges of Pharmacy rosters for the 2000-2001, 2001-2002, and 2002-2003 academic years. Two thousand three hundred seventy-four pharmacy practice faculty members generated 1,896 publications in Web of Science-searchable journals. A small number of faculty members (2.1%) were responsible for a large proportion of publications (30.6%), and only 4.9% of faculty members published 2 or more publications in these journals per year. The average impact factor for the top 200 publications was 7.6. Pharmacy practice faculty members contributed substantially to the biomedical literature, and their work has had an important impact. A substantial portion of this work has come from a small subset of faculty members.

  14. Soliton concepts and protein structure

    NASA Astrophysics Data System (ADS)

    Krokhotin, Andrei; Niemi, Antti J.; Peng, Xubiao

    2012-03-01

    Structural classification shows that the number of different protein folds is surprisingly small. It also appears that proteins are built in a modular fashion from a relatively small number of components. Here we propose that the modular building blocks are made of the dark soliton solution of a generalized discrete nonlinear Schrödinger equation. We find that practically all protein loops can be obtained simply by scaling the size and by joining together a number of copies of the soliton, one after another. The soliton has only two loop-specific parameters, and we compute their statistical distribution in the Protein Data Bank (PDB). We explicitly construct a collection of 200 sets of parameters, each determining a soliton profile that describes a different short loop. The ensuing profiles cover practically all those proteins in PDB that have a resolution which is better than 2.0 Å, with a precision such that the average root-mean-square distance between the loop and its soliton is less than the experimental B-factor fluctuation distance. We also present two examples that describe how the loop library can be employed both to model and to analyze folded proteins.
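
For context, the continuum defocusing nonlinear Schrödinger equation admits the textbook dark soliton shown below; the paper's generalized discrete equation contains additional terms, so this is background rather than the authors' exact model:

```latex
i\,\partial_t\psi + \partial_x^2\psi - 2|\psi|^2\psi = 0,
\qquad
\psi(x,t) = A\,\tanh(Ax)\,e^{-2iA^2 t},
```

where the soliton is a localized dip (at $x=0$) in an otherwise constant background of amplitude $A$. The discrete analogue replaces $\partial_x^2\psi$ by the lattice Laplacian $\psi_{n+1}+\psi_{n-1}-2\psi_n$, and the paper's loop-building blocks are dark-soliton profiles of such a generalized discrete equation.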

  15. Experimental investigation of heat transfer and flow pattern from heated horizontal rectangular fin array under natural convection

    NASA Astrophysics Data System (ADS)

    Taji, S. G.; Parishwad, G. V.; Sane, N. K.

    2014-07-01

    This paper presents results of an experimental study conducted on a heated horizontal rectangular fin array under natural convection. Temperature mapping and prediction of the flow patterns over the fin array with variable fin spacing are carried out. The dimensionless fin spacing to height (S/H) ratio is varied from 0.05 to 0.3, and the length to height ratio (L/H) = 5 is kept constant. The heater input to the fin array assembly is varied from 25 to 100 W. The single chimney flow pattern is observed from 8 to 12 mm fin spacing. The end flow is choked below 6 mm fin spacing, and the single chimney flow pattern changes to sliding or end flow choking at 6 mm fin spacing. The average heat transfer coefficient (ha) is very small (2.52-5.78 W/m² K) at 100 W for S = 5-12 mm, and smaller still (1.12-1.8 W/m² K) at 100 W for 2-4 mm fin spacing due to the choked fin array end condition. The end flow is not sufficient to reach the central portion of the fin array, and in the middle portion there is an unsteady down-and-up flow pattern resulting in a sliding chimney. The central bottom portion of the fin array channel does not contribute much to heat dissipation for S = 2-4 mm. The ha improves significantly at higher spacings compared to the lower-spacing region. The single chimney flow pattern is preferred from a heat transfer point of view, and the optimum spacing is confirmed to be in the range of 8-10 mm. The average heat transfer results are compared with previous literature and show a similar trend and satisfactory agreement. An empirical equation has been proposed to correlate the average Nusselt number as a function of the Grashof number and the fin spacing to height ratio; the average error for this equation is -0.32 %.

  16. The evolution of ovule number and flower size in wind-pollinated plants.

    PubMed

    Friedman, Jannice; Barrett, Spencer C H

    2011-02-01

    In angiosperms, ovules are "packaged" within individual flowers, and an optimal strategy should occur depending on pollination and resource conditions. In animal-pollinated species, wide variation in ovule number per flower occurs, and this contrasts with wind-pollinated plants, where most species possess uniovulate flowers. This pattern is usually explained as an adaptive response to low pollen receipt in wind-pollinated species. Here, we develop a phenotypic model for the evolution of ovule number per flower that incorporates the aerodynamics of pollen capture and a fixed resource pool for provisioning of flowers, ovules, and seeds. Our results challenge the prevailing explanation for the association between uniovulate flowers and wind pollination. We demonstrate that when flowers are small and inexpensive, as they are in wind-pollinated species, ovule number should be minimized and lower than the average number of pollen tubes per style, even under stochastic pollination and fertilization regimes. The model predicts that plants benefit from producing many small inexpensive flowers, even though some flowers capture too few pollen grains to fertilize their ovules. Wind-pollinated plants with numerous flowers distributed throughout the inflorescence, each with a single ovule or a few ovules, sample more of the airstream, and this should maximize pollen capture and seed production.

  17. Size-resolved particle emission factors for individual ships

    NASA Astrophysics Data System (ADS)

    Jonsson, Åsa M.; Westerlund, Jonathan; Hallquist, Mattias

    2011-07-01

    In these experiments, size-resolved emission factors for particle number (EFPN) and mass (EFPM) were determined for 734 individual ship passages under real-world dilution. The method used is extractive sampling of the passing ship plumes, in which particle number/mass and CO2 were measured with high time resolution (1 Hz). The measurements were conducted on a small island located at the entrance to the port of Gothenburg (N57.6849, E11.838), the largest harbor in Scandinavia, which lies in an emission control area (ECA) and in close vicinity to populated areas. The average EFPN and EFPM were 2.55 ± 0.11 × 10¹⁶ (kg fuel)⁻¹ and 2050 ± 110 mg (kg fuel)⁻¹, respectively. The determined EFs for ships with multiple passages showed high reproducibility. Size-resolved EFPN peaked at small particle sizes, ∼35 nm. Smaller particle sizes, and hence less mass, were observed for a gas-turbine-equipped ship compared to diesel-engine-equipped ships. On average, 36 to 46% of the emitted particles by number were non-volatile, and 24% by mass (EFPN 1.16 ± 0.19 × 10¹⁶ [kg fuel]⁻¹ and EFPM 488 ± 73 mg [kg fuel]⁻¹, respectively). This study shows great potential for gaining large data sets of ship-emission parameters that can improve current dispersion modeling for health assessments on local and regional scales. The global contributions of total and non-volatile particle mass from shipping, estimated using this extensive data set from an ECA, were at least 0.80 Tg y⁻¹ and 0.19 Tg y⁻¹.
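
The emission-factor determination, which relates the excess particle number in the plume to the excess CO2, can be illustrated as follows. The plume profile and concentrations are made-up numbers, and the conversion of 3.17 kg CO2 per kg of fuel burned is an assumed value for marine fuel; the study's exact conversion is not stated in the abstract.

```python
# toy background-subtracted plume time series at 1 Hz (hypothetical numbers)
shape = [0.2, 0.6, 1.0, 0.6, 0.2]            # plume passage profile
delta_n   = [s * 1.2e11 for s in shape]      # excess particle number, m^-3
delta_co2 = [s * 4.0e-5 for s in shape]      # excess CO2 mass, kg m^-3
dt = 1.0                                     # s, 1 Hz sampling
EF_CO2 = 3.17                                # kg CO2 per kg fuel (assumed)

# emission factor: particles emitted per kg of fuel burned
ef_pn = (sum(delta_n) * dt) / (sum(delta_co2) * dt) * EF_CO2
```

Because the same plume dilution affects both species, the time-integrated ratio cancels the (unknown) dilution factor, which is what makes per-passage emission factors possible from a fixed shore station.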

  18. Effects of Environmental Context on Physiological Response During Team Handball Small Sided Games.

    PubMed

    Bělka, Jan; Hulka, Karel; Machová, Iva; Šafář, Michal; Weisser, Radim; Bellar, David M; Hoover, Donald L; Judge, Lawrence W

    2017-01-01

    This study examined the distance covered and physiological effects of altering the number of players during small-sided games (SSG) in team handball. Twelve professional female handball players [24.6 ± 3.7 years, 172 ± 6.2 cm, 68.2 ± 9.9 kg, 22.7 ± 2 kg/m²] participated in this study. The SSG were played, first with five on each side (SSG 5), then four (SSG 4), then three (SSG 3). Each game was four minutes long, followed by three minutes of rest. The distance covered and time spent in four speed zones (based on player movement speed) were selected for analysis: Zone 1 (0-1.4 m/s), Zone 2 (1.5-3.4 m/s), Zone 3 (3.5-5.2 m/s), and Zone 4 (>5.2 m/s). Statistically significant differences were found in Zone 2, between conditions SSG 3 and SSG 4 (p = .049, ω² = .32). The highest average heart rate (HR) occurred during SSG 3. Average HR between SSG 3 (89.7% HRmax) and SSG 5 (87.8% HRmax) (p = .04, ω² = .26) was also significantly different. Participant HR response between the speed zones was not statistically significant. HR response was negatively correlated with the number of players within the SSG condition. Statistically significant results were found for RPE between SSG 3 and the other two SSG conditions (SSG 4, p = .01, and SSG 5, p = .00). These results indicate that changing the number of SSG players can be used to manipulate the physiological response during handball training.
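
The zone bookkeeping described above amounts to classifying each speed sample and accumulating time and distance per zone. A minimal sketch, with made-up speed samples rather than the study's tracking data:

```python
def zone_of(v):
    # classify a speed sample (m/s) into the study's four zones
    if v <= 1.4:
        return 0          # Zone 1: 0-1.4 m/s
    if v <= 3.4:
        return 1          # Zone 2: 1.5-3.4 m/s
    if v <= 5.2:
        return 2          # Zone 3: 3.5-5.2 m/s
    return 3              # Zone 4: >5.2 m/s

dt = 1.0 / 15.0           # sampling interval for a 15 Hz tracking system, s
speeds = [0.5, 1.0, 2.0, 3.0, 4.0, 5.5, 6.0, 2.5, 1.2, 0.8]  # illustrative
time_in = [0.0] * 4       # seconds spent per zone
dist_in = [0.0] * 4       # metres covered per zone
for v in speeds:
    z = zone_of(v)
    time_in[z] += dt
    dist_in[z] += v * dt
```

Summing `time_in` recovers the total observation time and summing `dist_in` the total distance covered, so per-zone shares can be compared across SSG conditions.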

  19. Effects of Environmental Context on Physiological Response During Team Handball Small Sided Games

    PubMed Central

    BĚLKA, JAN; HULKA, KAREL; MACHOVÁ, IVA; ŠAFÁŘ, MICHAL; WEISSER, RADIM; BELLAR, DAVID M.; HOOVER, DONALD L; JUDGE, LAWRENCE W.

    2017-01-01

    This study examined the distance covered and physiological effects of altering the number of players during small-sided games (SSG) in team handball. Twelve professional female handball players [24.6 ± 3.7 years, 172 ± 6.2 cm, 68.2 ± 9.9 kg, 22.7 ± 2 kg/m²] participated in this study. The SSG were played, first with five on each side (SSG 5), then four (SSG 4), then three (SSG 3). Each game was four minutes long, followed by three minutes of rest. The distance covered and time spent in four speed zones (based on player movement speed) were selected for analysis: Zone 1 (0–1.4 m/s), Zone 2 (1.5–3.4 m/s), Zone 3 (3.5–5.2 m/s), and Zone 4 (>5.2 m/s). Statistically significant differences were found in Zone 2, between conditions SSG 3 and SSG 4 (p = .049, ω² = .32). The highest average heart rate (HR) occurred during SSG 3. Average HR between SSG 3 (89.7% HRmax) and SSG 5 (87.8% HRmax) (p = .04, ω² = .26) was also significantly different. Participant HR response between the speed zones was not statistically significant. HR response was negatively correlated with the number of players within the SSG condition. Statistically significant results were found for RPE between SSG 3 and the other two SSG conditions (SSG 4, p = .01, and SSG 5, p = .00). These results indicate that changing the number of SSG players can be used to manipulate the physiological response during handball training. PMID:29399252

  20. Minding the gap: Frequency of indels in mtDNA control region sequence data and influence on population genetic analyses

    USGS Publications Warehouse

    Pearce, J.M.

    2006-01-01

    Insertions and deletions (indels) result in sequences of various lengths when homologous gene regions are compared among individuals or species. Although indels are typically phylogenetically informative, the occurrence and incorporation of these characters as gaps in intraspecific population genetic data sets are rarely discussed. Moreover, the impact of gaps on estimates of fixation indices, such as FST, has not been reviewed. Here, I summarize the occurrence and population genetic signal of indels among 60 published studies that involved alignments of multiple sequences from the mitochondrial DNA (mtDNA) control region of vertebrate taxa. Among the 30 studies observing indels, an average of 12% of both variable and parsimony-informative sites were composed of these sites. There was no consistent trend between levels of population differentiation and the number of gap characters in a data block. Across all studies, the average influence on estimates of ΦST was small, explaining only an additional 1.8% of among-population variance (range 0.0-8.0%). Studies most likely to observe an increase in ΦST with the inclusion of gap characters were those with < 20 variable sites, but a near equal number of studies with few variable sites did not show an increase. In contrast to studies at interspecific levels, the influence of indels on intraspecific population genetic analyses of control region DNA appears small, dependent upon the total number of variable sites in the data block, and related to species-specific characteristics and the spatial distribution of mtDNA lineages that contain indels. © 2006 Blackwell Publishing Ltd.
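
Counting how many variable sites involve gap (indel) characters, as summarized above, reduces to column-wise bookkeeping over the alignment. A toy sketch with a hypothetical four-sequence alignment:

```python
# toy mtDNA control-region alignment; '-' marks an indel (gap)
seqs = ["ACCT-GTA",
        "ACCTAGTA",
        "ACTT-GTA",
        "ACCTAGCA"]

cols = list(zip(*seqs))                         # one tuple per column
variable = [c for c in cols if len(set(c)) > 1] # polymorphic columns
gapped   = [c for c in variable if "-" in c]    # columns containing an indel
frac = len(gapped) / len(variable)              # share of variable sites
```

With these sequences, one of three variable columns is a gap column, illustrating the kind of per-data-block proportion (on average 12% in the surveyed studies) the review tabulates.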

  1. Factors influencing piglet pre-weaning mortality in 47 commercial swine herds in Thailand.

    PubMed

    Nuntapaitoon, Morakot; Tummaruk, Padet

    2018-01-01

    The present study aims to determine the occurrence of piglet pre-weaning mortality in commercial swine herds in Thailand in relation to piglet, sow, and environmental factors. Data were collected from the database of the computerized recording system from 47 commercial swine herds in Thailand. The raw data were carefully scrutinized for accuracy. Litters with a lactation length < 16 days or >28 days were excluded. In total, 199,918 litters from 74,088 sows were included in the analyses. Piglet pre-weaning mortality at the individual sow level was calculated as piglet pre-weaning mortality (%) = (number of littermate pigs - number of piglets at weaning) / number of littermate pigs. Litters were classified according to sow parity numbers (1, 2-5, and 6-9), average birth weight of the piglets (0.80-1.29, 1.30-1.79, 1.80-2.50 kg), number of littermate pigs (5-7, 8-10, 11-12, and 13-15 piglets), and size of the herd (small, medium, and large). Pearson correlations were conducted to analyze the associations between piglet pre-weaning mortality and reproductive parameters. Additionally, a general linear model procedure was performed to analyze the various factors influencing piglet pre-weaning mortality. On average, piglet pre-weaning mortality was 11.2% (median = 9.1%) and varied among herds from 4.8 to 19.2%. Among all the litters, 62.1, 18.1, and 19.8% of the litters had a piglet pre-weaning mortality rate of 0-10, 11-20, and greater than 20%, respectively. As the number of littermate pigs increased, piglet pre-weaning mortality also increased (r = 0.390, P < 0.001). Litters with 13-16 littermate pigs had a higher piglet pre-weaning mortality than litters with 5-7, 8-10, and 11-12 littermate pigs (20.8, 7.8, 7.2, and 11.2%, respectively; P < 0.001). Piglet pre-weaning mortality in large-sized herds was higher than that in small- and medium-sized herds (13.6, 10.6, and 11.2%, respectively; P < 0.001). 
Interestingly, in all categories of herd size, piglet pre-weaning mortality was increased almost two times when the number of littermates increased from 11-12 to 13-16 piglets. Furthermore, piglets with birth weights of 0.80-1.29 kg in large-sized herds had a higher risk of mortality than those in small- and medium-sized herds (15.3, 10.9, and 12.2%, respectively, P < 0.001). In conclusion, in commercial swine herds in the tropics, piglet pre-weaning mortality averaged 11.2% and varied among herds from 4.8 to 19.2%. The litters with 13-16 littermate pigs had piglet pre-weaning mortality of up to 20.8%. Piglets with low birth weight (0.80-1.29 kg) had a higher risk of pre-weaning mortality. Management strategies for reducing piglet pre-weaning mortality in tropical climates should be emphasized in litters with a high number of littermate pigs, low piglet birth weights, and large herd sizes.
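
The mortality formula quoted above is straightforward to apply per litter and then average over a herd. A sketch with illustrative litter counts (not the study's data):

```python
def preweaning_mortality(littermates, weaned):
    # mortality (%) = (littermates - weaned) / littermates * 100,
    # following the formula given in the abstract (expressed as a percent)
    return (littermates - weaned) / littermates * 100.0

# (number of littermate pigs, number of piglets at weaning) per litter
litters = [(12, 11), (14, 10), (8, 8), (13, 10)]
rates = [preweaning_mortality(n, w) for n, w in litters]
herd_avg = sum(rates) / len(rates)
```

Note this is the unweighted mean over litters; weighting by litter size would give the herd-level piglet loss rate instead, and the two can differ when large litters have higher mortality, as reported above.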

  2. Timescales for exploratory tactical behaviour in football small-sided games.

    PubMed

    Ric, Angel; Hristovski, Robert; Gonçalves, Bruno; Torres, Lorena; Sampaio, Jaime; Torrents, Carlota

    2016-09-01

    The aim of this study was to identify the dynamics of tactical behaviour emerging on different timescales in football small-sided games and to quantify short- and long-term exploratory behaviour according to the number of opponents. Two teams of four professional male footballers played small-sided games against two different teams with a variable number of opponents (3, 5 and 7). Data were collected using a combination of systematic observation and a non-differential global positioning system (15 Hz). The temporal diversity and structural flexibility of the players were determined by calculating the dynamic overlap order parameter q, entropy and trapping strength. Analysis of the exploratory dynamics revealed two different timescales, forming a different metastable landscape of action for each constraint. Fast dynamics lasted on average a few seconds and consisted of changes in tactical patterns. The long timescale corresponded to the shared tasks of offence and defence lasting tens of seconds. The players' tactical diversity decreased with an increasing number of opponents, especially in defence. Manipulating numerical imbalance is likely to promote changes in the diversity, unpredictability and flexibility of tactical solutions. The fact that the temporally nested structure of constraints shaped the emergence of tactical behaviour provides a new rationale for practice task design. The manipulation of numerical imbalance on the timescale of a few tens of seconds, on which the exploratory behaviour of players saturates, may help coaches to optimise the exploratory efficiency of the small-sided games.

  3. Numerical simulation of turbulence in the presence of shear

    NASA Technical Reports Server (NTRS)

    Shaanan, S.; Ferziger, J. H.; Reynolds, W. C.

    1975-01-01

    Numerical calculations are presented of the large-eddy structure of turbulent flows, using averaged Navier-Stokes equations in which averages are taken over spatial regions small compared to the size of the computational grid. The subgrid components of motion are modeled by a local eddy-viscosity model. A new finite-difference scheme with fourth-order accuracy is proposed to represent the nonlinear averaged advective term. This scheme exhibits several advantages over existing schemes: (1) it is compact, as it extends only one point away in each direction from the point to which it is applied; (2) it gives better resolution for high-wave-number components in the solution of the Poisson equation; and (3) it reduces programming complexity and computation time. Examples worked out in detail are the decay of isotropic turbulence, homogeneous turbulent shear flow, and homogeneous turbulent shear flow with system rotation.

  4. Oscillation of neutrinos produced by the annihilation of dark matter inside the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaili, Arman; School of Physics, Institute for Research in Fundamental Sciences; Farzan, Yasaman

    2010-06-01

    The annihilation of dark matter particles captured by the Sun can lead to a neutrino flux observable in neutrino detectors. Considering the fact that these dark matter particles are nonrelativistic, if a pair of dark matter particles annihilates to a neutrino pair, the spectrum of neutrinos will be monochromatic. We show that in this case, even after averaging over the production point inside the Sun, the oscillatory terms of the oscillation probability do not average to zero. This leads to interesting observable features in the annual variation of the number of muon track events. We show that smearing of the spectrum due to thermal distribution of dark matter inside the Sun is too small to wash out this variation. We point out the possibility of studying the initial flavor composition of neutrinos produced by the annihilation of dark matter particles via measuring the annual variation of the number of μ-track events in neutrino telescopes.

  5. How reliable are ligand-centric methods for Target Fishing?

    NASA Astrophysics Data System (ADS)

    Peon, Antonio; Dang, Cuong; Ballester, Pedro

    2016-04-01

    Computational methods for Target Fishing (TF), also known as Target Prediction or Polypharmacology Prediction, can be used to discover new targets for small-molecule drugs. This may result in repositioning the drug in a new indication or improving our current understanding of its efficacy and side effects. While there is a substantial body of research on TF methods, there is still a need to improve their validation, which is often limited to a small part of the available targets and not easily interpretable by the user. Here we discuss how target-centric TF methods are inherently limited by the number of targets that they can possibly predict (this number is by construction much larger in ligand-centric techniques). We also propose a new benchmark to validate TF methods, which is particularly suited to analysing how predictive performance varies with the query molecule. On average over approved drugs, we estimate that only five predicted targets would have to be tested to find two true targets with submicromolar potency (however, strong variability in performance is observed). In addition, we find that an approved drug currently has an average of eight known targets, which reinforces the notion that polypharmacology is a common and strong phenomenon. Furthermore, with the assistance of a control group of randomly selected molecules, we show that the targets of approved drugs are generally harder to predict.

  6. Atmospheric CO2 Concentrations from the Commonwealth Scientific and Industrial Research Organization (CSIRO) GASLAB Flask Sampling Network (March 1991 - December 2006)

    DOE Data Explorer

    Steele, L. P. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia]; Krummel, P. B. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia]; Langenfelds, R. L. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Aspendale, Victoria, Australia]

    2008-01-01

    Individual measurements have been obtained from flask air samples returned to the CSIRO GASLAB. Typical sample storage times range from days to weeks for some sites (e.g., Cape Grim, aircraft over Tasmania and Bass Strait) to as much as one year for Macquarie Island and the Antarctic sites. Experiments carried out to test for changes in sample CO2 mixing ratio during storage have shown significant drifts in some flask types over test periods of several months to years (Cooper et al., 1999). Corrections derived from the test results are applied to network data according to flask type. These measurements indicate a rise in annual average atmospheric CO2 concentration from 357.72 parts per million by volume (ppmv) in 1992 to 383.05 ppmv in 2006, an increase in the annual average of about 1.81 ppmv/year. These flask data may be compared with other flask measurements from the Scripps Institution of Oceanography, available through 2004 in TRENDS; both indicate an annual average increase of 1.72 ppmv/year through 2004. Differences may be attributed to different sampling times or days, different numbers of samples, and different curve-fitting techniques used to obtain monthly and annual averages from flask data. Measurement error in flask data is believed to be small (Masarie et al., 2001).
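
The quoted growth rate follows directly from the series endpoints:

```python
# annual average CO2 concentrations at the series endpoints (ppmv)
c_1992, c_2006 = 357.72, 383.05
rate = (c_2006 - c_1992) / (2006 - 1992)   # average annual increase, ppmv/year
print(round(rate, 2))   # → 1.81
```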

  7. Higher incidence of small Y chromosome in humans with trisomy 21 (Down syndrome).

    PubMed

    Verma, R S; Huq, A; Madahar, C; Qazi, Q; Dosik, H

    1982-09-01

    The length of the Y chromosome was measured in 42 black patients with trisomy 21 (47,XY,+21) and a similar number of normal individuals of American black ancestry. The length of the Y was expressed as a function of the Y/F ratio and arbitrarily classified into five groups using subjectively defined criteria as follows: very small, small, average, large, and very large. Thirty-eight percent of the trisomy 21 patients had small or very small Ys, compared to 2.38% of the controls (P < 0.01). In both populations the size of the Y was not normally distributed: in the normals it was skewed to the left, whereas in the Down patients the distribution was flat (platykurtic). A significantly higher incidence of Y length heteromorphisms was noted in the Down population as compared to the normal black population. In the light of our current understanding that about one-third of all trisomy 21 cases are due to paternal nondisjunction, it may be tempting to speculate that males with a small Y are at an increased risk for nondisjunction of chromosome 21.

  8. Hazardous medical waste generation in Greece: case studies from medical facilities in Attica and from a small insular hospital.

    PubMed

    Komilis, Dimitrios; Katsafaros, Nikolaos; Vassilopoulos, Panagiotis

    2011-08-01

    The accurate calculation of the unit generation rates and composition of medical waste generated from medical facilities is necessary in order to design medical waste treatment systems. In this work, the unit medical waste generation rates of 95 public and private medical facilities in the Attica region were calculated based on daily weight records from a central medical waste incineration facility. The calculated medical waste generation rates (in kg bed(-1) day(-1)) varied widely, with average values of 0.27 ± 113% and 0.24 ± 121% for public and private medical facilities, respectively. The hazardous medical waste generation was measured, at the source, in the 40-bed hospital of the island of Ikaria for a period of 42 days during a 6-month period. The average hazardous medical waste generation rate was 1.204 kg occupied bed(-1) day(-1), or 0.33 kg (official) bed(-1) day(-1). Of these amounts, 54% resulted from the patients' rooms (solid and liquid wastes combined), 24% from the emergency department (solid waste), 17% from the clinical pathology lab and 6% from the X-ray lab. On average, 17% of the total hazardous medical waste was solely infectious. In conclusion, no correlation between the number of beds and the unit medical waste generation rate could be established. Each hospital should be studied separately, since medical waste generation and composition depend on the number and type of departments/laboratories at each hospital, the number of external patients and the number of occupied beds.
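
    The unit-rate bookkeeping described above is simple enough to sketch. The daily masses and occupied-bed counts below are hypothetical, while the departmental shares are the ones reported in the abstract:

```python
# Hedged sketch of the unit generation rate calculation: total hazardous-waste
# mass divided by occupied-bed-days. The daily figures here are hypothetical.
daily_kg = [14.4, 13.8, 15.1]    # hypothetical daily hazardous-waste masses (kg)
occupied_beds = [12, 11, 13]     # hypothetical occupied-bed counts on those days

rate = sum(daily_kg) / sum(occupied_beds)  # kg per occupied bed per day
print(f"unit rate: {rate:.3f} kg/(occupied bed * day)")

# Departmental composition shares reported in the abstract (they sum to 101%
# because of rounding in the source).
shares = {"patient rooms": 54, "emergency department": 24,
          "clinical pathology lab": 17, "X-ray lab": 6}
```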

  9. Historical instrumental climate data for Australia - quality and utility for palaeoclimatic studies

    NASA Astrophysics Data System (ADS)

    Nicholls, Neville; Collins, Dean; Trewin, Blair; Hope, Pandora

    2006-10-01

    The quality and availability of climate data suitable for palaeoclimatic calibration and verification for the Australian region are discussed and documented. Details of the various datasets, including problems with the data, are presented. High-quality datasets, where such problems are reduced or even eliminated, are discussed. Many climate datasets are now analysed onto grids, facilitating the preparation of regional-average time series. Work is under way to produce such high-quality, gridded datasets for a variety of hitherto unavailable climate data, including surface humidity, pan evaporation, wind, and cloud. An experiment suggests that only a relatively small number of palaeoclimatic time series could provide a useful estimate of long-term changes in Australian annual average temperature.

  10. Identifying genes that extend life span using a high-throughput screening system.

    PubMed

    Chen, Cuiying; Contreras, Roland

    2007-01-01

    We developed a high-throughput functional genomic screening system that allows identification of genes prolonging lifespan in the baker's yeast Saccharomyces cerevisiae. The method is based on isolating yeast mother cells with a higher than average number of cell divisions as indicated by the number of bud scars on their surface. Fluorescently labeled wheat germ agglutinin (WGA) was used for specific staining of chitin, a major component of bud scars. The critical new steps in our bud-scar-sorting system are the use of small microbeads, which allows successive rounds of purification and regrowth of the mother cells (M-cell), and utilization of flow cytometry to sort and isolate cells with a longer lifespan based on the number of bud scars specifically labeled with WGA.

  11. Early initial clinical experience with intravitreal aflibercept for wet age-related macular degeneration.

    PubMed

    Ferrone, Philip J; Anwar, Farihah; Naysan, Jonathan; Chaudhary, Khurram; Fastenberg, David; Graham, Kenneth; Deramo, Vincent

    2014-06-01

    Age-related macular degeneration (AMD) is a degenerative process that leads to severe vision loss. Wet AMD is defined by choroidal neovascularisation, leading to the accumulation of subretinal fluid (SRF), macular oedema (ME), and pigment epithelium detachments (PED). Purpose: To evaluate the initial clinical experience of conversion from bevacizumab or ranibizumab to aflibercept in wet AMD patients. Records of 250 consecutive wet AMD patients were retrospectively reviewed. Of the 250 patients, 29 were naive (with no previous treatment) and 221 had previously been treated with bevacizumab (1/3) or ranibizumab (2/3). On average, converted patients received 14 injections every 6 weeks on a treat-and-extend regimen with Avastin or Lucentis before being converted to aflibercept every 7 weeks on average (no loading dose) for three doses. For the purposes of this study, we concentrated on the patients converted to aflibercept, since the number of naive patients was too small to draw any conclusions from. Snellen visual acuities (as logMAR) and optical coherence tomography (OCT) were compared before and after drug conversion. Converted patients did not show a significant difference in visual acuity or average OCT thickness from preconversion values; however, small improvements in ME (p=0.0001), SRF (p=0.0001), and PED (p=0.008) grading were noted on average after conversion to aflibercept. No significant difference in visual outcome or average OCT thickness was observed when patients were switched from bevacizumab or ranibizumab q6-week dosing to aflibercept 7-week dosing. Mild anatomic improvements in ME, SRF and PED did occur, on average, after conversion to aflibercept, and aflibercept was injected less frequently. No serious adverse reactions, including ocular infections, inflammation, or other ocular and systemic effects, were noted. Published by the BMJ Publishing Group Limited. 

  12. Injury rate as an indicator of business success.

    PubMed

    Holizki, Theresa; Nelson, Larry; McDonald, Rose

    2006-01-01

    Health and safety professionals and organizations have often suggested that promoting and improving health and safety in the workplace will improve business success. We conducted a study of all new small businesses that registered with the Workers' Compensation Board of British Columbia (WCB of BC) in the years 1993, 1995, 1996 and 1997, assessing their injury rate in the first 5 complete years of business. The data set represents 53,913 new businesses and 19,332 claims. Businesses were grouped by the number of years between registering for WCB coverage and termination of coverage. Injury rates were determined for each calendar year for each industry sector as injuries per 100 person-years, based on payroll information provided by the businesses. Across all industries, businesses that failed between 1 and 2 yr of start-up had an average injury rate of 9.71 while businesses that survived more than 5 yr had an average injury rate of only 3.89 in their first year of business (p<0.000001). The WCB of BC demonstrated a statistical correlation between health and safety in the workplace and the survival of a small business.
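
    The injury rates above are expressed as injuries per 100 person-years; a minimal sketch of that normalization, with hypothetical claim counts and exposure chosen only to reproduce the quoted rates:

```python
# Injuries per 100 person-years of payroll-based exposure, the rate used in
# the abstract. The claim and exposure figures below are hypothetical.
def injury_rate(claims, person_years):
    """Return injuries per 100 person-years."""
    return 100.0 * claims / person_years

print(injury_rate(971, 10_000))  # 9.71: the rate quoted for failed businesses
print(injury_rate(389, 10_000))  # 3.89: the rate quoted for >5-yr survivors
```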

  13. N(2)O in small para-hydrogen clusters: Structures and energetics.

    PubMed

    Zhu, Hua; Xie, Daiqian

    2009-04-30

    We present the minimum-energy structures and energetics of clusters of the linear N(2)O molecule with small numbers of para-hydrogen molecules, using pairwise additive potentials. Interaction energies of the (p-H(2))-N(2)O and (p-H(2))-(p-H(2)) complexes were calculated by averaging the corresponding full-dimensional potentials over the H(2) angular coordinates. The averaged (p-H(2))-N(2)O potential has three minima, corresponding to the T-shaped and the linear (p-H(2))-ONN and (p-H(2))-NNO structures. Optimization of the minimum-energy structures was performed using a genetic algorithm. It was found that p-H(2) molecules fill three solvation rings around the N(2)O axis, each of them containing up to five p-H(2) molecules, followed by accumulation of two p-H(2) molecules at the oxygen and nitrogen ends. The first solvation shell is completed at N = 17. The calculated chemical potential oscillates with cluster size up to the completed first solvation shell. These results are consistent with the available experimental measurements. (c) 2009 Wiley Periodicals, Inc.

  14. Electrocatalytically Active Nickel-Based Electrode Coatings Formed by Atmospheric and Suspension Plasma Spraying

    NASA Astrophysics Data System (ADS)

    Aghasibeig, M.; Mousavi, M.; Ben Ettouill, F.; Moreau, C.; Wuthrich, R.; Dolatabadi, A.

    2014-01-01

    Ni-based electrode coatings with enhanced surface areas, for hydrogen production, were developed using atmospheric plasma spray (APS) and suspension plasma spray (SPS) processes. The results revealed a larger electrochemical active surface area for the coatings produced by SPS compared to those produced by APS process. SEM micrographs showed that the surface microstructure of the sample with the largest surface area was composed of a large number of small cauliflower-like aggregates with an average diameter of 10 μm.

  15. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (32nd)

    DTIC Science & Technology

    1987-06-01

    number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a...th time from x_t's. The integrated autoregressive-moving average model, denoted by ARIMA(p,d,q), is a result of combining the d-th differencing process...Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample
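
    The d-th differencing step that the ARIMA(p,d,q) notation above refers to can be sketched in a few lines (a generic illustration, not the proceedings' own software):

```python
# d-th order differencing, the "I" in ARIMA(p,d,q): subtract each value from
# its successor, repeated d times, to remove polynomial trends.
def difference(series, d=1):
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

y = [1, 4, 9, 16, 25]    # quadratic trend (perfect squares)
print(difference(y, 1))  # -> [3, 5, 7, 9]
print(difference(y, 2))  # -> [2, 2, 2]  (second differences flatten the trend)
```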

  16. The UP4FUN Intervention Effect on Breaking Up Sedentary Time in 10- to 12-Year-Old Belgian Children: The ENERGY-Project.

    PubMed

    Verloigne, Maïté; Berntsen, Sveinung; Ridgers, Nicola D; Cardon, Greet; Chinapaw, Mai; Altenburg, Teatske; Brug, Johannes; Bere, Elling; De Bourdeaudhuij, Ilse; Van Lippevelde, Wendy; Maes, Lea

    2015-05-01

    There are currently no studies available reporting intervention effects on breaking up children's sedentary time. This study examined the UP4FUN intervention effect on the objectively measured number of breaks in sedentary time, number of sedentary bouts (≥ 10 mins) and total and average amount of time spent in those sedentary bouts among 10- to 12-year-old Belgian children. The total sample included 354 children (mean age: 10.9 ± 0.7 years; 59% girls) with valid ActiGraph accelerometer data at pre- and posttest. Only a few small intervention effects were found, namely on total time spent in sedentary bouts immediately after school hours (4-6PM; β = -3.51 mins) and on average time spent in sedentary bouts before school hours (6-8.30AM; β = -4.83 mins) and immediately after school hours (β = -2.71 mins), in favor of children from intervention schools. Unexpectedly, girls from intervention schools decreased their number of breaks during school hours (8.30AM-4PM; β = -23.45 breaks) and increased their number of sedentary bouts on a weekend day (β = +0.90 bouts), whereas girls in control schools showed an increase in the number of breaks and a decrease in the number of bouts. In conclusion, UP4FUN did not have a consistent or substantial effect on breaking up children's sedentary time, and these data suggest that more intensive and longer-lasting interventions are needed.
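
    The accelerometer outcomes used above (breaks in sedentary time, bouts of at least 10 minutes) can be counted from a per-minute sedentary/active classification; a hedged sketch, since the exact ActiGraph processing rules are not given in the abstract:

```python
from itertools import groupby

def bouts_and_breaks(minutes, min_bout=10):
    """minutes: per-minute flags, 1 = sedentary, 0 = active.
    Returns (bouts >= min_bout, breaks in sedentary time, minutes in bouts)."""
    runs = [(flag, len(list(grp))) for flag, grp in groupby(minutes)]
    bouts = [n for flag, n in runs if flag == 1 and n >= min_bout]
    # a break = any transition from sedentary to active
    breaks = sum(1 for (f1, _), (f2, _) in zip(runs, runs[1:])
                 if f1 == 1 and f2 == 0)
    return len(bouts), breaks, sum(bouts)

sample = [1] * 12 + [0] * 3 + [1] * 5 + [0] * 2 + [1] * 15
print(bouts_and_breaks(sample))  # -> (2, 2, 27)
```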

  17. Estimating the number of fractions by tumour site for European countries in 2012 and 2025: An ESTRO-HERO analysis.

    PubMed

    Borras, Josep M; Grau, Cai; Corral, Julieta; Wong, Karen; Barton, Michael B; Ferlay, Jacques; Bray, Freddie; Lievens, Yolande

    2018-02-01

    The optimal number of radiotherapy fractions is a relevant input for planning resource needs. An estimate of the total number of fractions by country and tumour site is presented for 2012 and 2025. European cancer incidence data by tumour site and country for 2012 and 2025 were extracted from the GLOBOCAN database. Incidence and stage data were introduced into the Australian Collaboration for Cancer Outcomes Research and Evaluation (CCORE) model, producing an evidence-based proportion of incident cases with an indication for radiotherapy and fractions by indication. An indication was defined as a clinical situation in which radiotherapy was the treatment of choice. The total number of fractions, if radiotherapy were given according to guidelines to all patients with an indication in Europe, was estimated to be 30 million for 2012, with a forecasted increase of 16.1% by 2025, yet with differences by country and tumour. The average number of fractions per course was 17.6, with a small range of differences by stage at diagnosis. Among treatments with radical intent the average was 24 fractions, while it decreased to 2.5 among palliative treatments. An increase in the total number of fractions is expected in many European countries in the coming years, following the trends in cancer incidence. In planning radiotherapy resources, these increases should be balanced against the evolution towards hypofractionation, along with increased complexity and quality assurance. Copyright © 2017 Elsevier B.V. All rights reserved.
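
    As a back-of-envelope check (not a figure from the paper), the abstract's three averages imply the share of courses given with radical intent, assuming every course is either radical or palliative:

```python
# Implied share of radical-intent courses, assuming a two-component mix:
# overall = share * radical + (1 - share) * palliative.
overall, radical, palliative = 17.6, 24.0, 2.5  # fractions per course (from the abstract)

share_radical = (overall - palliative) / (radical - palliative)
print(f"implied radical share: {share_radical:.1%}")  # -> implied radical share: 70.2%
```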

  18. An evaluation of self-reported mobile phone use compared to billing records among a group of engineers and scientists.

    PubMed

    Shum, Mona; Kelsh, Michael A; Sheppard, Asher R; Zhao, Ke

    2011-01-01

    Most epidemiologic studies of potential health impacts of mobile phones rely on self-reported information, which can lead to exposure misclassification. We compared self-reported questionnaire data with phone billing records among 60 participants over a 3-year period (2002-2004). Phone usage information was compared by the calculation of the mean and median number of calls and duration of use, as well as correlation coefficients and associated P-values. Average call duration from self-reports was slightly lower than billing records (2.1 min vs. 2.8 min, P = 0.01). Participants reported a higher number of average daily calls than billing records (7.9 vs. 4.1, P = 0.002). Correlation coefficients for average minutes per day of mobile phone use and average number of calls per day were relatively high (R = 0.71 and 0.69, respectively, P < 0.001). Information reported at the monthly level tended to be more accurate than estimates of weekly or daily use. Our findings of modest correlations between self-reported mobile phone usage and billing records and substantial variability in recall are consistent with previous studies. However, the direction of over- and under-reporting was not consistent with previous research. We did not observe increased variability over longer periods of recall or a pattern of lower accuracy among older age groups compared with younger groups. Study limitations included a relatively small sample size, low participation rates, and potentially limited generalizability. The variability within studies and non-uniformity across studies indicates that estimation of the frequency and duration of phone use by questionnaires should be supplemented with subscriber records whenever practical. © 2010 Wiley-Liss, Inc.
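
    The headline statistic above is a Pearson correlation between self-reported and billing-record usage; a minimal sketch with hypothetical paired values, chosen only to echo the over-reporting pattern in the abstract:

```python
# Pearson correlation between self-reported and billing-record call counts.
# The paired daily counts below are hypothetical.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

self_report = [8, 5, 12, 3, 9, 6]  # hypothetical self-reported daily calls
billing     = [4, 3,  7, 2, 5, 3]  # hypothetical billing-record counts

r = pearson(self_report, billing)
print(f"r = {r:.2f}")
```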

  19. Copy number variability of expression plasmids determined by cell sorting and Droplet Digital PCR.

    PubMed

    Jahn, Michael; Vorpahl, Carsten; Hübschmann, Thomas; Harms, Hauke; Müller, Susann

    2016-12-19

    Plasmids are widely used for molecular cloning or production of proteins in laboratory and industrial settings. Constant modification has brought forth countless plasmid vectors whose characteristics in terms of average plasmid copy number (PCN) and stability are rarely known. The crucial factor determining the PCN is the replication system; most replication systems in use today belong to a small number of different classes and are available through repositories like the Standard European Vector Architecture (SEVA). In this study, the PCN was determined in a set of seven SEVA-based expression plasmids differing only in the replication system. The average PCN for all constructs was determined by Droplet Digital PCR and ranged between 2 and 40 per chromosome in the host organism Escherichia coli. Furthermore, a plasmid-encoded EGFP reporter protein served as a means to assess variability in reporter gene expression at the single-cell level. Only cells with one type of plasmid (RSF1010 replication system) showed a high degree of heterogeneity, with a clear bimodal distribution of EGFP intensity, while the others showed a normal distribution. The heterogeneous RSF1010-carrying cell population and one normally distributed population (ColE1 replication system) were further analyzed by sorting cells of sub-populations selected according to EGFP intensity. For both plasmids, low and highly fluorescent sub-populations showed a remarkable difference in PCN, ranging from 9.2 to 123.4 for ColE1 and from 0.5 to 11.8 for RSF1010. The average PCN determined here for a set of standardized plasmids was generally at the lower end of previously reported ranges and not related to the degree of heterogeneity. Further characterization of a heterogeneous and a homogeneous population demonstrated considerable differences in the PCN of sub-populations. 
We therefore present direct molecular evidence that the average PCN does not represent the true number of plasmid molecules in individual cells.
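
    The Droplet Digital PCR arithmetic behind an average PCN estimate can be sketched as follows; the droplet counts here are hypothetical, and a single-copy chromosomal target is assumed as the reference:

```python
from math import log

def copies_per_droplet(positive, total):
    """Poisson correction: mean template copies per droplet, inferred from
    the fraction of droplets containing at least one copy."""
    return -log(1 - positive / total)

# Hypothetical droplet counts for the plasmid assay and a single-copy
# chromosomal reference assay run on the same sample.
plasmid = copies_per_droplet(9_000, 15_000)
chromosome = copies_per_droplet(1_500, 15_000)

pcn = plasmid / chromosome  # average plasmid copies per chromosome
print(f"average PCN ~ {pcn:.1f}")
```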

  20. A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies.

    PubMed

    Khondoker, Mizanur; Dobson, Richard; Skirrow, Caroline; Simmons, Andrew; Stahl, Daniel

    2016-10-01

    Recent literature on the comparison of machine learning methods has raised questions about the neutrality, unbiasedness and utility of many comparative studies. Reporting of results on favourable datasets and sampling error in the estimated performance measures based on single samples are thought to be the major sources of bias in such comparisons. Better performance in one or a few instances does not necessarily imply better performance on average or at a population level, and simulation studies may be a better alternative for objectively comparing the performance of machine learning algorithms. We compare the classification performance of a number of important and widely used machine learning algorithms, namely Random Forests (RF), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA) and k-Nearest Neighbour (kNN). Using massively parallel processing on high-performance supercomputers, we compare the generalisation errors at various combinations of levels of several factors: number of features, training sample size, biological variation, experimental variation, effect size, replication and correlation between features. For a smaller number of correlated features, with the number of features not exceeding approximately half the sample size, LDA was found to be the method of choice in terms of average generalisation error as well as stability (precision) of error estimates. SVM (with RBF kernel) outperforms LDA as well as RF and kNN by a clear margin as the feature set gets larger, provided the sample size is not too small (at least 20). The performance of kNN also improves as the number of features grows, and outperforms that of LDA and RF unless the data variability is too high and/or effect sizes are too small. RF was found to outperform only kNN in some instances where the data are more variable and have smaller effect sizes, in which cases it also provides more stable error estimates than kNN and LDA. 
Applications to a number of real datasets supported the findings from the simulation study. © The Author(s) 2013.
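
    The simulation-based comparison described above can be caricatured in a few lines; a hedged sketch using a nearest-centroid rule (an LDA-like linear classifier under equal spherical covariances) on simulated two-class Gaussian data, not the paper's actual pipeline:

```python
import random

random.seed(0)

def simulate(n_per_class, n_features, effect=1.0):
    """Two-class Gaussian data; class 1 mean is shifted by `effect` per feature."""
    data = []
    for label in (0, 1):
        for _ in range(n_per_class):
            x = [random.gauss(label * effect, 1.0) for _ in range(n_features)]
            data.append((x, label))
    return data

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def centroid_error(train, test):
    """Test error of a nearest-centroid rule fitted on `train`."""
    cents = {}
    for label in (0, 1):
        pts = [x for x, y in train if y == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    wrong = sum(1 for x, y in test
                if min(cents, key=lambda c: dist2(x, cents[c])) != y)
    return wrong / len(test)

train, test = simulate(50, 5), simulate(200, 5)
err = centroid_error(train, test)
print(f"nearest-centroid test error: {err:.3f}")
```

Sweeping `n_per_class`, `n_features` and `effect` over a grid, and repeating over many simulated datasets, is the factorial design the abstract describes.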

  1. Behavioral response to a just-in-time adaptive intervention (JITAI) to reduce sedentary behavior in obese adults: Implications for JITAI optimization.

    PubMed

    Thomas, J Graham; Bond, Dale S

    2015-12-01

    Just-in-time adaptive interventions (JITAIs) use mobile computers, sensors, and software analytics to automatically detect behavior and deliver tailored treatment. However, little is known about how JITAIs influence patterns of behavior or how best to design JITAIs for maximum effect. This study examined prompts and behavioral response to the B-MOBILE JITAI for reducing sedentary behavior (SB) in overweight/obese individuals. Thirty participants (83% women; 67% White, mean ± SD body mass index = 36.2 kg/m2) tested 3 conditions presented in a randomized counterbalanced order involving smartphone-based prompts for walking breaks of (a) 3 min after 30 SB min, (b) 6 min after 60 SB min, and (c) 12 min after 120 SB min. Participants carried the smartphone an average of 6.90 days during each 7-day condition, for an average of 14.94 hr per day. The 3- and 6-min conditions resulted in the greatest number of prompts, walking breaks, the best adherence to prompts, the greatest amount of daily time spent in walking breaks, and fastest adherence to prompts (ps < .01). Small but statistically significant decreases in the number of daily walking breaks, adherence to prompts, and minutes per day spent in walking breaks were observed as a function of the number of days spent in a condition (ps < .05). The B-MOBILE JITAI was effective in prompting breaks in sedentary behavior when it was most clinically relevant. Frequent prompts for small change may be an optimal strategy for shaping sedentary behavior, although more research is needed to determine how best to promote long-term adherence. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Behavioral Response to a Just-in-Time Adaptive Intervention (JITAI) to Reduce Sedentary Behavior in Obese Adults: Implications for JITAI Optimization

    PubMed Central

    Thomas, J. Graham; Bond, Dale S.

    2015-01-01

    Objective: Just-in-time adaptive interventions (JITAIs) use mobile computers, sensors, and software analytics to automatically detect behavior and deliver tailored treatment. However, little is known about how JITAIs influence patterns of behavior or how best to design JITAIs for maximum effect. Methods: This study examined prompts and behavioral response to the B-MOBILE JITAI for reducing sedentary behavior (SB) in overweight/obese individuals. Thirty participants (83% women; 67% White, mean ± SD body mass index = 36.2 kg/m2) tested three conditions presented in a randomized counterbalanced order involving smartphone-based prompts for walking breaks of (1) 3 min after 30 SB min; (2) 6 min after 60 SB min; and (3) 12 min after 120 SB min. Results: Participants carried the smartphone an average of 6.90 days during each 7-day condition, for an average of 14.94 hours per day. The 3- and 6-min conditions resulted in the greatest number of prompts, walking breaks, the best adherence to prompts, the greatest amount of daily time spent in walking breaks, and the fastest adherence to prompts (ps < .01). Small but statistically significant decreases in the number of daily walking breaks, adherence to prompts, and minutes per day spent in walking breaks were observed as a function of the number of days spent in a condition (ps < .05). Conclusions: The B-MOBILE JITAI was effective in prompting breaks in sedentary behavior when it was most clinically relevant. Frequent prompts for small change may be an optimal strategy for shaping sedentary behavior, although more research is needed to determine how best to promote long-term adherence. PMID:26651467

  3. Spectra of turbulently advected scalars that have small Schmidt number

    NASA Astrophysics Data System (ADS)

    Hill, Reginald J.

    2017-09-01

    Exact statistical equations are derived for turbulent advection of a passive scalar having diffusivity much larger than the kinematic viscosity, i.e., small Schmidt number. The equations contain all terms needed for precise direct numerical simulation (DNS) quantification. In the appropriate limit, the equations reduce to the classical theory, for which the scalar spectrum is proportional to the energy spectrum multiplied by k^(-4), which, in turn, results in the inertial-diffusive range power law, k^(-17/3). The classical theory was derived for the case of isotropic velocity and scalar fields. The exact equations are simplified for less restrictive cases: (1) locally isotropic scalar fluctuations at dissipation scales with no restriction on symmetry of the velocity field, (2) isotropic velocity field with averaging over all wave-vector directions with no restriction on the symmetry of the scalar, motivated by that average being used for DNS, and (3) isotropic velocity field with axisymmetric scalar fluctuations, motivated by the mean-scalar-gradient-source case. The equations are applied to recently published DNSs of passive scalars for the cases of a freely decaying scalar and a mean-scalar-gradient source. New terms in the exact equations are estimated for those cases and are found to be significant; those terms cause the deviations from the classical theory found by the DNS studies. A new formula for the mean-scalar-gradient case explains the variation of the scalar spectra for the DNS of the smallest Schmidt-number cases. Expansion in Legendre polynomials reveals the effect of axisymmetry. Inertial-diffusive-range formulas for both the zero- and second-order Legendre contributions are given. Exact statistical equations reveal what must be quantified using DNS to determine what causes deviations from asymptotic relationships.
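
    The classical scaling quoted above follows in one line: with an inertial-range energy spectrum E(k) proportional to k^(-5/3), the stated proportionality for the scalar spectrum gives

```latex
E_\theta(k) \;\propto\; E(k)\, k^{-4} \;\propto\; k^{-5/3}\, k^{-4} \;=\; k^{-17/3},
```

which is the inertial-diffusive range power law of the classical theory.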

  4. Management of Toxic Epidermal Necrolysis with Plasmapheresis and Cyclosporine A: Our 10 Years’ Experience

    PubMed Central

    Giudice, Giuseppe; Maggio, Giulio; Bufano, Loredana; Memeo, Giuseppe

    2017-01-01

    Background: The management of toxic epidermal necrolysis (TEN) is controversial and there is no uniform strategy. Objective: To share our 10 years’ experience in treating severe TEN with a novel protocol based on the association of cyclosporine A and plasmapheresis. Methods: In this case series, we retrospectively collected and assessed the 12 cases of severe TEN treated from 2005 to 2015 at the Burn Unit of the University of Bari Policlinico hospital. Results: Average body surface area was 77; average SCORTEN was 4.3. The 12 patients had been treated with culprit drug withdrawal, systemic corticosteroids, and/or cyclosporine A with no response. The protocol was successfully administered in all 12 cases. Average time to response from protocol start was 4.9 days. Average time to remission from protocol start was 22 days; average hospital stay at our unit was 24.8 days. Four patients developed severe complications; 1 patient died. No complications linked to the protocol therapeutic measures were observed. The relatively small number of cases given the rarity of the condition is a limitation of this report. Conclusion: Our protocol based on the association of cyclosporine A and plasmapheresis is safe and efficacious in treating severe TEN. PMID:28280663

  5. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelluccio, Gustavo M.; McDowell, David L.

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Finally, volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  6. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE PAGES

    Castelluccio, Gustavo M.; McDowell, David L.

    2015-05-22

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Finally, volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  7. Assessing the clinical effectiveness of an algorithmic approach for mucosal lichen planus (MLP): A retrospective review.

    PubMed

    Ashack, Kurt A; Haley, Laura L; Luther, Chelsea A; Riemer, Christie A; Ashack, Richard J

    2016-06-01

Mucosal lichen planus (MLP) is a therapeutic challenge in need of a new treatment approach because of its debilitating effect on patients' quality of life. We sought to evaluate a standardized treatment plan for patients with MLP. A second objective was to describe the effect of mycophenolate mofetil in this patient population. The study retrospectively analyzed 53 patients with MLP treated using a standardized algorithm. The number of MLP lesions, disease activity, and pain at the last visit were compared with baseline scores determined at the initial visit. Results were analyzed using the paired samples t test and confirmed with the Wilcoxon matched pairs signed rank test. The average number of lesions was reduced from 3.77 to 1.67 (P < .001). The average disease activity score was reduced from 2.73 to 0.90 (P < .001). Average reported pain decreased from 2.03 to 1.03 (P < .001). This study was a retrospective analysis of a small patient population, and no universal symptom severity scale was in use at the time of treatment for some patients. The standardized treatment plan reduced symptoms for patients with MLP. Mycophenolate mofetil appears to be a reasonable treatment option for these patients. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  8. Soliton concepts and protein structure.

    PubMed

    Krokhotin, Andrei; Niemi, Antti J; Peng, Xubiao

    2012-03-01

    Structural classification shows that the number of different protein folds is surprisingly small. It also appears that proteins are built in a modular fashion from a relatively small number of components. Here we propose that the modular building blocks are made of the dark soliton solution of a generalized discrete nonlinear Schrödinger equation. We find that practically all protein loops can be obtained simply by scaling the size and by joining together a number of copies of the soliton, one after another. The soliton has only two loop-specific parameters, and we compute their statistical distribution in the Protein Data Bank (PDB). We explicitly construct a collection of 200 sets of parameters, each determining a soliton profile that describes a different short loop. The ensuing profiles cover practically all those proteins in PDB that have a resolution which is better than 2.0 Å, with a precision such that the average root-mean-square distance between the loop and its soliton is less than the experimental B-factor fluctuation distance. We also present two examples that describe how the loop library can be employed both to model and to analyze folded proteins.
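For illustration only, the continuum dark-soliton profile u(x) = tanh((x - c)/w), with two parameters (centre c and width w), can stand in schematically for the two-parameter loop profiles described above. The paper's actual building block is a soliton of a generalized discrete nonlinear Schrödinger equation, which this sketch does not reproduce.

```python
import numpy as np

# Schematic continuum dark soliton with two free parameters, loosely
# mirroring the two loop-specific parameters mentioned in the abstract.
def dark_soliton(x, c, w):
    return np.tanh((x - c) / w)

x = np.linspace(-10, 10, 401)
profile = dark_soliton(x, c=0.0, w=2.0)

# Loops are described by scaling the size and joining copies of the
# soliton one after another; here, three copies with different widths.
chain = np.concatenate([dark_soliton(x, 0.0, w) for w in (1.0, 2.0, 4.0)])
print(profile[200], chain.shape)
```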

  9. Number and Impact of Published Scholarly Works by Pharmacy Practice Faculty Members at Accredited US Colleges and Schools of Pharmacy (2001-2003)

    PubMed Central

    Coleman, Craig I.; Schlesselman, Lauren S.; Lao, Eang

    2007-01-01

Objective To evaluate the quantity and quality of literature published by pharmacy practice faculty members at US colleges and schools of pharmacy for the years 2001-2003. Methods The Web of Science bibliographic database was used to identify publication citations for the years 2001-2003, which were then evaluated in a number of different ways. Faculty members were identified using American Association of Colleges of Pharmacy rosters for the 2000-2001, 2001-2002, and 2002-2003 academic years. Results Two thousand three hundred seventy-four pharmacy practice faculty members generated 1,896 publications in Web of Science searchable journals. A small number of faculty members (2.1%) were responsible for a large proportion of publications (30.6%), and only 4.9% of faculty members published 2 or more publications in these journals per year. The average impact factor for the top 200 publications was 7.6. Conclusion Pharmacy practice faculty members contributed substantially to the biomedical literature and their work has had an important impact. A substantial portion of this work has come from a small subset of faculty members. PMID:17619644

  10. Physical characterization of aerosol particles during the Chinese New Year’s firework events

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Wang, Xuemei; Chen, Jianmin; Cheng, Tiantao; Wang, Tao; Yang, Xin; Gong, Youguo; Geng, Fuhai; Chen, Changhong

    2010-12-01

Measurements of particles from 10 nm to 10 μm were taken using a Wide-range Particle Spectrometer during the Chinese New Year (CNY) celebrations in 2009 in Shanghai, China. These celebrations provided an opportunity to study the number concentration and size distribution of particles during an unusual atmospheric pollution episode caused by firework displays. The firework activities made a clear contribution to the number concentration of small accumulation mode particles (100-500 nm) and to PM₁ mass concentration, with a maximum total number concentration of 3.8 × 10⁴ cm⁻³. A clear shift of particles from the nucleation and Aitken modes to the small accumulation mode was observed at the peak of the CNY firework event, which can be explained by the reduced atmospheric lifetimes of smaller particles via the concept of the coagulation sink. A high particle density (2.7 g cm⁻³) was identified as being particularly characteristic of the firework aerosols. Recalculated fine-particle (PM₁) mass concentrations averaged above 150 μg m⁻³ for more than 12 hours, which posed a health risk to susceptible individuals. Integral physical parameters of the firework aerosols were calculated to aid understanding of their physical properties and future model simulations.
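A total number concentration like the one reported above is obtained by integrating the measured size distribution over the spectrometer's bins; a minimal sketch with invented bin edges and dN/dlogDp values (not the study's data) is:

```python
import numpy as np

# Hypothetical spectrometer channels: bin edges in nm, and measured
# dN/dlogDp (cm^-3) per bin during the firework peak.
edges = np.array([10., 25., 50., 100., 250., 500., 1000.])
dNdlogDp = np.array([2.0e3, 5.0e3, 1.5e4, 4.0e4, 2.5e4, 6.0e3])

# Number concentration per bin: N_i = (dN/dlogDp)_i * Delta(log10 Dp)_i
dlogDp = np.diff(np.log10(edges))
N_bins = dNdlogDp * dlogDp

N_total = N_bins.sum()

# Fraction in the small accumulation mode (100-500 nm), where the
# firework contribution was concentrated.
accum = (edges[:-1] >= 100.) & (edges[1:] <= 500.)
frac_accum = N_bins[accum].sum() / N_total
print(f"N_total = {N_total:.3g} cm^-3, accumulation-mode fraction = {frac_accum:.2f}")
```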

  11. [Prediction and spatial distribution of recruitment trees of natural secondary forest based on geographically weighted Poisson model].

    PubMed

    Zhang, Ling Yu; Liu, Zhao Gang

    2017-12-01

Based on data collected from 108 permanent plots of the forest resources survey in Maoershan Experimental Forest Farm during 2004-2016, this study investigated the spatial distribution of recruitment trees in natural secondary forest using global Poisson regression and geographically weighted Poisson regression (GWPR) with four bandwidths of 2.5, 5, 10 and 15 km. The simulation effects of the five regressions and the factors influencing recruitment trees in stands were analyzed, and the spatial autocorrelation of the regression residuals was described at global and local levels using Moran's I. The results showed that the spatial distribution of the number of recruitment trees in natural secondary forest was significantly influenced by stand and topographic factors, especially average DBH. The GWPR model at the small scale (2.5 km) had high model-fitting accuracy, generated a large range of model parameter estimates, and captured the localized spatial distribution effect of the model parameters. The GWPR models at small scales (2.5 and 5 km) produced a small range of model residuals, and the stability of the model was improved. The global spatial autocorrelation of the GWPR model residuals at the small scale (2.5 km) was the lowest, and the local spatial autocorrelation was significantly reduced, forming an ideal spatial distribution pattern of small clusters with different observations. The local model at the small scale (2.5 km) was much better than the global model in simulating the spatial distribution of recruitment tree numbers.
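A GWPR fit weights each plot by a kernel of the chosen bandwidth; in the intercept-only case the local Poisson estimate reduces to a kernel-weighted mean count. The sketch below uses simulated plot locations and counts (hypothetical stand-ins, not the Maoershan survey data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plots: coordinates (km) and recruitment-tree counts.
xy = rng.uniform(0, 30, size=(108, 2))
counts = rng.poisson(lam=3.0 + 0.1 * xy[:, 0], size=108)

def local_poisson_mean(point, xy, counts, bandwidth_km):
    """Intercept-only GWPR estimate at `point`: the kernel-weighted mean
    count, with Gaussian weights w_i = exp(-0.5 * (d_i / b)^2)."""
    d = np.linalg.norm(xy - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth_km) ** 2)
    return np.sum(w * counts) / np.sum(w)

# Smaller bandwidths give more localized (and more variable) estimates;
# as the bandwidth grows, the estimate approaches the global mean.
pt = np.array([5.0, 15.0])
for b in (2.5, 5.0, 10.0, 15.0):
    print(b, local_poisson_mean(pt, xy, counts, b))
```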

  12. Analysis of Modeling Assumptions used in Production Cost Models for Renewable Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoll, Brady; Brinkman, Gregory; Townsend, Aaron

    2016-01-01

Renewable energy integration studies have been published for many different regions exploring the question of how higher penetration of renewable energy will impact the electric grid. These studies each make assumptions about the systems they are analyzing; however, the effect of many of these assumptions has not yet been examined and published. In this paper we analyze the impact of modeling assumptions in renewable integration studies, including the optimization method used (linear or mixed-integer programming) and the temporal resolution of the dispatch stage (hourly or sub-hourly). We analyze each of these assumptions on a large and a small system and determine the impact of each assumption on key metrics including the total production cost, curtailment of renewables, CO2 emissions, and generator starts and ramps. Additionally, we identified the impact on these metrics of including a four-hour-ahead commitment step before the dispatch step and of retiring generators to reduce the degree to which the system is overbuilt. We find that the largest effect of these assumptions is at the unit level on starts and ramps, particularly for the temporal resolution, and we saw a smaller impact at the aggregate level on system costs and emissions. For each fossil fuel generator type we measured the average capacity started, average run-time per start, and average number of ramps. Linear programming results saw up to a 20% difference in number of starts and average run time of traditional generators, and up to a 4% difference in the number of ramps, when compared to mixed-integer programming. Utilizing hourly dispatch instead of sub-hourly dispatch saw no difference in either start metric for coal or gas CC units, while gas CT units had a 5% increase in the number of starts and a 2% increase in the average on-time per start. The number of ramps decreased by up to 44%. The smallest effect seen was on the CO2 emissions and total production cost, with a 0.8% and 0.9% reduction, respectively, when using linear programming compared to mixed-integer programming, and a 0.07% and 0.6% reduction, respectively, for hourly dispatch compared to sub-hourly dispatch.

  13. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) General requirements. The average full-load efficiency of each basic model of small electric motor must be... this section, provided, however, that an AEDM may be used to determine the average full-load efficiency of one or more of a manufacturer's basic models only if the average full-load efficiency of at least...

  14. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) General requirements. The average full-load efficiency of each basic model of small electric motor must be... this section, provided, however, that an AEDM may be used to determine the average full-load efficiency of one or more of a manufacturer's basic models only if the average full-load efficiency of at least...

  15. Physical Models of Layered Polar Firn Brightness Temperatures from 0.5 to 2 GHz

    NASA Technical Reports Server (NTRS)

    Tan, Shurun; Aksoy, Mustafa; Brogioni, Marco; Macelloni, Giovanni; Durand, Michael; Jezek, Kenneth C.; Wang, Tian-Lin; Tsang, Leung; Johnson, Joel T.; Drinkwater, Mark R.; hide

    2015-01-01

    We investigate physical effects influencing 0.5-2 GHz brightness temperatures of layered polar firn to support the Ultra Wide Band Software Defined Radiometer (UWBRAD) experiment to be conducted in Greenland and in Antarctica. We find that because ice particle grain sizes are very small compared to the 0.5-2 GHz wavelengths, volume scattering effects are small. Variations in firn density over cm- to m-length scales, however, cause significant effects. Both incoherent and coherent models are used to examine these effects. Incoherent models include a 'cloud model' that neglects any reflections internal to the ice sheet, and the DMRT-ML and MEMLS radiative transfer codes that are publicly available. The coherent model is based on the layered medium implementation of the fluctuation dissipation theorem for thermal microwave radiation from a medium having a nonuniform temperature. Density profiles are modeled using a stochastic approach, and model predictions are averaged over a large number of realizations to take into account an averaging over the radiometer footprint. Density profiles are described by combining a smooth average density profile with a spatially correlated random process to model density fluctuations. It is shown that coherent model results after ensemble averaging depend on the correlation lengths of the vertical density fluctuations. If the correlation length is moderate or long compared with the wavelength (approximately 0.6x longer or greater for Gaussian correlation function without regard for layer thinning due to compaction), coherent and incoherent model results are similar (within approximately 1 K). However, when the correlation length is short compared to the wavelength, coherent model results are significantly different from the incoherent model by several tens of kelvins. For a 10-cm correlation length, the differences are significant between 0.5 and 1.1 GHz, and less for 1.1-2 GHz. 
Model results are shown to be able to match the v-pol SMOS data closely and predict the h-pol data for small observation angles.
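The density-profile construction described above (a smooth average profile plus spatially correlated fluctuations) can be sketched by convolving white noise with a Gaussian kernel of the chosen correlation length. The mean-profile fit and fluctuation amplitude below are hypothetical, not the values used in the UWBRAD modeling:

```python
import numpy as np

rng = np.random.default_rng(3)

# Depth grid (m) and a smooth average firn density profile (kg m^-3);
# the exponential compaction fit here is a hypothetical stand-in.
z = np.linspace(0.0, 100.0, 2001)
dz = z[1] - z[0]
rho_mean = 917.0 - (917.0 - 350.0) * np.exp(-z / 30.0)

# Spatially correlated fluctuations: white noise convolved with a
# Gaussian kernel, then rescaled to a chosen amplitude. The kernel
# width corr_len/2 gives a Gaussian correlation function with length
# scale corr_len (up to the convention used for the correlation form).
corr_len = 0.10   # 10-cm correlation length, as discussed in the abstract
sigma_rho = 30.0  # hypothetical fluctuation amplitude (kg m^-3)

kernel_z = np.arange(-4 * corr_len, 4 * corr_len + dz, dz)
kernel = np.exp(-kernel_z ** 2 / (2 * (corr_len / 2) ** 2))
noise = np.convolve(rng.normal(size=z.size), kernel, mode="same")
fluct = sigma_rho * noise / noise.std()

rho = rho_mean + fluct
print(rho.mean(), rho.std())
```

An ensemble of such realizations, each fed through a coherent or incoherent emission model, is the kind of footprint averaging the abstract describes.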

  16. Change in genetic size of small-closed populations: Lessons from a domestic mammal population.

    PubMed

    Ghafouri-Kesbi, Farhad

    2010-10-01

The aim of this study was to monitor changes in the genetic size of a small closed population of Iranian Zandi sheep, using pedigree information from animals born between 1991 and 2005. Genetic size was assessed using measures based on the probability of identity-by-descent of genes (coancestry, f, and effective population size, N(e)), as well as measures based on the probability of gene origin (effective number of founders, f(e), effective number of founder genomes, f(g), and effective number of non-founder genomes, f(ne)). Average coancestry, or the degree of genetic similarity of individuals, increased from 0.81% to 1.44% during the period 1993 to 2005, while N(e) decreased from 263 to 93. The observed trend for f(e) was irregular throughout the study period, with f(e) equal to 68, 87, 77, 92, and 80 in 1993, 1996, 1999, 2002, and 2005, respectively. Simultaneously, f(g), the most informative effective number, decreased from 61 to 35. The index of genetic diversity (GD), obtained from estimates of f(g), decreased by about 2% over the period studied. In addition, a noticeable reduction was observed in the estimates of f(ne), from 595 in 1993 to 61 in 2005. The higher-than-1 ratio of f(e) to f(g) indicated the presence of bottlenecks and genetic drift in the development of this population of Zandi sheep. From 1993 to 1999, f(ne) was much higher than f(e), indicating that, with respect to loss of genetic diversity, the unequal contribution of founders was more important than random genetic drift in non-founder generations. Subsequently, random genetic drift in non-founder generations became the major reason for f(e) > f(ne). The minimization of average coancestry in new reproductive individuals was recommended as a means of protecting the population against further loss of genetic diversity.
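Two of the measures above have simple standard closed forms: the effective number of founders from founder contributions, and the effective population size from the per-generation rate of inbreeding. A minimal sketch with hypothetical founder contributions and a hypothetical inbreeding rate (chosen so that N(e) lands near the reported 93) is:

```python
# Pedigree-based genetic size measures, a minimal sketch with hypothetical
# numbers (not the Zandi sheep pedigree data).

# Expected genetic contributions q_i of founders to the current cohort
# (must sum to 1). Unequal contributions reduce the effective number
# of founders, f_e = 1 / sum(q_i^2).
q = [0.30, 0.25, 0.20, 0.10, 0.05, 0.05, 0.03, 0.02]
f_e = 1.0 / sum(qi * qi for qi in q)

# Effective population size from the rate of inbreeding per generation,
# N_e = 1 / (2 * delta_F); delta_F here is a hypothetical value.
delta_F = 0.0054
N_e = 1.0 / (2.0 * delta_F)

print(f"f_e = {f_e:.1f}, N_e = {N_e:.0f}")
```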

  17. Minimizing the average distance to a closest leaf in a phylogenetic tree.

    PubMed

    Matsen, Frederick A; Gallagher, Aaron; McCoy, Connor O

    2013-11-01

    When performing an analysis on a collection of molecular sequences, it can be convenient to reduce the number of sequences under consideration while maintaining some characteristic of a larger collection of sequences. For example, one may wish to select a subset of high-quality sequences that represent the diversity of a larger collection of sequences. One may also wish to specialize a large database of characterized "reference sequences" to a smaller subset that is as close as possible on average to a collection of "query sequences" of interest. Such a representative subset can be useful whenever one wishes to find a set of reference sequences that is appropriate to use for comparative analysis of environmentally derived sequences, such as for selecting "reference tree" sequences for phylogenetic placement of metagenomic reads. In this article, we formalize these problems in terms of the minimization of the Average Distance to the Closest Leaf (ADCL) and investigate algorithms to perform the relevant minimization. We show that the greedy algorithm is not effective, show that a variant of the Partitioning Around Medoids (PAM) heuristic gets stuck in local minima, and develop an exact dynamic programming approach. Using this exact program we note that the performance of PAM appears to be good for simulated trees, and is faster than the exact algorithm for small trees. On the other hand, the exact program gives solutions for all numbers of leaves less than or equal to the given desired number of leaves, whereas PAM only gives a solution for the prespecified number of leaves. Via application to real data, we show that the ADCL criterion chooses chimeric sequences less often than random subsets, whereas the maximization of phylogenetic diversity chooses them more often than random. These algorithms have been implemented in publicly available software.
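The ADCL criterion above can be computed directly from a patristic distance matrix, and for small trees the exact minimum can be found by brute force over all subsets (a stand-in for the paper's dynamic-programming approach); the distance matrix below is invented:

```python
import itertools

def adcl(dist, subset):
    """Average, over all leaves, of the distance to the closest leaf
    in `subset` (distance is 0 for leaves in the subset itself)."""
    leaves = range(len(dist))
    return sum(min(dist[i][j] for j in subset) for i in leaves) / len(dist)

# Hypothetical patristic distance matrix for 5 leaves of a small tree.
dist = [
    [0, 2, 6, 7, 9],
    [2, 0, 6, 7, 9],
    [6, 6, 0, 3, 7],
    [7, 7, 3, 0, 6],
    [9, 9, 7, 6, 0],
]

# Exact minimization by exhaustive search, feasible only for small trees.
k = 2
best = min(itertools.combinations(range(5), k), key=lambda s: adcl(dist, s))
print(best, adcl(dist, best))
```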

  18. Brain scaling in mammalian evolution as a consequence of concerted and mosaic changes in numbers of neurons and average neuronal cell size

    PubMed Central

    Herculano-Houzel, Suzana; Manger, Paul R.; Kaas, Jon H.

    2014-01-01

    Enough species have now been subject to systematic quantitative analysis of the relationship between the morphology and cellular composition of their brain that patterns begin to emerge and shed light on the evolutionary path that led to mammalian brain diversity. Based on an analysis of the shared and clade-specific characteristics of 41 modern mammalian species in 6 clades, and in light of the phylogenetic relationships among them, here we propose that ancestral mammal brains were composed and scaled in their cellular composition like modern afrotherian and glire brains: with an addition of neurons that is accompanied by a decrease in neuronal density and very little modification in glial cell density, implying a significant increase in average neuronal cell size in larger brains, and the allocation of approximately 2 neurons in the cerebral cortex and 8 neurons in the cerebellum for every neuron allocated to the rest of brain. We also propose that in some clades the scaling of different brain structures has diverged away from the common ancestral layout through clade-specific (or clade-defining) changes in how average neuronal cell mass relates to numbers of neurons in each structure, and how numbers of neurons are differentially allocated to each structure relative to the number of neurons in the rest of brain. Thus, the evolutionary expansion of mammalian brains has involved both concerted and mosaic patterns of scaling across structures. This is, to our knowledge, the first mechanistic model that explains the generation of brains large and small in mammalian evolution, and it opens up new horizons for seeking the cellular pathways and genes involved in brain evolution. PMID:25157220

  19. Signal-averaged P wave in patients with paroxysmal atrial fibrillation.

    PubMed

    Rosenheck, S

    1997-10-01

The theoretical and experimental rationale for the atrial signal-averaged ECG in patients with AF is delay in intra-atrial and interatrial conduction. Similar to the ventricular signal-averaged ECG, the atrial signal-averaged ECG averages a high number of consecutive P waves that match a previously created template. P wave triggering is preferred over QRS triggering because of more accurate alignment. However, the small amplitude of the atrial ECG and its gradual rise from the isoelectric line may create difficulties in defining the start point when P wave triggering is used. Studies using P wave triggering and those using QRS triggering both demonstrate a prolonged P wave duration in patients with paroxysmal AF. The negative predictive value of this test is relatively high, at 60%-80%. The positive predictive value of the atrial signal-averaged ECG in predicting the risk of AF is considerably lower than the negative predictive value. All the data accumulated prospectively on the predictive value of P wave signal averaging were determined only in patients undergoing coronary bypass surgery or following MI; its value in other patients with paroxysmal AF has not yet been determined. The clinical role of frequency-domain analysis (alone or added to time-domain analysis) also remains undefined. Because of this limited knowledge of its predictive value, P wave signal averaging has not yet entered clinical practice, and further research is needed before the atrial signal-averaged ECG becomes part of clinical testing.
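The averaging step described above works because uncorrelated noise cancels as 1/sqrt(N) across aligned beats while the repeating P wave does not; a minimal sketch with a synthetic P-wave template and Gaussian noise (all amplitudes hypothetical) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical small-amplitude P-wave template (arbitrary units) and
# noisy beats, assumed already aligned by template-matched triggering.
t = np.linspace(0, 1, 200)
template = 0.1 * np.exp(-((t - 0.5) ** 2) / (2 * 0.1 ** 2))
noise_sd = 0.05

n_beats = 400
beats = template + rng.normal(0.0, noise_sd, size=(n_beats, t.size))

# Signal averaging: noise falls roughly as 1/sqrt(N), so the averaged
# beat approaches the template.
averaged = beats.mean(axis=0)

rms_err_single = np.sqrt(np.mean((beats[0] - template) ** 2))
rms_err_avg = np.sqrt(np.mean((averaged - template) ** 2))
print(rms_err_single, rms_err_avg)
```

In practice the alignment itself is the hard part, which is why the abstract stresses the choice between P wave and QRS triggering.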

  20. Generalized banana-drift transport

    NASA Astrophysics Data System (ADS)

    Mynick, H. E.

    1985-10-01

The theory of tokamak ripple transport in the banana-drift and ripple-plateau regimes is extended in a number of directions. The theory is valid for small values of the toroidal periodicity number n of the perturbation, as well as for the moderate values (n ≈ 10-20) previously assumed. It is shown that low-n perturbations can produce much greater transport than the larger-n perturbations usually studied. In addition, the ripple perturbation is allowed arbitrary values of poloidal mode number m and frequency ω, making the theory applicable to transport induced by MHD modes. Bounce averaging is avoided, so the theory includes the contributions to transport from all harmonics of the bounce frequency, providing a continuous description of the transition from the banana-drift to the ripple-plateau regime. The implications of the theory for toroidal rotation in tokamaks are considered.

  1. Vortex tubes in turbulence velocity fields at Reynolds numbers Re lambda approximately equal to 300-1300.

    PubMed

    Mouri, Hideaki; Hori, Akihiro; Kawashima, Yoshihide

    2004-12-01

The most elementary structures of turbulence, i.e., vortex tubes, are studied using velocity data obtained in a laboratory experiment for boundary layers with Reynolds numbers Re(λ) = 295-1258. We conduct conditional averaging for enhancements of a small-scale velocity increment and obtain the typical velocity profile for vortex tubes. Their radii are of the order of the Kolmogorov length. Their circulation velocities are of the order of the root-mean-square velocity fluctuation. We also obtain the distribution of the interval between successive enhancements of the velocity increment as a measure of the spatial distribution of vortex tubes. They tend to cluster together below about the integral length and more significantly below about the Taylor microscale. These properties are independent of the Reynolds number and are hence expected to be universal.
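Conditional averaging on enhancements of a small-scale velocity increment, as used above, can be sketched on a synthetic signal: detect where |Δu| exceeds a threshold, align windows on those events (flipping sign for negative increments), and average. The signal, event shape, threshold, and window sizes below are invented, not the experimental procedure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 1-D velocity signal: smoothed broadband fluctuations plus
# a few localized tube-like (Burgers-vortex-style) swirl profiles.
n = 20000
u = rng.normal(0.0, 1.0, n)
u = np.convolve(u, np.ones(20) / 20, mode="same")  # mimic turbulent smoothness
for center in (3000, 9000, 15000):
    x = np.arange(n) - center
    u += 10.0 * (x / 50.0) * np.exp(-(x / 50.0) ** 2)

# Small-scale velocity increment over separation r.
r = 10
du = u[r:] - u[:-r]

# Conditional averaging: windows of u centred on enhancements of |du|
# above a threshold, aligned in sign, then averaged.
thresh = 5.0 * du.std()
half = 200
events = [i for i in np.flatnonzero(np.abs(du) > thresh)
          if half <= i < n - r - half]
profile = np.mean([u[i - half:i + half] * np.sign(du[i]) for i in events], axis=0)
print(len(events), profile.shape)
```

The averaged window recovers the typical event profile while incoherent background fluctuations cancel, which is the essence of the conditional-averaging technique.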

  2. Numerical study of dynamo action at low magnetic Prandtl numbers.

    PubMed

    Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A

    2005-04-29

We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P(M). The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model, and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well-defined structure at large scales and turbulent fluctuations at small scales. Our main findings are (i) dynamos are observed from P(M) = 1 down to P(M) = 10^-2, (ii) the critical magnetic Reynolds number increases sharply with P(M)^-1 as turbulence sets in and then saturates, and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P(M) is decreased. The dynamo then grows at large scales and modifies the turbulent velocity fluctuations.

  3. Risk assessment of aircraft noise on sleep in Montreal.

    PubMed

    Tétreault, Louis-Francois; Plante, Céline; Perron, Stéphane; Goudreau, Sophie; King, Norman; Smargiassi, Audrey

    2012-05-24

We estimated the number of awakenings, additional to spontaneous awakenings, induced by nighttime aircraft movements at an international airport in Montreal in the population residing nearby in 2009. Maximum sound levels (LAS,max) were derived from aircraft movements using the Integrated Noise Model 7.0b, on a 28 × 28 km grid centred on the airport with a 0.1 × 0.1 km resolution. Outdoor LAS,max values were converted to indoor LAS,max by reducing noise levels by 15 dB(A) or 21 dB(A). For all grid points, LAS,max values were transformed into probabilities of additional awakening using a function developed by Basner et al. (2006). These probabilities were linked to the estimated number of exposed residents at each grid location to assess the number of aircraft-noise-induced awakenings in Montreal. Using a 15 dB(A) sound attenuation, 590 persons would, on average, have one or more additional awakenings per night for the year 2009. In the scenario using a 21 dB(A) sound attenuation, on average, no one would be subjected to one or more additional awakenings per night due to aircraft noise. Using the 2009 flight patterns, our data suggest that a small number of Montreal residents are exposed to noise levels that could induce one or more awakenings per night additional to spontaneous awakenings.
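The risk-assessment pipeline above (outdoor level, minus attenuation, to indoor level, to per-event awakening probability, to expected awakenings across the exposed population) can be sketched as below. The exposure-response function, grid values, event counts, and resident counts are all hypothetical stand-ins; in particular, the linear function is not the Basner et al. (2006) model:

```python
import numpy as np

# Hypothetical stand-in exposure-response: probability of an additional
# awakening per aircraft event as a function of indoor LAS,max (dB(A)).
def p_awakening(indoor_lasmax_db):
    return np.clip(0.0065 * (indoor_lasmax_db - 33.0), 0.0, 1.0)

# Hypothetical grid cells: outdoor LAS,max, nighttime aircraft events,
# and residents per cell.
outdoor = np.array([70.0, 65.0, 60.0, 55.0])   # dB(A)
events = np.array([20, 20, 20, 20])            # movements per night
residents = np.array([500, 1500, 4000, 9000])

totals = {}
for attenuation in (15.0, 21.0):               # outdoor-to-indoor reduction
    indoor = outdoor - attenuation
    p_event = p_awakening(indoor)
    # Probability of at least one additional awakening per night,
    # assuming independent events.
    p_night = 1.0 - (1.0 - p_event) ** events
    totals[attenuation] = float(np.sum(residents * p_night))

print(totals)
```

As in the study, a larger assumed facade attenuation moves indoor levels down the exposure-response curve and shrinks the affected population.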

  4. Trends in use of the only Food and Drug Administration-approved commercially available fenestrated endovascular aneurysm repair device in the United States.

    PubMed

    Simons, Jessica P; Shue, Bing; Flahive, Julie M; Aiello, Francesco A; Steppacher, Robert C; Eaton, Elizabeth A; Messina, Louis M; Schanzer, Andres

    2017-05-01

Fenestrated endografts are customized, patient-specific, endovascular devices with potential to significantly reduce morbidity and mortality of short-neck infrarenal and juxtarenal abdominal aortic aneurysm repair. The Zenith fenestrated endovascular graft (ZFEN) for abdominal aortic aneurysms (Cook Medical, Bloomington, Ind), Food and Drug Administration-approved in 2012, remains the only fenestrated device available in the United States. This technology is among the most technically complex catheter-based procedures and, therefore, inherently associated with serious risk for device-related complications. We sought to define patterns of physician and hospital adoption of ZFEN. Deidentified datasets containing the numbers of physicians trained, orders by physicians and hospitals, and designs (fenestration/scallop configuration) were provided for U.S. ZFEN devices ordered (April 2012-August 2015). We evaluated the number of physicians trained, the number of devices ordered, hospital characteristics, and fenestration/scallop design configurations. Cook Medical assembled the datasets but played no role in study design, analysis, or interpretation of data. Between April 2012 and August 2015, 553 physicians attended formal ZFEN training sessions, 388 (70%) of whom ordered a total of 2669 devices. An increase in orders per month (nine in June 2012 and 91 in August 2015, 911% growth; P < .001) and in number of physicians ordering per month (eight in June 2012 and 62 in August 2015, 675% growth; P < .001) was observed. Teaching hospitals, representing all U.S. regions (Midwest 927, 35%; South 799, 30%; Northeast 547, 20%; West 396, 15%), accounted for 1703 (64%) ZFEN orders. Of 553 trained physicians, 165 (30%) ordered no devices, 116 (21%) ordered 1 device, 144 (26%) ordered 2-5 devices, 61 (11%) ordered 6-10 devices, 39 (7%) ordered 11-20 devices, and 28 (5%) ordered >20 devices. 
For physicians contributing >6 months of data (n = 336), the average number of devices ordered per year was three (standard deviation, 4); 272 (81%) ordered ≤ 5 devices/year, 15 (4.5%) ordered 11-20 devices/year, and 3 (0.9%) ordered >20 devices/year. Of devices with design details available (2618 of 2669; 98%), most common designs were 2 small fenestrations/1 scallop (1443; 55%), 2 small fenestrations/1 large fenestration (568; 22%), 1 small fenestration/1 scallop (173, 6.6%), and 2 small fenestrations (169; 6.5%). The average number of target vessels incorporated in each design was 2.7/device; 2071 (79%) incorporated three, 398 (15%) incorporated two. Since 2012, ZFEN has demonstrated a ninefold increase in monthly orders, with 553 physicians trained. Unlike the experience of rapid dissemination seen with infrarenal endografts, only 28 (5%) physicians have ordered >20, whereas 165 (30%) have ordered none, and 272 (81%) ordered ≤ 5 devices/year. Assuming that volume, in general, correlates with outcomes, this adoption pattern raises questions whether fenestrated technology should be regionalized to high-volume centers. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  5. Long-term patterns of body mass and stature evolution within the hominin lineage.

    PubMed

    Will, Manuel; Pablos, Adrián; Stock, Jay T

    2017-11-01

Body size is a central determinant of a species' biology and adaptive strategy, but the number of reliable estimates of hominin body mass and stature has been insufficient to determine long-term patterns and subtle interactions in these size components within our lineage. Here, we analyse 254 body mass and 204 stature estimates from a total of 311 hominin specimens dating from 4.4 Ma to the Holocene using multi-level chronological and taxonomic analytical categories. The results demonstrate complex temporal patterns of body size variation with phases of relative stasis intermitted by periods of rapid increases. The observed trajectories could result from punctuated increases at speciation events, but also differential proliferation of large-bodied taxa or the extinction of small-bodied populations. Combined taxonomic and temporal analyses show that in relation to australopithecines, early Homo is characterized by significantly larger average body mass and stature but retains considerable diversity, including small body sizes. Within later Homo, stature and body mass evolution follow different trajectories: average modern stature is maintained from ca 1.6 Ma, while consistently higher body masses are not established until the Middle Pleistocene at ca 0.5-0.4 Ma, likely caused by directional selection related to colonizing higher latitudes. Selection against small-bodied individuals (less than 40 kg; less than 140 cm) after 1.4 Ma is associated with a decrease in relative size variability in later Homo species compared with earlier Homo and australopithecines. The isolated small-bodied individuals of Homo naledi (ca 0.3 Ma) and Homo floresiensis (ca 100-60 ka) constitute important exceptions to these general patterns, adding further layers of complexity to the evolution of body size within the genus Homo.
At the end of the Late Pleistocene and Holocene, body size in Homo sapiens declines on average, but also extends to lower limits not seen in comparable frequency since early Homo.

  6. Long-term patterns of body mass and stature evolution within the hominin lineage

    PubMed Central

    Pablos, Adrián; Stock, Jay T.

    2017-01-01

Body size is a central determinant of a species' biology and adaptive strategy, but the number of reliable estimates of hominin body mass and stature has been insufficient to determine long-term patterns and subtle interactions in these size components within our lineage. Here, we analyse 254 body mass and 204 stature estimates from a total of 311 hominin specimens dating from 4.4 Ma to the Holocene using multi-level chronological and taxonomic analytical categories. The results demonstrate complex temporal patterns of body size variation with phases of relative stasis intermitted by periods of rapid increases. The observed trajectories could result from punctuated increases at speciation events, but also differential proliferation of large-bodied taxa or the extinction of small-bodied populations. Combined taxonomic and temporal analyses show that in relation to australopithecines, early Homo is characterized by significantly larger average body mass and stature but retains considerable diversity, including small body sizes. Within later Homo, stature and body mass evolution follow different trajectories: average modern stature is maintained from ca 1.6 Ma, while consistently higher body masses are not established until the Middle Pleistocene at ca 0.5–0.4 Ma, likely caused by directional selection related to colonizing higher latitudes. Selection against small-bodied individuals (less than 40 kg; less than 140 cm) after 1.4 Ma is associated with a decrease in relative size variability in later Homo species compared with earlier Homo and australopithecines. The isolated small-bodied individuals of Homo naledi (ca 0.3 Ma) and Homo floresiensis (ca 100–60 ka) constitute important exceptions to these general patterns, adding further layers of complexity to the evolution of body size within the genus Homo. 
At the end of the Late Pleistocene and Holocene, body size in Homo sapiens declines on average, but also extends to lower limits not seen in comparable frequency since early Homo. PMID:29291118

  7. Effect of ethanol-gasoline blends on small engine generator energy efficiency and exhaust emission.

    PubMed

    Lin, Wen-Yinn; Chang, Yuan-Yi; Hsieh, You-Ru

    2010-02-01

This study focused on the fuel energy efficiency and pollutant emissions of different ratios of ethanol-gasoline blended fuels (E0, E3, E6, and E9) under different loadings. The experimental system consisted of a small engine generator, a particulate matter measurement system, and an exhaust gas analyzer system. Unleaded gasoline and the ethanol-gasoline blends (E0, E3, E6, and E9) were used to study their effects on exhaust gas emissions and on the thermal efficiency of the small engine generator. The results suggested that particle number concentration increased as the engine loading increased; however, it decreased as the ethanol content in the blend increased. While using E6 as fuel, the carbon monoxide (CO) concentration was lower than with the other fuels (E0, E3, and E9) at each engine loading. The average CO concentration reductions achieved by E3, E6, and E9 were 42, 86, and 83%, respectively. Using an ethanol-gasoline blend led to a significant reduction in exhaust emissions: approximately 78.7, 97.5, and 89.46% of the mean hydrocarbon (HC) values with E3, E6, and E9 fuels, respectively, across all engine loadings, and approximately 35, 86, and 77% of the mean nitrogen oxide (NOx) values with E3, E6, and E9 fuels, respectively, at each engine loading. The E6 fuel gave the best results for exhaust emissions, and the E9 fuel gave the best results for particle emissions and engine performance. The thermal efficiency of the small engine generator increased as the ethanol content in the blend increased and as the engine loading increased.
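The reported reductions are loading-averaged percentage drops relative to the E0 baseline. A minimal sketch of that arithmetic, using invented CO readings rather than the study's data:

```python
def mean_percent_reduction(baseline, blend):
    """Mean percentage reduction of blend emissions relative to the
    baseline fuel, averaged over engine loadings."""
    drops = [100.0 * (b0 - b) / b0 for b0, b in zip(baseline, blend)]
    return sum(drops) / len(drops)

# Hypothetical CO readings (ppm) at four engine loadings for E0 vs. E6:
e0 = [400.0, 500.0, 600.0, 700.0]
e6 = [60.0, 70.0, 80.0, 100.0]
print(round(mean_percent_reduction(e0, e6), 1))
```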

  8. Small gravitationally elicited voltage transients in pea stems

    NASA Technical Reports Server (NTRS)

    Pickard, B. G.

    1984-01-01

Decapitated vertical stem segments were observed for 45 min, noting the number of voltage transients (μV scale) in each 5-min interval. Results for two sets of stem segments are plotted. The average frequencies were 0.72 ± 0.05 and 0.66 ± 0.04 per 5 min. One set of segments was swiveled gently to the horizontal position; recording then continued for another 90 min. Transients continued in the vertical controls at a closely similar average rate of 0.78 ± 0.04 per 5 min. However, after the first 5-min interval the horizontally placed plants exhibited transients at an increased average rate of 1.08 ± 0.04 per 5 min. The frequency of transients also increased following horizontal placement of intact shoots. However, it appears that this increase had two components: one due to gravity reception, and one due to the redistribution of indoleacetic acid (IAA) believed to mediate gravitropic curvature (or, more specifically, to the increase of IAA in the lower tissue).

  9. Effects of parental number and duration of the breeding period on the effective population size and genetic diversity of a captive population of the endangered Tokyo bitterling Tanakia tanago (Teleostei: Cyprinidae).

    PubMed

    Kubota, Hitoshi; Watanabe, Katsutoshi

    2012-01-01

The maintenance of genetic diversity is one of the chief concerns in the captive breeding of endangered species. Using microsatellite and mtDNA markers, we examined the effects of two key variables (parental number and duration of breeding period) on effective population size (N(e) ) and genetic diversity of offspring in an experimental breeding program for the endangered Tokyo bitterling, Tanakia tanago. Average heterozygosity and number of alleles of offspring estimated from microsatellite data increased with parental number in a breeding aquarium, and exhibited higher values for a long breeding period treatment (9 weeks) compared with a short breeding period (3 weeks). Haplotype diversity in mtDNA of offspring decreased with the reduction in parental number, and this tendency was greater for the short breeding period treatment. Genetic estimates of N(e) obtained with two single-sample estimation methods were consistently higher for the long breeding period treatment with the same number of parental fish. Average N(e)/N ratios ranged from 0.5 to 1.4, and were especially high in the long breeding period treatments with small and medium parental numbers. Our results suggest that the spawning intervals of females and alternative mating behaviors of males influence the effective size and genetic diversity of offspring in bitterling. To maintain the genetic diversity of captive T. tanago, we recommend that captive breeding programs be conducted for a sufficiently long period with an optimal level of parental density, as well as using an adequate number of parents. © 2011 Wiley Periodicals, Inc.
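The practical meaning of N(e) can be made concrete with the standard drift expectation H_t = H_0 (1 - 1/(2N_e))^t for heterozygosity retained after t generations. The census size and N_e/N ratios below are hypothetical illustrations, not the study's estimates:

```python
def expected_heterozygosity(h0, ne, generations):
    """Standard drift expectation: H_t = H_0 * (1 - 1/(2*Ne))**t."""
    return h0 * (1.0 - 1.0 / (2.0 * ne)) ** generations

# Hypothetical: 20 parental fish, Ne/N ratios spanning the reported
# 0.5-1.4 range, heterozygosity tracked over 10 generations.
for ne_over_n in (0.5, 1.0, 1.4):
    ne = 20 * ne_over_n
    print(ne_over_n, round(expected_heterozygosity(0.75, ne, 10), 3))
```

Higher N_e/N ratios retain measurably more heterozygosity over the same number of generations, which is why the long-breeding-period treatments are favourable.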

  10. Average discharge, perennial flow initiation, and channel initiation - small southern Appalachian basins

    Treesearch

    B. Lane Rivenbark; C. Rhett Jackson

    2004-01-01

Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...

  11. Averaging processes in granular flows driven by gravity

    NASA Astrophysics Data System (ADS)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the most promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. 
This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental results, we aim to define the scales governing the diffusive phenomenon, introducing the diffusive terms following the Boussinesq model. The diffusive coefficient will be determined experimentally; it is expected to be proportional to the square root of the granular temperature θ and to the particle diameter d or, alternatively, the flow height h. REFERENCES [1] Chapman S., Cowling T.G., 1971. Cambridge University Press, Cambridge, England. [2] Jenkins J.T., Savage S.B., 1983. J. Fluid Mech., 130: 187-202. [3] Savage S.B., 1984. J. Fluid Mech., 24: 289-366. [4] Drew D.A., 1983. Annu. Rev. Fluid Mech., 15: 261-291. [5] Goldhirsch I., 2003. Annu. Rev. Fluid Mech., 35: 267-293. [6] Goldhirsch I., 2008. Powder Technology, 182: 130-136. [7] Hsu T.J., Jenkins J.T., Liu P.L., 2004. Proc. Royal Soc.
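The distinction between the phasic and mass-weighted (Favre) ensemble averages can be sketched numerically: when the number of grains n in the control volume varies across realizations, the two averages of a grain property q no longer coincide. The (n, q) pairs below are invented for illustration:

```python
# Phasic vs. mass-weighted (Favre) ensemble averages of a grain property q
# over several realizations; n = number of grains in the control volume.
# (n, q) pairs are invented for illustration.
realizations = [(4, 1.0), (2, 2.0), (6, 1.5)]

# Phasic-style average: each realization weighted equally.
phasic = sum(q for _, q in realizations) / len(realizations)
# Mass-weighted average: realizations weighted by their grain count n.
mass_weighted = (sum(n * q for n, q in realizations)
                 / sum(n for n, _ in realizations))
print(phasic, mass_weighted)  # the two averages differ because n varies
```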

  12. Estimated metabolic and mechanical demands during different small-sided games in elite soccer players.

    PubMed

    Gaudino, Paolo; Alberti, Giampietro; Iaia, F Marcello

    2014-08-01

The present study examined the extent to which game format (possession play, SSG-P, and game with regular goals and goalkeepers, SSG-G) and the number of players (5, 7 and 10 a-side) influence the physical demands of small-sided soccer games (SSGs) in elite soccer players. Training data were collected during the in-season period from 26 English Premier League outfield players using global positioning system technology. Total distance covered, distance at different speed categories and maximal speed were calculated. In addition, we focused on changes in velocity by reporting the number of accelerations and decelerations carried out during the SSGs (divided into two categories: moderate and high) and the absolute maximal values of acceleration and deceleration achieved. By taking these parameters into account besides speed and distance values, estimated energy expenditure, average metabolic power and distance covered at different metabolic power categories were calculated. All variables were normalized by time (i.e., 4 min). The main findings were that the total distance, distances run at high speed (>14.4 km h^-1) as well as absolute maximum velocity, maximum acceleration and maximum deceleration increased with pitch size (10v10>7v7>5v5; p<.05). Furthermore, total distance, very high (19.8-25.2 km h^-1) and maximal (>25.2 km h^-1) speed distances, absolute maximal velocity and maximum acceleration and deceleration were higher in SSG-G than in SSG-P (p<.001). On the other hand, the number of moderate (2-3 m s^-2) accelerations and decelerations as well as the total number of changes in velocity were greater as the pitch dimensions decreased (i.e., 5v5>7v7>10v10; p<.001) in both SSG-G and SSG-P. In addition, predicted energy cost, average metabolic power and distance covered at every metabolic power category were higher in SSG-P compared to SSG-G and on big than on small pitch areas (p<.05). 
A detailed analysis of these drills is pivotal in contemporary football as it enables an in depth understanding of the workload imposed on each player which consequently has practical implications for the prescription of the adequate type and amount of stimulus during exercise training. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. SU-E-T-625: Potential for Reduced Radiation Induced Toxicity for the Treatment of Inoperable Non-Small-Cell Lung Cancer Using RapidArc Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Sood, S; Badkul, R

    2015-06-15

Purpose: To investigate the feasibility of using RapidArc (RA) treatment planning to reduce irradiation volume of normal lung and other organs at risk (OARs) in the treatment of inoperable non-small-cell lung cancer (NSCLC) patients. Methods: A retrospective treatment planning and delivery study was performed to compare target coverage and the volumes of normal lung, spinal cord, heart and esophagus exceeding their dose tolerances on 4D-CT scans for RA vs. IMRT plans in ten inoperable NSCLC patients (Stage I-IIIB). RA plans consisted of either one full or two partial co-planar arcs treating 95% of the planning target volume (PTV) with a 6MV beam to a prescription of 66Gy in 33 fractions. IMRT plans were generated using 5–7 co-planar fields with a 6MV beam. PTV coverage, dose-volume histograms, homogeneity/conformity indices (CI), total number of monitor units (MUs), beam-on time and delivery accuracy were compared between the two treatment plans. Results: Similar target coverage was obtained between the two techniques. RA (CI=1.02) provided more conformal plans without loss of homogeneity compared to IMRT plans (CI=1.12). Compared to IMRT, RA achieved a significant median dose reduction in V10 (3%), V20 (8%), and mean lung dose (3%) on average, respectively. On average, V5 was comparable between the two treatment plans. RA reduced mean esophagus (6%), mean heart (18%), and maximum spinal cord dose (7%), on average, respectively. Total number of MUs and beam-on time were each reduced almost by a factor of 2 when compared to IMRT, improving patient comfort and reducing intra-fraction motion and leakage dose. The average IMRT and RA QA pass rate was about 98% for both types of plans for the 3%/3mm criterion. Conclusion: Compared to IMRT plans, RA provided not only comparable target coverage, but also improved conformity, treatment time, and a significant reduction in irradiation of OARs. 
This may potentially allow for target dose escalation without an increase in normal tissue toxicity.

  14. Nearest-Neighbor Distances and Aggregative Effects in Turbulence

    NASA Astrophysics Data System (ADS)

    Lanerolle, Lyon W. J.; Rothschild, B. J.; Yeung, P. K.

    2000-11-01

The dispersive nature of turbulence which causes fluid elements to move apart (on average) is well known. Here we study another facet of turbulent mixing relevant to marine population dynamics - on how small organisms (approximated by fluid particles) are brought close to each other and allowed to interact. The crucial role played by the small scales in this process allows us to use direct numerical simulations of stationary isotropic turbulence, here with Taylor-scale Reynolds numbers (R_λ) from 38 to 91. We study the evolution of the Nearest-Neighbor Distances (NND) for collections of fluid particles initially located randomly in space satisfying Poisson-type distributions with mean values from 0.5 to 2.0 Kolmogorov length scales. Our results show that as particles begin to disperse on average, some also begin to aggregate in space. In particular, we find that (i) a significant proportion of particles are closer to each other than if their NNDs were randomly distributed, (ii) aggregative effects become stronger with R_λ, and (iii) although the mean value of NND grows monotonically with time in Kolmogorov variables, the growth rates are slower at higher R_λ. These results may assist in explaining the "patchiness" in plankton distributions observed in biological oceanography. Further details are given in B. J. Rothschild et al., The Biophysical Interpretation of Spatial Effects of Small-scale Turbulent Flow in the Ocean (paper in prep.).
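The comparison in (i) against a purely random arrangement can be sketched by brute force: compute each particle's nearest-neighbour distance and compare the mean against the 2-D Poisson expectation 1/(2√λ). The unit-square setup and the neglect of edge effects are simplifications for illustration, not the paper's method:

```python
import math
import random

def mean_nnd(points):
    """Mean nearest-neighbour distance over a list of 2-D points (O(n^2))."""
    total = 0.0
    for i, p in enumerate(points):
        total += min(math.dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
observed = mean_nnd(pts)
# Mean NND for a 2-D Poisson process of intensity lambda is 1/(2*sqrt(lambda));
# edge effects in the unit square are ignored here.
expected = 0.5 / math.sqrt(len(pts))
print(observed, expected)
```

A clustered point set would show an observed mean well below the Poisson expectation.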

  15. Cell population modelling of yeast glycolytic oscillations.

    PubMed Central

    Henson, Michael A; Müller, Dirk; Reuss, Matthias

    2002-01-01

    We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713
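The ensemble idea can be illustrated with a toy stand-in for the single-cell model (not the glycolysis model itself): average one oscillatory state over ensembles of independently phased "cells". Small ensembles retain spurious population-level oscillation that large ensembles average out, mirroring the abstract's point about ensemble size:

```python
import math
import random

def ensemble_average(n_cells, t, seed=1):
    """Population-average signal at time t from an ensemble of
    independently phased oscillating 'cells' (toy single-cell model)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_cells)]
    return sum(math.sin(t + ph) for ph in phases) / n_cells

# Unsynchronised phases: a large ensemble averages toward zero, while a
# small ensemble retains a spurious population-level signal.
print(abs(ensemble_average(10, 0.0)), abs(ensemble_average(10000, 0.0)))
```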

  16. Indexed triangle strips optimization for real-time visualization using genetic algorithm: preliminary study

    NASA Astrophysics Data System (ADS)

    Tanaka, Kiyoshi; Takano, Shuichi; Sugimura, Tatsuo

    2000-10-01

In this work we focus on indexed triangle strips, an extended representation of triangle strips that improves the efficiency of the geometrical transformation of vertices, and present a method to construct optimal indexed triangle strips using a Genetic Algorithm (GA) for real-time visualization. The main objective of this work is to construct indexed triangle strips optimally by improving the ratio at which data stored in the cache memory are reused while simultaneously reducing the total number of indices with the GA. Simulation results verify that the average number of indices and the cache miss ratio per polygon can be kept small, and consequently the total visualization time required for the optimum solution obtained by this scheme can be remarkably reduced.
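The reuse ratio being optimized can be illustrated with a post-transform vertex cache and the average cache miss ratio (ACMR) per triangle; the cache size and FIFO eviction policy here are illustrative assumptions, not details from the paper:

```python
from collections import deque

def acmr(indices, cache_size=16):
    """Average cache miss ratio: vertex-cache misses per triangle for an
    indexed triangle list, simulating a FIFO post-transform cache."""
    cache = deque(maxlen=cache_size)  # maxlen evicts the oldest entry
    misses = 0
    for idx in indices:
        if idx not in cache:
            misses += 1
            cache.append(idx)
    return misses / (len(indices) / 3)

# Two triangles sharing an edge: vertex reuse drops the miss ratio below 3.
print(acmr([0, 1, 2, 1, 2, 3]))  # 4 misses / 2 triangles = 2.0
```

A GA-style optimizer would reorder the index sequence to minimize exactly this kind of objective.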

  17. 40 CFR 80.1343 - What hardship relief provisions are available only to small refiners?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small... annual average benzene standard at § 80.1230(a). (b) In the case of a small refiner approved under § 80.1340 for which compliance with the maximum average benzene requirement at § 80.1230(b) is not feasible...

  18. 40 CFR 80.1343 - What hardship relief provisions are available only to small refiners?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small... annual average benzene standard at § 80.1230(a). (b) In the case of a small refiner approved under § 80.1340 for which compliance with the maximum average benzene requirement at § 80.1230(b) is not feasible...

  19. 40 CFR 80.1343 - What hardship relief provisions are available only to small refiners?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small... annual average benzene standard at § 80.1230(a). (b) In the case of a small refiner approved under § 80.1340 for which compliance with the maximum average benzene requirement at § 80.1230(b) is not feasible...

  20. Dispersed SiC nanoparticles in Ni observed by ultra-small-angle X-ray scattering

    DOE PAGES

    Xie, R.; Ilavsky, J.; Huang, H. F.; ...

    2016-11-24

In this paper, a metal-ceramic composite, nickel reinforced with SiC nanoparticles, was synthesized and characterized for its potential application in next-generation molten salt nuclear reactors. Synchrotron ultra-small-angle X-ray scattering (USAXS) measurements were conducted on the composite. The size distribution and number density of the SiC nanoparticles in the material were obtained through data modelling. Scanning and transmission electron microscopy characterization were performed to substantiate the results of the USAXS measurements. Tensile tests were performed on the samples to measure the change in their yield strength after doping with the nanoparticles. Finally, the average interparticle distance was calculated from the USAXS results and is related to the increased yield strength of the composite.

  1. Composition of uroliths in small domestic animals in the United Kingdom.

    PubMed

    Rogers, Keith D; Jones, Boyd; Roberts, Larry; Rich, Michael; Montalto, Nicola; Beckett, Sophie

    2011-05-01

The mineral composition of 7819 small animal uroliths in the UK was determined by semi-quantitative X-ray diffraction over a period of 90 months from 2002 to 2010. Canine and feline uroliths constituted 97% of the study population and the mineral phase detected most frequently was struvite (43%), followed by calcium oxalate (41%). Uroliths from crossbreeds, Dalmatians, Yorkshire terriers and Shih Tzus accounted for almost 30% of all canine uroliths, with the highest frequency in Dalmatians, which had a predominance of urate uroliths. The average ages of dogs and cats with uroliths were 7.0 years and 7.4 years, respectively. The ratio of the number of dogs presenting with struvite compared to oxalate phases reached a maximum at 3 years of age. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
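A toy Bousso-Polchinski-style sketch shows why brute force scales so badly: Λ = Λ0 + Σ(n_i q_i)² is enumerated over all integer flux vectors, which is already 7^6 candidates in just 6 dimensions (the charges q and bare Λ0 below are invented, and this is emphatically not the paper's 10^9-dimensional setup or its fast algorithm):

```python
import itertools
import random

# Toy landscape: Lambda = Lambda0 + sum((n_i * q_i)**2) over integer fluxes.
random.seed(3)
dim, lam0 = 6, -1.0
q = [random.uniform(0.1, 0.4) for _ in range(dim)]  # invented charges

# Brute-force enumeration: 7**dim flux vectors even for this tiny example,
# illustrating the exponential blow-up with dimension.
best = min(abs(lam0 + sum((n * qi) ** 2 for n, qi in zip(ns, q)))
           for ns in itertools.product(range(-3, 4), repeat=dim))
print(best)
```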

  3. TectaY1870C/+ mice with alterations in the structure and porosity of the tectorial membrane display large numbers of spontaneous emissions

    NASA Astrophysics Data System (ADS)

    Cheatham, M. A.; Ahmad, A.; Dallos, P.; Richardson, G. P.

    2018-05-01

Spontaneous otoacoustic emissions (SOAEs) are a signature of cochlear amplification, a process associated with outer hair cell (OHC) function and required for the sensitivity and frequency selectivity of cochlear responses. Although normal mice rarely exhibit these signals, those with mutations that influence the structure of the tectorial membrane (TM) show a greater incidence of these phenomena. In this report, we study mouse models with mutations affecting the striated-sheet matrix that forms the body of this accessory structure. In addition to CEACAM16, the matrix is composed of both α- and β-tectorin (TECTA and TECTB, respectively) and these latter two noncollagenous proteins are the focus of this study. Of all the mutants tested to date, mice heterozygous (het) for a missense mutation (c.5609A>G, p.Tyr1870Cys) in Tecta (TectaY1870C/+ mice) are prolific emitters with an average of ~7 SOAEs per cochlea, which is higher than in the very few normal mice with SOAEs, where the average is 2-3 per cochlea. A small number of homozygous Tectb-/- mice lacking TECTB are also emitters, but they produce only one SOAE in any given ear. Although both mouse mutants have hearing loss, SOAE frequencies coincide with frequency regions where some degree of amplification is retained. The larger number of SOAEs in TectaY1870C/+ mice appears to correlate with an increase in porosity (1), which controls the spread of excitation of tectorial membrane traveling waves. Sellon and colleagues reported that this change in the material properties of the TM was associated with the larger size of nanoscale pores linked to the Y1870C missense mutation in TECTA. In mice lacking Tectb, where porosity is wild-type like, the number of SOAEs per cochlea is small and only a few of these animals present with this phenomenon. 
Characterization of traveling-wave properties in other emitting TM mutants will be required to determine whether increased porosity reliably drives the increased incidence of SOAEs.

  4. Remote sensing with simulated unmanned aircraft imagery for precision agriculture applications

    USGS Publications Warehouse

    Hunt, E. Raymond; Daughtry, Craig S.T.; Mirsky, Steven B.; Hively, W. Dean

    2014-01-01

An important application of unmanned aircraft systems (UAS) may be remote-sensing for precision agriculture, because of its ability to acquire images with very small pixel sizes from low altitude flights. The objective of this study was to compare information obtained from two different pixel sizes, one about a meter (the size of a small vegetation plot) and one about a millimeter. Cereal rye (Secale cereale) was planted at the Beltsville Agricultural Research Center for a winter cover crop with fall and spring fertilizer applications, which produced differences in biomass and leaf chlorophyll content. UAS imagery was simulated by placing a Fuji IS-Pro UVIR digital camera at 3-m height looking nadir. An external UV-IR cut filter was used to acquire true-color images; an external red cut filter was used to obtain color-infrared-like images with bands at near-infrared, green, and blue wavelengths. Plot-scale Green Normalized Difference Vegetation Index was correlated with dry aboveground biomass (r = 0.58), whereas the Triangular Greenness Index (TGI) was not correlated with chlorophyll content. We used the SamplePoint program to select 100 pixels systematically; we visually identified the cover type and acquired the digital numbers. The number of rye pixels in each image was better correlated with biomass (r = 0.73), and the average TGI from only leaf pixels was negatively correlated with chlorophyll content (r = -0.72). Thus, better information for crop requirements may be obtained using very small pixel sizes, but new algorithms based on computer vision are needed for analysis. It may not be necessary to geospatially register large numbers of photographs with very small pixel sizes. Instead, images could be analyzed as single plots along field transects.
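The two indices can be sketched as follows. The GNDVI form is standard; the TGI triangle-area form and the nominal band centres (670/550/480 nm) are assumptions made here for illustration, and band values would come from the camera's digital numbers or reflectances:

```python
def gndvi(nir, green):
    """Green Normalized Difference Vegetation Index: (NIR - G) / (NIR + G)."""
    return (nir - green) / (nir + green)

def tgi(red, green, blue, lr=670.0, lg=550.0, lb=480.0):
    """Triangular Greenness Index, triangle-area form; the band-centre
    wavelengths (nm) used as defaults are assumptions."""
    return -0.5 * ((lr - lb) * (red - green) - (lr - lg) * (red - blue))

print(round(gndvi(0.6, 0.2), 3))  # → 0.5
```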

  5. Information and redundancy in the burial folding code of globular proteins within a wide range of shapes and sizes.

    PubMed

    Ferreira, Diogo C; van der Linden, Marx G; de Oliveira, Leandro C; Onuchic, José N; de Araújo, Antônio F Pereira

    2016-04-01

    Recent ab initio folding simulations for a limited number of small proteins have corroborated a previous suggestion that atomic burial information obtainable from sequence could be sufficient for tertiary structure determination when combined to sequence-independent geometrical constraints. Here, we use simulations parameterized by native burials to investigate the required amount of information in a diverse set of globular proteins comprising different structural classes and a wide size range. Burial information is provided by a potential term pushing each atom towards one among a small number L of equiprobable concentric layers. An upper bound for the required information is provided by the minimal number of layers L(min) still compatible with correct folding behavior. We obtain L(min) between 3 and 5 for seven small to medium proteins with 50 ≤ Nr ≤ 110 residues while for a larger protein with Nr = 141 we find that L ≥ 6 is required to maintain native stability. We additionally estimate the usable redundancy for a given L ≥ L(min) from the burial entropy associated to the largest folding-compatible fraction of "superfluous" atoms, for which the burial term can be turned off or target layers can be chosen randomly. The estimated redundancy for small proteins with L = 4 is close to 0.8. Our results are consistent with the above-average quality of burial predictions used in previous simulations and indicate that the fraction of approachable proteins could increase significantly with even a mild, plausible, improvement on sequence-dependent burial prediction or on sequence-independent constraints that augment the detectable redundancy during simulations. © 2016 Wiley Periodicals, Inc.
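The information budget implied by L equiprobable layers can be written down directly: each atom's layer assignment carries log2(L) bits, so an upper bound on the total burial information is N·log2(L). This is a simplification of the paper's entropy-based redundancy estimate:

```python
import math

def burial_information_bits(n_atoms, layers):
    """Upper-bound burial information: each atom assigned to one of L
    equiprobable layers carries log2(L) bits."""
    return n_atoms * math.log2(layers)

# Per-atom cost (bits) for the layer counts discussed in the abstract:
for L in (3, 4, 5, 6):
    print(L, round(math.log2(L), 2))
```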

  6. Coordination sequences and information spreading in small-world networks

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.

    2002-10-01

We study the spread of information in small-world networks generated from different d-dimensional regular lattices, with d=1, 2, and 3. With this purpose, we analyze by numerical simulations the behavior of the coordination sequence, i.e., the average number of sites C(n) that can be reached from a given node of the network in n steps along its bonds. For sufficiently large networks, we find an asymptotic behavior C(n) ~ ρ^n, with a constant ρ that depends on the network dimension d and on the rewiring probability p (which measures the disorder strength of a given network). A simple model of information spreading in these networks is studied, assuming that only a fraction q of the network sites are active. The number of active nodes reached in n steps has an asymptotic form λ^n, λ being a constant that depends on p and q, as well as on the dimension d of the underlying lattice. The information spreading presents two different regimes depending on the value of λ: for λ>1 the information propagates along the whole system, and for λ<1 the spreading is damped and the information remains confined in a limited region of the network. We discuss the connection of these results with site percolation in small-world networks.
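The coordination sequence C(n) can be computed as BFS shells from a source node. Below is a stdlib-only sketch with a simplified Watts-Strogatz-style rewiring (the exact edge-handling details are assumptions, not the paper's construction):

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice (k neighbours per side), each forward edge rewired
    with probability p (simplified Watts-Strogatz construction)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            b = (i + j) % n
            if rng.random() < p:  # rewire to a random non-neighbour
                b = rng.randrange(n)
                while b == i or b in adj[i]:
                    b = rng.randrange(n)
            adj[i].add(b)
            adj[b].add(i)
    return adj

def coordination_sequence(adj, source, nmax):
    """C(n): number of nodes first reached in exactly n BFS steps."""
    dist = {source: 0}
    queue = deque([source])
    counts = [0] * (nmax + 1)
    counts[0] = 1
    while queue:
        u = queue.popleft()
        if dist[u] >= nmax:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                counts[dist[v]] += 1
                queue.append(v)
    return counts

print(coordination_sequence(watts_strogatz(200, 2, 0.1), 0, 4))
```

With p = 0 (no disorder) the shells grow linearly as on the regular lattice; increasing p makes C(n) grow faster, consistent with the C(n) ~ ρ^n regime reported.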

  7. Large eddy simulation of turbine wakes using higher-order methods

    NASA Astrophysics Data System (ADS)

    Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer

    2017-11-01

Large eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation over a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy-viscosity sub-filter scale models such as the classic or the dynamic Smagorinsky models. The results from the simulations are compared to experimental data for a diameter-based Reynolds number Re_D = 1,000,000, and both the time-averaged streamwise velocity and turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions for the amount of numerical dissipation required by our approach are made for the particular case of horizontal-axis turbine wakes.

  8. Optical mapping of prefrontal brain connectivity and activation during emotion anticipation.

    PubMed

    Wang, Meng-Yun; Lu, Feng-Mei; Hu, Zhishan; Zhang, Juan; Yuan, Zhen

    2018-09-17

Accumulated neuroimaging evidence shows that the dorsolateral prefrontal cortex (dlPFC) is activated during emotion anticipation. The aim of this work is to examine the brain connectivity and activation differences in the dlPFC between positive, neutral and negative emotion anticipation by using functional near-infrared spectroscopy (fNIRS). The hemodynamic responses were first assessed for all subjects during the performance of various emotion anticipation tasks. Small-world analysis was then performed, in which the small-world network indicators including the clustering coefficient, average path length, average node degree, and small-world index were calculated for the functional brain networks associated with positive, neutral and negative emotion anticipation, respectively. We discovered that compared to negative and neutral emotion anticipation, the positive case exhibited enhanced brain activation in the left dlPFC. Although the functional brain networks for the three emotion anticipation cases manifested small-world properties in terms of the clustering coefficient, average path length, average node degree, and small-world index, the positive case showed a significantly higher clustering coefficient and shorter average path length than the neutral and negative cases. Consequently, the small-world network indicators and brain activation in the dlPFC were able to distinguish well between positive, neutral and negative emotion anticipation. Copyright © 2018 Elsevier B.V. All rights reserved.
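Two of the small-world indicators named above, the mean clustering coefficient and the average shortest-path length, can be computed from an adjacency structure as follows; the toy 4-node graph is illustrative, not fNIRS data:

```python
from collections import deque
from itertools import combinations

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # degree-0/1 nodes contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total = count = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

# Toy graph: a triangle (0-1-2) plus a pendant node 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj), average_path_length(adj))
```

A small-world index then compares these two quantities against their values for a degree-matched random graph.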

  9. Aging and Visual Counting

    PubMed Central

    Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.

    2010-01-01

    Background Much previous work on how normal aging affects visual enumeration has been focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a “single glance”, without the confounding influence of eye movements. Methodology/Principal Findings We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which the correct rate on a psychometric curve is ≈63%), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest groups (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, the average counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, indicating that the threshold loss can be primarily attributed to increased random errors. We further expanded this interesting finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149
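
    The counting threshold defined above (the set size at which percent-correct falls to ≈63%) can be sketched by linear interpolation on a descending psychometric curve. The data points below are invented for illustration, not taken from the study:

```python
# Sketch: estimating a "counting threshold" as the number of dots at which
# percent-correct falls to the 63% criterion, by linear interpolation.
# The percent-correct values are hypothetical, not the study's data.
def counting_threshold(set_sizes, pct_correct, criterion=0.63):
    for k in range(len(set_sizes) - 1):
        n0, p0 = set_sizes[k], pct_correct[k]
        n1, p1 = set_sizes[k + 1], pct_correct[k + 1]
        if p0 >= criterion >= p1:
            # linear interpolation between the bracketing points
            return n0 + (p0 - criterion) * (n1 - n0) / (p0 - p1)
    raise ValueError("criterion not bracketed by the data")

sizes = [4, 5, 6, 7, 8, 9]
pct = [0.98, 0.92, 0.80, 0.63, 0.45, 0.30]
print(counting_threshold(sizes, pct))
```

    A full analysis would instead fit a smooth psychometric function (e.g. Weibull) before reading off the criterion point; interpolation is the minimal version of the same idea.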

  10. Aerodynamic Loading Characteristics at Mach Numbers from 0.80 to 1.20 of a 1/10-Scale Three-Stage Scout Model

    NASA Technical Reports Server (NTRS)

    Kelly, Thomas C.

    1961-01-01

    Aerodynamic loads results have been obtained in the Langley 8-foot transonic pressure tunnel at Mach numbers from 0.80 to 1.20 for a 1/10-scale model of the upper three stages of the Scout vehicle. Tests were conducted through an angle-of-attack range from -8 deg to 8 deg at an average test Reynolds number per foot of about 4.0 × 10^6. Results indicated that the peak negative pressures associated with expansion corners at the nose and transition flare exhibit sizeable variations which occur over a relatively small Mach number range. The magnitude of the variations may cause the critical local loading condition for the full-scale vehicle to occur at a Mach number considerably lower than that at which the maximum dynamic pressure occurs in flight. The addition of protuberances simulating antennas and wiring conduits had slight, localized effects. The lift carryover from the nose and transition flare on the cylindrical portions of the model generally increased with an increase in Mach number.

  11. Highly Conductive Multifunctional Graphene Polycarbonate Nanocomposites

    NASA Technical Reports Server (NTRS)

    Yoonessi, Mitra; Gaier, James R.

    2010-01-01

    Graphene nanosheet bisphenol A polycarbonate nanocomposites (0.027-2.2 vol %) prepared by both emulsion mixing and solution blending methods, followed by compression molding at 287 °C, exhibited dc electrical percolation thresholds of approximately 0.14 and 0.38 vol %, respectively. The conductivities of the 2.2 vol % graphene nanocomposites were 0.512 and 0.226 S/cm for emulsion and solution mixing, respectively. The 1.1 and 2.2 vol % graphene nanocomposites exhibited frequency-independent behavior. Inherent conductivity, extremely high aspect ratio, and nanostructure-directed assembly of the graphene using PC nanospheres are the main factors behind the excellent electrical properties of the nanocomposites. The dynamic tensile moduli of the nanocomposites increased with increasing graphene content. The glass transition temperatures decreased with increasing graphene content for the emulsion series. High-resolution transmission electron microscopy (HR-TEM) and small-angle neutron scattering (SANS) showed isolated graphene with no connectivity path for the insulating nanocomposites and connected nanoparticles for the conductive nanocomposites. A stacked disk model was used to obtain the average particle radius, average number of graphene layers per stack, and stack spacing by simulation of the experimental SANS data. Morphology studies indicated the presence of well-dispersed graphene and small graphene stacks with infusion of polycarbonate within the stacks.

  12. Observations of low-energy ions in the wake of a magnetospheric satellite

    NASA Technical Reports Server (NTRS)

    Samir, U.; Comfort, R. H.; Chappell, C. R.; Stone, N. H.

    1986-01-01

    Measurements of low-energy ions made by the retarding ion mass spectrometer (RIMS) onboard the Dynamics Explorer 1 (DE 1) satellite are used to study some aspects of 'body-plasma interactions' in the terrestrial plasmasphere. Preliminary results are presented, yielding the degree of H+ and He+ ion depletion in the wake of the satellite in terms of specific and average ion Mach numbers, average ion mass, body size normalized to ionic Debye length, and body potential normalized to ion thermal energy. Some results from the RIMS measurements are compared with relevant results from the Explorer 31 and the Atmosphere Explorer C ionospheric satellites. Wake depletion is found to vary approximately linearly for small bodies (R_Di less than about 12) and exponentially for large bodies (R_Di greater than 50), where R_Di is the body size normalized to the ionic Debye length.

  13. Zonally averaged model of dynamics, chemistry and radiation for the atmosphere

    NASA Technical Reports Server (NTRS)

    Tung, K. K.

    1985-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations on a sphere, taking advantage of the more direct relationship between the mean meridional circulation and diabatic heating rate which is available in isentropic coordinates. Possible differences between results of nongeostrophic theory and the commonly used geostrophic formulation are discussed concerning: (1) the role of eddy forcing of the diabatic circulation, and (2) the nonlinear nearly inviscid limit vs the geostrophic limit. Problems associated with the traditional Rossby number scaling in quasi-geostrophic formulations are pointed out and an alternate, more general scaling based on the smallness of mean meridional to zonal velocities for a rotating planet is suggested. Such a scaling recovers the geostrophic balanced wind relationship for the mean zonal flow but reveals that the mean meridional velocity is in general ageostrophic.

  14. Status of small water supplies in the Nordic countries: Characteristics, water quality and challenges.

    PubMed

    Gunnarsdottir, Maria J; Persson, Kenneth M; Andradottir, Hrund O; Gardarsson, Sigurdur M

    2017-11-01

    Access to safe water is essential for public health and is one of the most important prerequisites for good living and safe food production. Many studies have shown that non-compliance with drinking water quality standards in small water supply systems is much higher than in large systems. Nevertheless, people served by small water supply systems have the right to the same level of health protection. Actions are therefore needed to improve the situation. The objective of the present study was to carry out a baseline analysis of the situation in the Nordic region and provide recommendations for governmental policy and actions. Data were gathered from various sources in the Nordic countries on the number of water supplies, population served, compliance with regulations, and waterborne disease outbreaks. The collected data showed that there are about 12,500 regulated water supplies, 9,400 of which serve fewer than 500 persons. The number of unregulated and poorly regulated supplies is unknown, but it can be roughly estimated that these serve 10% of the Nordic population on a permanent basis, or 2.6 million people. However, this does not tell the whole story, as many of the very small water supplies serve transient populations, summerhouse dwellers and tourist sites, with many more users. Non-compliance regarding microbes is much higher in the small supplies. The population-weighted average fecal contamination incidence rate in the Nordic region is eleven times higher in the smaller supplies than in the large ones, 0.76% and 0.07%, respectively. Registered waterborne disease outbreaks were also more frequent in the small supplies than in the large ones. Copyright © 2017 Elsevier GmbH. All rights reserved.
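
    The population-weighted average rate compared above (0.76% vs 0.07%) weights each supply group's incidence rate by the population it serves. A minimal sketch, with made-up per-group figures:

```python
# Sketch: a population-weighted average incidence rate, the statistic the
# abstract compares between small and large supplies. The per-group rates
# and populations below are invented for illustration.
def weighted_rate(rates, populations):
    """Population-weighted average of per-group incidence rates."""
    return sum(r * p for r, p in zip(rates, populations)) / sum(populations)

small_rates = [0.009, 0.007, 0.006]          # hypothetical incidence per group
small_pops = [200_000, 150_000, 100_000]     # hypothetical populations served
print(round(weighted_rate(small_rates, small_pops), 5))
```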

  15. Does the introduction of nursing home culture change practices improve quality?

    PubMed

    Miller, Susan C; Lepore, Michael; Lima, Julie C; Shield, Renee; Tyler, Denise A

    2014-09-01

    To understand whether nursing home (NH) introduction of culture change practices is associated with improved quality. NH-level panel study using multivariate fixed-effects statistical modeling to estimate the effect of culture change introduction on quality outcomes. Eight hundred twenty-four U.S. NHs with culture change practice involvement beginning between 2005 and 2010. Directors of nursing and nursing home administrators. A culture change practice score (derived from a 2009/10 national NH survey) was used to stratify NHs according to practice implementation (high (scores in the top quartile; n = 217) vs other (n = 607)). NH-level outcomes included prevalence of seven care practices and three resident outcomes, health-related and quality-of-life weighted survey deficiencies, and average number of hospitalizations per resident year. For NHs with high practice implementation, introduction of culture change was associated with a significant decrease in prevalence of restraints, tube feeding, and pressure ulcers; an increase in the proportion of residents on bladder training programs; and a small decrease in the average number of hospitalizations per resident year (coefficient -0.04, standard error (SE) 0.02, P = .06). For NHs with lower practice implementation (practice scores in lower three quartiles), introduction was associated with fewer health-related (coefficient -5.26, SE 3.05; P = .09) and quality-of-life (coefficient -0.10, SE 0.05; P = .04) survey deficiencies, although these NHs also had small statistically significant increases in the prevalence of residents with urinary tract infections and in average hospitalizations per resident year (coefficient 0.03, SE 0.01, P = .02). The introduction of NH culture change appears to result in significant improvements in some care processes and outcomes in NHs with high practice implementation. For other NHs, culture change introduction results in fewer survey deficiencies. 
© 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.

  16. Identification of pathogen genomic variants through an integrated pipeline

    PubMed Central

    2014-01-01

    Background Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results We have validated our pipeline using four sets of Plasmodium falciparum drug resistant data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identify 8 copy number variants with contributions to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions The Platypus pipeline provides malaria researchers with a powerful tool to analyze short read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package. PMID:24589256

  17. Materials emission of chemicals--PVC flooring materials.

    PubMed

    Lundgren, B; Jonsson, B; Ek-Olausson, B

    1999-09-01

    Data on chemical emissions from flooring materials have been collected and investigated in a database known as METS. The emission tests are performed using the Field and Laboratory Emission Cell (FLEC). The emission rates of total volatile organic compounds (TVOC) in the boiling point range of hexane to octadecane vary from around 4,000 micrograms/(m2.h) to less than 10 micrograms/(m2.h). Results obtained in 1994/95 are presented and compared with the results obtained in 1992 for similar materials. The tests are performed 4 weeks and 26 weeks after the manufacture of the material. The emission rates of TVOC decrease on average by approximately 60% from 4 to 26 weeks. The differences and trends in emission rates of individual chemicals and their use are discussed. For many VOCs, emission rates decrease rapidly and fall near to or below 2 micrograms/(m2.h) (the detection limit) after 26 weeks. For a small number of individual compounds, the emission rate decreases little over 26 weeks. A small number of chemicals are singled out as being of particular interest in a health and comfort evaluation based on the emission results.

  18. Size and structure dependent ultrafast dynamics of plasmonic gold nanosphere heterostructures on poly (ethylene glycol) brushes

    NASA Astrophysics Data System (ADS)

    Karatay, Ahmet; Küçüköz, Betül; Pekdemir, Sami; Onses, Mustafa Serdar; Elmali, Ayhan

    2017-11-01

    We have investigated the plasmonic properties of heterostructures that consist of gold nanospheres (NSs) with average diameters of 60 nm, 40 nm and 20 nm on poly (ethylene glycol) (PEG) brushes by using ultrafast pump-probe spectroscopy experiments. Gold NSs start to behave like gold nanorods with an increasing number of immobilization cycles due to their close proximity. Gold NSs immobilized by 3 and 5 deposition cycles show longitudinal plasmon bands at long wavelengths, which are characteristic of gold nanorods. Increasing the number of immobilization cycles also increases the relaxation times of the samples due to the close proximity. Linear absorption spectra and scanning electron microscopy images show close-packed assemblies for heterostructures containing 20 nm gold NSs as the small particle. Ultrafast electron transfer (<100 fs) occurs between the transverse and longitudinal modes when the samples are excited at both 520 nm and 650 nm. Furthermore, the experimental results indicate that heterostructures with the small particles have faster relaxation times than the other heterostructures due to the close packing of the 20 nm gold NSs.

  19. Inhibitory motoneurons in arthropod motor control: organisation, function, evolution.

    PubMed

    Wolf, Harald

    2014-08-01

    Miniaturisation of somatic cells in animals is limited, for reasons ranging from the accommodation of organelles to surface-to-volume ratio. Consequently, muscle and nerve cells vary in diameter by about two orders of magnitude, in animals covering 12 orders of magnitude in body mass. Small animals thus have to control their behaviour with few muscle fibres and neurons. Hexapod leg muscles, for instance, may consist of a single to a few hundred fibres, and they are controlled by one to, rarely, 19 motoneurons. A typical mammal has thousands of fibres per muscle, supplied by hundreds of motoneurons, for comparable behavioural performance. Arthropods--crustaceans, hexapods, spiders, and their kin--are on average much smaller than vertebrates, and they possess inhibitory motoneurons as part of a motor control strategy that allows a broad performance spectrum despite necessarily small cell numbers. This arthropod motor control strategy is reviewed from functional and evolutionary perspectives, and its components are described with a focus on inhibitory motoneurons. Inhibitory motoneurons are particularly interesting for a number of reasons: evolutionary and phylogenetic comparison of functional specialisations, evolutionary and developmental origin and diversification, and muscle fibre recruitment strategies.

  20. Stochastic algorithm for simulating gas transport coefficients

    NASA Astrophysics Data System (ADS)

    Rudyak, V. Ya.; Lezhnev, E. V.

    2018-02-01

    The aim of this paper is to create a molecular algorithm for modeling transport processes in gases that is more efficient than the molecular dynamics method. To this end, the dynamics of the molecules are modeled stochastically. In a rarefied gas, it is sufficient to consider the evolution of molecules only in velocity space, whereas for a dense gas it is necessary to model the dynamics of molecules in physical space as well. Adequate integral characteristics of the studied system are obtained by averaging over a sufficiently large number of independent phase trajectories. The efficiency of the proposed algorithm was demonstrated by modeling the coefficients of self-diffusion and the viscosity of several gases. It was shown that accuracy comparable to experiment can be obtained with a relatively small number of molecules. The modeling accuracy increases as the number of molecules and phase trajectories used grows.
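
    The averaging idea, estimating a transport coefficient from many independent stochastic trajectories, can be sketched with a toy random walk and the Einstein relation D = MSD/(2*d*t). This is not the authors' algorithm (which models molecular dynamics in velocity space); it only illustrates how the estimate sharpens as trajectories are added:

```python
# Sketch: estimating a self-diffusion coefficient from the mean-squared
# displacement (MSD) of independent stochastic trajectories, via the
# Einstein relation D = MSD / (2 * dim * t). Toy random walk, not the
# authors' velocity-space algorithm.
import random

def estimate_D(n_traj, n_steps, dt=1.0, sigma=1.0, dim=3, seed=0):
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_traj):
        pos = [0.0] * dim
        for _ in range(n_steps):
            for k in range(dim):
                pos[k] += rng.gauss(0.0, sigma)   # Gaussian step per dimension
        msd += sum(x * x for x in pos)
    msd /= n_traj                                 # average over trajectories
    return msd / (2 * dim * n_steps * dt)

# For this walk the exact value is sigma^2 / (2 * dt) = 0.5.
print(estimate_D(n_traj=2000, n_steps=50))
```

    With 2000 trajectories the estimate lands within a few percent of 0.5; halving the trajectory count roughly multiplies the statistical error by sqrt(2), which is the accuracy-versus-cost tradeoff the abstract describes.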

  1. Readily releasable pool of synaptic vesicles measured at single synaptic contacts.

    PubMed

    Trigo, Federico F; Sakaba, Takeshi; Ogden, David; Marty, Alain

    2012-10-30

    To distinguish between different models of vesicular release in brain synapses, it is necessary to know the number of vesicles of transmitter that can be released immediately at individual synapses by a high-calcium stimulus, the readily releasable pool (RRP). We used direct stimulation by calcium uncaging at identified, single-site inhibitory synapses to investigate the statistics of vesicular release and the size of the RRP. Vesicular release, detected as quantal responses in the postsynaptic neuron, showed an unexpected stochastic variation in the number of quanta from stimulus to stimulus at high intracellular calcium, with a mean of 1.9 per stimulus and a maximum of three or four. The results provide direct measurement of the RRP at single synaptic sites. They are consistent with models in which release proceeds from a small number of vesicle docking sites with an average occupancy around 0.7.
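
    The docking-site model the results are consistent with can be sketched as a binomial draw: N independent sites, each occupied (and hence releasing one quantum) with probability p. With N = 3 and p ≈ 0.65, the mean quantal count is N*p ≈ 1.95, close to the reported 1.9, with a hard maximum of N. The parameter values are illustrative, not fitted:

```python
# Sketch: binomial docking-site model of vesicular release. Each of
# n_sites releases one quantum per stimulus with probability `occupancy`.
# n_sites = 3 and occupancy = 0.65 are illustrative values only.
import random

def simulate_release(n_sites, occupancy, n_stimuli, seed=1):
    rng = random.Random(seed)
    return [sum(rng.random() < occupancy for _ in range(n_sites))
            for _ in range(n_stimuli)]

counts = simulate_release(n_sites=3, occupancy=0.65, n_stimuli=20000)
print(sum(counts) / len(counts))   # mean quanta per stimulus, near 3 * 0.65
print(max(counts))                 # cannot exceed the number of sites
```

    The stimulus-to-stimulus variability in the abstract falls out of this model naturally: even at saturating calcium, release is stochastic because site occupancy is below one.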

  2. On the progressive enrichment of the oxygen isotopic composition of water along a leaf.

    PubMed

    Farquhar, G. D.; Gan, K. S.

    2003-06-01

    A model has been derived for the enrichment of heavy isotopes of water in leaves, including progressive enrichment along the leaf. In the model, lighter water is preferentially transpired leaving heavier water to diffuse back into the xylem and be carried further along the leaf. For this pattern to be pronounced, the ratio of advection to diffusion (Péclet number) has to be large in the longitudinal direction, and small in the radial direction. The progressive enrichment along the xylem is less than that occurring at the sites of evaporation in the mesophyll, depending on the isolation afforded by the radial Péclet number. There is an upper bound on enrichment, and effects of ground tissue associated with major veins are included. When transpiration rate is spatially nonuniform, averaging of enrichment occurs more naturally with transpiration weighting than with area-based weighting. This gives zero average enrichment of transpired water, the modified Craig-Gordon equation for average enrichment at the sites of evaporation and the Farquhar and Lloyd (In Stable Isotopes and Plant Carbon-Water Relations, pp. 47-70. Academic Press, New York, USA, 1993) prediction for mesophyll water. Earlier results on the isotopic composition of evolved oxygen and of retro-diffused carbon dioxide are preserved if these processes vary in parallel with transpiration rate. Parallel variation should be indicated approximately by uniform carbon isotope discrimination across the leaf.
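
    The Péclet number invoked above is the ratio of advection to diffusion, Pe = v*L/D. A minimal sketch showing how Pe can be large longitudinally (long path) yet small radially (short path); the velocities and lengths below are rough illustrative magnitudes, not values from the model:

```python
# Sketch: Peclet number Pe = v * L / D, the advection-to-diffusion ratio
# the abstract requires to be large longitudinally and small radially.
# Velocity and length values are illustrative magnitudes only.
def peclet(velocity, length, diffusivity):
    return velocity * length / diffusivity

D_WATER = 2.3e-9   # approximate self-diffusivity of water, m^2/s

pe_longitudinal = peclet(velocity=1e-4, length=0.05, diffusivity=D_WATER)
pe_radial = peclet(velocity=1e-6, length=1e-4, diffusivity=D_WATER)
print(pe_longitudinal, pe_radial)
```

    With these magnitudes Pe is of order 10^3 along the leaf but well below 1 radially, the regime in which progressive enrichment along the xylem is pronounced while the mesophyll stays partially isolated.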

  3. Incorporating the Impacts of Small Scale Rock Heterogeneity into Models of Flow and Trapping in Target UK CO2 Storage Systems

    NASA Astrophysics Data System (ADS)

    Jackson, S. J.; Reynolds, C.; Krevor, S. C.

    2017-12-01

    Predictions of the flow behaviour and storage capacity of CO2 in subsurface reservoirs are dependent on accurate modelling of multiphase flow and trapping. A number of studies have shown that small scale rock heterogeneities have a significant impact on CO2 flow propagating to larger scales. The need to simulate flow in heterogeneous reservoir systems has led to the development of numerical upscaling techniques which are widely used in industry. Less well understood, however, is the best approach for incorporating laboratory characterisations of small scale heterogeneities into models. At small scales, heterogeneity in the capillary pressure characteristic function becomes significant. We present a digital rock workflow that combines core flood experiments with numerical simulations to characterise sub-core scale capillary pressure heterogeneities within rock cores from several target UK storage reservoirs - the Bunter, Captain and Ormskirk sandstone formations. Measured intrinsic properties (permeability, capillary pressure, relative permeability) and 3D saturation maps from steady-state core flood experiments were the primary inputs used to construct a 3D digital rock model in CMG IMEX. We used vertical end-point scaling to iteratively update the voxel-by-voxel capillary pressure curves from the average MICP curve, with each iteration more closely predicting the experimental saturations and pressure drops. Once characterised, the digital rock cores were used to predict equivalent flow functions, such as relative permeability and residual trapping, across the range of flow conditions estimated to prevail in the CO2 storage reservoirs. In the case of the Captain sandstone, rock cores were characterised across an entire 100 m vertical transect of the reservoir. This allowed analysis of the upscaled impact of small scale heterogeneity on flow and trapping. 
Figure 1 shows the varying degree to which heterogeneity impacted flow depending on the capillary number in the Captain sandstone. At low capillary numbers, typical of regions where flow is dominated by buoyancy, fluid flow is impeded and trapping enhanced. At high capillary numbers, typical of the near wellbore environment, the fluid distributed homogeneously and the equivalent relative permeability was higher leading to improved injectivity.
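
    The capillary number distinguishing the two regimes above is conventionally Ca = mu*v/sigma (viscosity times velocity over interfacial tension). A minimal sketch, using rough literature magnitudes for CO2/brine properties rather than values from the study:

```python
# Sketch: capillary number Ca = mu * v / sigma, used to separate the
# buoyancy-dominated far field (low Ca) from the near-wellbore region
# (high Ca). Fluid properties and velocities are rough illustrative
# magnitudes for CO2/brine systems, not the study's values.
def capillary_number(viscosity, velocity, interfacial_tension):
    return viscosity * velocity / interfacial_tension

MU = 6e-4      # brine viscosity, Pa*s (approximate)
SIGMA = 0.03   # CO2/brine interfacial tension, N/m (approximate)

ca_near_well = capillary_number(MU, 1e-4, SIGMA)   # fast near-wellbore flow
ca_far_field = capillary_number(MU, 1e-8, SIGMA)   # slow buoyancy-driven flow
print(ca_near_well, ca_far_field)
```

    The four-orders-of-magnitude spread in velocity maps directly onto Ca, which is why heterogeneity impedes the slow far-field flow while the near-wellbore flow distributes homogeneously.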

  4. Reynolds number effects in combustion noise

    NASA Technical Reports Server (NTRS)

    Seshan, P. K.

    1981-01-01

    Acoustic emission spectra have been obtained for non-premixed turbulent combustion from two small-diameter laboratory gas burners, two commercial gas burners, and a large gas burner in the firebox of a Babcock-Wilcox boiler (50,000 lb steam/hr). The changes in burner size and firing rate represent changes in Reynolds number, and changes in air/fuel ratio represent departures from stoichiometric proportions. The combustion efficiency was measured independently through gas analysis. The acoustic spectra obtained from the various burners exhibit a persistent shape over the Reynolds number range of 8,200-82,000. The spectra were analyzed to identify a predictable frequency domain that is most responsive to, and readily correlated with, combustion efficiency. A simple parameter (the ratio of the average acoustic power output in the most responsive frequency bandwidth to the acoustic power level of the loudest frequency) is proposed, whose value increases significantly and unmistakably as combustion efficiency approaches 100%. The dependence of the most responsive frequency domain on the various Reynolds numbers associated with turbulent jets is discussed.
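
    The proposed parameter, average power in a responsive band divided by the power of the loudest spectral line, can be sketched on a discrete spectrum. The spectrum and band limits below are synthetic; the paper's actual responsive band must be identified from measured spectra:

```python
# Sketch: the proposed combustion-noise parameter, i.e. the ratio of the
# average acoustic power in a chosen frequency band to the power of the
# loudest spectral line. The spectrum and band are synthetic examples.
def band_ratio(freqs, power, band_lo, band_hi):
    band = [p for f, p in zip(freqs, power) if band_lo <= f <= band_hi]
    return (sum(band) / len(band)) / max(power)

freqs = [100, 200, 300, 400, 500, 600]       # Hz, synthetic
power = [1.0, 4.0, 10.0, 6.0, 3.0, 1.0]      # arbitrary units
print(band_ratio(freqs, power, 400, 600))
```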

  5. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei

    2007-06-15

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. 
In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R^2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R^2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R^2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.
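
    The log-Jacobian expansion map described above can be sketched numerically: differentiate the registration transform, take the determinant of the Jacobian matrix, and apply the logarithm so expansion and compression are scaled equally. The 2D transform and grid below are synthetic illustrations, not the study's CT registration:

```python
# Sketch: a log-Jacobian expansion/compression map computed by central
# finite differences, the quantity the study averages over lung cross
# sections. The synthetic 2D map expands x by 10% and compresses y by
# 10%; log(det J) = log(1.1 * 0.9) everywhere for this linear transform.
import math

def log_jacobian_2d(tx, ty, xs, ys, h=1e-5):
    """log(det J) of the map (x, y) -> (tx(x, y), ty(x, y)) on a grid."""
    out = []
    for y in ys:
        row = []
        for x in xs:
            dxx = (tx(x + h, y) - tx(x - h, y)) / (2 * h)
            dxy = (tx(x, y + h) - tx(x, y - h)) / (2 * h)
            dyx = (ty(x + h, y) - ty(x - h, y)) / (2 * h)
            dyy = (ty(x, y + h) - ty(x, y - h)) / (2 * h)
            row.append(math.log(dxx * dyy - dxy * dyx))
        out.append(row)
    return out

xs = ys = [0.1 * i for i in range(5)]
lj = log_jacobian_2d(lambda x, y: 1.1 * x, lambda x, y: 0.9 * y, xs, ys)
avg_lj = sum(map(sum, lj)) / 25   # cross-section average, as in the study
print(avg_lj)
```

    A positive average log-Jacobian corresponds to net expansion (inhalation) and a negative one to net compression (exhalation), which is what gets compared against the spirometry flow rate.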

  6. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    PubMed

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. 
In four out of five individuals, the average log-Jacobian value and the air flow rate correlated well (R2 = 0.858 on average for the entire lung). The correlation for the fifth individual was not as good (R2 = 0.377 on average for the entire lung) and can be explained by the small variation in tidal volume for this individual. The correlation of the average log-Jacobian value and the air flow rate for images near the diaphragm correlated well in all five individuals (R2 = 0.943 on average). These preliminary results indicate a strong correlation between the expansion/compression of the lung measured by image registration and the air flow rate measured by spirometry. Predicting the location, motion, and compression/expansion of the tumor and normal tissue using image registration and spirometry could have many important benefits for radiotherapy treatment. These benefits include reducing radiation dose to normal tissue, maximizing dose to the tumor, improving patient care, reducing treatment cost, and increasing patient throughput.

  7. The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas.

    PubMed

    Friedman, Robert J; Gutkowicz-Krusin, Dina; Farber, Michele J; Warycha, Melanie; Schneider-Kels, Lori; Papastathis, Nicole; Mihm, Martin C; Googe, Paul; King, Roy; Prieto, Victor G; Kopf, Alfred W; Polsky, David; Rabinovitz, Harold; Oliviero, Margaret; Cognetta, Armand; Rigel, Darrell S; Marghoob, Ashfaq; Rivers, Jason; Johr, Robert; Grant-Kels, Jane M; Tsao, Hensin

    2008-04-01

    To evaluate the performance of dermoscopists in diagnosing small pigmented skin lesions (diameter

  8. Cache-enabled small cell networks: modeling and tradeoffs.

    PubMed

    Baştuǧ, Ejder; Bennis, Mehdi; Kountouris, Marios; Debbah, Mérouane

    We consider a network model where small base stations (SBSs) have caching capabilities as a means to alleviate the backhaul load and satisfy users' demand. The SBSs are stochastically distributed over the plane according to a Poisson point process (PPP) and serve their users either (i) by bringing the content from the Internet through a finite rate backhaul or (ii) by serving them from the local caches. We derive closed-form expressions for the outage probability and the average delivery rate as a function of the signal-to-interference-plus-noise ratio (SINR), SBS density, target file bitrate, storage size, file length, and file popularity. We then analyze the impact of key operating parameters on the system performance. It is shown that a certain outage probability can be achieved either by increasing the number of base stations or the total storage size. Our results and analysis provide key insights into the deployment of cache-enabled small cell networks (SCNs), which are seen as a promising solution for future heterogeneous cellular networks.
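
    One ingredient of the storage-versus-backhaul tradeoff described above is the probability that a requested file is already cached. A minimal sketch, assuming each SBS stores the M most popular of N files and request popularity follows a Zipf law (a common assumption in this literature; the exponent below is illustrative). The paper's closed-form outage and delivery-rate expressions are not reproduced here:

```python
# Sketch: cache hit probability when an SBS stores the `cache_size` most
# popular of `n_files` files under Zipf-distributed request popularity.
# The Zipf exponent alpha = 0.8 is an illustrative assumption.
def zipf_hit_probability(n_files, cache_size, alpha=0.8):
    weights = [1.0 / (r ** alpha) for r in range(1, n_files + 1)]
    total = sum(weights)
    return sum(weights[:cache_size]) / total

print(zipf_hit_probability(n_files=1000, cache_size=100))
```

    The diminishing returns of this curve are why, past a point, adding storage and adding base stations become interchangeable ways to hit a target outage probability.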

  9. Factors associated with the career path choices of veterinarians in western Canada

    PubMed Central

    Jelinski, Murray D.; Campbell, John R.; Naylor, Jonathan M.; Lawson, Karen L.; Derkzen, Dena

    2009-01-01

    This second of 2 articles, relating to the veterinary profession in western Canada, explores the factors associated with veterinarians’ career path choices. Among other factors, companion animal (small animal and equine) (CA) practitioners were less likely to have been raised in, or near to, a small center (≤ 10 000), were more concerned with their workload (hours of work and number of nights on-call), and preferred to work in progressive practices. Food animal (FA) practitioners were more likely to be male, have been raised in a small center, have been raised in the Province of Saskatchewan, and to have self-assessed themselves as having an above average knowledge of agriculture at the time they applied for admission to veterinary college. Mixed animal (MA) practitioners had more factors in common with FA than with CA practitioners. Three main factors were associated with leaving mixed or food animal practice: hours of work and too many nights on-call, the level of remuneration, and lack of support and mentorship. PMID:19721783

  10. [Microbiological quality of the air in "small gastronomy point"].

    PubMed

    Wójcik-Stopczyńska, Barbara

    2006-01-01

The aim of this work was to estimate the microbial contamination of the air in a "small gastronomy point". The study covered three areas, separated by function: 1. the ancillary area, 2. the distribution area (sale and serving of meals), 3. the consumption area. The total numbers of aerobic mesophilic bacteria, yeasts, and moulds were determined by the sedimentation method, and the taxonomic units of the fungal aerosol were also identified. Air samples were collected at 16 investigation points in the morning (8-8.30) and in the afternoon (14-14.30). Four series of measurements were carried out, and 128 air samples were tested in total. The counts of bacteria, yeasts, and moulds were variable, ranging over 30-3397, 0-254, and 0-138 cfu·m⁻³, respectively. Microbial contamination of the air varied with the character of the area (the highest average bacterial count occurred in the consumption area and the highest fungal count in the ancillary area), time of day (contamination increased in the afternoon), and measurement date. Only in isolated samples did the numbers of bacteria and fungi exceed recommended levels. Pigmented bacteria accounted for a large share of the total bacterial count, and filamentous fungi were represented mostly by Penicillium sp. and Cladosporium sp.

  11. Modeling the genealogy of a cultural trait.

    PubMed

    Aguilar, Elliot; Ghirlanda, Stefano

    2015-05-01

The mathematical study of genealogies has yielded important insights in population biology, such as the ability to estimate the time to the most recent common ancestor (MRCA) of a sample of genetic sequences or of a group of individuals. Here we introduce a model of cultural genealogies that is a step toward answering similar questions for cultural traits. In our model individuals can inherit from a variable, potentially large number of ancestors, rather than from a fixed, small number of ancestors (one or two) as is typical of genetic evolution. We first show that, given a sample of individuals, a cultural common ancestor does not necessarily exist. We then introduce a related concept: the most recent unique ancestor (MRUA), i.e., the most recent single individual who is the earliest cultural ancestor of the sample. We show that, under neutral evolution, the time to the MRUA can be staggeringly larger than the time to the MRCA in a single ancestor model, except when the average number of learning opportunities per individual is small. Our results point out that the properties of cultural genealogies may be very different from those of genetic genealogies, with potential implications for reconstructing the histories of cultural traits. Copyright © 2014 Elsevier Inc. All rights reserved.
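The central construction, an individual inheriting from several cultural ancestors, can be sketched with a toy simulation. The model below (fixed population, discrete generations, exactly k ancestors per individual) is an illustrative simplification, not the paper's model:

```python
import random

def build_genealogy(n, k, n_generations, rng):
    """Random cultural genealogy: each individual in generation g >= 1
    learns from k distinct random members of generation g - 1."""
    return [None] + [[rng.sample(range(n), k) for _ in range(n)]
                     for _ in range(n_generations - 1)]

def ancestors(parents, gen, idx):
    """All (generation, index) cultural ancestors of one individual."""
    found, frontier = set(), {(gen, idx)}
    while frontier:
        nxt = set()
        for g, i in frontier:
            if g == 0:
                continue
            for p in parents[g][i]:
                if (g - 1, p) not in found:
                    found.add((g - 1, p))
                    nxt.add((g - 1, p))
        frontier = nxt
    return found

def common_ancestors(parents, sample_gen, sample):
    """Cultural ancestors shared by every sampled individual (may be empty)."""
    return set.intersection(*(ancestors(parents, sample_gen, i) for i in sample))

rng = random.Random(1)
parents = build_genealogy(20, 2, 8, rng)   # 20 individuals, 2 ancestors each
shared = common_ancestors(parents, 7, [0, 1, 2])
```

Intersecting ancestor sets over a sample directly illustrates the paper's first observation: the intersection can be empty, i.e., the sample need not have any cultural common ancestor.
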

  12. A Simple Method for the Extraction, PCR-amplification, Cloning, and Sequencing of Pasteuria 16S rDNA from Small Numbers of Endospores.

    PubMed

    Atibalentja, N; Noel, G R; Ciancio, A

    2004-03-01

    For many years the taxonomy of the genus Pasteuria has been marred with confusion because the bacterium could not be cultured in vitro and, therefore, descriptions were based solely on morphological, developmental, and pathological characteristics. The current study sought to devise a simple method for PCR-amplification, cloning, and sequencing of Pasteuria 16S rDNA from small numbers of endospores, with no need for prior DNA purification. Results show that DNA extracts from plain glass bead-beating of crude suspensions containing 10,000 endospores at 0.2 x 10 endospores ml(-1) were sufficient for PCR-amplification of Pasteuria 16S rDNA, when used in conjunction with specific primers. These results imply that for P. penetrans and P. nishizawae only one parasitized female of Meloidogyne spp. and Heterodera glycines, respectively, should be sufficient, and as few as eight cadavers of Belonolaimus longicaudatus with an average number of 1,250 endospores of "Candidatus Pasteuria usgae" are needed for PCR-amplification of Pasteuria 16S rDNA. The method described in this paper should facilitate the sequencing of the 16S rDNA of the many Pasteuria isolates that have been reported on nematodes and, consequently, expedite the classification of those isolates through comparative sequence analysis.

  13. Modified methylene blue injection improves lymph node harvest in rectal cancer.

    PubMed

    Liu, Jianpei; Huang, Pinjie; Zheng, Zongheng; Chen, Tufeng; Wei, Hongbo

    2017-04-01

    The presence of nodal metastases in rectal cancer plays an important role in accurate staging and prognosis, which depends on adequate lymph node harvest. The aim of this prospective study is to investigate the feasibility and survival benefit of improving lymph node harvest by a modified method with methylene blue injection in rectal cancer specimens. One hundred and thirty-one patients with rectal cancer were randomly assigned to the control group in which lymph nodes were harvested by palpation and sight, or to the methylene blue group using a modified method of injection into the superior rectal artery with methylene blue. Analysis of clinicopathologic records, including a long-term follow-up, was performed. In the methylene blue group, 678 lymph nodes were harvested by simple palpation and sight. Methylene blue injection added 853 lymph nodes to the total harvest as well as 32 additional metastatic lymph nodes, causing a shift to node-positive stage in four patients. The average number of lymph nodes harvested was 11.7 ± 3.4 in the control group and 23.2 ± 4.7 in the methylene blue group, respectively. The harvest of small lymph nodes (<5 mm) and the average number of metastatic nodes were both significantly higher in the methylene blue group. The modified method of injection with methylene blue had no impact on overall survival. The modified method with methylene blue injection improved lymph node harvest in rectal cancer, especially small node and metastatic node retrieval, which provided more accurate staging. However, it was not associated with overall survival. © 2014 Royal Australasian College of Surgeons.
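The reported difference in mean harvest (23.2 ± 4.7 vs. 11.7 ± 3.4 nodes) can be checked from summary statistics alone with a Welch t-statistic. The per-arm sizes are not stated in the abstract, so the 65/66 split used below is an assumption for illustration only:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    computed from group means, standard deviations, and sizes."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Reported harvests (mean, SD); the 65/66 per-arm split is assumed
t_stat, df = welch_t(23.2, 4.7, 65, 11.7, 3.4, 66)
```

With any plausible split of the 131 patients, the statistic is far into the significant range, consistent with the abstract's claim of a significantly higher harvest.
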

  14. Classification of Birds and Bats Using Flight Tracks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.

Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.

  15. Naming Game with Multiple Hearers

    NASA Astrophysics Data System (ADS)

    Li, Bing; Chen, Guanrong; Chow, Tommy W. S.

    2013-05-01

A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word, in a population with one speaker and multiple hearers, at each interaction towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time is performed, revealing that the topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individual ability of learning new words, consequently decreasing the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes longer to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach convergence. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper are different from those of the single-speaker/single-hearer naming games documented in the literature.
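A one-speaker/multiple-hearer naming game is easy to simulate. The sketch below runs on a complete graph (every pair can interact) rather than the ER/WS/BA topologies studied in the paper, and it uses the standard minimal-NG collapse rule; treat it as an illustrative toy, not NGMH itself:

```python
import random

def naming_game(n=10, hearers=3, max_rounds=100_000, seed=42):
    """Minimal naming game with one speaker and several hearers per round.

    A random speaker broadcasts one word from its inventory (inventing a
    brand-new word if the inventory is empty).  A hearer that already
    knows the word collapses its inventory to that word; otherwise it
    adds the word.  On any success the speaker collapses too.  Runs on a
    complete graph; returns the number of rounds until every inventory
    is the same single word, or None if max_rounds is exceeded.
    """
    rng = random.Random(seed)
    inventories = [set() for _ in range(n)]
    next_word = 0
    for round_no in range(1, max_rounds + 1):
        speaker = rng.randrange(n)
        if not inventories[speaker]:
            inventories[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        listeners = rng.sample([i for i in range(n) if i != speaker], hearers)
        success = False
        for h in listeners:
            if word in inventories[h]:
                inventories[h] = {word}
                success = True
            else:
                inventories[h].add(word)
        if success:
            inventories[speaker] = {word}
        if all(inv == {word} for inv in inventories):
            return round_no
    return None
```

Swapping the complete-graph neighbor choice for an adjacency list extends the sketch to the networked topologies the paper compares.
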

  16. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography

    PubMed Central

    Scherer, Klaus R.; Schuller, Björn W.

    2018-01-01

In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks (novelty, intrinsic pleasantness, goal conduciveness, control, and power) in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest is sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. 
The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions. PMID:29293572

  17. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography.

    PubMed

    Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R; Schuller, Björn W

    2018-01-01

In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks (novelty, intrinsic pleasantness, goal conduciveness, control, and power) in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest is sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. 
The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions.
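The UAR figures quoted above are unweighted average recall: the mean of per-class recalls, which, unlike plain accuracy, is not inflated by class imbalance. A minimal sketch:

```python
def uar(y_true, y_pred):
    """Unweighted average recall: the mean of per-class recalls."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        members = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in members) / len(members))
    return sum(recalls) / len(recalls)

# Imbalanced toy case: predicting the majority class everywhere gives
# accuracy 0.8 but UAR 0.5 (chance level for a binary task)
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
```

This is why a UAR of 83.5 on a binary task is meaningfully above the 50 chance level regardless of how the trials are distributed across classes.
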

  18. Rotational effects on impingement cooling

    NASA Technical Reports Server (NTRS)

    Epstein, A. H.; Kerrebrock, J. L.; Koo, J. J.; Preiser, U. Z.

    1987-01-01

The present consideration of rotation effects on heat transfer in a radially exhausted, impingement-cooled turbine blade model gives attention to experimental results for Reynolds and Rossby numbers and blade/coolant temperature ratio values that are representative of small gas turbine engines. On the basis of a model that encompasses the effects of Coriolis force and buoyancy on heat transfer, buoyancy is identified as the cause of an average Nusselt number that is 20-30 percent lower than expected from previous nonrotating data. A heuristic model is proposed which predicts that the impingement jets nearest the blade roots should deflect inward, due to a centripetal force generated by their tangential velocity counter to the blade motion. Potentially serious thermal stresses must be anticipated from rotation effects in the course of blade design.

  19. Accuracies of the synthesized monochromatic CT numbers and effective atomic numbers obtained with a rapid kVp switching dual energy CT scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Larson, Sandra C.

    2011-04-15

Purpose: This study was performed to investigate the accuracies of the synthesized monochromatic images and effective atomic number maps obtained with the new GE Discovery CT750 HD CT scanner. Methods: A Gammex-RMI model 467 tissue characterization phantom and the CT number linearity section of a Phantom Laboratory Catphan 600 phantom were scanned using the dual energy (DE) feature on the GE CT750 HD scanner. Synthesized monochromatic images at various energies between 40 and 120 keV and effective atomic number (Zeff) maps were generated. Regions of interest were placed within these images/maps to measure the average monochromatic CT numbers and average Zeff of the materials within these phantoms. The true Zeff values were either supplied by the phantom manufacturer or computed using Mayneord's equation. The linear attenuation coefficients for the true CT numbers were computed using the NIST XCOM program with manufacturer-supplied elemental compositions and densities as input. The effects of small variations in the assumed true densities of the materials were also investigated. Finally, the effect of body size on the accuracies of the synthesized monochromatic CT numbers was investigated using a custom lumbar section phantom with and without an external fat-mimicking ring. Results: Other than the Zeff of the simulated lung inserts in the tissue characterization phantom, which could not be measured by DECT, the Zeff values of all of the other materials in the tissue characterization and Catphan phantoms were accurate to within 15%. The accuracies of the synthesized monochromatic CT numbers of the materials in both phantoms varied with energy and material. For the 40-120 keV range, RMS errors between the measured and true CT numbers in the Catphan are 8-25 HU when the true CT numbers were computed using the nominal plastic densities. 
These RMS errors improve to 3-12 HU for assumed true densities within ±0.02 g/cc of the nominal density. The RMS errors between the measured and true CT numbers of the tissue mimicking materials in the tissue characterization phantom over the 40-120 keV range varied from about 6 to 248 HU and did not improve as dramatically with small changes in assumed true density. Conclusions: Initial tests indicate that the Zeff values computed with DECT on this scanner are reasonably accurate; however, the synthesized monochromatic CT numbers can be very inaccurate, especially for dense tissue mimicking materials at low energies. Furthermore, the synthesized monochromatic CT numbers of materials still depend on the amount of the surrounding tissues, especially at low keV, demonstrating that the numbers are not truly monochromatic. Further research is needed to develop DE methods that produce more accurate synthesized monochromatic CT numbers.
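Two of the quantities above have simple standard definitions worth making explicit: the CT number in Hounsfield units derived from a linear attenuation coefficient, and Mayneord's effective atomic number. A sketch (the numeric attenuation value is illustrative, not from the study):

```python
def hounsfield(mu, mu_water):
    """CT number in HU from linear attenuation coefficients at one energy:
    HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

def z_eff_mayneord(electron_fractions, m=2.94):
    """Mayneord effective atomic number: (sum_i f_i * Z_i**m) ** (1/m),
    where f_i is the fraction of electrons contributed by element Z_i."""
    return sum(f * z ** m for z, f in electron_fractions) ** (1.0 / m)

# Water: 2 of 10 electrons from H (Z = 1), 8 of 10 from O (Z = 8);
# z_eff_mayneord(water) lands near the textbook value of about 7.4
water = [(1, 0.2), (8, 0.8)]
mu_water_70keV = 0.19  # cm^-1, illustrative order of magnitude only
```

By this definition water is 0 HU at every energy, which is why the HU errors reported above vary with both the material and the synthesized energy.
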

  20. Investigation of critical burning of fuel droplets. [monopropellants

    NASA Technical Reports Server (NTRS)

    Faeth, G. M.; Chanin, S.

    1974-01-01

    The steady combustion characteristics of droplets were considered in combustion chamber environments at various pressures, flow conditions, and ambient oxidizer concentrations for a number of hydrocarbon fuels. Using data obtained earlier, predicted gasification rates were within + or - 30% of measurements when the correction for convection was based upon average properties between the liquid surface and the flame around the droplet. Analysis was also completed for the open loop response of monopropellant droplets, based upon earlier strand combustion results. At the limit of large droplets, where the effect of flame curvature is small, the results suggest sufficient response to provide a viable mechanism for combustion instability in the frequency and droplet size range appropriate to practical combustors. Calculations are still in progress for a broader range of droplet sizes, including conditions where active combustion effects are small.

  1. Ras activation by SOS: Allosteric regulation by altered fluctuation dynamics

    PubMed Central

    Iversen, Lars; Tu, Hsiung-Lin; Lin, Wan-Chen; Christensen, Sune M.; Abel, Steven M.; Iwig, Jeff; Wu, Hung-Jen; Gureasko, Jodi; Rhodes, Christopher; Petit, Rebecca S.; Hansen, Scott D.; Thill, Peter; Yu, Cheng-Han; Stamou, Dimitrios; Chakraborty, Arup K.; Kuriyan, John; Groves, Jay T.

    2014-01-01

    Activation of the small guanosine triphosphatase H-Ras by the exchange factor Son of Sevenless (SOS) is an important hub for signal transduction. Multiple layers of regulation, through protein and membrane interactions, govern activity of SOS. We characterized the specific activity of individual SOS molecules catalyzing nucleotide exchange in H-Ras. Single-molecule kinetic traces revealed that SOS samples a broad distribution of turnover rates through stochastic fluctuations between distinct, long-lived (more than 100 seconds), functional states. The expected allosteric activation of SOS by Ras–guanosine triphosphate (GTP) was conspicuously absent in the mean rate. However, fluctuations into highly active states were modulated by Ras-GTP. This reveals a mechanism in which functional output may be determined by the dynamical spectrum of rates sampled by a small number of enzymes, rather than the ensemble average. PMID:24994643

  2. High Reynolds number analysis of flat plate and separated afterbody flow using non-linear turbulence models

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1996-01-01

The ability of the three-dimensional Navier-Stokes method, PAB3D, to simulate the effect of Reynolds number variation using non-linear explicit algebraic Reynolds stress turbulence modeling was assessed. Subsonic flat plate boundary-layer flow parameters such as normalized velocity distributions, local and average skin friction, and shape factor were compared with DNS calculations and classical theory at various local Reynolds numbers up to 180 million. Additionally, surface pressure coefficient distributions and integrated drag predictions on an axisymmetric nozzle afterbody were compared with experimental data from 10 to 130 million Reynolds number. The high Reynolds number data were obtained from the NASA Langley 0.3m Transonic Cryogenic Tunnel. There was generally good agreement of surface static pressure coefficients between the CFD and the measurements. The change in pressure coefficient distributions with varying Reynolds number was similar to the experimental data trends, though slightly over-predicting the effect. The computational sensitivity of viscous modeling and turbulence modeling are shown. Integrated afterbody pressure drag was typically slightly lower than the experimental data. The change in afterbody pressure drag with Reynolds number was small both experimentally and computationally, even though the shape of the distribution was somewhat modified with Reynolds number.
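The "classical theory" comparison for flat-plate skin friction is typically a power-law correlation. A sketch of the common 1/7th-power-law forms (approximations with a limited validity range, well below the 180 million Reynolds numbers reached in the study):

```python
def cf_local_turbulent(re_x):
    """Local flat-plate skin-friction coefficient from the 1/7th-power-law
    correlation Cf = 0.0592 * Re_x**-0.2 (approximate; roughly valid for
    5e5 < Re_x < 1e7)."""
    return 0.0592 * re_x ** -0.2

def cf_average_turbulent(re_l):
    """Average skin-friction coefficient over a plate of length L,
    Cf_avg = 0.074 * Re_L**-0.2 (same power-law family and caveats)."""
    return 0.074 * re_l ** -0.2
```

At higher Reynolds numbers, log-law-based correlations replace these power laws, which is part of why DNS and experiment are needed to anchor the turbulence-model assessment.
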

  3. cBathy: A robust algorithm for estimating nearshore bathymetry

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd

    2013-01-01

A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high-spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively; errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash, and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
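The depth estimation in phase two rests on the linear surface-gravity-wave dispersion relation ω² = g k tanh(kh). A sketch that inverts it for depth by bisection (a simplified stand-in for the algorithm's weighted fit over many frequency-wavenumber pairs):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k, h_max=50.0, tol=1e-6):
    """Solve omega**2 = g * k * tanh(k * h) for depth h by bisection.

    Returns None when the (omega, k) pair is indistinguishable from deep
    water within h_max: tanh saturates there and depth is unresolvable,
    mirroring the algorithm's loss of skill offshore."""
    def f(h):
        return G * k * math.tanh(k * h) - omega ** 2
    lo, hi = 1e-6, h_max
    if f(hi) < 0.0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The deep-water saturation of tanh(kh) is one physical reason errors grow near the offshore limits of the analysis region.
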

  4. DNA Barcode Analysis of Thrips (Thysanoptera) Diversity in Pakistan Reveals Cryptic Species Complexes.

    PubMed

    Iftikhar, Romana; Ashfaq, Muhammad; Rasool, Akhtar; Hebert, Paul D N

    2016-01-01

    Although thrips are globally important crop pests and vectors of viral disease, species identifications are difficult because of their small size and inconspicuous morphological differences. Sequence variation in the mitochondrial COI-5' (DNA barcode) region has proven effective for the identification of species in many groups of insect pests. We analyzed barcode sequence variation among 471 thrips from various plant hosts in north-central Pakistan. The Barcode Index Number (BIN) system assigned these sequences to 55 BINs, while the Automatic Barcode Gap Discovery detected 56 partitions, a count that coincided with the number of monophyletic lineages recognized by Neighbor-Joining analysis and Bayesian inference. Congeneric species showed an average of 19% sequence divergence (range = 5.6% - 27%) at COI, while intraspecific distances averaged 0.6% (range = 0.0% - 7.6%). BIN analysis suggested that all intraspecific divergence >3.0% actually involved a species complex. In fact, sequences for three major pest species (Haplothrips reuteri, Thrips palmi, Thrips tabaci), and one predatory thrips (Aeolothrips intermedius) showed deep intraspecific divergences, providing evidence that each is a cryptic species complex. The study compiles the first barcode reference library for the thrips of Pakistan, and examines global haplotype diversity in four important pest thrips.
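The divergence figures quoted are pairwise distances over aligned COI sequences. A minimal sketch of the uncorrected p-distance, the simplest such measure (barcoding studies often use Kimura-2-parameter distances instead):

```python
def p_distance(seq_a, seq_b):
    """Uncorrected pairwise divergence: the fraction of aligned sites
    at which two equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

# Two hypothetical aligned COI fragments differing at 1 of 8 sites
d = p_distance("ACGTACGT", "ACGTACGA")
```

On this scale, the >3% intraspecific divergences flagged above as evidence of cryptic species complexes correspond to a p-distance above 0.03.
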

  5. Fluctuation of blood pressure and pulse rate during colostomy irrigation.

    PubMed

    Sadahiro, S; Noto, T; Tajima, T; Mitomi, T; Miyazaki, T; Numata, M

    1995-06-01

    The aim of this study was to determine the effects of colostomy irrigation on the vital signs of patients with left colostomy. Twenty-two consecutive patients who underwent abdominoperineal resection for cancer of the lower rectum and had left lower quadrant end colostomy were included in this study. Subjective symptoms, blood pressure, and pulse rate during the first irrigation were investigated. Fluctuation of blood pressure during instillation was 8.0/8.5 mmHg (average) and 25.0/17.9 mmHg during evacuation. Fluctuation of pulse rate was 5.5 per minute (average) during instillation and 11.5 per minute during evacuation. The number of subjects who showed more than 20% fluctuation of systolic pressure was 12 (54.5 percent) and that of diastolic pressure was 14 (63.6 percent). One of 22 patients complained of illness during irrigation. Although colostomy irrigation showed no significant effects on vital signs in the majority of patients, it caused a significant reduction in both blood pressure and pulse rate in a small number of patients. Careful attention should be paid to vital signs considering the possibility of such effects, especially on the initial irrigation.

  6. Examination of the four-fifths law for longitudinal third-order moments in incompressible magnetohydrodynamic turbulence in a periodic box.

    PubMed

    Yoshimatsu, Katsunori

    2012-06-01

The four-fifths law for third-order longitudinal moments is examined, using direct numerical simulation (DNS) data on three-dimensional (3D) forced incompressible magnetohydrodynamic (MHD) turbulence without a uniformly imposed magnetic field in a periodic box. The magnetic Prandtl number is set to one, and the number of grid points is 512³. A generalized Kármán-Howarth-Kolmogorov equation for second-order velocity moments in isotropic MHD turbulence is extended to anisotropic MHD turbulence by means of a spherical average over the direction of r. Here, r is a separation vector. The viscous, forcing, anisotropic and nonstationary terms in the generalized equation are quantified. It is found that the influence of the anisotropic terms on the four-fifths law is negligible at small scales, compared to that of the viscous term. However, the influence of the directional anisotropy, which is measured by the departure of the third-order moments in a particular direction of r from the spherically averaged ones, on the four-fifths law is suggested to be substantial, at least in the case studied here.
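For reference, the hydrodynamic four-fifths law being examined is Kolmogorov's exact inertial-range relation for the third-order longitudinal structure function (the MHD equation studied in the paper generalizes the flux term with magnetic contributions):

```latex
\left\langle \delta u_L^3(r) \right\rangle \;=\; -\frac{4}{5}\,\varepsilon\, r,
\qquad
\delta u_L(r) \;=\; \bigl[\mathbf{u}(\mathbf{x}+\mathbf{r}) - \mathbf{u}(\mathbf{x})\bigr]\cdot\hat{\mathbf{r}},
```

where ε is the mean energy dissipation rate and r an inertial-range separation distance.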

  7. Improved canopy reflectance modeling and scene inference through improved understanding of scene pattern

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Simonett, David

    1988-01-01

    The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.

  8. Channel correlation and BER performance analysis of coherent optical communication systems with receive diversity over moderate-to-strong non-Kolmogorov turbulence.

    PubMed

    Fu, Yulong; Ma, Jing; Tan, Liying; Yu, Siyuan; Lu, Gaoyuan

    2018-04-10

    In this paper, new expressions of the channel-correlation coefficient and its components (the large- and small-scale channel-correlation coefficients) for a plane wave are derived for a horizontal link in moderate-to-strong non-Kolmogorov turbulence using a generalized effective atmospheric spectrum which includes finite-turbulence inner and outer scales and high-wave-number "bump". The closed-form expression of the average bit error rate (BER) of the coherent free-space optical communication system is derived using the derived channel-correlation coefficients and an α-μ distribution to approximate the sum of the square root of arbitrarily correlated Gamma-Gamma random variables. Analytical results are provided to investigate the channel correlation and evaluate the average BER performance. The validity of the proposed approximation is illustrated by Monte Carlo simulations. This work will help with further investigation of the fading correlation in spatial diversity systems.

  9. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
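    A minimal univariate sketch of the scaling-factor refinement described above (in the spirit of the Hartung-Knapp adjustment; this is an illustration, not the authors' exact multivariate method). The t critical value is passed in by hand to keep the sketch dependency-free:

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird estimate of the between-study variance tau^2."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)

def refined_ci(y, v, t_crit):
    """Random-effects mean with a scaled standard error and a t-based CI."""
    tau2 = dl_tau2(y, v)
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    k = len(y)
    # Scaling factor: weighted residual variance on k-1 degrees of freedom.
    scale = np.sum(w * (y - mu) ** 2) / (k - 1)
    se = np.sqrt(scale / np.sum(w))
    return mu, (mu - t_crit * se, mu + t_crit * se)

# Four hypothetical study effects and within-study variances.
y = np.array([0.2, 0.5, 0.1, 0.4])
v = np.array([0.04, 0.06, 0.05, 0.03])
mu, (lo, hi) = refined_ci(y, v, t_crit=3.182)   # t(3 df, 95%)
```

    With few studies the scaling factor widens (or occasionally narrows) the interval relative to the conventional normal-based one, which is the accuracy problem the abstract describes.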

  10. Meta-structure correlation in protein space unveils different selection rules for folded and intrinsically disordered proteins.

    PubMed

    Naranjo, Yandi; Pons, Miquel; Konrat, Robert

    2012-01-01

    The number of existing protein sequences spans a very small fraction of sequence space. Natural proteins have overcome a strong negative selective pressure to avoid the formation of insoluble aggregates. Stably folded globular proteins and intrinsically disordered proteins (IDPs) use alternative solutions to the aggregation problem. While in globular proteins folding minimizes the access to aggregation prone regions, IDPs on average display large exposed contact areas. Here, we introduce the concept of average meta-structure correlation maps to analyze sequence space. Using this novel conceptual view we show that representative ensembles of folded and ID proteins show distinct characteristics and respond differently to sequence randomization. By studying the way evolutionary constraints act on IDPs to disable a negative function (aggregation) we might gain insight into the mechanisms by which function-enabling information is encoded in IDPs.

  11. An Estimate of North Atlantic Basin Tropical Cyclone Activity for 2008

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2008-01-01

    The statistics of North Atlantic basin tropical cyclones for the interval 1945-2007 are examined and estimates are given for the frequencies of occurrence of the number of tropical cyclones, number of hurricanes, number of major hurricanes, number of category 4/5 hurricanes, and number of U.S. land-falling hurricanes for the 2008 hurricane season. Also examined are the variations of peak wind speed, average peak wind speed per storm, lowest pressure, average lowest pressure per storm, recurrence rate and duration of extreme events (El Nino and La Nina), the variation of 10-yr moving averages of parametric first differences, and the association of decadal averages of frequencies of occurrence of North Atlantic basin tropical cyclones against decadal averages of Armagh Observatory, Northern Ireland, annual mean temperature (found to be extremely important for number of tropical cyclones and number of hurricanes). Because the 2008 hurricane season seems destined to be one that is non-El Nino-related and is a post-1995 season, estimates of the frequencies of occurrence for the various subsets of storms should be above long-term averages.
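    The "10-yr moving averages of parametric first differences" mentioned above can be sketched as follows; the annual counts are hypothetical:

```python
import numpy as np

def first_differences(series):
    """Year-over-year first differences of an annual count series."""
    return np.diff(series)

def moving_average(x, window=10):
    """Trailing moving average over the given window length."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Hypothetical annual tropical-cyclone counts (illustrative only).
counts = np.array([8, 11, 7, 10, 14, 9, 12, 15, 10, 13, 16, 12])
fd = first_differences(counts)
smoothed = moving_average(fd, window=10)
```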

  12. Case study. Health hazards of automotive repair mechanics: thermal and lighting comfort, particulate matter and noise.

    PubMed

    Loupa, G

    2013-01-01

    An indoor environmental quality survey was conducted in a small private automotive repair shop during May 2009 (hot season) and February 2010 (cold season). It was established that the detached building, which is naturally ventilated and lit, had all the advantages of the temperate local climate. It provided a satisfactory microclimatic working environment, concerning thermal and lighting comfort, without excessive energy consumption for air-conditioning or lighting. Indoor number concentrations of particulate matter (PM) were monitored during both seasons. Their size distributions were strongly affected by the indoor activities and the air exchange rate of the building. During working hours, the average indoor/outdoor (I/O) number concentration ratio for PM0.3-1 was 31 in the hot season and 69 in the cold season. However, the I/O PM1-10 number concentration ratios were similar between the two seasons: 33 and 32, respectively. The estimated indoor mass concentration of PM10 averaged 0.68 mg m(-3) in the hot season and 1.19 mg m(-3) in the cold season, i.e., 22 and 36 times higher than outdoors, respectively. This indicates that indoor air pollution may adversely affect mechanics' health. Noise levels were highly variable, and the average LEX,8h of 69.3 dB(A) was below the European Union exposure limit value of 87 dB(A). Noise originated from the use of manual hammers, the revving up of engines, and the closing of car doors or hoods. Octave band analysis indicated that the prevailing noise frequencies were in the region of maximum ear sensitivity.
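    As a back-of-envelope check on the PM10 figures above (assuming the stated factors are simple indoor/outdoor quotients), the implied outdoor mass concentrations come out at typical ambient levels:

```python
# Reported indoor PM10 mass concentrations (mg/m^3) and indoor/outdoor factors.
indoor_hot, factor_hot = 0.68, 22     # hot season
indoor_cold, factor_cold = 1.19, 36   # cold season

# Implied outdoor concentrations, converted to ug/m^3.
outdoor_hot = indoor_hot / factor_hot * 1000     # about 31 ug/m^3
outdoor_cold = indoor_cold / factor_cold * 1000  # about 33 ug/m^3
```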

  13. Changes in the frequency distribution of energy deposited in short pathlengths as a function of energy degradation of the primary beam.

    NASA Technical Reports Server (NTRS)

    Baily, N. A.; Steigerwalt, J. E.; Hilbert, J. W.

    1972-01-01

    The frequency distributions of event size in the deposition of energy over small pathlengths have been measured after penetration of 44.3 MeV protons through various thicknesses of tissue-equivalent material. Results show that particle energy straggling of an initially monoenergetic proton beam after passage through an absorber causes the frequency distributions of energy deposited in short pathlengths of low atomic number materials to remain broad. In all cases investigated, the ratio of the most probable to the average energy losses has been significantly less than unity.

  14. The concentration of erlotinib in the cerebrospinal fluid of patients with brain metastasis from non-small-cell lung cancer

    PubMed Central

    DENG, YANMING; FENG, WEINENG; WU, JING; CHEN, ZECHENG; TANG, YICONG; ZHANG, HUA; LIANG, JIANMIAO; XIAN, HAIBING; ZHANG, SHUNDA

    2014-01-01

    It has been demonstrated that erlotinib is effective in treating patients with brain metastasis from non-small-cell lung cancer. However, the number of studies determining the erlotinib concentration in these patients is limited. The purpose of this study was to measure the concentration of erlotinib in the cerebrospinal fluid of patients with brain metastasis from non-small-cell lung carcinoma. Six patients were treated with the standard recommended daily dose of erlotinib (150 mg) for 4 weeks. All the patients had previously received chemotherapy, but no brain radiotherapy. At the end of the treatment period, blood plasma and cerebrospinal fluid samples were collected and the erlotinib concentration was determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The average erlotinib concentration in the blood plasma and the cerebrospinal fluid was 717.7±459.7 and 23.7±13.4 ng/ml, respectively. The blood-brain barrier permeation rate of erlotinib was found to be 4.4±3.2%. In patients with partial response (PR), stable disease (SD) and progressive disease (PD), the average concentrations of erlotinib in the cerebrospinal fluid were 35.5±19.0, 19.1±8.7 and 16.4±5.9 ng/ml, respectively. In addition, the efficacy rate of erlotinib for metastatic brain lesions was 33.3%, increasing to 50% in patients with EGFR mutations. However, erlotinib appeared to be ineffective in cases with wild-type EGFR. In conclusion, a relatively high concentration of erlotinib was detected in the cerebrospinal fluid of patients with brain metastases from non-small-cell lung cancer. Thus, erlotinib may be considered as a treatment option for this patient population. PMID:24649318

  15. Global transcriptomic profiling using small volumes of whole blood: a cost-effective method for translational genomic biomarker identification in small animals.

    PubMed

    Fricano, Meagan M; Ditewig, Amy C; Jung, Paul M; Liguori, Michael J; Blomme, Eric A G; Yang, Yi

    2011-01-01

    Blood is an ideal tissue for the identification of novel genomic biomarkers for toxicity or efficacy. However, using blood for transcriptomic profiling presents significant technical challenges due to the transcriptomic changes induced by ex vivo handling and the interference of highly abundant globin mRNA. Most whole blood RNA stabilization and isolation methods also require significant volumes of blood, limiting their effective use in small animal species, such as rodents. To overcome these challenges, a QIAzol-based RNA stabilization and isolation method (QSI) was developed to isolate sufficient amounts of high quality total RNA from 25 to 500 μL of rat whole blood. The method was compared to the standard PAXgene Blood RNA System using blood collected from rats exposed to saline or lipopolysaccharide (LPS). The QSI method yielded an average of 54 ng total RNA per μL of rat whole blood with an average RNA Integrity Number (RIN) of 9, a performance comparable with the standard PAXgene method. Total RNA samples were further processed using the NuGEN Ovation Whole Blood Solution system and cDNA was hybridized to Affymetrix Rat Genome 230 2.0 Arrays. The microarray QC parameters using RNA isolated with the QSI method were within the acceptable range for microarray analysis. The transcriptomic profiles were highly correlated with those using RNA isolated with the PAXgene method and were consistent with expected LPS-induced inflammatory responses. The present study demonstrated that the QSI method coupled with NuGEN Ovation Whole Blood Solution system is cost-effective and particularly suitable for transcriptomic profiling of minimal volumes of whole blood, typical of those obtained with small animal species.

  16. Developmental Changes in the Effect of Active Left and Right Head Rotation on Random Number Generation.

    PubMed

    Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine

    2018-01-01

    Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children's access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed.
Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes.
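    The first-order-difference (FOD) measures analysed above can be computed directly from a generated sequence; the sequence below is hypothetical:

```python
def fod_stats(responses):
    """Mean FOD, counts of negative/positive FODs, and mean absolute FOD."""
    fods = [b - a for a, b in zip(responses, responses[1:])]
    neg = sum(1 for d in fods if d < 0)   # leftward steps on the mental number line
    pos = sum(1 for d in fods if d > 0)   # rightward steps
    mean_fod = sum(fods) / len(fods)
    mean_abs = sum(abs(d) for d in fods) / len(fods)
    return mean_fod, neg, pos, mean_abs

# Hypothetical random-number-generation responses (illustrative only).
sequence = [5, 3, 7, 2, 8, 4, 9, 1, 6]
mean_fod, neg, pos, mean_abs = fod_stats(sequence)
```

    The sign counts capture the direction of the attention shift, while the mean absolute FOD captures its size, matching the two measures the study separates.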

  17. Productivity and impact of astronomical facilities: Three years of publications and citation rates

    NASA Astrophysics Data System (ADS)

    Trimble, V.; Ceja, J. A.

    2008-07-01

    In calendar years 2001 to 2003, 20 journals of astronomy and astrophysics published 11 831 papers that reported or analyzed observations at wavelengths from meter radio to ultrahigh energy gamma rays. These were cited 161 556 times in the three calendar years following publication, according to the Science Citation Index/Web of Science, for an average of 13.66 citations per paper or 4.55 citations per paper per year. We examine these numbers as a function of subject matter, wavelength bands, journals, and individual telescopes used and explore a small subset of possible temporal trends, anomalies, and sources of uncertainty, including blockbuster journals, papers and facilities. Many of the results resemble qualitative expectations. There are hot topics (cosmology, exoplanets) and not so hot topics (binary stars, planetary nebulae). Papers reporting data from space are cited a bit more often, and ground-based radio papers a bit less often, than optical papers, while multi-wavelength ones do noticeably better than average. The total number of telescopes involved is surprisingly large, approximately 350 optical and infrared (mostly ground-based but including HST because of its long life), 144 radio facilities on about 100 sites (including WMAP and COBE and a few balloon-borne CMB experiments), and 105 space-based detectors (including satellites, interplanetary probes, things carried on rockets, balloons, the Shuttle, and so forth). The outstanding telescopes are generally both stable with time and predictable. HST and the VLA are responsible for the largest number of optical and radio papers respectively, but the most frequently cited optical papers come from SDSS (by a wide margin), Keck, and the AAT, while the JCMT, Parkes and (especially) CMB observatories lead the radio brigade. Among things that fly, leadership changes more quickly, as missions are launched, vigorously exploited, and turned off, sometimes achieving geostationary, suboceanic orbits. 
If you have a choice, large trumps small, but well-supported sites trump struggling ones by a comparable factor. And service to the community, in the form of catalogues and mission descriptions, is rewarded, at least in citation numbers, if not always in other ways.
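    The headline averages follow directly from the totals:

```python
papers = 11831       # observational papers published 2001-2003
citations = 161556   # citations in the following three calendar years

per_paper = citations / papers        # ~13.66 citations per paper
per_paper_per_year = per_paper / 3    # ~4.55 citations per paper per year
```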

  18. Climate, soil water storage, and the average annual water balance

    USGS Publications Warehouse

    Milly, P.C.D.

    1994-01-01

    This paper describes the development and testing of the hypothesis that the long-term water balance is determined only by the local interaction of fluctuating water supply (precipitation) and demand (potential evapotranspiration), mediated by water storage in the soil. Adoption of this hypothesis, together with idealized representations of relevant input variabilities in time and space, yields a simple model of the water balance of a finite area having a uniform climate. The partitioning of average annual precipitation into evapotranspiration and runoff depends on seven dimensionless numbers: the ratio of average annual potential evapotranspiration to average annual precipitation (index of dryness); the ratio of the spatial average plant-available water-holding capacity of the soil to the annual average precipitation amount; the mean number of precipitation events per year; the shape parameter of the gamma distribution describing spatial variability of storage capacity; and simple measures of the seasonality of mean precipitation intensity, storm arrival rate, and potential evapotranspiration. The hypothesis is tested in an application of the model to the United States east of the Rocky Mountains, with no calibration. Study area averages of runoff and evapotranspiration, based on observations, are 263 mm and 728 mm, respectively; the model yields corresponding estimates of 250 mm and 741 mm, respectively, and explains 88% of the geographical variance of observed runoff within the study region. The differences between modeled and observed runoff can be explained by uncertainties in the model inputs and in the observed runoff. 
In the humid (index of dryness <1) parts of the study area, the dominant factor producing runoff is the excess of annual precipitation over annual potential evapotranspiration, but runoff caused by variability of supply and demand over time is also significant; in the arid (index of dryness >1) parts, all of the runoff is caused by variability of forcing over time. Contributions to model runoff attributable to small-scale spatial variability of storage capacity are insignificant throughout the study area. The consistency of the model with observational data is supportive of the supply-demand-storage hypothesis, which neglects infiltration excess runoff and other finite-permeability effects on the soil water balance.
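    A quick closure check on the reported partitioning: the observed and modeled components each sum to the same study-area average annual precipitation:

```python
# Study-area averages (mm/yr) from the abstract.
obs_runoff, obs_et = 263.0, 728.0   # observation-based
mod_runoff, mod_et = 250.0, 741.0   # model estimates

# Long-term water balance: precipitation = runoff + evapotranspiration.
obs_precip = obs_runoff + obs_et
mod_precip = mod_runoff + mod_et
```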

  19. All brains are made of this: a fundamental building block of brain matter with matching neuronal and glial masses.

    PubMed

    Mota, Bruno; Herculano-Houzel, Suzana

    2014-01-01

    How does the size of the glial and neuronal cells that compose brain tissue vary across brain structures and species? Our previous studies indicate that average neuronal size is highly variable, while average glial cell size is more constant. Measuring whole cell sizes in vivo, however, is a daunting task. Here we use chi-square minimization of the relationship between measured neuronal and glial cell densities in the cerebral cortex, cerebellum, and rest of brain in 27 mammalian species to model neuronal and glial cell mass, as well as the neuronal mass fraction of the tissue (the fraction of tissue mass composed by neurons). Our model shows that while average neuronal cell mass varies by over 500-fold across brain structures and species, average glial cell mass varies only 1.4-fold. Neuronal mass fraction varies typically between 0.6 and 0.8 in all structures. Remarkably, we show that two fundamental, universal relationships apply across all brain structures and species: (1) the glia/neuron ratio varies with the total neuronal mass in the tissue (which in turn depends on variations in average neuronal cell mass), and (2) the neuronal mass per glial cell, and with it the neuronal mass fraction and neuron/glia mass ratio, varies with average glial cell mass in the tissue. We propose that there is a fundamental building block of brain tissue: the glial mass that accompanies a unit of neuronal mass. We argue that the scaling of this glial mass is a consequence of a universal mechanism whereby numbers of glial cells are added to the neuronal parenchyma during development, irrespective of whether the neurons composing it are large or small, but depending on the average mass of the glial cells being added. We also show how evolutionary variations in neuronal cell mass, glial cell mass and number of neurons suffice to determine the most basic characteristics of brain structures, such as mass, glia/neuron ratio, neuron/glia mass ratio, and cell densities.

  20. Oral health of the elderly living in residential homes in Slovenia.

    PubMed

    Petelin, Milan; Cotič, Jasna; Perkič, Katja; Pavlič, Alenka

    2012-06-01

    To evaluate the oral health status of the elderly living in eight randomly selected residential homes for senior citizens across the country. The percentage of the elderly is growing worldwide. With ageing, the risks of various oral diseases, including dental caries and periodontal disease, grow. Altogether 296 elderly people (88 men, 208 women) of average age 79.89 ± 7.4 years were questioned about their medical condition and oral health practice and examined orally. Evaluation of the clinical examination was carried out using DMFT, plaque index (Silness and Löe, 1964) and the Community Periodontal Index of Treatment Need (CPITN). Of 296 participants, 106 (35.8%) were edentulous, 95 (32.1%) had one to nine teeth and 95 (32.1%) had 10 or more teeth. The average number of teeth per individual was small: 6.76 ± 7.47. The average number of teeth with caries lesions was 3.59 ± 4.70, filled teeth 1.94 ± 3.63 and teeth without caries or fillings 1.19 ± 2.41. The average DMFT value was 30.75. In 69.5% of participants, dental plaque was visible with the naked eye. Of the 171 subjects in whom the CPITN index was appraised, 81.9% would need oral hygiene education, 56.7% would need scaling and root planing and 21.6% would need periodontal surgical treatment. The results of this study indicate poor oral health among the elderly living in residential homes situated in different towns in Slovenia. It is of utmost importance to highlight the necessity of improving oral health care for this population. © 2011 The Gerodontology Society and John Wiley & Sons A/S.
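    The reported DMFT value is consistent with the tooth counts above, assuming a 32-tooth full dentition (DMFT = decayed + missing + filled; the 32-tooth basis is an inference, not stated in the abstract):

```python
present = 6.76        # average teeth present per person
decayed = 3.59        # average teeth with caries lesions
filled = 1.94         # average filled teeth
TOTAL_TEETH = 32      # assumed full dentition, including third molars

missing = TOTAL_TEETH - present
dmft = decayed + missing + filled   # close to the reported 30.75
```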

  1. Research output of health research institutions and its use in 42 sub-Saharan African countries: results of a questionnaire-based survey.

    PubMed

    Kebede, Derege; Zielinski, Chris; Mbondji, Peter Ebongue; Sanou, Issa; Kouvividila, Wenceslas; Lusamba-Dikassa, Paul-Samson

    2014-05-01

    To describe and analyse research output from surveyed national health research institutions in Africa. The survey used a structured questionnaire to solicit information from 847 health research institutions in 42 countries of the World Health Organization African Region. Eight hundred and forty-seven health research institutions in 42 sub-Saharan African countries. Key informants from the health research institutions. Volume, type and medium of publications, and distribution of research outputs. Books or chapters for books accounted for the highest number of information products published (on average 16.7 per respondent institution), followed by patents registered in country (8.2), discussion or working papers (6.5) and conference proceedings (6.4). Publication in a peer-reviewed journal constituted only a minor part of research output (on average about 1 paper per institution). Radio and TV broadcasts on health research accounted for the highest number of products issued by institution staff (on average 5.5 per institution), followed by peer-reviewed journals indexed internationally (3.8) or nationally (3.1). There were, on average, 1.5 press releases, 1.5 newspaper or magazine articles, and 1.4 policy briefs per institution. Over half of respondent institutions (52%) developed briefs and summaries of articles to share with their target audiences, 43% developed briefs for possible actions and 37% provided articles and reports upon request. Only a small proportion of information products produced were available in institutional databases. The research output of health research institutions in the Region is significant, but more effort is needed to strengthen research capacity, including human and financial resources. © The Royal Society of Medicine.

  2. Retinal specializations and visual ecology in an animal with an extremely elaborate pupil shape: The Little skate Leucoraja (Raja) erinacea Mitchell, 1825.

    PubMed

    Jinson, S Terrell; Liebich, Jan; Senft, Stephen L; Mäthger, Lydia M

    2018-05-14

    Investigating retinal specializations offers insights into eye functionality. Using retinal wholemount techniques, we investigated the distribution of retinal ganglion cells in the Little skate Leucoraja erinacea by (1) dye-backfilling into the optic nerve prior to retinal wholemounting; and (2) Nissl-staining of retinal wholemounts. Retinas were examined for regional specializations (higher numbers) of ganglion cells that would indicate higher visual acuity in those areas. Total ganglion cell numbers were low compared to other elasmobranchs (backfilled: average 49,713 total ganglion cells, average peak cell density 1,315 ganglion cells mm-2; Nissl-stained: average 47,791 total ganglion cells, average peak cell density 1,319 ganglion cells mm-2). Ganglion cells fell into three size categories: small (5-20 µm), medium (20-30 µm) and large (≥30 µm), and they were not homogeneously distributed across the retina. There was a dorsally located horizontal visual streak with increased ganglion cell density; additionally, there were approximately three local maxima in ganglion cell distribution (potential areae centrales) within this streak in which densities were highest. Using computerized tomography (CT) and micro-CT, the geometrical dimensions of the eye were obtained. Combined with the ganglion cell distributions, spatial resolving power was determined to be between 1.21 and 1.37 cycles per degree. Additionally, photoreceptor sizes varied across different retinal areas; photoreceptors were longest within the horizontal visual streak. Variations in the locations of retinal specializations appear to be related to the animal's anatomy: the shape of the head and eyes, position of the eyes, location of the tapetum, and shape of the pupil, as well as the visual demands associated with lifestyle and habitat type. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
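    Spatial resolving power is commonly estimated from peak ganglion-cell density with a Nyquist-limit calculation; the sketch below uses the reported peak density, but the retinal magnification factor (RMF) is a hypothetical value chosen for illustration (the paper derived the eye geometry from CT and micro-CT):

```python
import math

def nyquist_srp(peak_density_mm2, rmf_mm_per_deg):
    """Nyquist-limit spatial resolving power (cycles/degree) for a square
    lattice of cells: half the linear cell density times the retinal
    magnification factor."""
    linear_density = math.sqrt(peak_density_mm2)   # cells per mm
    return 0.5 * linear_density * rmf_mm_per_deg

# Reported peak density 1,315 cells/mm^2; RMF of 0.072 mm/deg is hypothetical.
srp = nyquist_srp(1315, 0.072)   # falls in the reported 1.21-1.37 cpd range
```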

  3. Quantification and assessment of heat and cold waves in Novi Sad, Northern Serbia

    NASA Astrophysics Data System (ADS)

    Basarin, Biljana; Lukić, Tin; Matzarakis, Andreas

    2016-01-01

    Physiologically equivalent temperature (PET) has been applied to the analysis of heat and cold waves and human thermal conditions in Novi Sad, Serbia. A series of daily minimum and maximum air temperature, relative humidity, wind, and cloud cover was used to calculate PET for the investigated period 1949-2012. The heat and cold wave analysis was carried out on days with PET values exceeding defined thresholds. Additionally, the acclimatization approach was introduced to evaluate human adaptation to interannual thermal perception. Trend analysis has revealed the presence of increasing trend in summer PET anomalies, number of days above defined threshold, number of heat waves, and average duration of heat waves per year since 1981. Moreover, winter PET anomaly as well as the number of days below certain threshold and number of cold waves per year until 1980 was decreasing, but the decrease was not statistically significant. The highest number of heat waves during summer was registered in the last two decades, but also in the first decade of the investigated period. On the other hand, the number of cold waves during six decades is quite similar and the differences are very small.

  4. Quantification and assessment of heat and cold waves in Novi Sad, Northern Serbia.

    PubMed

    Basarin, Biljana; Lukić, Tin; Matzarakis, Andreas

    2016-01-01

    Physiologically equivalent temperature (PET) has been applied to the analysis of heat and cold waves and human thermal conditions in Novi Sad, Serbia. A series of daily minimum and maximum air temperature, relative humidity, wind, and cloud cover was used to calculate PET for the investigated period 1949-2012. The heat and cold wave analysis was carried out on days with PET values exceeding defined thresholds. Additionally, the acclimatization approach was introduced to evaluate human adaptation to interannual thermal perception. Trend analysis has revealed the presence of increasing trend in summer PET anomalies, number of days above defined threshold, number of heat waves, and average duration of heat waves per year since 1981. Moreover, winter PET anomaly as well as the number of days below certain threshold and number of cold waves per year until 1980 was decreasing, but the decrease was not statistically significant. The highest number of heat waves during summer was registered in the last two decades, but also in the first decade of the investigated period. On the other hand, the number of cold waves during six decades is quite similar and the differences are very small.

  5. Atomic and Molecular Systems in Intense Ultrashort Laser Pulses

    NASA Astrophysics Data System (ADS)

    Saenz, A.

    2008-07-01

    The full quantum mechanical treatment of atomic and molecular systems exposed to intense laser pulses is a so far unsolved challenge, even for systems as small as molecular hydrogen. Therefore, a number of simplified qualitative and quantitative models have been introduced in order to provide at least some interpretational tools for experimental data. The assessment of these models describing the molecular response is complicated, since a comparison to experiment often requires a number of averages to be performed. This includes in many cases averaging over different orientations of the molecule with respect to the laser field, focal volume effects, etc. Furthermore, the pulse shape and even the peak intensity are not known experimentally with very high precision, which matters given, e.g., the exponential intensity dependence of the ionization signal. Finally, experiments usually provide only relative yields. As a consequence of all these averagings and uncertainties, it is possible that different models may successfully explain some experimental results or features, even though these models disagree substantially if their predictions are compared before averaging. Therefore, fully quantum-mechanical approaches, at least for small atomic and molecular systems, are highly desirable and have been developed in our group. This includes efficient codes for solving the time-dependent Schrödinger equation of atomic hydrogen, helium or other effective one- or two-electron atoms, as well as for the electronic motion in linear (effective) one- and two-electron diatomic molecules like H_2. Very recently, a code for larger molecular systems that adopts the so-called single-active-electron approximation was also successfully implemented and applied. In the first part of this talk, popular models describing intense laser-field ionization of atoms and their extensions to molecules are described. Then their validity is discussed on the basis of quantum-mechanical calculations. 
Finally, some peculiar molecular strong-field effects and the possibility of strong-field control mechanisms will be demonstrated. This includes phenomena like enhanced ionization and bond softening, as well as the creation of a vibrational wavepacket in the non-ionized electronic ground state of H_2 by creating a Schrödinger-cat state between the ionized and the non-ionized molecules. The latter, theoretically predicted phenomenon was very recently observed experimentally and led to the real-time observation of the fastest molecular motion known so far.

  6. Preliminary flood-frequency relations for small streams in Kansas

    USGS Publications Warehouse

    Irza, T.J.

    1966-01-01

    Preliminary flood-frequency relations have been defined for small streams in Kansas for floods having recurrence intervals not greater than 10 years. The defined relations will be useful for the design of culverts and other hydraulic structures. The relations are expressed in terms of basin characteristics. Peak-flow records at 95 sites in Kansas for an 8-year period provided the basic data. The records were analyzed with respect to 20 basin characteristics by multiple-regression techniques. The resulting formulas relate flood magnitude and frequency to the size of the contributing drainage area, an index of stream-bed slope, and the average number of days per year when rainfall exceeded 1.0 inch. The other 17 factors had no statistical significance. To illustrate a typical application of the flood-frequency relations, a step-by-step method is presented for computing a frequency curve for Rock Creek near Meriden, Kans. The frequency curve shows that a peak discharge of 3,620 cfs (cubic feet per second) can be expected once every 10 years on average, and that the 67 percent confidence interval ranges from 1,820 cfs to 7,230 cfs. The large range results from the fact that only 8 years of record have been collected, and emphasizes the need for collecting records over a longer period.
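    The multiple-regression relation described above (peak discharge as a function of drainage area, slope index, and heavy-rainfall days) is conventionally fitted log-linearly as Q = a * A^b * S^c * R^d. A sketch with synthetic data (the coefficients and data below are illustrative, not the report's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
A = rng.uniform(5, 500, n)     # contributing drainage area (sq mi)
S = rng.uniform(5, 80, n)      # stream-bed slope index (ft/mi)
R = rng.uniform(2, 10, n)      # avg days/yr with rainfall > 1.0 inch

# Synthetic "true" relation with multiplicative lognormal noise.
Q = 20.0 * A**0.7 * S**0.3 * R**0.5 * rng.lognormal(0.0, 0.1, n)

# Ordinary least squares on logs recovers the exponents.
X = np.column_stack([np.ones(n), np.log(A), np.log(S), np.log(R)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
```

    Taking logs turns the power law into a linear model, which is why the report's regression can be summarized by a handful of exponents on the basin characteristics.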

  7. Coriolis effect on dynamic stall in a vertical axis wind turbine

    NASA Astrophysics Data System (ADS)

    Tsai, Hsieh-Chen; Colonius, Tim

    2013-11-01

    The immersed boundary method is used to simulate the flow around a two-dimensional rotating NACA 0018 airfoil at moderate (sub-scale) Reynolds number in order to investigate separated flow occurring on a vertical-axis wind turbine (VAWT). The influence of dynamic stall on the forces is characterized as a function of tip-speed ratio. The influence of the Coriolis effect is also investigated by comparing the rotating airfoil to one undergoing a surging and pitching motion that produces an equivalent speed and angle-of-attack variation over the cycle. While the Coriolis force produces only small differences in the averaged forces, it plays an important role during dynamic stall. Because the Coriolis force deflects the fluid and transports the vortices differently, wake capturing of the trailing-edge vortex is observed in the flow around the rotating airfoil over a certain range of azimuthal angles. This wake capturing of the trailing-edge vortex leads to a large decrease in lift. However, because of the phase difference between wake-capturing events, there are only small differences in the average forces. The simulations are also compared to results from companion water-tunnel experiments at Caltech. This project is supported by the Gordon and Betty Moore Foundation.

  8. Characterization of Metacarpal Fractures in a Military Population.

    PubMed

    Dichiera, Robert; Dunn, John; Bader, Julia; Bulken-Hoover, Jamie; Pallis, Mark

    2016-08-01

    The purpose of this study was to investigate the incidence and type of metacarpal (MC) fractures in a military population, and whether these fractures are related to age, military occupational specialty, aggression, or accidental injury. A retrospective record-based review was conducted at a single military center over a 5-year period. Service members with index finger through small finger MC fracture were identified. Data were collected utilizing Armed Forces Health Longitudinal Technology Application and electronic profile (e-profile) databases. Data collected included demographic information, mechanism of injury, nature of injury, total number of visits, and estimated time on physical restriction. In total, 400 patients met inclusion criteria. Males accounted for 94% of the study population, 75% of fractures were of the small finger MC, 54% of patients were between 20 and 24 years, 90% were sustained by junior enlisted personnel, and most occurred by punching. Men aged <25 years were more likely to have intentional injuries. Total time on limited duty profile averaged 38 days, and the average medically nondeployable profile was 26 days. MC fractures most commonly affect young, male, junior enlisted service members and are often self-inflicted. As a result, these injuries account for time lost at work, reduced job performance, and decreased medical readiness. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  9. The Occurrence of Repeated High Acceleration Ability (RHAA) in Elite Youth Football.

    PubMed

    Serpiello, Fabio R; Duthie, Grant M; Moran, Codey; Kovacevic, Damian; Selimi, Erch; Varley, Matthew C

    2018-06-05

    The aim of the present study was to investigate the occurrence of Repeated High-Acceleration Ability (RHAA) bouts in elite youth football games using 10-Hz GPS devices and two relative thresholds derived from players' actual maximal acceleration. Thirty-six outfield soccer players (age 14.9±0.6 years) participated in the study. Players wore 10-Hz GPS units during 41 official games. High accelerations were defined as efforts commencing above a threshold corresponding to 70% (T70%) or 80% (T80%) of the average 5-m acceleration obtained during a 40-m sprint test; RHAA bouts were defined as ≥3 efforts with ≤45 s recovery between efforts. Results were analysed via generalised linear mixed model and magnitude-based inferential statistics. On average, 8.0±4.6 and 5.1±3.5 bouts were detected in an entire game using T70% and T80%, respectively. When all positions were analysed together, there was a very-likely small difference in the number of RHAA bouts between first and second half for T70% and T80%, respectively. RHAA bouts occur frequently in elite youth football, with small differences between halves and between playing positions within the first or second half in most variables assessed. © Georg Thieme Verlag KG Stuttgart · New York.
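
    The bout definition used in the study (at least three high-acceleration efforts with no more than 45 s of recovery between consecutive efforts) can be sketched as a simple scan over effort timestamps. The timestamps below are made up for illustration; the per-player acceleration thresholds (70% or 80% of the 40-m sprint test value) are assumed to have been applied upstream when the efforts were detected.

    ```python
    # Count RHAA bouts: runs of >=3 efforts separated by <=45 s each.
    def count_rhaa_bouts(effort_times, max_recovery=45.0, min_efforts=3):
        bouts, run = 0, 1
        for prev, cur in zip(effort_times, effort_times[1:]):
            if cur - prev <= max_recovery:
                run += 1
            else:
                if run >= min_efforts:
                    bouts += 1
                run = 1
        if run >= min_efforts:
            bouts += 1
        return bouts

    # Hypothetical effort times in seconds: two qualifying bouts
    # (efforts at 0-50 s and at 800-900 s).
    times = [0, 20, 50, 300, 310, 800, 830, 860, 900]
    print(count_rhaa_bouts(times))
    ```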

  10. Solid oxide fuel cell systems for residential micro-combined heat and power in the UK: Key economic drivers

    NASA Astrophysics Data System (ADS)

    Hawkes, Adam; Leach, Matthew

    The ability of combined heat and power (CHP) to meet residential heat and power demands efficiently offers potentially significant financial and environmental advantages over centralised power generation and heat provision through natural-gas-fired boilers. A solid oxide fuel cell (SOFC) can operate at high overall efficiencies (heat and power) of 80-90%, offering an improvement over centralised generation, which is often unable to utilise waste heat. This paper applies an equivalent annual cost (EAC) minimisation model to a residential solid oxide fuel cell CHP system to determine the driving factors behind investment in this technology. We explore the performance of a hypothetical SOFC system, representing expectations of near- to medium-term technology development, under present UK market conditions. We find that households with small to average energy demands do not benefit from installation of a SOFC micro-CHP system, but households with larger energy demands do benefit under these conditions. However, this result is sensitive to a number of factors including stack capital cost, energy import and export prices, and plant lifetime. The results for small and average dwellings are shown to reverse under an observed change in energy import prices, an increase in electricity export price, a decrease in stack capital costs, or an improvement in stack lifetime.
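
    An equivalent-annual-cost comparison of the kind the paper performs spreads the capital cost over the plant lifetime with an annuity factor and adds annual running costs net of avoided energy purchases. The sketch below shows the arithmetic only; all input numbers are invented placeholders, not the paper's data.

    ```python
    # EAC = capital * annuity_factor(rate, life) + running costs - avoided costs
    def annuity_factor(rate, years):
        return rate / (1.0 - (1.0 + rate) ** -years)

    def eac(capital, rate, years, annual_fuel, annual_maint, avoided_cost):
        return (capital * annuity_factor(rate, years)
                + annual_fuel + annual_maint - avoided_cost)

    # Hypothetical SOFC micro-CHP system vs. boiler-plus-grid baseline
    # for one dwelling (currency units per year; placeholder values).
    sofc = eac(capital=3000, rate=0.05, years=10,
               annual_fuel=600, annual_maint=50, avoided_cost=700)
    baseline = eac(capital=1500, rate=0.05, years=10,
                   annual_fuel=450, annual_maint=20, avoided_cost=0)
    ```

    The sensitivity results in the paper follow directly from this structure: a lower stack capital cost or longer stack lifetime shrinks the annuity term, while higher import or export prices raise the avoided-cost term, either of which can flip the comparison.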

  11. Numerical investigation of roughness effects in aircraft icing calculations

    NASA Astrophysics Data System (ADS)

    Matheis, Brian Daniel

    2008-10-01

    Icing codes are playing a role of increasing significance in the design and certification of ice-protected aircraft surfaces. However, in the interest of computational efficiency, certain small-scale physics of the icing problem are grossly approximated by the codes. One such small-scale phenomenon is the effect of ice roughness on the development of the surface water film and on the convective heat transfer. This study uses computational methods to examine the potential effect of ice roughness on both of these small-scale phenomena. First, a two-dimensional condensed-layer code is used to examine the effect of roughness on surface water development. It is found that the Couette approximation within the film breaks down as the wall shear goes to zero, depending on the film thickness. Roughness elements with initial flow separation in the air induce flow separation in the water layer at steady state, causing a trapping of the film. The amount of trapping for different roughness configurations is examined. Second, a three-dimensional incompressible Navier-Stokes code is developed to examine large-scale ice roughness on the leading edge. The effect on the convective heat transfer and the potential effect on the surface water dynamics are examined for a number of distributed roughness parameters, including Reynolds number, roughness height, streamwise extent, roughness spacing, and roughness shape. In most cases the roughness field increases the net average convective heat transfer on the leading edge while narrowing surface shear lines, indicating a choking of the surface water flow. Both effects show significant variation on the scale of the ice roughness. Both the change in heat transfer and the potential change in surface water dynamics are presented in terms of the development of singularities in the surface shear pattern. 
Of particular interest is the effect of the smooth zone upstream of the roughness, which shows both a relatively large increase in convective heat transfer and excessive choking of the surface shear lines at the upstream end of the roughness field. A summary of the heat transfer results is presented for both the averaged heat transfer and the maximum heat transfer over each roughness element, indicating that the roughness Reynolds number is the primary parameter characterizing the behavior of the roughness for the problem of interest.

  12. Tuition Discounting Is on the Rise: Results of a NACUBO Study of Independent Institutions.

    ERIC Educational Resources Information Center

    Hubbell, Loren Loomis

    1995-01-01

    A study by the National Association of College and University Business Officers (NACUBO) analyzed tuition discount ratios for small colleges with lower tuition, small colleges with higher tuition, and large colleges/universities. It examined average tuition discount percentages, average tuition rates and enrollments, percent of students aided, and…

  13. Treating Small Bowel Obstruction with a Manual Physical Therapy: A Prospective Efficacy Study

    PubMed Central

    Rice, Amanda D.; Patterson, Kimberley; Reed, Evette D.; Wurn, Belinda F.; Klingenberg, Bernhard; King, C. Richard; Wurn, Lawrence J.

    2016-01-01

    Small bowel obstructions (SBOs) caused by adhesions are a common, often life-threatening postsurgical complication with few treatment options available for patients. This study examines the efficacy of a manual physical therapy treatment regimen on the pain and quality of life of subjects with a history of bowel obstructions due to adhesions in a prospective, controlled, survey-based study. Changes in six domains of quality of life were measured via ratings reported before and after treatment using the validated Small Bowel Obstruction Questionnaire (SBO-Q). Improvements in the domains for pain (p = 0.0087), overall quality of life (p = 0.0016), and pain severity (p = 0.0006) were significant when average scores before treatment were compared with scores after treatment. The gastrointestinal symptoms (p = 0.0258) domain was marginally significant. There was no statistically significant improvement identified in the diet or medication domains of the SBO-Q for this population. Significant improvements in range of motion in the trunk (p ≤ 0.001), often limited by adhesions, were also observed for all measures. This study demonstrates in a small number of subjects that this manual physical therapy protocol is an effective treatment option for patients with adhesive small bowel obstructions, as measured by subject-reported symptoms and quality of life. PMID:26989690

  14. Wing-kinematics measurement and aerodynamics in a small insect in hovering flight.

    PubMed

    Cheng, Xin; Sun, Mao

    2016-05-11

    The wing motion of the hovering small fly Liriomyza sativae was measured using high-speed video, and the flows around the wings were computed numerically. The fly used a high wingbeat frequency (≈265 Hz) and a large stroke amplitude (≈182°); therefore, even though its wing length (R) was small (R ≈ 1.4 mm), the mean velocity of the wing reached ≈1.5 m/s, the same as that of an average-size insect (R ≈ 3 mm). But the Reynolds number (Re) of the wing was still low (≈40), owing to the small wing size. Because of the large stroke amplitude, the outer parts of the wings performed a "clap and fling" motion. The mean lift coefficient was high, ≈1.85, several times larger than that of a cruising airplane. The partial "clap and fling" motion increased the lift by ≈7%, compared with the case of no aerodynamic interaction between the wings. The fly mainly used the delayed-stall mechanism to generate the high lift. The lift-to-drag ratio was only 0.7 (for larger insects, with Re about 100 or higher, the ratio is 1-1.2); that is, although the small fly can produce enough lift to support its weight, it needs to overcome a larger drag to do so.
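
    The quoted numbers can be checked with a back-of-envelope estimate: the mean wing velocity is roughly U ≈ 2*Phi*f*r2 (Phi: stroke amplitude in radians, f: wingbeat frequency, r2: radius of the second moment of wing area), and Re = U*c/nu. The second-moment radius (taken here as ~0.58R) and mean chord (~R/3) are assumptions for illustration, not values from the paper.

    ```python
    import math

    R = 1.4e-3                    # wing length (m), from the abstract
    f = 265.0                     # wingbeat frequency (Hz)
    Phi = math.radians(182.0)     # stroke amplitude (rad)
    r2 = 0.58 * R                 # assumed radius of second moment of area
    c = R / 3.0                   # assumed mean chord (m)
    nu = 1.5e-5                   # kinematic viscosity of air (m^2/s)

    U = 2.0 * Phi * f * r2        # mean wing velocity over a half-stroke
    Re = U * c / nu               # wing Reynolds number
    ```

    With these assumptions the estimate lands near the abstract's values (U ≈ 1.4 m/s, Re ≈ 40), confirming that the high frequency and amplitude compensate for the small wing length.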

  15. Industry structures in private dental markets in Finland.

    PubMed

    Widström, E; Mikkola, H

    2012-12-01

    To use industrial organisation and organisational ecology research methods to survey industry structures and performance in the markets for private dental services and the effect of competition. Data on practice characteristics, performance, and perceived competition were collected from full-time private dentists (n = 1,121) using a questionnaire. The response rate was 59.6%. Cluster analysis was used to identify practice types based on service differentiation and process integration variables formulated from the questionnaire. Four strategic groups were identified in the Finnish markets: Solo practices formed one distinct group, and group practices were classified into three clusters: Integrated practices, Small practices, and Loosely integrated practices. Statistically significant differences were found in performance and perceived competitiveness between the groups. Integrated practices, with the highest level of process integration and service differentiation, performed better than solo and small practices. Moreover, loosely integrated and small practices outperformed solo practices. Competitive intensity was highest among small practices, which had a low level of service differentiation, and was above average among solo practices. Private dental care providers that had differentiated their services from public services and that had a high number of integrated service production processes enjoyed higher performance and less competitive pressure than those who had not.

  16. Role of gastrografin challenge in early postoperative small bowel obstruction.

    PubMed

    Khasawneh, Mohammad A; Ugarte, Maria L Martinez; Srvantstian, Boris; Dozois, Eric J; Bannon, Michael P; Zielinski, Martin D

    2014-02-01

    Early small bowel obstruction following abdominal surgery presents a diagnostic and therapeutic challenge. Abdominal imaging using Gastrografin has been shown to have diagnostic and therapeutic properties when used in the setting of small bowel obstruction outside the early postoperative period (>6 weeks). We hypothesized that a Gastrografin challenge would reduce the need for re-exploration. Patients with early small bowel obstruction who underwent a Gastrografin challenge between 2010 and 2012 were case-matched, based on age (±5 years), sex, and operative approach, to an equal number of patients who did not receive the challenge. One hundred sixteen patients received a Gastrografin challenge. There were 87 males in each group, with an average age of 62 years. A laparoscopic approach in the index operation was used equally in both groups (18 vs. 18%). There was no difference between groups in operative re-exploration rates (14 vs. 10%); however, hospital duration of stay was greater in patients who received the Gastrografin challenge (17 vs. 13 days). Two in-hospital deaths occurred, one in each group, both from infectious complications. Use of the Gastrografin challenge in the immediate postoperative period appeared to be safe. There was no difference, however, in the rate of re-exploration between groups.

  17. Small-angle x-ray scattering in amorphous silicon: A computational study

    NASA Astrophysics Data System (ADS)

    Paudel, Durga; Atta-Fynn, Raymond; Drabold, David A.; Elliott, Stephen R.; Biswas, Parthapratim

    2018-05-01

    We present a computational study of small-angle x-ray scattering (SAXS) in amorphous silicon (a-Si) with particular emphasis on the morphology and microstructure of voids. The relationship between the scattering intensity in SAXS and the three-dimensional structure of nanoscale inhomogeneities or voids is addressed by generating large high-quality a-Si networks with 0.1%-0.3% volume concentration of voids, as observed in experiments using SAXS and positron annihilation spectroscopy. A systematic study of the variation of the scattering intensity in the small-angle scattering region with the size, shape, number density, and spatial distribution of the voids in the networks is presented. Our results suggest that the scattering intensity in the small-angle region is particularly sensitive to the size and the total volume fraction of the voids, but the effect of the geometry or shape of the voids is less pronounced in the intensity profiles. Finally, the average void size obtained from the simulated intensities, using the Guinier approximation and Kratky plots, is compared with that computed directly from the spatial distribution of atoms in the vicinity of the void surfaces.
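
    The Guinier analysis mentioned in the abstract rests on the small-q approximation I(q) ≈ I0 * exp(-q² Rg² / 3), so a linear fit of ln I against q² yields the radius of gyration Rg from the slope. The sketch below applies the fit to a synthetic Guinier curve with an assumed Rg, purely to show the procedure.

    ```python
    import numpy as np

    Rg_true = 8.0                              # assumed void Rg (arb. units)
    q = np.linspace(0.01, 0.12, 40)            # small-angle region, q*Rg < ~1
    I = 1e3 * np.exp(-(q * Rg_true) ** 2 / 3.0)

    # Guinier fit: slope of ln I vs q^2 equals -Rg^2 / 3.
    slope, intercept = np.polyfit(q**2, np.log(I), 1)
    Rg_fit = np.sqrt(-3.0 * slope)
    ```

    On simulated intensities the fit is restricted to the region q·Rg ≲ 1.3, where the Guinier approximation holds; the comparison in the paper is then against the void size measured directly from the atomic configurations.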

  18. Transcriptional changes during ovule development in two genotypes of litchi (Litchi chinensis Sonn.) with contrast in seed size.

    PubMed

    Pathak, Ashish K; Singh, Sudhir P; Gupta, Yogesh; Gurjar, Anoop K S; Mantri, Shrikant S; Tuli, Rakesh

    2016-11-08

    Litchi chinensis is a subtropical fruit crop, popular for its nutritional value and taste. Fruits with small seed size and thick aril are desirable in litchi. To gain molecular insight into the gene expression that leads to the reduction in seed size in Litchi chinensis, transcriptomes of two genetically closely related genotypes with contrasting seed size were compared in developing ovules. The cDNA libraries constructed from early developmental stages of ovules (0, 6, and 14 days after anthesis) of the bold- and small-seeded litchi genotypes yielded 303,778,968 high-quality paired-end reads. These were de novo assembled into 119,939 transcripts with an average length of 865 bp. A total of 10,186 transcripts with contrasting expression were identified in developing ovules between the small- and large-seeded genotypes. A majority of these differences were present in ovules before anthesis, suggesting the role of maternal factors in seed development. A number of transcripts indicative of metabolic stress were expressed at a higher level in the small-seeded genotype. Several differentially expressed transcripts identified in such ovules showed homology with Arabidopsis genes associated with different stages of ovule development and embryogenesis.

  19. Average intensity and spreading of an astigmatic sinh-Gaussian beam with small beam width propagating in atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Zhu, Kaicheng; Tang, Huiqin; Xia, Hui

    2017-10-01

    Propagation properties of astigmatic sinh-Gaussian beams (ShGBs) with small beam width in turbulent atmosphere are investigated. Based on the extended Huygens-Fresnel integral, analytical formulae for the average intensity and the effective beam size of an astigmatic ShGB propagating in turbulent atmosphere are derived. The average intensity distribution and the spreading properties of an astigmatic ShGB propagating in turbulent atmosphere are numerically demonstrated. The influences of the beam parameters and the structure constant of atmospheric turbulence on the propagation properties of astigmatic ShGBs are also discussed in detail. In particular, for sufficiently small beam width and sinh-part parameter, as well as suitable astigmatism, we show that the average intensity pattern converts from the initial two-petal pattern into a perfect dark-hollow profile when ShGBs with astigmatic aberration propagate through atmospheric turbulence.

  20. Direct costs associated with the disease management of patients with unresectable advanced non-small-cell lung cancer in The Netherlands.

    PubMed

    Pompen, Marjolein; Gok, Murat; Novák, Annoesjka; van Wuijtswinkel, Rob; Biesma, Bonne; Schramel, Franz; Stigt, Jos; Smit, Hans; Postmus, Pieter

    2009-04-01

    Disease management and costs of treatment of patients with unresectable advanced non-small-cell lung cancer (NSCLC) in The Netherlands are not well known. A retrospective medical chart review was performed by collecting data from the time of diagnosis until the time of death or the end of the evaluation period. In addition to the demographic data, information was collected on the overall management of the patient. Hospital resource utilisation data collected included number of outpatient specialist visits, number and length of hospitalisations, type and number of diagnostic and laboratory procedures, type and number of radiotherapy cycles, and detailed information on chemotherapy. To evaluate the economic impact of second-line treatment, a distinction was made between patients who received only best supportive care (BSC, group A) and those who received chemotherapy as a second-line treatment in addition to BSC (group B). The study was performed from the hospital perspective and reports 2005 costs. Of 102 patients, 74 belonged to group A and 28 to group B. Patient management included a multidisciplinary approach, the extent of which depended on symptoms of the disease and presence of metastases. The average total treatment cost per patient per year of unresectable advanced NSCLC in The Netherlands was €32,840 in group A and €31,187 in group B. In both groups, hospitalisation was the major cost driver. In group B, second-line chemotherapy was the second largest contributor to the costs. In spite of the difference in the number of treatment lines provided to patients in groups A and B, the total average costs per patient per year were comparable. Overall, the management of unresectable advanced NSCLC appeared to conform with current guidelines in The Netherlands. These patients show high medical resource consumption, with hospitalisation being the main cost driver in both groups. 
As economic arguments are becoming increasingly important in medical decision making on both national and local levels, this information is relevant for both policy makers and specialists. These data can also be used in future research to evaluate the economic impact of new therapies in NSCLC, especially of those that aim to treat patients in an outpatient setting.

  1. Comparison of the Aeroacoustics of Two Small-Scale Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Ng, Wing

    1996-01-01

    An aerodynamic and acoustic investigation was performed on two small-scale supersonic inlets to determine which inlet would be more suitable for a High Speed Civil Transport (HSCT) aircraft during approach and takeoff flight conditions. The comparison was made between an axisymmetric supersonic P inlet and a bifurcated two-dimensional supersonic inlet. The 1/14 scale model supersonic inlets were used in conjunction with a 4.1 in (10.4 cm) turbofan engine simulator. A bellmouth was utilized on each inlet to eliminate lip separation commonly associated with airplane engine inlets that are tested under static conditions. Steady state measurements of the aerodynamic flowfield and acoustic farfield were made in order to evaluate the aeroacoustic performance of the inlets. The aerodynamic results show the total pressure recovery of the two inlets to be nearly identical, 99% at the approach condition and 98% at the takeoff condition. At the approach fan speed (60% design speed), there was no appreciable difference in the acoustic performance of either inlet over the entire 0 deg to 110 deg farfield measurement sector. The inlet flow field results at the takeoff fan speed (88% design speed), show the average inlet throat Mach number for the P inlet (Mach 0.52) to be approximately 2 times that of the 2D inlet (Mach 0.26). The difference in the throat Mach number is a result of the smaller throughflow area of the P inlet. This reduced area resulted in a 'soft choking' of the P inlet which lowered the tone and overall sound pressure levels of the simulator in the forward sector by an average of 9 dB and 3 dB, respectively, when compared to the 2D inlet.
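
    The throat-Mach difference quoted above follows from continuity: for the same mass flow and stagnation conditions, the subsonic Mach number at a station is set by its flow area through the isentropic area-Mach relation A/A* = (1/M)·[(2/(γ+1))·(1 + (γ−1)/2·M²)]^((γ+1)/(2(γ−1))). The bisection solver below is a generic sketch of that relation, not NASA's analysis code.

    ```python
    # Isentropic area-Mach relation and its subsonic inverse (gamma = 1.4).
    def area_ratio(M, g=1.4):
        return (1.0 / M) * ((2.0 / (g + 1)) * (1.0 + 0.5 * (g - 1) * M * M)) \
               ** ((g + 1) / (2.0 * (g - 1)))

    def subsonic_mach(A_over_Astar, g=1.4):
        lo, hi = 1e-6, 1.0                   # subsonic branch only
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if area_ratio(mid, g) > A_over_Astar:
                lo = mid                     # A/A* decreases toward M = 1
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Area ratios implied by the two quoted throat Mach numbers:
    A_P = area_ratio(0.52)     # P inlet
    A_2D = area_ratio(0.26)    # 2D bifurcated inlet
    ```

    The smaller throughflow area of the P inlet (smaller A/A*) forces the higher throat Mach number, which produces the "soft choking" and the attendant reduction in forward-radiated fan noise described in the abstract.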

  2. Low Reynolds number flow near tiny leaves, stems, and trichomes

    NASA Astrophysics Data System (ADS)

    Strickland, Christopher; Pasour, Virginia; Miller, Laura

    2016-11-01

    In terrestrial and aquatic environments such as forest canopies, grass fields, and seagrass beds, the density and shape of trunks, branches, stems, leaves and trichomes (the hairs or fine outgrowths on plants) can drastically alter both the average wind speed and profile through these environments and near each plant. While many studies of flow in these environments have focused on bulk properties of the flow at scales on the order of meters, the low Reynolds number flow close to vegetative structures is especially complex and relevant to nutrient exchange. Using three-dimensional immersed boundary simulations, we resolve the flow around trichomes and small leaves and quantify velocities, shear stresses, and mixing while varying the height and density of idealized structures. National Science Foundation Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute, and the Army Research Office.

  3. Numerical investigation of supersonic turbulent boundary layers with high wall temperature

    NASA Technical Reports Server (NTRS)

    Guo, Y.; Adams, N. A.

    1994-01-01

    A direct numerical approach has been developed to simulate supersonic turbulent boundary layers. The mean flow quantities are obtained by solving the parabolized Reynolds-averaged Navier-Stokes equations (globally). Fluctuating quantities are computed locally with a temporal direct numerical simulation approach, in which nonparallel effects of boundary layers are partially modeled. Preliminary numerical results obtained at the free-stream Mach numbers 3, 4.5, and 6 with hot-wall conditions are presented. Approximately 5 million grid points are used in all three cases. The numerical results indicate that compressibility effects on turbulent kinetic energy, in terms of dilatational dissipation and pressure-dilatation correlation, are small. Due to the hot-wall conditions, the results show significant low Reynolds number effects and large streamwise streaks. Further simulations with a larger computational domain or a cold-wall condition are desirable.

  4. Structure and modeling of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, E.A.

    The "vortex strings" scale l_s ≈ L·Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace, and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for large-eddy simulations (LES).
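
    Evaluating the proposed grid scale for an arbitrary example flow makes the scaling concrete; the numbers below are illustrative only, and the Kolmogorov scale is included for comparison since it sets the resolution a full DNS would need.

    ```python
    # Vortex-strings grid scale l_s ~ L * Re^(-3/10), vs. the Kolmogorov
    # scale eta ~ L * Re^(-3/4). Example values are arbitrary.
    L_ext = 1.0        # external (integral) length scale, m
    Re = 1.0e6         # Reynolds number
    l_s = L_ext * Re ** (-3.0 / 10.0)    # proposed LES grid scale
    eta = L_ext * Re ** (-3.0 / 4.0)     # Kolmogorov scale
    ```

    At Re = 10^6 the vortex-strings scale is roughly 1.6 cm while the Kolmogorov scale is about 30 μm, illustrating the enormous reduction in resolved degrees of freedom that an LES at l_s achieves relative to DNS.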

  5. Graviton creation by small scale factor oscillations in an expanding universe

    NASA Astrophysics Data System (ADS)

    Schiappacasse, Enrico D.; Ford, L. H.

    2016-10-01

    We treat quantum creation of gravitons by small scale factor oscillations around the average of an expanding universe. Such oscillations can arise in standard general relativity due to oscillations of a homogeneous, minimally coupled scalar field. They can also arise in modified gravity theories with a term proportional to the square of the Ricci scalar in the gravitational action. The graviton wave equation is different in the two cases, leading to somewhat different creation rates. Both cases are treated using a perturbative method due to Birrell and Davies, involving an expansion in a conformal coupling parameter to calculate the number density and energy density of the created gravitons. Cosmological constraints on the present graviton energy density and the dimensionless amplitude of the oscillations are discussed. We also discuss decoherence of quantum systems produced by the spacetime geometry fluctuations due to such a graviton bath.

  6. Molecular kinetics. Ras activation by SOS: allosteric regulation by altered fluctuation dynamics.

    PubMed

    Iversen, Lars; Tu, Hsiung-Lin; Lin, Wan-Chen; Christensen, Sune M; Abel, Steven M; Iwig, Jeff; Wu, Hung-Jen; Gureasko, Jodi; Rhodes, Christopher; Petit, Rebecca S; Hansen, Scott D; Thill, Peter; Yu, Cheng-Han; Stamou, Dimitrios; Chakraborty, Arup K; Kuriyan, John; Groves, Jay T

    2014-07-04

    Activation of the small guanosine triphosphatase H-Ras by the exchange factor Son of Sevenless (SOS) is an important hub for signal transduction. Multiple layers of regulation, through protein and membrane interactions, govern activity of SOS. We characterized the specific activity of individual SOS molecules catalyzing nucleotide exchange in H-Ras. Single-molecule kinetic traces revealed that SOS samples a broad distribution of turnover rates through stochastic fluctuations between distinct, long-lived (more than 100 seconds), functional states. The expected allosteric activation of SOS by Ras-guanosine triphosphate (GTP) was conspicuously absent in the mean rate. However, fluctuations into highly active states were modulated by Ras-GTP. This reveals a mechanism in which functional output may be determined by the dynamical spectrum of rates sampled by a small number of enzymes, rather than the ensemble average. Copyright © 2014, American Association for the Advancement of Science.

  7. The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics

    PubMed Central

    Buice, Michael; Koch, Christof; Mihalas, Stefan

    2013-01-01

    The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation, and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even if synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219

  8. Scale-free Graphs for General Aviation Flight Schedules

    NASA Technical Reports Server (NTRS)

    Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.

    2003-01-01

    In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest-path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997, at least six books and over 400 articles have been written about scale-free networks. In this manuscript an overview of the salient characteristics of scale-free networks is given. Computational experience is provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience is given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
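
    One standard mechanism for growing a scale-free graph is preferential attachment (the Barabási-Albert model): each new node attaches m edges to existing nodes with probability proportional to their degree. The sketch below is a minimal generic implementation, offered as an illustration of the growth mechanism rather than the manuscript's specific procedures; sampling from a "stubs" list with one entry per edge endpoint implements the degree-proportional choice.

    ```python
    import random

    def grow_scale_free(n, m=2, seed=1):
        """Grow an n-node graph by preferential attachment; return degrees."""
        random.seed(seed)
        # Seed with a small complete graph on m+1 nodes (each degree m).
        degree = {i: m for i in range(m + 1)}
        stubs = [i for i in range(m + 1) for _ in range(m)]
        for new in range(m + 1, n):
            targets = set()
            while len(targets) < m:          # m distinct, degree-biased picks
                targets.add(random.choice(stubs))
            degree[new] = m
            for t in targets:
                degree[t] += 1
                stubs.extend([new, t])       # record both edge endpoints
        return degree

    deg = grow_scale_free(500)
    ```

    The resulting degree sequence is heavy-tailed: a few early hubs acquire degrees far above the average of roughly 2m, which is the signature power-law behavior described above.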

  9. Potential Drug-Drug Interactions in a Cohort of Elderly, Polymedicated Primary Care Patients on Antithrombotic Treatment.

    PubMed

    Schneider, Katharina Luise; Kastenmüller, Kathrin; Weckbecker, Klaus; Bleckwenn, Markus; Böhme, Miriam; Stingl, Julia Carolin

    2018-06-01

    Drug-drug interactions (DDIs) are an important risk factor for adverse drug reactions. Older, polymedicated patients are particularly affected. Although antithrombotics have been identified as high-risk drugs for DDIs, data on older patients exposed to them are scarce. Baseline data of 365 IDrug study outpatients (≥ 60 years, use of an antithrombotic and one or more additional long-term drugs) were analyzed for potential drug-drug interactions (pDDIs) with a clinical decision support system. Data included prescription and self-medication drugs. The prevalence of having one or more pDDIs was 85.2%. The median number of alerts per patient was three (range 0-17). For 58.4% of the patients, potential severe/contraindicated interactions were detected. Antiplatelets and non-steroidal anti-inflammatory drugs (NSAIDs) showed the highest average number of pDDI alert involvements per use (2.9 and 2.2, respectively). NSAIDs also showed the highest average number of severe/contraindicated alert involvements per use (1.2). 91.8% of all pDDI involvements concerned the 25 most frequently used drug classes, and 97.5% of the severe/contraindicated pDDIs were attributed to only nine different potential clinical manifestations. The most common management recommendation for severe/contraindicated pDDIs was to intensify monitoring. The number of drugs was the only factor significantly associated with an increased number of pDDIs (p < 0.001). The findings indicate a high risk for pDDIs in older, polymedicated patients on antithrombotics. Because patients' drug regimens were frequently similar, the variety of potential clinical manifestations was small. Awareness of these pDDI symptoms and the triggering drugs, as well as of patients' self-medication use, may contribute to increased patient safety.

  10. Airway remodeling associated with cough hypersensitivity as a consequence of persistent cough: An experimental study.

    PubMed

    Nakaji, Hitoshi; Niimi, Akio; Matsuoka, Hirofumi; Iwata, Toshiyuki; Cui, Shilei; Matsumoto, Hisako; Ito, Isao; Oguma, Tsuyoshi; Otsuka, Kojiro; Takeda, Tomoshi; Inoue, Hideki; Tajiri, Tomoko; Nagasaki, Tadao; Kanemitsu, Yoshihiro; Chin, Kazuo; Mishima, Michiaki

    2016-11-01

    Chronic cough involves airway remodeling associated with cough reflex hypersensitivity. Whether cough itself induces these features remains unknown. Guinea pigs were assigned to receive treatment with citric acid (CA), saline (SA), or CA+dextromethorphan (DEX). All animals were exposed to 0.5M CA on days 1 and 22. On days 4-20, the CA and CA+DEX groups were exposed to CA, and the SA group to saline, thrice weekly, during which the CA+DEX group was administered DEX pretreatment to inhibit cough. The number of coughs was counted during each 10-min CA or SA exposure. Terbutaline premedication was started to prevent bronchoconstriction. Bronchoalveolar lavage and pathology were examined on day 25. The average number of coughs over the 10 CA exposures was defined as the "cough index" in the CA group, which was divided into frequent (cough index >5) and infrequent (<5) cough subgroups for lavage and pathology analysis. The number of coughs significantly increased in the CA group from day 13 onwards. In the CA+DEX and SA groups, the number of coughs did not differ between days 1 and 22, while the average number of coughs during days 4-20 was significantly lower than on days 1 and 22. Bronchoalveolar cell profiles were similar among the four groups. The smooth muscle area of small airways was significantly greater in the frequent-cough subgroup than in the other groups (in which it was similar), and was highly correlated with the cough index in the CA group. Repeated cough induces airway smooth muscle remodeling associated with cough reflex hypersensitivity. Copyright © 2016 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  12. Total Lifetime and Cancer-related Costs for Elderly Patients Diagnosed With Anal Cancer in the United States.

    PubMed

    Deshmukh, Ashish A; Zhao, Hui; Franzini, Luisa; Lairson, David R; Chiao, Elizabeth Y; Das, Prajnan; Swartz, Michael D; Giordano, Sharon H; Cantor, Scott B

    2018-02-01

    To determine the lifetime and phase-specific cost of anal cancer management and the economic burden of anal cancer care in elderly (66 y and older) patients in the United States. For this study, we used the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database (1992 to 2009). We matched newly diagnosed anal cancer patients (by age and sex) to noncancer controls. We estimated survival time from the date of diagnosis until death. Lifetime and average annual costs by stage and age at diagnosis were estimated by combining survival data with Medicare claims. The average lifetime cost, the proportion of patients who were elderly, and the number of incident cases were used to estimate the economic burden. The average lifetime cost for patients with anal cancer was US$50,150 (N=2227) (2014 US dollars). The average annual cost in men and women was US$8025 and US$5124, respectively. The overall survival after the diagnosis of cancer was 8.42 years. As the age and stage at diagnosis increased, so did the cost of cancer-related care. The anal cancer-related lifetime economic burden in Medicare patients in the United States was US$112 million. Although the prevalence of anal cancer among the elderly in the United States is small, its economic burden is considerable.

  13. Comparative structural and computational analysis supports eighteen cellulose synthases in the plant cellulose synthesis complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek

    A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.

  14. Comparative structural and computational analysis supports eighteen cellulose synthases in the plant cellulose synthesis complex

    DOE PAGES

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; ...

    2016-06-27

    A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In conclusion, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains.

  15. Comparative Structural and Computational Analysis Supports Eighteen Cellulose Synthases in the Plant Cellulose Synthesis Complex

    PubMed Central

    Nixon, B. Tracy; Mansouri, Katayoun; Singh, Abhishek; Du, Juan; Davis, Jonathan K.; Lee, Jung-Goo; Slabaugh, Erin; Vandavasi, Venu Gopal; O’Neill, Hugh; Roberts, Eric M.; Roberts, Alison W.; Yingling, Yaroslava G.; Haigler, Candace H.

    2016-01-01

    A six-lobed membrane spanning cellulose synthesis complex (CSC) containing multiple cellulose synthase (CESA) glycosyltransferases mediates cellulose microfibril formation. The number of CESAs in the CSC has been debated for decades in light of changing estimates of the diameter of the smallest microfibril formed from the β-1,4 glucan chains synthesized by one CSC. We obtained more direct evidence through generating improved transmission electron microscopy (TEM) images and image averages of the rosette-type CSC, revealing the frequent triangularity and average cross-sectional area in the plasma membrane of its individual lobes. Trimeric oligomers of two alternative CESA computational models corresponded well with individual lobe geometry. A six-fold assembly of the trimeric computational oligomer had the lowest potential energy per monomer and was consistent with rosette CSC morphology. Negative stain TEM and image averaging showed the triangularity of a recombinant CESA cytosolic domain, consistent with previous modeling of its trimeric nature from small angle scattering (SAXS) data. Six trimeric SAXS models nearly filled the space below an average FF-TEM image of the rosette CSC. In summary, the multifaceted data support a rosette CSC with 18 CESAs that mediates the synthesis of a fundamental microfibril composed of 18 glucan chains. PMID:27345599

  16. Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel

    NASA Astrophysics Data System (ADS)

    Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.

    2015-01-01

    The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated, compared to the zero-scan result and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small, additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
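    The zero-scan extrapolation described here fits a straight line to each pixel's value across the repeated scans and takes the intercept at zero scans as the dose-free estimate. A minimal sketch of that idea (plain least squares; the HU values below are hypothetical, not from the study):

```python
def zero_scan_pixel(values):
    """Least-squares line through (scan number i = 1..N, pixel value in HU).
    The intercept at i = 0 is the 'zero-scan' estimate; the slope is the
    per-scan gradient the study reports as small and centred on zero."""
    n = len(values)
    xs = list(range(1, n + 1))
    mx = sum(xs) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, values))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical pixel values over 25 scans, drifting slightly upward:
hu = [52.0 + 0.01 * i for i in range(1, 26)]
zero, grad = zero_scan_pixel(hu)
```

    Averaging the 25 scans would give the mid-series mean, while the intercept recovers the value the pixel would have had with no scanning dose at all; comparing the two is exactly the check the study performs.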

  17. [External lumbar drainage with volumetric continuing infusion pump in patients with cerebrospinal fluid leak. A case series].

    PubMed

    Manso Melgosa, Ana Belén; García Gutiérrez, Helena; Fernández Porras, Mónica; Castrillo Manero, Ana Berta; Pérez Marijuán, Belén

    To describe the incidents and complications arising in a series of patients with cerebrospinal fluid leak treated by external lumbar drainage with a volumetric continuous infusion pump (IP) from 2001 to 2014, and to quantify the cerebrospinal fluid leaks closed by lumbar drainage with an IP. Retrospective descriptive case series study. Subjects: patients undergoing transsphenoidal pituitary surgery, Chiari surgery, or laminectomy who developed a postoperative cerebrospinal fluid leak treated with continuous external lumbar drainage by IP. Variables: age, sex, type of intervention, variables related to the operation of the pump, and complications. Averages and medians were calculated for quantitative variables; frequencies and percentages for qualitative ones. Sample: 11 subjects. Incidents in running the IP: disconnection, occlusion, and acoustic alarm activation. The most frequent complication was headache; there was one case of pneumocephalus. The small number of subjects and their heterogeneity do not allow for comparisons or for establishing associations between variables. The resolution rate of cerebrospinal fluid leak with continuous IP was lower in this study than in others, which may be influenced by the small number of subjects. Of note was the frequent activation of the pump alarm with no apparent cause. A protocol should be developed for preparing the IP equipment to reduce acoustic alarm activation, and a prospective multicenter study should be conducted. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.

  18. Being Sticker Rich: Numerical Context Influences Children’s Sharing Behavior

    PubMed Central

    Posid, Tasha; Fazio, Allyse; Cordes, Sara

    2015-01-01

    Young children spontaneously share resources with anonymous recipients, but little is known about the specific circumstances that promote or hinder these prosocial tendencies. Children (ages 3–11) received a small (12) or large (30) number of stickers, and were then given the opportunity to share their windfall with either one or multiple anonymous recipients (Dictator Game). Whether a child chose to share or not varied as a function of age, but was uninfluenced by numerical context. Moreover, children’s giving was consistent with a proportion-based account, such that children typically donated a similar proportion (but different absolute number) of the resources given to them, regardless of whether they originally received a small or large windfall. The proportion of resources donated, however, did vary based on the number of recipients with whom they were allowed to share, such that on average, children shared more when there were more recipients available, particularly when they had more resources, suggesting they take others into consideration when making prosocial decisions. Finally, results indicated that a child’s gender also predicted sharing behavior, with males generally sharing more resources than females. Together, findings suggest that the numerical contexts under which children are asked to share, as well as the quantity of resources that they have to share, may interact to promote (or hinder) altruistic behaviors throughout childhood. PMID:26535900

  19. Data-driven approach to detect common copy-number variations and frequency profiles in a population-based Korean cohort.

    PubMed

    Moon, Sanghoon; Kim, Young Jin; Hong, Chang Bum; Kim, Dong-Joon; Lee, Jong-Young; Kim, Bong-Jo

    2011-11-01

    To date, hundreds of thousands of copy-number variation (CNV) data points have been reported using various platforms. The proportion of Asians in these data is, however, relatively small as compared with that of other ethnic groups, such as Caucasians and Yorubas. Because of limitations in platform resolution and the high noise level in signal intensity, in most CNV studies (particularly those using single nucleotide polymorphism arrays) the average number of CNVs detected in an individual is less than the number of known CNVs. In this study, we ascertained reliable, common CNV regions (CNVRs) and identified actual frequency rates in the Korean population to provide more CNV information. We performed two-stage analyses for detecting structural variations with two platforms. We discovered 576 common CNVRs (88 CNV segments on average per individual), and 87% (501 of 576) of these CNVRs overlapped by ≥1 bp with previously validated CNV events. Interestingly, from the frequency analysis of CNV profiles, 52 of 576 CNVRs had a frequency rate of <1% in the 8842 individuals. Compared with other common CNV studies, this study found six common CNVRs that were not reported previously. In conclusion, we propose a data-driven detection approach to discover common CNVRs, including those unreported in the previous Korean CNV study, while minimizing false positives. Through this approach, we discovered more common CNVRs than the previous Korean CNV study did and conducted a frequency analysis. These results will be a valuable resource for the study of CNVs in the Korean population.
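    The "overlapped by ≥1 bp" criterion used in this record is a simple interval test. A minimal sketch, assuming half-open (chrom, start, end) coordinates; the example regions are hypothetical, not the study's data:

```python
def overlaps(a, b):
    """True if two half-open genomic intervals (chrom, start, end)
    share at least 1 bp."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def fraction_overlapping(cnvrs, validated):
    """Fraction of discovered CNVRs that overlap >= 1 bp with any
    previously validated CNV event."""
    hit = sum(1 for r in cnvrs if any(overlaps(r, v) for v in validated))
    return hit / len(cnvrs)

# Hypothetical regions for illustration:
discovered = [("chr1", 100, 200), ("chr2", 500, 900), ("chr3", 10, 20)]
validated = [("chr1", 150, 400), ("chr2", 900, 1000)]
frac = fraction_overlapping(discovered, validated)
```

    With half-open intervals, two regions that merely touch end-to-start (e.g. 500-900 and 900-1000) share zero bases and do not count as overlapping, which matches the ≥1 bp requirement.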

  20. Can wireless text messaging improve adherence to preventive activities? Results of a randomised controlled trial.

    PubMed

    Cocosila, Mihail; Archer, Norm; Haynes, R Brian; Yuan, Yufei

    2009-04-01

    To determine the effectiveness of cell phone wireless text messaging for improving adherence to a healthy behaviour. A randomised, unblinded, controlled trial was conducted with 102 subjects, 18 years or older, each having a cell phone and willing to take 1 vitamin C pill per day for 1 month for preventive reasons. Intervention group participants received text messaging reminders and were asked to acknowledge receiving their messages after taking the vitamins, whereas control group subjects had no text messaging activity. Outcome measures were self-reported adherence and the number of participant text messages acknowledging vitamins taken. Both groups reported increased adherence after the trial: by 246% for the intervention group and by 131% for the control group. There was a non-significant difference between the two groups at endpoint: an average difference of 0.8 in the number of pills missed in the last week of the trial (2.5 out of 7 in the intervention and 3.3 out of 7 in the control group), with a power of 0.54. The study revealed a significant correlation (coefficient=-0.352, sig.=0.01) between the average number of text messaging acknowledgements sent by the intervention group participants and the number of pills they reported missing during the last week of the trial. This was a small randomised controlled trial with inconclusive but encouraging results. It suggests a new approach to addressing insufficient adherence in outpatient conditions and shows that the use of information technology tools for compliance warrants further research.
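    The reported coefficient of -0.352 is a standard Pearson product-moment correlation between acknowledgement counts and missed pills. A self-contained sketch of the computation; the data below are made up for illustration and are not the trial's data:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pattern: more acknowledgement texts, fewer missed pills,
# giving a negative correlation like the one the trial reports.
acks = [0, 2, 4, 5, 7]
missed = [6, 5, 3, 2, 1]
r = pearson(acks, missed)
```

    A negative r here means participants who sent more acknowledgement messages tended to miss fewer pills, which is the direction of the trial's finding.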

  1. Spatial, road geometric, and biotic factors associated with Barn Owl mortality along an interstate highway

    USGS Publications Warehouse

    Arnold, Erin M.; Hanser, Steven E.; Regan, Tempe; Thompson, Jeremy; Lowe, Melinda; Kociolek, Angela; Belthoff, James R.

    2018-01-01

    Highway programs typically focus on reducing vehicle collisions with large mammals because of economic or safety reasons while overlooking the millions of birds that die annually from traffic. We studied wildlife‐vehicle collisions along an interstate highway in southern Idaho, USA, with among the highest reported rates of American Barn Owl Tyto furcata road mortality. Carcass data from systematic and ad hoc surveys conducted in 2004–2006 and 2013–2015 were used to explore the extent to which spatial, road geometric, and biotic factors explained Barn Owl‐vehicle collisions. Barn Owls outnumbered all other identified vertebrate species of roadkill and represented > 25% of individuals and 73.6% of road‐killed birds. At a 1‐km highway segment scale, the number of dead Barn Owls decreased with increasing numbers of human structures, cumulative length of secondary roads near the highway, and width of the highway median. Number of dead Barn Owls increased with higher commercial average annual daily traffic (CAADT), small mammal abundance index, and with grass rather than shrubs in the roadside verge. The small mammal abundance index was also greater in roadsides with grass versus mixed shrubs, suggesting that Barn Owls may be attracted to grassy portions of the highway with more abundant small mammals for hunting prey. When assessed at a 3‐km highway segment scale, the number of dead Barn Owls again increased with higher CAADT as well as with greater numbers of dairy farms. At a 5‐km scale, number of dead Barn Owls increased with greater percentage of cropland near the highway. While human conversion of the environment from natural shrub‐steppe to irrigated agriculture in this region of Idaho has likely enhanced habitat for Barns Owls, it simultaneously has increased risk for owl‐vehicle collisions where an interstate highway traverses the altered landscape. 
We review some approaches for highway mitigation and suggest that reducing wildlife‐vehicle collisions involving Barn Owls may contribute to the persistence of this species.

  2. Birds have primate-like numbers of neurons in the forebrain

    PubMed Central

    Olkowicz, Seweryn; Kocourek, Martin; Lučan, Radek K.; Porteš, Michal; Fitch, W. Tecumseh; Herculano-Houzel, Suzana; Němec, Pavel

    2016-01-01

    Some birds achieve primate-like levels of cognition, even though their brains tend to be much smaller in absolute size. This poses a fundamental problem in comparative and computational neuroscience, because small brains are expected to have a lower information-processing capacity. Using the isotropic fractionator to determine numbers of neurons in specific brain regions, here we show that the brains of parrots and songbirds contain on average twice as many neurons as primate brains of the same mass, indicating that avian brains have higher neuron packing densities than mammalian brains. Additionally, corvids and parrots have much higher proportions of brain neurons located in the pallial telencephalon compared with primates or other mammals and birds. Thus, large-brained parrots and corvids have forebrain neuron counts equal to or greater than primates with much larger brains. We suggest that the large numbers of neurons concentrated in high densities in the telencephalon substantially contribute to the neural basis of avian intelligence. PMID:27298365

  3. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
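    Sample average approximation replaces the expectation over random scenarios with an average over a finite sample, turning the stochastic program into a deterministic one. The toy below applies the idea to a one-dimensional newsvendor-style recourse problem, not the paper's hub-location model; all numbers are illustrative.

```python
import random

def saa_newsvendor(cost=1.0, price=3.0, n_scenarios=2000, seed=1):
    """Sample average approximation (SAA): replace the expectation over
    random demand with the average over sampled scenarios, then pick the
    first-stage decision (order quantity q) maximizing sampled profit."""
    rng = random.Random(seed)
    demands = [rng.randint(50, 150) for _ in range(n_scenarios)]  # scenario sample

    def avg_profit(q):
        # second-stage (recourse) value averaged over the sampled scenarios
        return sum(price * min(q, d) - cost * q for d in demands) / n_scenarios

    best_q = max(range(50, 151), key=avg_profit)
    return best_q, avg_profit(best_q)

q, value = saa_newsvendor()
```

    As the sample grows, the SAA optimum converges to the true stochastic optimum (here, near the critical fractile (price - cost)/price of the demand distribution); in the paper the same device caps the otherwise exponential scenario count before Benders decomposition is applied.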

  4. Sub-grid-scale description of turbulent magnetic reconnection in magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widmer, F., E-mail: widmer@mps.mpg.de; Institut für Astrophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen; Büchner, J.

    Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could transport energy from large to small scales where binary particle collisions are rare. We have investigated the influence of small-scale magnetohydrodynamic (MHD) turbulence on the reconnection rate in the framework of a compressible MHD approach including sub-grid-scale (SGS) turbulence. For this sake, we considered Harris-type and force-free current sheets with finite guide magnetic fields directed out of the reconnection plane. The goal is to find out whether MHD turbulence unresolved by conventional simulations can enhance the reconnection process in high-Reynolds-number astrophysical plasmas. Together with the MHD equations, we solve evolution equations for the SGS energy and cross-helicity due to turbulence according to a Reynolds-averaged turbulence model. The SGS turbulence is self-generated and self-sustained through the inhomogeneities of the mean fields. In this way, the feedback of the unresolved turbulence into the MHD reconnection process is taken into account. It is shown that the turbulence controls the regimes of reconnection by its characteristic timescale τ_t. The dependence on resistivity was investigated for large-Reynolds-number plasmas for Harris-type as well as force-free current sheets with guide field. We found that magnetic reconnection depends on the relation between the molecular and the apparent effective turbulent resistivity, and that the turbulence timescale τ_t decides whether fast reconnection takes place or whether the stored energy is just diffused away to small-scale turbulence. If the amount of energy transferred from large to small scales is enhanced, fast reconnection can take place. Energy spectra allowed us to characterize the different regimes of reconnection. It was found that reconnection is even faster for larger Reynolds numbers controlled by the molecular resistivity η, as long as the initial level of turbulence is not too large. This implies that turbulence plays an important role in reaching the limit of fast reconnection in large-Reynolds-number plasmas, even for smaller amounts of turbulence.

  5. Hydrologic and climatic changes in three small watersheds after timber harvest.

    Treesearch

    W.B. Fowler; J.D. Helvey; E.N. Felix

    1987-01-01

    No significant increases in annual water yield were shown for three small watersheds in northeastern Oregon after shelterwood cutting (30-percent canopy removal, 50-percent basal area removal) and clearcutting. Average maximum air temperature increased after harvest and average minimum air temperature decreased by up to 2.6 °C. Both maximum and minimum water...

  6. A family of small-world network models built by complete graph and iteration-function

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Yao, Bing

    2018-02-01

    Small-world networks are common in real-life complex systems. In the past few decades, researchers have presented many small-world models, some stochastic and the rest deterministic. In comparison with random models, it is both convenient and interesting to study the topological properties of deterministic models in fields such as graph theory and theoretical computer science. Community structure (modular topology), another focus of current research, is a useful statistical parameter for uncovering the operating functions of a network. Building and studying models with community structure and the small-world character is therefore a worthwhile task. Hence, in this article, we build a family of sparse networks N(t) that differs from previous deterministic models, even though our models are established in the same way: iterative generation. Because of the random connection scheme applied at each time step, no resulting member of N(t) has the strictly self-similar structure widely shared by previous models. This shifts the focus from analyzing one particular model to investigating a group of varied models spanning a network space. Somewhat surprisingly, our results prove that all members of N(t) possess similar characteristics: (a) sparsity, (b) an exponential-scale degree distribution P(k) ∼ α^(-k), and (c) the small-world property. We must stress a striking and intriguing phenomenon: the difference in average path length (APL) between any two members of N(t) is quite small, which indicates that the random connection scheme among members has little effect on the APL.
    At the end of this article, the number of spanning trees on a representative member NB(t) of N(t), a topological parameter correlated with the reliability, synchronization capability, and diffusion properties of networks, is studied in detail, and an exact analytical solution for its spanning-tree entropy is obtained.
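    The number of spanning trees mentioned in this abstract is classically computed by Kirchhoff's matrix-tree theorem: it equals any cofactor of the graph Laplacian L = D - A. A small self-contained sketch using exact rational arithmetic; the example graph is the complete graph K4, not the paper's NB(t):

```python
from fractions import Fraction

def spanning_tree_count(adj):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the graph Laplacian L = D - A."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = [[Fraction(0)] * n for _ in range(n)]
    for v in nodes:
        L[idx[v]][idx[v]] = Fraction(len(adj[v]))  # degree on the diagonal
        for w in adj[v]:
            L[idx[v]][idx[w]] -= 1                 # -1 per edge off-diagonal
    # determinant of the (n-1)x(n-1) minor via exact Gaussian elimination
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if M[r][col] != 0), None)
        if pivot is None:
            return 0          # singular minor: graph is disconnected
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            factor = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= factor * M[col][c]
    return int(det)

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
k4 = {v: [w for w in range(4) if w != v] for v in range(4)}
count = spanning_tree_count(k4)  # 16
```

    The spanning-tree entropy studied in the article is the growth rate of this count with network size, i.e. the limit of (ln of the spanning-tree number) divided by the number of nodes.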

  7. Social networks as predictors of colorectal cancer screening in African Americans.

    PubMed

    Alema-Mensah, Ernest; Smith, Selina A; Claridy, Mechelle; Ede, Victor; Ansa, Benjamin; Blumenthal, Daniel S

    2017-01-01

    Early detection can reduce colorectal cancer (CRC) mortality by 15%-33%, and screening is widely recommended for average-risk adults beginning at age 50 years. Colorectal cancer mortality rates are higher in African Americans than in whites, while screening rates are somewhat lower. Individual social networks can reduce emotional and/or logistical barriers to health-promoting but distasteful procedures such as CRC screening. The aim of this study was to examine social network interactions and their impact on CRC screening among African Americans. We hypothesized a positive association between social network index (SNI) scores and CRC screening. In a community intervention trial with four arms, we previously demonstrated the efficacy of a small group educational intervention to promote CRC screening among African Americans. This intervention outperformed a one-on-one educational intervention, a reduced out-of-pocket expense intervention, and a control condition. In the present analysis, we compared the SNI scores for participants in the small group intervention cohort with a comparison group comprised of the other three cohorts. Social networks were assessed using the Social Network Index developed by Cohen. Small group participants had a significantly higher network diversity score (mean difference 0.71; 95% CI, 0.12-1.31; p=0.0017) than the comparison group. On the second component of the SNI score, the number of people talked to over a two-week period, the small group intervention cohort also scored significantly higher than the comparison group (mean difference, 9.29; 95% CI, 3.963-14.6266; p=0.0004). The findings suggest that social interaction and support were at least partially responsible for the relatively high post-intervention screening rate in the small group intervention participants. Education in small groups could foster strong social networks.
Strong and positive network diversity and a large number of people in social networks may enhance CRC screening rates among African Americans.
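For readers who want to sanity-check summary statistics like these, the standard error and an approximate two-sided z-test p-value can be recovered from a point estimate and its reported 95% confidence interval under a normal approximation. A minimal sketch (illustrative only; the study's exact test may differ):

```python
import math

def z_from_ci(diff, lo, hi, z_crit=1.96):
    """Back out the standard error, z statistic, and two-sided p-value
    from a point estimate and its reported 95% CI (normal approximation)."""
    se = (hi - lo) / (2 * z_crit)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return se, z, p

# Second SNI component reported above: difference 9.29, 95% CI 3.963-14.6266
se, z, p = z_from_ci(9.29, 3.963, 14.6266)
```

Applied to the second SNI component, this gives a standard error near 2.7 and p well below 0.001, consistent in magnitude with the reported p=0.0004.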

  8. Does Man Marking Influence Running Outputs and Intensity During Small-Sided Soccer Games?

    PubMed

    Aasgaard, Mats; Kilding, Andrew E

    2018-06-20

Aasgaard, M and Kilding, AE. Does man marking influence running outputs and intensity during small-sided soccer games? J Strength Cond Res XX(X): 000-000, 2018-Small-sided games (SSGs) are considered an effective training tool for physical development in soccer. Small-sided games can be modified in several ways to manipulate the physical demands to best match the game demands, player characteristics, and session objectives. The aim of this study was to compare the physiological, perceptual, and Global Positioning System (GPS)-derived time-motion characteristics of man marking (MM) vs. non-man marking (NMM) in 2v2, 3v3, and 4v4 SSGs. In an acute crossover design, 8 amateur soccer players (mean age ± SD: 23.6 ± 3.3 years) played 2v2, 3v3, and 4v4 SSGs consisting of 4 × 4-minute bouts, with 2-minute passive recovery. During all SSGs, players wore a heart rate (HR) monitor and GPS unit and reported their rating of perceived exertion (RPE). Average percent HR (%HRave) induced small to moderate effects with MM compared with NMM (%Δ = 1-2.7%; effect size [ES] = 0.22-0.65). Comparisons between MM formats indicated a decrease in %HRave with increased player numbers (%Δ = 1.6-3.5%; ES = 0.39-0.86). Perceptual load increased with MM compared with NMM (%Δ = 6.7-17.6%; ES = 0.66-2.09), whereas increases in player numbers (MM only) reduced RPE output (%Δ = 9.4-24.3%; ES = 1.14-3.61). Time-motion characteristics revealed substantially greater total distance covered in MM irrespective of player number (%Δ = 6.8-14.7%; ES = 1.34-2.82). There were very likely increases in distances covered at striding (13.1-17.8 km·h⁻¹) (%Δ: 23.4-33.2; ES = 2.42-4.35) and high-intensity running (HIR) (17.9-21 km·h⁻¹) (%Δ: 47.3-104; ES = 0.91-1.68) for MM compared with NMM irrespective of player number. In conclusion, MM substantially elevated perceptual load and distances from striding to HIR regardless of player number, whereas differences between NMM and MM for internal load remain unclear. 
Use of MM may allow coaches to condition for particularly demanding phases of the game and prescription of larger SSG formats to increase distance covered at higher velocities.
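The ES values quoted above appear to be standardized effect sizes. A common form, Cohen's d with a pooled standard deviation, can be sketched as follows (the example means and SDs, and the mapping to the study's exact ES definition, are assumptions for illustration):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical per-format summary values (n = 8 players per condition,
# matching the study's sample size); e.g. a distance metric, MM vs NMM
d = cohens_d(112.0, 9.0, 8, 98.0, 11.0, 8)
```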

  9. Comparison of 15 evaporation methods applied to a small mountain lake in the northeastern USA

    USGS Publications Warehouse

    Rosenberry, D.O.; Winter, T.C.; Buso, D.C.; Likens, G.E.

    2007-01-01

Few detailed evaporation studies exist for small lakes or reservoirs in mountainous settings. A detailed evaporation study was conducted at Mirror Lake, a 0.15 km2 lake in New Hampshire, northeastern USA, as part of a long-term investigation of lake hydrology. Evaporation was determined using 14 alternate evaporation methods during six open-water seasons and compared with values from the Bowen-ratio energy-budget (BREB) method, considered the standard. Values from the Priestley-Taylor, deBruin-Keijman, and Penman methods compared most favorably with BREB-determined values. Differences from BREB values averaged 0.19, 0.27, and 0.20 mm d-1, respectively, and results were within 20% of BREB values during more than 90% of the 37 monthly comparison periods. All three methods require measurement of net radiation, air temperature, change in heat stored in the lake, and vapor pressure, making them relatively data intensive. Several of the methods had substantial bias when compared with BREB values and were subsequently modified to eliminate bias. Methods that rely only on measurement of air temperature, or air temperature and solar radiation, were relatively cost-effective options for measuring evaporation at this small New England lake, outperforming some methods that require measurement of a greater number of variables. It is likely that the atmosphere above Mirror Lake was affected by occasional formation of separation eddies on the lee side of nearby high terrain, although those influences do not appear to be significant to measured evaporation from the lake when averaged over monthly periods. © 2007 Elsevier B.V. All rights reserved.
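The Priestley-Taylor method, one of the best-performing alternatives here, estimates evaporation from net radiation, stored-heat change, and air temperature. A minimal sketch using typical constant values (α = 1.26, γ ≈ 0.066 kPa °C⁻¹, λ ≈ 2.45 MJ kg⁻¹; these are generic textbook constants, not the study's site-specific inputs):

```python
import math

def priestley_taylor(t_air_c, rn, g=0.0, alpha=1.26):
    """Open-water evaporation (mm/day) from the Priestley-Taylor relation.
    t_air_c: air temperature (deg C); rn, g: net radiation and change in
    stored heat (MJ m-2 day-1); alpha: Priestley-Taylor coefficient."""
    # Slope of the saturation vapor-pressure curve (kPa / deg C), Tetens form
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.066   # psychrometric constant (kPa / deg C), approximate
    lam = 2.45      # latent heat of vaporization (MJ / kg)
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

e = priestley_taylor(t_air_c=20.0, rn=15.0)  # a typical summer day, mm/day
```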

  10. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models in that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, thereby providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
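To illustrate why a small number of clusters is problematic, consider the simplest possible GEE: an intercept-only marginal mean with an independence working correlation. The sketch below computes the sandwich variance from cluster-level score sums and applies the simple m/(m−1) inflation; the corrections evaluated in the paper are more sophisticated, so this is only a conceptual sketch:

```python
def cluster_sandwich_mean(clusters):
    """Sandwich (robust) variance of an overall mean with clustered data,
    i.e. an intercept-only GEE with an independence working correlation,
    plus the simple m/(m-1) small-sample inflation sometimes used when
    the number of clusters m is small."""
    ys = [y for c in clusters for y in c]
    n, m = len(ys), len(clusters)
    beta = sum(ys) / n                                  # marginal mean
    # "Bread" is n; the "meat" sums squared cluster score totals
    meat = sum(sum(y - beta for y in c) ** 2 for c in clusters)
    v_naive = meat / n**2                               # uncorrected sandwich
    return beta, v_naive, v_naive * m / (m - 1)         # + corrected version

beta, v, v_corr = cluster_sandwich_mean([[1, 2], [2, 3], [3, 4], [10, 11]])
```

With only a handful of clusters the correction factor is far from 1, which is exactly the regime the paper's corrections target.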

  11. Development of a novel depth of interaction PET detector using highly multiplexed G-APD cross-strip encoding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolb, A., E-mail: armin.kolb@med.uni-tuebingen.de; Parl, C.; Liu, C. C.

Purpose: The aim of this study was to develop a prototype PET detector module for a combined small animal positron emission tomography and magnetic resonance imaging (PET/MRI) system. The most important factor for small animal imaging applications is the detection sensitivity of the PET camera, which can be optimized by utilizing longer scintillation crystals. At the same time, small animal PET systems must yield a high spatial resolution. The measured object is very close to the PET detector because the bore diameter of a high field animal MR scanner is limited. When used in combination with long scintillation crystals, these small-bore PET systems generate parallax errors that ultimately lead to a decreased spatial resolution. Thus, we developed a depth of interaction (DoI) encoding PET detector module that has a uniform spatial resolution across the whole field of view (FOV), high detection sensitivity, compactness, and insensitivity to magnetic fields. Methods: The approach was based on Geiger mode avalanche photodiode (G-APD) detectors with cross-strip encoding. The number of readout channels was reduced by a factor of 36 for the chosen block elements. Two 12 × 2 G-APD strip arrays (25 μm cells) were placed perpendicular on each face of a 12 × 12 lutetium oxyorthosilicate crystal block with a crystal size of 1.55 × 1.55 × 20 mm. The strip arrays were multiplexed into two channels and used to calculate the x, y coordinates for each array and the deposited energy. The DoI was measured in step sizes of 1.8 mm by a collimated ¹⁸F source. The coincident resolved time (CRT) was analyzed at all DoI positions by acquiring the waveform for each event and applying a digital leading edge discriminator. Results: All 144 crystals were well resolved in the crystal flood map. The average full width half maximum (FWHM) energy resolution of the detector was 12.8% ± 1.5% with a FWHM CRT of 1.14 ± 0.02 ns. 
The average FWHM DoI resolution over 12 crystals was 2.90 ± 0.15 mm. Conclusions: The novel DoI PET detector, which is based on strip G-APD arrays, yielded a DoI resolution of 2.9 mm and excellent timing and energy resolution. Its high multiplexing factor reduces the number of electronic channels. Thus, this cross-strip approach enables low-cost, high-performance PET detectors for dedicated small animal PET and PET/MRI and potentially clinical PET/MRI systems.
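The generic idea behind dual-ended DoI readout is that the share of scintillation light collected at each end of the crystal varies with interaction depth. The toy function below illustrates only that ratio principle with an assumed linear calibration; the paper's detector uses perpendicular strip arrays and a measured calibration, so treat this purely as a sketch:

```python
def doi_from_light_sharing(s_top, s_bottom, crystal_len_mm=20.0):
    """Illustrative depth-of-interaction estimate for dual-ended readout:
    the fraction of light collected at one end is mapped (here, linearly,
    which is an idealization) onto depth along the 20 mm crystal."""
    ratio = s_top / (s_top + s_bottom)
    return ratio * crystal_len_mm

mid = doi_from_light_sharing(100.0, 100.0)   # equal light -> crystal center
near_top = doi_from_light_sharing(150.0, 50.0)
```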

  12. Molecular transformations of phenolic SOA during photochemical aging in the aqueous phase: Competition among oligomerization, functionalization, and fragmentation

    DOE PAGES

    Yu, Lu; Smith, Jeremy; Laskin, Alexander; ...

    2016-04-13

Organic aerosol is formed and transformed in atmospheric aqueous phases (e.g., cloud and fog droplets and deliquesced airborne particles containing small amounts of water) through a multitude of chemical reactions. Understanding these reactions is important for a predictive understanding of atmospheric aging of aerosols and their impacts on climate, air quality, and human health. In this study, we investigate the chemical evolution of aqueous secondary organic aerosol (aqSOA) formed during reactions of phenolic compounds with two oxidants – the triplet excited state of an aromatic carbonyl (³C*) and hydroxyl radical (•OH). Changes in the molecular composition of aqSOA as a function of aging time are characterized using an offline nanospray desorption electrospray ionization mass spectrometer (nano-DESI MS) whereas the real-time evolution of SOA mass, elemental ratios, and average carbon oxidation state (OSC) are monitored using an online aerosol mass spectrometer (AMS). Our results indicate that oligomerization is an important aqueous reaction pathway for phenols, especially during the initial stage of photooxidation equivalent to ~2 h irradiation under midday winter solstice sunlight in Northern California. At later reaction times functionalization (i.e., adding polar oxygenated functional groups to the molecule) and fragmentation (i.e., breaking of covalent bonds) become more important processes, forming a large variety of functionalized aromatic and open-ring products with higher OSC values. Fragmentation reactions eventually dominate the photochemical evolution of phenolic aqSOA, forming a large number of highly oxygenated ring-opening molecules with carbon numbers (nC) below 6. The average nC of phenolic aqSOA decreases while average OSC increases over the course of photochemical aging. In addition, the saturation vapor pressures (C*) of dozens of the most abundant phenolic aqSOA molecules are estimated. 
A wide range of C* values is observed, varying from < 10⁻²⁰ µg m⁻³ for functionalized phenolic oligomers to > 10 µg m⁻³ for small open-ring species. Furthermore, the detection of abundant extremely low-volatile organic compounds (ELVOC) indicates that aqueous reactions of phenolic compounds are likely an important source of ELVOC in the atmosphere.
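The average carbon oxidation state used in these studies is commonly approximated from elemental ratios as OSC ≈ 2·O/C − H/C (valid when heteroatoms other than C, H, and O are negligible). A quick check for a CxHyOz molecule:

```python
def mean_carbon_oxidation_state(n_c, n_h, n_o):
    """Average carbon oxidation state of a CxHyOz molecule,
    using the common approximation OS_C ~ 2*O/C - H/C."""
    return 2.0 * n_o / n_c - n_h / n_c

osc_phenol = mean_carbon_oxidation_state(6, 6, 1)   # phenol, C6H6O
osc_oxalic = mean_carbon_oxidation_state(2, 2, 4)   # oxalic acid, C2H2O4
```

Small, highly oxygenated ring-opening products (such as diacids) score far higher on this scale than the parent phenols, matching the reported trend of rising OSC with photochemical aging.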

  13. Molecular transformations of phenolic SOA during photochemical aging in the aqueous phase: Competition among oligomerization, functionalization, and fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Lu; Smith, Jeremy; Laskin, Alexander

Organic aerosol is formed and transformed in atmospheric aqueous phases (e.g., cloud and fog droplets and deliquesced airborne particles containing small amounts of water) through a multitude of chemical reactions. Understanding these reactions is important for a predictive understanding of atmospheric aging of aerosols and their impacts on climate, air quality, and human health. In this study, we investigate the chemical evolution of aqueous secondary organic aerosol (aqSOA) formed during reactions of phenolic compounds with two oxidants – the triplet excited state of an aromatic carbonyl (³C*) and hydroxyl radical (•OH). Changes in the molecular composition of aqSOA as a function of aging time are characterized using an offline nanospray desorption electrospray ionization mass spectrometer (nano-DESI MS) whereas the real-time evolution of SOA mass, elemental ratios, and average carbon oxidation state (OSC) are monitored using an online aerosol mass spectrometer (AMS). Our results indicate that oligomerization is an important aqueous reaction pathway for phenols, especially during the initial stage of photooxidation equivalent to ~2 h irradiation under midday winter solstice sunlight in Northern California. At later reaction times functionalization (i.e., adding polar oxygenated functional groups to the molecule) and fragmentation (i.e., breaking of covalent bonds) become more important processes, forming a large variety of functionalized aromatic and open-ring products with higher OSC values. Fragmentation reactions eventually dominate the photochemical evolution of phenolic aqSOA, forming a large number of highly oxygenated ring-opening molecules with carbon numbers (nC) below 6. The average nC of phenolic aqSOA decreases while average OSC increases over the course of photochemical aging. In addition, the saturation vapor pressures (C*) of dozens of the most abundant phenolic aqSOA molecules are estimated. 
A wide range of C* values is observed, varying from < 10⁻²⁰ µg m⁻³ for functionalized phenolic oligomers to > 10 µg m⁻³ for small open-ring species. Furthermore, the detection of abundant extremely low-volatile organic compounds (ELVOC) indicates that aqueous reactions of phenolic compounds are likely an important source of ELVOC in the atmosphere.

  14. Molecular transformations of phenolic SOA during photochemical aging in the aqueous phase: competition among oligomerization, functionalization, and fragmentation

    NASA Astrophysics Data System (ADS)

    Yu, L.; Smith, J.; Laskin, A.; George, K. M.; Anastasio, C.; Laskin, J.; Dillner, A. M.; Zhang, Q.

    2015-10-01

Organic aerosol is formed and transformed in atmospheric aqueous phases (e.g., cloud and fog droplets and deliquesced airborne particles containing small amounts of water) through a multitude of chemical reactions. Understanding these reactions is important for a predictive understanding of atmospheric aging of aerosols and their impacts on climate, air quality, and human health. In this study, we investigate the chemical evolution of aqueous secondary organic aerosol (aqSOA) formed during reactions of phenolic compounds with two oxidants - the triplet excited state of an aromatic carbonyl (3C*) and hydroxyl radical (•OH). Changes in the molecular composition of aqSOA as a function of aging time are characterized using an offline nanospray desorption electrospray ionization mass spectrometer (nano-DESI MS) whereas the real-time evolution of SOA mass, elemental ratios, and average carbon oxidation state (OSC) are monitored using an online aerosol mass spectrometer (AMS). Our results indicate that oligomerization is an important aqueous reaction pathway for phenols, especially during the initial stage of photooxidation equivalent to ∼ 2 h irradiation under midday, winter solstice sunlight in northern California. At later reaction times functionalization (i.e., adding polar oxygenated functional groups to the molecule) and fragmentation (i.e., breaking of covalent bonds) become more important processes, forming a large variety of functionalized aromatic and open-ring products with higher OSC values. Fragmentation reactions eventually dominate the photochemical evolution of phenolic aqSOA, forming a large number of highly oxygenated open-ring molecules with carbon numbers (nC) below 6. The average nC of phenolic aqSOA decreases while average OSC increases over the course of photochemical aging. In addition, the saturation vapor pressures (C*) of dozens of the most abundant phenolic aqSOA molecules are estimated. 
A wide range of C* values is observed, varying from < 10⁻²⁰ μg m⁻³ for functionalized phenolic oligomers to > 10 μg m⁻³ for small open-ring species. The detection of abundant extremely low volatile organic compounds (ELVOC) indicates that aqueous reactions of phenolic compounds are likely an important source of ELVOC in the atmosphere.

  15. Molecular transformations of phenolic SOA during photochemical aging in the aqueous phase: competition among oligomerization, functionalization, and fragmentation

    NASA Astrophysics Data System (ADS)

    Yu, Lu; Smith, Jeremy; Laskin, Alexander; George, Katheryn M.; Anastasio, Cort; Laskin, Julia; Dillner, Ann M.; Zhang, Qi

    2016-04-01

Organic aerosol is formed and transformed in atmospheric aqueous phases (e.g., cloud and fog droplets and deliquesced airborne particles containing small amounts of water) through a multitude of chemical reactions. Understanding these reactions is important for a predictive understanding of atmospheric aging of aerosols and their impacts on climate, air quality, and human health. In this study, we investigate the chemical evolution of aqueous secondary organic aerosol (aqSOA) formed during reactions of phenolic compounds with two oxidants - the triplet excited state of an aromatic carbonyl (3C∗) and hydroxyl radical (•OH). Changes in the molecular composition of aqSOA as a function of aging time are characterized using an offline nanospray desorption electrospray ionization mass spectrometer (nano-DESI MS) whereas the real-time evolution of SOA mass, elemental ratios, and average carbon oxidation state (OSC) are monitored using an online aerosol mass spectrometer (AMS). Our results indicate that oligomerization is an important aqueous reaction pathway for phenols, especially during the initial stage of photooxidation equivalent to ~ 2 h irradiation under midday winter solstice sunlight in Northern California. At later reaction times functionalization (i.e., adding polar oxygenated functional groups to the molecule) and fragmentation (i.e., breaking of covalent bonds) become more important processes, forming a large variety of functionalized aromatic and open-ring products with higher OSC values. Fragmentation reactions eventually dominate the photochemical evolution of phenolic aqSOA, forming a large number of highly oxygenated ring-opening molecules with carbon numbers (nC) below 6. The average nC of phenolic aqSOA decreases while average OSC increases over the course of photochemical aging. In addition, the saturation vapor pressures (C∗) of dozens of the most abundant phenolic aqSOA molecules are estimated. 
A wide range of C∗ values is observed, varying from < 10⁻²⁰ µg m⁻³ for functionalized phenolic oligomers to > 10 µg m⁻³ for small open-ring species. The detection of abundant extremely low-volatile organic compounds (ELVOC) indicates that aqueous reactions of phenolic compounds are likely an important source of ELVOC in the atmosphere.

  16. Effect of Sb content on the physical properties of Ge-Se-Te chalcogenide glasses

    NASA Astrophysics Data System (ADS)

Vashist, Priyanka; Anjali; Patial, Balbir Singh; Thakur, Nagesh

    2018-05-01

In the present study, bulk (Se80Te20)94-xGe6Sbx (x = 0, 1, 2, 4, 6, 8) glasses were synthesized using the melt-quenching technique. The physical properties, viz. coordination number, lone pair of electrons, number of constraints, glass transition temperature, mean bond energy, cohesive energy, electronegativity, and average heat of atomization of the investigated compositions, are reported and discussed. It is inferred that with increasing Sb content the average coordination number, average number of constraints, mean bond energy, cohesive energy, and glass transition temperature increase, whereas the lone pair of electrons, average heat of atomization, and deviation from stoichiometry decrease.
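The average coordination number for such glasses is typically the composition-weighted mean ⟨r⟩ = Σᵢ xᵢrᵢ, with 8−N covalent coordination numbers (Se, Te: 2; Sb: 3; Ge: 4). A quick sketch reproducing the increasing trend with Sb content reported above:

```python
# 8-N covalent coordination numbers, as commonly assumed for
# chalcogenide glass compositions
CN = {'Se': 2, 'Te': 2, 'Ge': 4, 'Sb': 3}

def mean_coordination_number(at_frac):
    """Average coordination number <r> = sum_i x_i * r_i
    from atomic fractions x_i."""
    return sum(x * CN[el] for el, x in at_frac.items())

# (Se80Te20)94-xGe6Sbx at x = 0 and x = 8
r_x0 = mean_coordination_number({'Se': 0.752, 'Te': 0.188, 'Ge': 0.06})
r_x8 = mean_coordination_number({'Se': 0.688, 'Te': 0.172, 'Ge': 0.06,
                                 'Sb': 0.08})
```

Replacing twofold-coordinated Se/Te with threefold Sb raises ⟨r⟩, consistent with the reported increase in network rigidity and glass transition temperature.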

  17. On connecting large vessels to small. The meaning of Murray's law

    PubMed Central

    1981-01-01

A large part of the branching vasculature of the mammalian circulatory and respiratory systems obeys Murray's law, which states that the cube of the radius of a parent vessel equals the sum of the cubes of the radii of the daughters. Where this law is obeyed, a functional relationship exists between vessel radius and volumetric flow, average linear velocity of flow, velocity profile, vessel-wall shear stress, Reynolds number, and pressure gradient in individual vessels. In homogeneous, full-flow sets of vessels, a relation is also established between vessel radius and the conductance, resistance, and cross-sectional area of a full-flow set. PMID:7288393
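Murray's law can be stated compactly as r_p³ = Σ r_d³ over the daughter vessels. For a symmetric split into n daughters, each daughter radius is therefore r_p/n^(1/3) (≈ 0.794 r_p for a bifurcation), as the short sketch below checks:

```python
def daughter_radius(r_parent, n_daughters=2):
    """Murray's law: r_parent**3 equals the sum of daughter radii cubed.
    For a symmetric split, each daughter has r = r_parent / n**(1/3)."""
    return r_parent / n_daughters ** (1.0 / 3.0)

r_d = daughter_radius(1.0)   # symmetric bifurcation of a unit-radius vessel
```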

  18. Network dynamics: The World Wide Web

    NASA Astrophysics Data System (ADS)

    Adamic, Lada Ariana

    Despite its rapidly growing and dynamic nature, the Web displays a number of strong regularities which can be understood by drawing on methods of statistical physics. This thesis finds power-law distributions in website sizes, traffic, and links, and more importantly, develops a stochastic theory which explains them. Power-law link distributions are shown to lead to network characteristics which are especially suitable for scalable localized search. It is also demonstrated that the Web is a "small world": to reach one site from any other takes an average of only 4 hops, while most related sites cluster together. Additional dynamical properties of the Web graph are extracted from diffusion processes.
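The "small world" observation, a short average hop count despite local clustering, is easy to reproduce: adding a handful of random long-range links to a large ring sharply reduces the mean shortest-path length. A self-contained sketch on a toy graph (not Web data):

```python
import random
from collections import deque

def avg_hops(adj):
    """Mean shortest-path length (hops) over all ordered node pairs,
    computed by breadth-first search from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

# Ring of 200 nodes (mean distance ~50) plus 20 random chords:
# the shortcuts collapse the average distance -- the small-world effect.
random.seed(1)
ring = {i: {(i - 1) % 200, (i + 1) % 200} for i in range(200)}
for _ in range(20):
    a, b = random.randrange(200), random.randrange(200)
    if a != b:
        ring[a].add(b)
        ring[b].add(a)
```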

  19. [CT-guided intervention by means of a laser marking and targeting aid].

    PubMed

    Klöppel, R; Wilke, W; Weisse, T; Steinecke, R

    1997-08-01

    The present study evaluates the use of a laser guidance system for CT-guided intervention. 94 cases of diagnostic biopsies and lumbar sympathectomies (54 cases with laser guidance system and 40 without) were compared. Using the laser guidance system, the number of control scans decreased by 30 to 50%, and necessary corrections of needle location were reduced by a maximum of 30%. The average target deviation of the needle decreased to less than 5 mm in 50% of cases. The laser guidance system is strongly recommended in CT-guided interventions for quality assurance and higher efficiency. The advantage is especially marked if the target area is small.

  20. Turbulence Model Effects on RANS Simulations of the HIFiRE Flight 2 Ground Test Configurations

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Mankbadi, Mina R.; Vyas, Manan A.

    2014-01-01

    The Wind-US Reynolds-averaged Navier-Stokes solver was applied to the Hypersonic International Flight Research Experimentation (HIFiRE) Flight 2 scramjet ground test configuration. Two test points corresponding to flight Mach numbers of 5.9 and 8.9 were examined. The emphasis was examining turbulence model effects on the prediction of flow path pressures. Three variants of the Menter k-omega turbulence model family were investigated. These include the baseline (BSL) and shear stress transport (SST) as well as a modified SST model where the shear stress limiter was altered. Variations in the turbulent Schmidt number were also considered. Choice of turbulence model had a substantial effect on prediction of the flow path pressures. The BSL model produced the highest pressures and the SST model produced the lowest pressures. As expected, the settings for the turbulent Schmidt number also had significant effects on predicted pressures. Small values for the turbulent Schmidt number enabled more rapid mass transfer, faster combustion, and in turn higher flowpath pressures. Optimal settings for turbulence model and turbulent Schmidt number were found to be rather case dependent, as has been concluded in other scramjet investigations.
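The mechanism described, where a smaller turbulent Schmidt number enables faster mass transfer, follows directly from the definition Sc_t = ν_t/D_t: for a given eddy viscosity, the modeled turbulent mass diffusivity is D_t = ν_t/Sc_t:

```python
def turbulent_diffusivity(nu_t, sc_t):
    """Turbulent mass diffusivity D_t = nu_t / Sc_t: lowering the
    turbulent Schmidt number raises the modeled rate of mass transfer."""
    return nu_t / sc_t

d_low_sc = turbulent_diffusivity(1e-3, 0.5)   # aggressive mixing assumption
d_high_sc = turbulent_diffusivity(1e-3, 0.9)  # conservative assumption
```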

  1. Pressure and heating-rate distributions on a corrugated surface in a supersonic turbulent boundary layer

    NASA Technical Reports Server (NTRS)

    Sawyer, J. W.

    1977-01-01

    Drag and heating rates on wavy surfaces typical of current corrugated plate designs for thermal protection systems were determined experimentally. Pressure-distribution, heating-rate, and oil-flow tests were conducted in the Langley Unitary Plan wind tunnel at Mach numbers of 2.4 and 4.5 with the corrugated surface exposed to both thick and thin turbulent boundary layers. Tests were conducted with the corrugations at cross-flow angles from 0 deg to 90 deg to the flow. Results show that for cross-flow angles of 30 deg or less, the pressure drag coefficients are less than the local flat-plate skin-friction coefficients and are not significantly affected by Mach number, Reynolds number, or boundary-layer thickness over the ranges investigated. For cross-flow angles greater than 30 deg, the drag coefficients increase significantly with cross-flow angle and moderately with Reynolds number. Increasing the Mach number causes a significant reduction in the pressure drag. The average and peak heating penalties due to the corrugated surface are small for cross-flow angles of 10 deg or less but are significantly higher for the larger cross-flow angles.

  2. 77 FR 72766 - Small Business Size Standards: Support Activities for Mining

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-06

    ... its entirety for parties who have an interest in SBA's overall approach to establishing, evaluating....gov , Docket ID: SBA-2009- 0008. SBA continues to welcome comments on its methodology from interested.... Average firm size. SBA computes two measures of average firm size: simple average and weighted average...
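The excerpt's "simple average" and "weighted average" firm-size measures can be contrasted with a toy calculation; here the weighted measure weights each firm by its own share of total receipts, so large firms dominate (this is the general idea, though SBA's exact formula may differ and the receipts below are hypothetical):

```python
def simple_average(receipts):
    """Unweighted mean firm size."""
    return sum(receipts) / len(receipts)

def weighted_average(receipts):
    """Mean firm size with each firm weighted by its share of total
    receipts (illustrative; not necessarily SBA's exact definition)."""
    total = sum(receipts)
    return sum(r * (r / total) for r in receipts)

firms = [1.0, 2.0, 3.0, 50.0]   # hypothetical annual receipts, $ millions
s_avg = simple_average(firms)
w_avg = weighted_average(firms)
```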

  3. Combined particle-image velocimetry and force analysis of the three-dimensional fluid-structure interaction of a natural owl wing.

    PubMed

    Winzen, A; Roidl, B; Schröder, W

    2016-04-01

Low-speed aerodynamics has gained increasing interest due to its relevance for the design process of small flying air vehicles. These small aircraft operate at similar aerodynamic conditions as, e.g., birds, which can therefore serve as role models of how to overcome the well-known problems of low Reynolds number flight. The flight of the barn owl is characterized by a very low flight velocity in conjunction with a low noise emission and a high level of maneuverability at stable flight conditions. To investigate the complex three-dimensional flow field and the corresponding local structural deformation in combination with their influence on the resulting aerodynamic forces, time-resolved stereoscopic particle-image velocimetry and force and moment measurements are performed on a prepared natural barn owl wing. Several spanwise positions are measured via PIV over a range of angles of attack up to 6° and chord-based Reynolds numbers from 40 000 to 120 000. Additionally, the resulting forces and moments are recorded for -10° ≤ α ≤ 15° at the same Reynolds numbers. Depending on the spanwise position, the angle of attack, and the Reynolds number, the flow field on the wing's pressure side is characterized by either a region of flow separation, causing large-scale vortical structures which lead to a time-dependent deflection of the flexible wing structure, or wing regions showing no instantaneous deflection but a reduction of the time-averaged mean wing curvature. Based on the force measurements the three-dimensional fluid-structure interaction is assumed to considerably impact the aerodynamic forces acting on the wing, leading to a strong mechanical loading of the interface between the wing and body. These time-dependent loads, which result from the flexibility of the wing, should be taken into consideration for the design of future small flying air vehicles using flexible wing structures.
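The chord-based Reynolds number that frames these experiments is Re = Uc/ν. With an assumed 0.15 m chord (hypothetical; the wing's actual chord is not given here) and air at room conditions, the quoted range of 40 000 to 120 000 corresponds to roughly 4-12 m/s:

```python
def chord_reynolds(velocity, chord, nu=1.5e-5):
    """Chord-based Reynolds number Re = U * c / nu, with nu the
    kinematic viscosity of air (~1.5e-5 m^2/s at room temperature)."""
    return velocity * chord / nu

re_low = chord_reynolds(velocity=4.0, chord=0.15)    # lower end of the range
re_high = chord_reynolds(velocity=12.0, chord=0.15)  # upper end of the range
```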

  4. Enumeration of spanning trees in planar unclustered networks

    NASA Astrophysics Data System (ADS)

    Xiao, Yuzhi; Zhao, Haixing; Hu, Guona; Ma, Xiujuan

    2014-07-01

Among a variety of subgraphs, spanning trees are one of the most important and fundamental categories. They are relevant to diverse aspects of networks, including reliability, transport, self-organized criticality, loop-erased random walks and so on. In this paper, we introduce a family of modular, self-similar planar networks with zero clustering. Relevant properties of this family are comparable to those networks associated with technological systems having low clustering, like power grids, some electronic circuits, the Internet and some biological systems. It is therefore significant to study spanning trees of planar networks. However, for a large network, evaluating the relevant determinant is intractable. In this paper, we propose a fairly generic linear algorithm for counting the number of spanning trees of a planar network. Using the algorithm, we derive analytically the exact numbers of spanning trees in planar networks. Our result shows that the computational complexity is O(t), which is better than that of the matrix tree theorem with O(m²t²), where t is the number of steps and m is the girth of the planar network. We also obtain the entropy for the spanning trees of a given planar network. We find that the entropy of spanning trees in the studied network is small, which is in sharp contrast to the previous result for planar networks with the same average degree. We also determine an upper bound and a lower bound for the numbers of spanning trees in the family of planar networks by the algorithm. As another application of the algorithm, we give a formula for the number of spanning trees in an outerplanar network with small-world features.
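For context, the determinant-based baseline the authors improve upon is Kirchhoff's matrix-tree theorem: the number of spanning trees equals any cofactor of the graph Laplacian (degree matrix minus adjacency matrix). A generic, non-planar-specialized sketch using exact rational arithmetic:

```python
from fractions import Fraction

def det(matrix):
    """Exact determinant via fraction-based Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n, sign, d = len(m), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            sign = -sign
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return sign * d

def spanning_trees(adj):
    """Matrix-tree theorem: delete one row and column of the Laplacian
    and take the determinant of the remaining minor."""
    n = len(adj)
    lap = [[(sum(adj[i]) if i == j else -adj[i][j]) for j in range(n)]
           for i in range(n)]
    minor = [row[1:] for row in lap[1:]]
    return int(det(minor))

# Complete graph K4: Cayley's formula gives 4**(4-2) = 16 spanning trees
k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
```

Evaluating this determinant is exactly the step that becomes intractable for large networks, which is what motivates the paper's O(t) algorithm for its planar family.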

  5. Waterfowl nesting on small man-made islands in prairie wetlands

    USGS Publications Warehouse

    Johnson, R.F.; Woodward, R.O.; Kirsch, L.M.

    1978-01-01

    Small islands constructed in prairie wetlands were attractive nesting sites for mallards (Anas platyrhynchos) and Canada geese (Branta canadensis). Nest densities of mallards on islands averaged 135 per ha compared to 0.03 per ha on adjacent upland habitats. Construction time averaged 2 hours per island and cost $50. No maintenance was required during the first 10 years.

  6. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydn; Pothen, Alex

It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  7. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    DOE PAGES

    Azad, Ariful; Buluç, Aydın; Pothen, Alex

    2016-03-24

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single-source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.

  8. High-resolution mapping of Martian water ice clouds using Mars Express OMEGA observations - Derivation of the diurnal cloud life cycle

    NASA Astrophysics Data System (ADS)

    Szantai, Andre; Audouard, Joachim; Madeleine, Jean-Baptiste; Forget, Francois; Pottier, Alizée; Millour, Ehouarn; Gondet, Brigitte; Langevin, Yves; Bibring, Jean-Pierre

    2016-10-01

    The mapping in space and time of water ice clouds can help to explain the Martian water cycle and atmospheric circulation. For this purpose, an ice cloud index (ICI) corresponding to the depth of a water ice absorption band at 3.4 microns is derived from a series of OMEGA images (spectels) covering 5 Martian years. The ICI values for the corresponding pixels are then binned on a high-resolution regular grid (1° longitude x 1° latitude x 5° Ls x 1 h local time) and averaged. Inside each bin, the cloud cover is calculated by dividing the number of pixels considered as cloudy (after comparison to a threshold) by the number of all (valid) pixels. We compare the cloud maps obtained around local time 14:00 with collocated TES cloud observations (which were only obtained around this time of the day), and find good agreement. The averaged ICI compared with the water ice column variable from the Mars Climate Database (MCD) shows a fair correlation (~0.5), which increases when the comparison is limited to the tropics. The number of gridpoints containing ICI values is small (~1%), but by aggregating several neighboring gridpoints over longer periods, we can observe a cloud life cycle during daytime. An example in the tropics, around the northern summer solstice, shows a decrease of cloudiness in the morning followed by an increase in the afternoon.
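
    The binning and cloud-cover computation described above can be sketched as follows (a simplified grid that ignores the 5° Ls dimension; the threshold value and pixel-tuple layout are assumptions for illustration, not the paper's values):

```python
import math
from collections import defaultdict

def bin_cloud_index(pixels, threshold):
    """Bin ice-cloud-index (ICI) pixel values onto a regular grid and compute,
    per bin, the mean ICI and the cloud cover (fraction of pixels whose ICI
    exceeds the cloudy threshold).

    pixels: iterable of (lon_deg, lat_deg, local_hour, ici) tuples.
    Bin resolution: 1 deg lon x 1 deg lat x 1 h local time.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    cloudy = defaultdict(int)
    for lon, lat, hour, ici in pixels:
        key = (math.floor(lon), math.floor(lat), math.floor(hour))
        sums[key] += ici
        counts[key] += 1
        if ici > threshold:          # pixel counted as cloudy
            cloudy[key] += 1
    # per bin: (mean ICI, cloud cover fraction)
    return {k: (sums[k] / counts[k], cloudy[k] / counts[k]) for k in counts}
```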

  9. Comparative Analysis of User-Generated Online Yelp Reviews for Periodontal Practices in Multiple Metropolitan Markets.

    PubMed

    Holtzclaw, Dan J

    2017-02-01

    Previously published research for a single metropolitan market (Austin, Texas) found that periodontists fare poorly on the Yelp website for nearly all measured metrics, including average star ratings, number of reviews, review removal rate, and evaluations by "elite" Yelp users. The purpose of the current study is to confirm or refute these findings by expanding datasets to additional metropolitan markets of various sizes and geographic locations. A total of 6,559 Yelp reviews were examined for general dentists, endodontists, pediatric dentists, oral surgeons, orthodontists, and periodontists in small (Austin, Texas), medium (Seattle, Washington), and large (New York City, New York) metropolitan markets. Numerous review characteristics were evaluated, including: 1) total number of reviews; 2) average star rating; 3) review filtering rate; and 4) number of reviews by Yelp members with elite status. Results were compared in multiple ways to determine whether statistically significant differences existed. In all metropolitan markets, periodontists were outperformed by all other dental specialties for all measured Yelp metrics in this study. Intermetropolitan comparisons of periodontal practices showed no statistically significant differences. Periodontists were outperformed consistently by all other dental specialties in every measured metric on the Yelp website. These results were consistent and repeated in all three metropolitan markets evaluated in this study. Poor performance of periodontists on Yelp may be related to the age profile of patients in the typical periodontal practice. This may result in inadvertently biased filtering of periodontal reviews and subsequently poor performance in multiple other categories.

  10. Small Business Procurement Event

    DTIC Science & Technology

    2014-08-13

    Department of the Navy, Office of Small Business Programs, 720 Kennon... Approved for public release; distribution unlimited. Supplementary notes: NDIA 27th Navy Gold Coast Small Business Procurement Event, 12-13 Aug 2014, San Diego, CA.

  11. Less is better.

    PubMed Central

    Samaras, Thomas T.; Elrick, Harold

    2002-01-01

    The "more is better" credo is evaluated in terms of its harmful ramifications on human health, the environment, and the survival of the human race. The trend towards greater height and body weight in developed countries is evaluated in terms of its negative effects on health and longevity. The benefits of reduced caloric intake are discussed. Populations that survive on lower food intakes are shown to have much less heart disease; in South Africa, for example, rural blacks outlive whites and also have a higher percentage of centenarians. The risks of increasing birth weight are discussed in terms of promoting cancer and overweight in adulthood. Rapid childhood growth also is shown to have risks because rapid maturity is in conflict with the need to have more time to learn about our complex world before reaching reproductive capability. The increase in the average size of humans aggravates our burgeoning population numbers, placing even greater demands on our need for fresh water, energy, resources, and a clean environment. Many good things come in large packages, such as elephants, whales, and trees. While Sumo wrestlers, football players, and basketball players play an important role in our world, their small numbers do not pose a threat to our survival. However, their body habitus should not be the goal for the average male of the future. PMID:11853051

  12. Melatonin Promotes Superovulation in Sika Deer (Cervus nippon)

    PubMed Central

    Wang, Liang; Zhuo, Zhi-Yong; Shi, Wen-Qing; Tan, Dun-Xian; Gao, Chao; Tian, Xiu-Zhi; Zhang, Lu; Zhou, Guang-Bin; Zhu, Shi-En; Yun, Peng; Liu, Guo-Shi

    2014-01-01

    In this study, the effects of melatonin (MT) on superovulation and reproductive hormones (melatonin, follicle-stimulating hormone (FSH), luteinizing hormone (LH) and PRL) were investigated in female sika deer. Different doses (40 or 80 mg/animal) of melatonin were subcutaneously implanted into deer before the breeding season. Exogenous melatonin administration significantly elevated the serum FSH levels at the time of insemination compared with levels in control animals. During superovulation, the serum LH levels in donor sika deer reached their highest values (7.1 ± 2.04 ng/mL) at the point of insemination, compared with the baseline levels (4.98 ± 0.07 ng/mL) in control animals. This high level of LH was sustained until the day of embryo recovery. In contrast, the serum levels of PRL in the 80 mg of melatonin-treated group were significantly lower than those of control deer. The average number of corpora lutea in melatonin-treated deer was significantly higher than that of the control (p < 0.05). The average number of embryos in the deer treated with 40 mg of melatonin was higher than that of the control; however, this increase did not reach statistical significance (p > 0.05), which may be related to the relatively small sample size. In addition, embryonic development in melatonin-treated groups was delayed. PMID:25007067

  13. Melatonin promotes superovulation in sika deer (Cervus nippon).

    PubMed

    Wang, Liang; Zhuo, Zhi-Yong; Shi, Wen-Qing; Tan, Dun-Xian; Gao, Chao; Tian, Xiu-Zhi; Zhang, Lu; Zhou, Guang-Bin; Zhu, Shi-En; Yun, Peng; Liu, Guo-Shi

    2014-07-08

    In this study, the effects of melatonin (MT) on superovulation and reproductive hormones (melatonin, follicle-stimulating hormone (FSH), luteinizing hormone (LH) and PRL) were investigated in female sika deer. Different doses (40 or 80 mg/animal) of melatonin were subcutaneously implanted into deer before the breeding season. Exogenous melatonin administration significantly elevated the serum FSH levels at the time of insemination compared with levels in control animals. During superovulation, the serum LH levels in donor sika deer reached their highest values (7.1±2.04 ng/mL) at the point of insemination, compared with the baseline levels (4.98±0.07 ng/mL) in control animals. This high level of LH was sustained until the day of embryo recovery. In contrast, the serum levels of PRL in the 80 mg of melatonin-treated group were significantly lower than those of control deer. The average number of corpora lutea in melatonin-treated deer was significantly higher than that of the control (p<0.05). The average number of embryos in the deer treated with 40 mg of melatonin was higher than that of the control; however, this increase did not reach statistical significance (p>0.05), which may be related to the relatively small sample size. In addition, embryonic development in melatonin-treated groups was delayed.

  14. Optimised synthesis of ZnO-nano-fertiliser through green chemistry: boosted growth dynamics of economically important L. esculentum.

    PubMed

    Jabeen, Nyla; Maqbool, Qaisar; Bibi, Tahira; Nazar, Mudassar; Hussain, Syed Z; Hussain, Talib; Jan, Tariq; Ahmad, Ishaq; Maaza, Malik; Anwaar, Sadaf

    2018-06-01

    Mounting economic losses in annual crop yields due to micronutrient deficiency, fertiliser inefficiency and increasing microbial invasions (e.g. Xanthomonas campestris attack on tomatoes) need to be addressed via nano-biotechnology. With this in view, the authors' current study presents a new horizon in the field of nano-fertilisers: the highly nutritive and preservative effect of green-fabricated zinc oxide nanostructures (ZnO-NSs) on Lycopersicum esculentum (tomato) growth dynamics. ZnO-NSs prepared via green chemistry possess highly homogeneous crystalline structures, well characterised through ultraviolet-visible spectroscopy, Fourier transform infrared spectroscopy, X-ray diffraction and scanning electron microscopy. The average ZnO-NS size was found to be as small as 18 nm, with a crystallite size of 5 nm. L. esculentum was grown in different concentrations of ZnO-NSs to examine morphological parameters including time of seed germination, germination percentage, number of leaves, plant height, average number of branches, days to flowering, fruiting period and fruit quantity. The promising results indicate that bio-fabricated ZnO-NSs at optimum concentration act as a growth booster and dramatically increase plant yield.

  15. A relation between landsat digital numbers, surface reflectance, and the cosine of the solar zenith angle

    USGS Publications Warehouse

    Kowalik, William S.; Marsh, Stuart E.; Lyon, Ronald J. P.

    1982-01-01

    A method for estimating the reflectance of ground sites from satellite radiance data is proposed and tested. The method uses the known ground reflectance from several sites and satellite data gathered over a wide range of solar zenith angles. The method was tested on each of 10 different Landsat images using 10 small sites in the Walker Lake, Nevada area. Plots of raw Landsat digital numbers (DNs) versus the cosine of the solar zenith angle (cos Z) for the test areas are linear, and the average correlation coefficients of the data for Landsat bands 4, 5, 6, and 7 are 0.94, 0.93, 0.94, and 0.94, respectively. Ground reflectance values for the 10 sites are proportional to the slope of the DN versus cos Z relation at each site. The slopes of the DN versus cos Z relation for seven additional sites in Nevada and California were used to estimate the ground reflectances of those sites. The estimates for nearby sites are in error by an average of 1.2% and more distant sites are in error by 5.1%. The method can successfully estimate the reflectance of sites outside the original scene, but extrapolation of the reflectance estimation equations to other areas may violate assumptions of atmospheric homogeneity.
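
    The regression step can be sketched with a minimal least-squares fit of the linear model DN = a + b·cos(Z) described above (the calibration helper and its parameters are illustrative, not the paper's procedure):

```python
def fit_dn_vs_cosz(cos_z, dn):
    """Least-squares line DN = a + b*cos(Z); returns (intercept a, slope b).
    Per the method above, the slope b is proportional to ground reflectance."""
    n = len(cos_z)
    mx = sum(cos_z) / n
    my = sum(dn) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(cos_z, dn))
    sxx = sum((x - mx) ** 2 for x in cos_z)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def estimate_reflectance(slope, cal_slope, cal_reflectance):
    """Scale an unknown site's slope by a calibration site of known reflectance."""
    return cal_reflectance * slope / cal_slope
```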

  16. Computation of heats of transport in crystalline solids: II

    NASA Astrophysics Data System (ADS)

    Grout, P. J.; Lidiard, A. B.

    2008-10-01

    This paper explores the application of classical molecular dynamics to the computation of the heat of transport of Au atoms in a model of solid gold at several elevated temperatures above the Debye temperature. It is assumed that the solid shows vacancy disorder. The work shows that to obtain consistent and reliable results it is necessary (a) to use very small time steps (≈1 fs) in the molecular dynamics integration routine and (b) to take averages over a very large number of vacancy displacements—a number which varies with temperature but which is of the order of 10^5. The results for the reduced heat of transport for the Au atoms show that: (1) it is positive in sign, i.e. that the diffusion of Au atoms in a temperature gradient is biased towards the cold region or equivalently that the vacancies tend to migrate towards the hotter region; (2) it is predicted to fall as the average temperature increases and that the variation is closely linear in (1/T); (3) its value at high T relative to the energy of activation for vacancy movement is close to the corresponding ratio of experimental quantities. Analysis of these results indicates that the method and model may allow reliable predictions for other metals having the face centred cubic structure.

  17. Application of Self-Similarity Constrained Reynolds-Averaged Turbulence Models to Rayleigh-Taylor and Richtmyer-Meshkov Unstable Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Hartland, Tucker A.; Schilling, Oleg

    2016-11-01

    Analytical self-similar solutions corresponding to Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instability are combined with observed values of the growth parameters in these instabilities to derive coefficient sets for K-ɛ and K-L-a Reynolds-averaged turbulence models. It is shown that full numerical solutions of the model equations give mixing layer widths, fields, and budgets in good agreement with the corresponding self-similar quantities for small Atwood number. Both models are then applied to Rayleigh-Taylor instability with increasing density contrasts to estimate the Atwood number above which the self-similar solutions become invalid. The models are also applied to a reshocked Richtmyer-Meshkov instability, and the predictions are compared with data. The expressions for the growth parameters obtained from the similarity analysis are used to develop estimates for the sensitivity of their values to changes in important model coefficients. Numerical simulations using these modified coefficient values are then performed to provide bounds on the model predictions associated with uncertainties in these coefficient values. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. This work was supported by the 2016 LLNL High-Energy-Density Physics Summer Student Program.

  18. Evidence for x -dependent proton color fluctuations in p A collisions at the CERN Large Hadron Collider

    DOE PAGES

    Alvioli, M.; Cole, B. A.; Frankfurt, L.; ...

    2016-01-21

    The centrality dependence of forward jet production in pA collisions at the Large Hadron Collider (LHC) has been found to grossly violate the Glauber model prediction in a way that depends on the x in the proton. In this paper, we argue that this modification pattern provides the first experimental evidence for x-dependent proton color fluctuation effects. On average, parton configurations in the projectile proton containing a parton with large x interact with a nuclear target with a significantly smaller than average cross section and have smaller than average size. We implement the effects of fluctuations of the interaction strength and, using the ATLAS analysis of how hadron production at backward rapidities depends on the number of wounded nucleons, make quantitative predictions for the centrality dependence of the jet production rate as a function of the x-dependent interaction strength σ(x). We find that σ(x) ~ 0.6⟨σ⟩ gives a good description of the data at x = 0.6. Finally, these findings support an explanation of the European Muon Collaboration effect as arising from the suppression of small-size nucleon configurations in the nucleus.

  19. In-Vitro and In-Vivo Noise Analysis for Optical Neural Recording

    PubMed Central

    Foust, Amanda J.; Schei, Jennifer L.; Rojas, Manuel J.; Rector, David M.

    2008-01-01

    Laser diodes (LD) are commonly used for optical neural recordings in chronically recorded animals and humans, primarily due to their brightness and small size. However, noise introduced by LDs may counteract the benefits of brightness when compared to low-noise light emitting diodes (LEDs). To understand noise sources in optical recordings, we systematically compared instrument and physiological noise profiles in two recording paradigms. A better understanding of noise sources will help improve optical recordings and make them more practical with fewer averages. We stimulated lobster nerves and rat cortex, then compared the root mean square (RMS) noise and signal-to-noise ratios (SNRs) of data obtained with LED, superluminescent diode (SLD) and LD illumination for different numbers of averages. The LED data exhibited significantly higher SNRs in fewer averages than LD data in all recordings. In the absence of tissue, LED noise increased linearly with intensity, while LD noise increased sharply in the transition to lasing and settled to noise levels significantly higher than the LED's, suggesting that speckle noise contributed to the LD's higher noise and lower SNRs. Our data recommend low coherence and portable light sources for in-vivo chronic neural recording applications. PMID:19021365

  20. Scale disparity and spectral transfer in anisotropic numerical turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Yeung, P. K.; Brasseur, James G.

    1994-01-01

    To study the effect of cancellations within long-range interactions on local isotropy at the small scales, we calculate explicitly the degree of cancellation in distant interactions in the simulations of Yeung & Brasseur and Yeung, Brasseur & Wang using the single scale disparity parameter 's' developed by Zhou. In the simulations, initially isotropic simulated turbulence was subjected to coherent anisotropic forcing at the large scales and the smallest scales were found to become anisotropic as a consequence of direct large-small scale couplings. We find that the marginally distant interactions in the simulation do not cancel out under summation and that the development of small-scale anisotropy is indeed a direct consequence of the distant triadic group, as argued by Yeung et al. A reduction of anisotropy at later times occurs as a result of the isotropizing influences of more local energy-cascading triadic interactions. Nevertheless, the local-to-nonlocal triadic group persists as an isotropizing influence at later times. We find that, whereas long-range interactions, in general, contribute little to net energy transfer into or out of a high wavenumber shell k, the anisotropic transfer of component energy within the shell increases with increasing scale separations. These results are consistent with results by Zhou and by Brasseur & Wei, and suggest that the anisotropizing influences of long-range interactions should persist to higher Reynolds numbers. The residual effect of the forced distant group in this low-Reynolds number simulation is found to be forward cascading, on average.

  1. The relations between network-operation and topological-property in a scale-free and small-world network with community structure

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Yao, Bing

    2017-10-01

    Generating tractable models that simulate dynamical functions and reveal the inner principles of complex systems and networks remains an open and demanding task. In this article, since many real-life and artificial networks are built from series of simple, small groups (components), we discuss some useful network operations for generating more realistic network models. In view of community structure (modular topology), we present a class of sparse network models N(t, m). We show that N(t, 4) has not only the scale-free feature, meaning that the probability that a randomly selected vertex has degree k decays as a power law, P(k) ∼ k^(-γ), where γ is the degree exponent, but also the small-world property: the typical distance between two uniformly randomly chosen vertices grows proportionally to the logarithm of the order of N(t, 4), i.e. a relatively short diameter and low average path length, while the network simultaneously displays a high clustering coefficient. Next, as a topological parameter correlated with the reliability, synchronization capability and diffusion properties of networks, the number of spanning trees of a network is studied in more detail, and an exact analytical solution for the number of spanning trees of N(t, 4) is obtained. Based on these network operations, linking hub vertices with each other should be helpful for structuring various network models and investigating rules related to real-life networks.
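
    The number of spanning trees mentioned above can be computed exactly for any small graph via Kirchhoff's matrix-tree theorem (a generic sketch; this is not the authors' closed-form solution for N(t, 4)):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """Number of spanning trees of an undirected graph on n vertices,
    via Kirchhoff's matrix-tree theorem: any cofactor of the Laplacian."""
    # Build the graph Laplacian L = D - A with exact rational entries.
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Determinant of the (n-1)x(n-1) minor (delete last row and column).
    m = [row[: n - 1] for row in L[: n - 1]]
    det = Fraction(1)
    for i in range(n - 1):
        # partial pivoting: find a nonzero pivot in column i
        p = next((r for r in range(i, n - 1) if m[r][i] != 0), None)
        if p is None:
            return 0  # disconnected graph: no spanning tree
        if p != i:
            m[i], m[p] = m[p], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n - 1):
            f = m[r][i] / m[i][i]
            for c in range(i, n - 1):
                m[r][c] -= f * m[i][c]
    return int(det)
```

    For the complete graph K4 this reproduces Cayley's formula n^(n-2) = 16.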

  2. The evolution of the complex sensory and motor systems of the human brain.

    PubMed

    Kaas, Jon H

    2008-03-18

    Inferences about how the complex sensory and motor systems of the human brain evolved are based on the results of comparative studies of brain organization across a range of mammalian species, and evidence from the endocasts of fossil skulls of key extinct species. The endocasts of the skulls of early mammals indicate that they had small brains with little neocortex. Evidence from comparative studies of cortical organization from small-brained mammals of the six major branches of mammalian evolution supports the conclusion that the small neocortex of early mammals was divided into roughly 20-25 cortical areas, including primary and secondary sensory fields. In early primates, vision was the dominant sense, and cortical areas associated with vision in temporal and occipital cortex underwent a significant expansion. Comparative studies indicate that early primates had 10 or more visual areas, and somatosensory areas with expanded representations of the forepaw. Posterior parietal cortex was also expanded, with a caudal half dominated by visual inputs, and a rostral half dominated by somatosensory inputs with outputs to an array of seven or more motor and visuomotor areas of the frontal lobe. Somatosensory areas and posterior parietal cortex became further differentiated in early anthropoid primates. As larger brains evolved in early apes and in our hominin ancestors, the number of cortical areas increased to reach an estimated 200 or so in present day humans, and hemispheric specializations emerged. The large human brain grew primarily by increasing neuron number rather than increasing average neuron size.

  3. High-resolution observations of the polar magnetic fields of the sun

    NASA Technical Reports Server (NTRS)

    Lin, H.; Varsik, J.; Zirin, H.

    1994-01-01

    High-resolution magnetograms of the solar polar region were used for the study of the polar magnetic field. In contrast to low-resolution magnetograph observations which measure the polar magnetic field averaged over a large area, we focused our efforts on the properties of the small magnetic elements in the polar region. Evolution of the filling factor (the ratio of the area occupied by the magnetic elements to the total area) of these magnetic elements, as well as the average magnetic field strength, were studied during the maximum and declining phase of solar cycle 22, from early 1991 to mid-1993. We found that during the sunspot maximum period, the polar regions were occupied by about equal numbers of positive and negative magnetic elements, with equal average field strength. As the solar cycle progresses toward sunspot minimum, the magnetic field elements in the polar region become predominantly of one polarity. The average magnetic field of the dominant polarity elements also increases with the filling factor. Meanwhile, both the filling factor and the average field strength of the non-dominant polarity elements decrease. The combined effects of the changing filling factors and average field strength produce the observed evolution of the integrated polar flux over the solar cycle. We compared the evolutionary histories of both filling factor and average field strength, for regions of high (70-80 deg) and low (60-70 deg) latitudes. For the south pole, we found no significant evidence of difference in the time of reversal. However, the low-latitude region of the north pole did reverse polarity much earlier than the high-latitude region. It later showed an oscillatory behavior. We suggest this may be caused by the poleward migration of flux from a large active region in 1989 with highly imbalanced flux.
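
    The filling factor and average field strength defined above can be sketched for a toy magnetogram (the element-detection threshold is an assumption for illustration, not the paper's value):

```python
def polar_field_stats(field_map, threshold):
    """Filling factor and mean field strength for each polarity.

    field_map: 2D list of line-of-sight field values (e.g. in gauss); pixels
    with |B| >= threshold count as magnetic elements. Returns, per polarity
    (+1 or -1), a tuple (filling_factor, mean |B| over element pixels).
    """
    total = 0
    stats = {+1: [0, 0.0], -1: [0, 0.0]}   # polarity -> [pixel count, |B| sum]
    for row in field_map:
        for b in row:
            total += 1
            if abs(b) >= threshold:
                sign = 1 if b > 0 else -1
                stats[sign][0] += 1
                stats[sign][1] += abs(b)
    return {s: (cnt / total, (tot / cnt) if cnt else 0.0)
            for s, (cnt, tot) in stats.items()}
```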

  4. On the relation between the small world structure and scientific activities.

    PubMed

    Ebadi, Ashkan; Schiffauerova, Andrea

    2015-01-01

    Modern science has become more complex and interdisciplinary, which might encourage researchers to be more collaborative and to engage in larger collaboration networks. Various aspects of collaboration networks have been examined so far to detect the most determinant factors in knowledge creation and scientific production. One network structure that has recently attracted much theoretical attention is the small world. It has been suggested that small-world structure can improve information transmission among network actors. In this paper, using data on 12 periods of journal publications of Canadian researchers in natural sciences and engineering, the co-authorship networks of the researchers are created. Through measuring small-world indicators, the small-worldness of the network and its relation to researchers' productivity, the quality of their publications, and scientific team size are assessed. Our results show that the examined co-authorship network strictly exhibits the small-world properties. In addition, it is suggested that in a small-world network researchers expand their team size by connecting to other experts in the field. This team size expansion may result in higher productivity of the whole team as a result of gaining access to new resources, benefitting from internal referring, and exchanging ideas among team members. Moreover, although the small-world structure is positively correlated with the quality of the articles in terms of both citation count and journal impact factor, it is negatively related to the average productivity of researchers in terms of the number of their publications.
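
    The two standard small-world indicators, the clustering coefficient and the average shortest-path length, can be sketched in a self-contained way (a generic implementation, not the authors' pipeline; a small-world network has high clustering and short path lengths relative to a comparable random graph):

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set(neighbors)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbors contribute 0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs
```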

  5. On the Relation between the Small World Structure and Scientific Activities

    PubMed Central

    Ebadi, Ashkan; Schiffauerova, Andrea

    2015-01-01

    Modern science has become more complex and interdisciplinary, which might encourage researchers to be more collaborative and to engage in larger collaboration networks. Various aspects of collaboration networks have been examined so far to detect the most determinant factors in knowledge creation and scientific production. One network structure that has recently attracted much theoretical attention is the small world. It has been suggested that small-world structure can improve information transmission among network actors. In this paper, using data on 12 periods of journal publications of Canadian researchers in natural sciences and engineering, the co-authorship networks of the researchers are created. Through measuring small-world indicators, the small-worldness of the network and its relation to researchers' productivity, the quality of their publications, and scientific team size are assessed. Our results show that the examined co-authorship network strictly exhibits the small-world properties. In addition, it is suggested that in a small-world network researchers expand their team size by connecting to other experts in the field. This team size expansion may result in higher productivity of the whole team as a result of gaining access to new resources, benefitting from internal referring, and exchanging ideas among team members. Moreover, although the small-world structure is positively correlated with the quality of the articles in terms of both citation count and journal impact factor, it is negatively related to the average productivity of researchers in terms of the number of their publications. PMID:25780922

  6. On the multiple zeros of a real analytic function with applications to the averaging theory of differential equations

    NASA Astrophysics Data System (ADS)

    García, Isaac A.; Llibre, Jaume; Maza, Susanna

    2018-06-01

    In this work we consider real analytic functions f(x, ε, λ), where x ranges over Ω, a bounded open subset of ℝⁿ, ε is a small parameter taking values in an interval containing the origin, and λ are parameters. We study the branching of the zero-set of f at multiple points when the parameter ε varies. We apply the results obtained to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations, using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of the displacement function in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros fall into two different classes depending on whether or not they belong to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetherian ring of all real analytic functions at the point. We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results, and they are compared with the classical theory, branching theory and also in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
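
    The setting can be sketched in the standard notation of averaging theory (textbook form; the paper's exact notation may differ):

```latex
% Displacement function of the T-periodic equation, Taylor-expanded in
% \varepsilon; the coefficients f_i are the averaged functions:
d(z,\varepsilon,\lambda)
  \;=\; \sum_{i=1}^{N} \varepsilon^{i} f_i(z,\lambda)
  \;+\; \mathcal{O}(\varepsilon^{N+1}).
% Classical averaging: if f_1 = \dots = f_{k-1} \equiv 0 and z_0 is a
% SIMPLE zero of the first non-zero averaged function f_k, i.e.
% f_k(z_0) = 0 and Df_k(z_0) is non-degenerate, then a T-periodic
% solution bifurcates from z_0. The work above treats the harder case
% where z_0 is a MULTIPLE zero of f_k.
```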

  7. [Professional strategy and institutional isomorphism: the dental health insurance industry in Brazil].

    PubMed

    Vieira, Cristine; Costa, Nilson do Rosário

    2008-01-01

    This article analyzes the organizational model of the dental health insurance industry in Brazil. The main organizational players in this industry are professional cooperatives and group dental insurance companies. The theoretical basis of the article is the organizational theory developed by DiMaggio and Powell. The dental health industry consists of a great number of small and very dynamic companies; however, a substantial share of clients and profits is concentrated in a few large companies. The results show that the industry's client base has expanded since the creation of the National Health Insurance Agency. The regulatory regime has forced institutional changes in the firms with regard to patterns of market entry, permanence, and exit. There was no evidence that the regulatory rules have interfered with the development and financial condition of the industry. The average profitability of the sector, especially among the group dental insurance companies, is extremely high.

  8. A heuristic approach to handle capacitated facility location problem evaluated using clustering internal evaluation

    NASA Astrophysics Data System (ADS)

    Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.

    2018-03-01

    One of the difficulties in the capacitated facility location problem (CFLP) arises from the mismatch between facility capacities and the number of customers that need to be served: a facility with small capacity may leave customers uncovered, and these customers must be re-allocated to another facility that still has available capacity. We therefore propose an approach to the CFLP that uses the k-means clustering algorithm to allocate customers, with the decision of whether re-allocation is needed based on the overall average distance between customers and facilities. This new approach is benchmarked against the existing approach of Liao and Guo, which also uses k-means clustering as the basis for deciding facility locations and customer allocation. Both approaches are evaluated using three clustering evaluation methods based on connectedness, compactness, and separation.
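The re-allocation step described in the abstract can be sketched as follows. This is an illustrative greedy version, not the authors' algorithm: each customer tries its nearest facility first, and overflow spills to the nearest facility with capacity left. The unit-demand assumption, the closest-first ordering, and all names are assumptions:

```python
import numpy as np

def allocate_with_capacity(customers, facilities, capacity):
    # pairwise Euclidean distances, shape (n_customers, n_facilities)
    d = np.linalg.norm(customers[:, None, :] - facilities[None, :, :], axis=2)
    remaining = capacity.copy()
    assign = np.full(len(customers), -1)
    # serve customers closest to any facility first (a greedy choice)
    for i in np.argsort(d.min(axis=1)):
        for j in np.argsort(d[i]):      # try facilities nearest-first
            if remaining[j] > 0:        # spill over when a facility is full
                assign[i] = j
                remaining[j] -= 1
                break
    return assign

rng = np.random.default_rng(0)
customers = rng.uniform(0, 10, size=(12, 2))
facilities = np.array([[2.0, 2.0], [8.0, 8.0]])
assign = allocate_with_capacity(customers, facilities, np.array([6, 6]))
print(np.bincount(assign, minlength=2))  # capacities respected: [6 6]
```

Since total capacity here equals the number of customers, every customer is served and each facility ends exactly at its capacity; with slack capacity the same loop simply leaves some slots unused.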

  9. Steady-state solutions of a diffusive energy-balance climate model and their stability

    NASA Technical Reports Server (NTRS)

    Ghil, M.

    1975-01-01

    A diffusive energy-balance climate model, governed by a nonlinear parabolic partial differential equation, is studied. Three positive steady-state solutions of this equation are found; they correspond to three possible climates of our planet: an interglacial (nearly identical to the present climate), a glacial, and a completely ice-covered earth. Models similar to the main one are considered, and the number of their steady states is determined. All the models have an albedo that varies continuously with latitude and temperature, and entirely diffusive horizontal heat transfer. The stability of the main model's climates under small perturbations is investigated. A stability criterion is derived, and its application shows that the present climate and the deep freeze are stable, whereas the model's glacial is unstable. The dependence of the number of steady states and of their stability on the average solar radiation is also examined.

  10. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
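The training strategy described above, a resolution-aware statistical model fitted mostly to cheap low-resolution runs, can be sketched with scikit-learn's random forests. The toy response function, parameter ranges, and ensemble sizes below are invented stand-ins for the climate ensemble:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def toy_response(p, res):
    # stand-in for a climate quantity: depends on a perturbed parameter p
    # and shifts with resolution `res` (entirely illustrative)
    return np.sin(p) + 0.5 * res + 0.02 * rng.normal(size=p.shape)

# many cheap low-resolution runs, only a few expensive high-resolution ones
p_lo, res_lo = rng.uniform(0, 3, 300), np.zeros(300)
p_hi, res_hi = rng.uniform(0, 3, 25), np.ones(25)
X = np.column_stack([np.r_[p_lo, p_hi], np.r_[res_lo, res_hi]])
y = np.r_[toy_response(p_lo, res_lo), toy_response(p_hi, res_hi)]

# resolution enters as a feature, so the forest can reuse structure
# learned from the low-resolution runs when predicting at high resolution
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

p_test = np.linspace(0.1, 2.9, 50)
pred = rf.predict(np.column_stack([p_test, np.ones(50)]))
err = np.abs(pred - (np.sin(p_test) + 0.5)).mean()
print(f"mean abs error of high-resolution predictions: {err:.3f}")
```

The design choice mirrors the abstract: the low-resolution ensemble carries the parameter dependence, and the few high-resolution runs anchor the resolution shift, so far fewer expensive simulations are needed than if the forest were trained on high-resolution data alone.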

  11. Automated identification of OB associations in M31

    NASA Technical Reports Server (NTRS)

    Magnier, Eugene A.; Battinelli, Paolo; Lewin, Walter H. G.; Haiman, Zoltan; Paradijs, Jan Van; Hasinger, Guenther; Pietsch, Wolfgang; Supper, Rodrigo; Truemper, Joachim

    1993-01-01

    A new identification of OB associations in M31 has been performed using the Path Linkage Criterion (PLC) technique of Battinelli (1991). We found 174 associations with very small contamination (less than 5%) by random clumps of stars. The expected total number and average size of OB associations in the region of M31 covered by our data set (Magnier et al. 1992) are approximately 280 and approximately 90 pc, respectively. M31 associations therefore have sizes similar to those of OB associations observed in nearby galaxies, so we can consider them to be classical OB associations. This list of OB associations will be used for studying the spatial distribution of OB associations and their correlation with other objects. Taking into account the fact that we do not cover the entire disk of M31, we extrapolate a total number of associations in M31 of approximately 420.

  12. Wireless Control of Smartphones with Tongue Motion Using Tongue Drive Assistive Technology

    PubMed Central

    Kim, Jeonghee; Huo, Xueliang

    2010-01-01

    Tongue Drive System (TDS) is a noninvasive, wireless and wearable assistive technology that helps people with severe disabilities control their environments using their tongue motion. TDS translates specific tongue gestures to commands by detecting a small permanent magnetic tracer on the users’ tongue. We have linked the TDS to a smartphone (iPhone/iPod Touch) with a customized wireless module, added to the iPhone. We also migrated and ran the TDS sensor signal processing algorithm and graphical user interface on the iPhone in real time. The TDS-iPhone interface was evaluated by four able-bodied subjects for dialing 10-digit phone numbers using the standard telephone keypad and three methods of prompting the numbers: visual, auditory, and cognitive. Preliminary results showed that the interface worked quite reliably at a rate of 15.4 digits per minute, on average, with negligible errors. PMID:21096049

  13. Importance-sampling computation of statistical properties of coupled oscillators

    NASA Astrophysics Data System (ADS)

    Gupta, Shamik; Leitão, Jorge C.; Altmann, Eduardo G.

    2017-07-01

    We introduce and implement an importance-sampling Monte Carlo algorithm to study systems of globally coupled oscillators. Our computational method efficiently obtains estimates of the tails of the distribution of various measures of dynamical trajectories corresponding to states occurring with (exponentially) small probabilities. We demonstrate the general validity of our results by applying the method to two contrasting cases: the driven-dissipative Kuramoto model, a paradigm in the study of spontaneous synchronization; and the conservative Hamiltonian mean-field model, a prototypical system of long-range interactions. We present results for the distribution of the finite-time Lyapunov exponent and a time-averaged order parameter. Among other features, our results show most notably that the distributions exhibit a vanishing standard deviation but a skewness that increases in magnitude with the number of oscillators, implying that nontrivial asymmetries and states yielding rare or atypical values of the observables persist even for a large number of oscillators.

  14. Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models

    DOE PAGES

    Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles; ...

    2016-06-08

    In this paper, metrics are proposed (that is, a few summary statistics that condense large amounts of data from observations or model simulations) encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.

  15. Metrics for the Diurnal Cycle of Precipitation: Toward Routine Benchmarks for Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covey, Curt; Gleckler, Peter J.; Doutriaux, Charles

    In this paper, metrics are proposed (that is, a few summary statistics that condense large amounts of data from observations or model simulations) encapsulating the diurnal cycle of precipitation. Vector area averaging of Fourier amplitude and phase produces useful information in a reasonably small number of harmonic dial plots, a procedure familiar from atmospheric tide research. The metrics cover most of the globe but down-weight high-latitude wintertime ocean areas where baroclinic waves are most prominent. This enables intercomparison of a large number of climate models with observations and with each other. The diurnal cycle of precipitation has features not encountered in typical climate model intercomparisons, notably the absence of meaningful “average model” results that can be displayed in a single two-dimensional map. Displaying one map per model guides development of the metrics proposed here by making it clear that land and ocean areas must be averaged separately, but interpreting maps from all models becomes problematic as the size of a multimodel ensemble increases. Global diurnal metrics provide quick comparisons with observations and among models, using the most recent version of the Coupled Model Intercomparison Project (CMIP). This includes, for the first time in CMIP, spatial resolutions comparable to global satellite observations. Finally, consistent with earlier studies of resolution versus parameterization of the diurnal cycle, the longstanding tendency of models to produce rainfall too early in the day persists in the high-resolution simulations, as expected if the error is due to subgrid-scale physics.

  16. The effect of small rodents on northern red oak acorns in north-central West Virginia

    Treesearch

    Linda S. Gribko; David M. Hix

    1993-01-01

    The effects of small mammals on surface-sown northern red oak (Quercus rubra) acorns were assessed in highly productive Appalachian hardwood stands. Study plots were established in October 1990 on excellent (average site index of 89 feet for red oak) and good (average site index of 72 feet) sites. Each plot included: 1) a rodent-proof exclosure, 2) an exclosure...

  17. Amphibian recovery after a decrease in acidic precipitation.

    PubMed

    Dolmen, Dag; Finstad, Anders Gravbrøt; Skei, Jon Kristian

    2018-04-01

    We here report the first sign of amphibian recovery after a strong decline due to acidic precipitation over many decades, peaking around 1980-90. In 2010, the pH of ponds and small lakes in two heavily acidified areas in southwestern Scandinavia (Aust-Agder and Østfold in Norway) had risen significantly, by an (arithmetic) average of 0.14, since 1988-89. In parallel with the general rise in pH, amphibians (Rana temporaria, R. arvalis, Bufo bufo, Lissotriton vulgaris, and Triturus cristatus) had become significantly more common: the frequency of amphibian localities rose from 33% to 49% (n = 115), and the average number of amphibian species per locality rose from 0.51 to 0.88. In two other (reference) areas, one with better buffering capacity (Telemark, n = 21) and the other with much less input of acidic precipitation (Nord-Trøndelag, n = 106), there were no significant changes in pH or amphibians.

  18. Comparative performance of short-term diffusion barrier charcoal canisters and long-term alpha-track monitors for indoor 222Rn measurements.

    PubMed

    Martz, D E; George, J L; Langner, G H

    1991-04-01

    The accuracy and precision of indoor 222Rn measurements obtained with the use of diffusion barrier charcoal canisters (DBCC) under actual field conditions were determined by comparing the integrated average of 26 successive 7-d exposures of DBCC in each of 16 occupied residences over a 6-mo period with simultaneous measurements using four types of commercially available alpha-track monitors (ATM) and one type of scintillation chamber continuous 222Rn monitor. The results suggest that properly calibrated DBCCs provide very good estimates of the integrated 222Rn concentrations in residential structures over the standard 1-wk exposure period despite the occurrence of large diurnal variations in the actual 222Rn concentrations. The results also suggest that a relatively small number of 1-wk DBCC measurements at selected times throughout the calendar year would provide estimates of the annual average indoor 222Rn concentrations that compare favorably with single long-term ATM measurements.

  19. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
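The scaling-factor idea can be sketched for the univariate case. This follows the Hartung-Knapp-type adjustment the refined method builds on: the conventional standard error of the pooled effect is multiplied by a data-driven factor and paired with a t quantile. The numbers are illustrative, and the between-study variance tau2 is taken as given rather than estimated:

```python
import numpy as np
from scipy import stats

def refined_meta(y, v, tau2):
    """Random-effects pooling with a Hartung-Knapp-type scaling of the
    pooled estimate's standard error. tau2 (between-study variance) is
    taken as given here; estimating it is the hard part with few studies."""
    w = 1.0 / (v + tau2)                        # inverse-variance weights
    mu = np.sum(w * y) / np.sum(w)              # pooled effect
    se_conv = np.sqrt(1.0 / np.sum(w))          # conventional SE
    k = len(y)
    q = np.sum(w * (y - mu) ** 2) / (k - 1)     # scaling factor
    se_ref = np.sqrt(q) * se_conv               # scaled ("refined") SE
    t = stats.t.ppf(0.975, df=k - 1)            # t rather than normal quantile
    return mu, (mu - t * se_ref, mu + t * se_ref)

# four studies: exactly the small-k setting where conventional intervals
# tend to be too narrow (all numbers illustrative)
y = np.array([0.30, 0.10, 0.55, 0.20])          # study effect estimates
v = np.array([0.02, 0.03, 0.02, 0.04])          # within-study variances
mu, ci = refined_meta(y, v, tau2=0.01)
print(f"pooled effect {mu:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

When the observed spread of effects exceeds what the weights predict, q is greater than 1 and the interval widens accordingly, which is the behaviour the paper extends to the multivariate setting.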

  20. Dynamics of thermal plumes in three-dimensional isoviscous thermal convection

    NASA Astrophysics Data System (ADS)

    Zhong, Shijie

    2005-07-01

    The dynamics of mantle plumes are important for understanding intraplate volcanism and heat transfer in the mantle. Using 3-D numerical models and scaling analyses, we investigated the control of convective vigour, i.e. the Rayleigh number Ra, on the dynamics of thermal plumes in isoviscous, basally heated thermal convection. We examined the Ra dependence of plume number, plume spacing, plume vertical velocity and plume radius. We found that plume number does not increase monotonically with Ra. At relatively small Ra (<= 10^6), plume number is insensitive to Ra. For 3 × 10^6 <= Ra <= 3 × 10^7, plume number scales as Ra^0.31 and plume spacing as λ ~ Ra^(-0.16) ~ δ^(1/2), where δ is the thickness of the thermal boundary layer. However, for larger Ra (~10^8) plume number and plume spacing again become insensitive to Ra. This indicates that the box depth poses a limit on plume spacing and plume number. We demonstrate from both scaling analyses and numerical experiments that the scaling exponents for plume number, n, heat flux, β, and average velocity on the bottom boundary, v, satisfy n = 4β - 2v. Our scaling analyses also suggest that the vertical velocity in upwelling plumes scales as V_up ~ Ra^(2(1-n+β/2)/3) and that the plume radius scales as R_up ~ Ra^((β-1-n/2)/3), which differ from the scalings for the bottom boundary velocity and boundary layer thickness.

  1. Wing-kinematics measurement and aerodynamics in a small insect in hovering flight

    PubMed Central

    Cheng, Xin; Sun, Mao

    2016-01-01

    The wing motion of the hovering small fly Liriomyza sativae was measured using high-speed video, and the flows around the wings were computed numerically. The fly used a high wingbeat frequency (≈265 Hz) and a large stroke amplitude (≈182°); therefore, even though its wing length (R) was small (R ≈ 1.4 mm), the mean velocity of the wing reached ≈1.5 m/s, the same as that of an average-size insect (R ≈ 3 mm). But the Reynolds number (Re) of the wing was still low (≈40), owing to the small wing size. In achieving the large stroke amplitude, the outer parts of the wings performed a “clap and fling” motion. The mean lift coefficient was high, ≈1.85, several times larger than that of a cruising airplane. The partial “clap and fling” motion increased the lift by ≈7% compared with the case of no aerodynamic interaction between the wings. The fly mainly used the delayed-stall mechanism to generate the high lift. The lift-to-drag ratio is only 0.7 (for larger insects, with Re about 100 or higher, the ratio is 1-1.2); that is, although the small fly can produce enough lift to support its weight, it needs to overcome a larger drag to do so. PMID:27168523

  2. Bayesian road safety analysis: incorporation of past evidence and effect of hyper-prior choice.

    PubMed

    Miranda-Moreno, Luis F; Heydari, Shahram; Lord, Dominique; Fu, Liping

    2013-09-01

    This paper aims to address two related issues in applying hierarchical Bayesian models to road safety analysis, namely: (a) how to incorporate information available from previous studies or past experience into the (hyper-)prior distributions for model parameters and (b) what the potential benefits are of incorporating past evidence when working with scarce accident data (i.e., when calibrating models with crash datasets characterized by a very low average number of accidents and a small number of sites). A simulation framework was developed to evaluate the performance of alternative hyper-priors, including informative and non-informative Gamma, Pareto, and Uniform distributions. Based on this framework, different data scenarios (i.e., numbers of observations and years of data) were defined and tested using crash data collected at 3-legged rural intersections in California and crash data collected for rural 4-lane highway segments in Texas. This study shows how the accuracy of model parameter estimates (the inverse dispersion parameter) is considerably improved by incorporating past evidence, in particular when working with a small number of observations and crash data with a low mean. The results also illustrate that when the sample size (more than 100 sites) and the number of years of crash data are relatively large, neither the incorporation of past experience nor the choice of hyper-prior distribution may affect the final results of a traffic safety analysis. As a potential solution to the problem of a low sample mean and small sample size, this paper offers practical guidance on how to incorporate past evidence into informative hyper-priors. By combining evidence from past studies with the available data, model parameter estimates can be significantly improved. The choice of prior seems to be less important for hotspot identification.
The results show the benefits of incorporating prior information when working with limited crash data in road safety studies. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
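The value of informative priors with scarce data can be illustrated with a much simpler conjugate stand-in than the paper's hierarchical negative-binomial setup: a Gamma prior on a Poisson crash rate. Everything here (rates, prior parameters, sample size) is invented for illustration:

```python
import numpy as np

# The paper works with (hyper-)priors on a negative-binomial inverse
# dispersion parameter; this stand-in uses a simpler conjugate pair, a
# Gamma prior on a Poisson crash rate, purely to illustrate the point.
rng = np.random.default_rng(42)
true_rate = 0.8                              # low-mean crash data
counts = rng.poisson(true_rate, size=5)      # only five site-years observed

def posterior_mean(counts, a, b):
    # Gamma(a, b) prior + Poisson counts -> Gamma(a + sum, b + n) posterior
    return (a + counts.sum()) / (b + len(counts))

vague = posterior_mean(counts, a=0.01, b=0.01)       # near-flat hyper-prior
informative = posterior_mean(counts, a=8.0, b=10.0)  # past evidence: rate near 0.8
print(f"vague prior: {vague:.2f}, informative prior: {informative:.2f}")
```

With so few observations the vague-prior estimate tracks the noisy sample mean, while the informative prior (built from past evidence centred near the true rate) shrinks the estimate toward it, mirroring the improvement the abstract reports for small samples with low means.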

  3. KEPLER EXOPLANET CANDIDATE HOST STARS ARE PREFERENTIALLY METAL RICH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlaufman, Kevin C.; Laughlin, Gregory, E-mail: kcs@ucolick.org, E-mail: laughlin@ucolick.org

    We find that Kepler exoplanet candidate (EC) host stars are preferentially metal rich, including the low-mass stellar hosts of small-radius ECs. The last observation confirms a tentative hint that there is a correlation between the metallicity of low-mass stars and the presence of low-mass and small-radius exoplanets. In particular, we compare the J-H vs. g-r color-color distribution of Kepler EC host stars with a control sample of dwarf stars selected from the ≈150,000 stars observed during Q1 and Q2 of the Kepler mission but with no detected planets. We find that at J - H = 0.30, characteristic of solar-type stars, the average g-r color of stars that host giant ECs is 4σ redder than the average color of the stars in the control sample. At the same J - H color, the average g-r color of solar-type stars that host small-radius ECs is indistinguishable from the average color of the stars in the control sample. In addition, we find that at J - H = 0.62, indicative of late K dwarfs, the average g-r color of stars that host small-radius ECs is 4σ redder than the average color of the stars in the control sample. These offsets are unlikely to be caused by differential reddening, age differences between the two populations, or the presence of giant stars in the control sample. Stellar models suggest that the first color offset is due to a 0.2 dex enhancement in [Fe/H] of the giant EC host population at M_* ≈ 1 M_sun, while Sloan photometry of M67 and NGC 6791 suggests that the second color offset is due to a similar [Fe/H] enhancement of the small-radius EC host population at M_* ≈ 0.7 M_sun. These correlations are a natural consequence of the core-accretion model of planet formation.

  4. Characterization of oxidized tannins: comparison of depolymerization methods, asymmetric flow field-flow fractionation and small-angle X-ray scattering.

    PubMed

    Vernhet, Aude; Dubascoux, Stéphane; Cabane, Bernard; Fulcrand, Hélène; Dubreucq, Eric; Poncet-Legrand, Céline

    2011-09-01

    Condensed tannins are a major class of plant polyphenols. They play an important part in the colour and taste of foods and beverages. Because of their chemical reactivity, tannins are not stable once extracted from plants: a number of chemical reactions can take place, leading to structural changes of the native structures and giving so-called derived tannins and pigments. This paper compares results obtained on native and oxidized tannins with different techniques: depolymerization followed by high-performance liquid chromatography analysis, small-angle X-ray scattering (SAXS) and asymmetric flow field-flow fractionation (AF4). Upon oxidation, new macromolecules were formed. Thioglycolysis experiments showed no evidence of a molecular weight increase, but thioglycolysis yields decreased drastically. When oxidation was performed at high concentration (e.g., 10 g L(-1)), the weight-average degree of polymerization determined from SAXS increased, whereas it remained stable when oxidation was done at low concentration (0.1 g L(-1)), indicating that the reaction was intramolecular, yet the conformations were different. Differences in solubility were observed, with ethanol being a better solvent than water. We also separated the soluble and non-water-soluble species of a highly oxidized fraction. Thioglycolysis showed no major differences between the two fractions, whereas SAXS and AF4 showed that the insoluble macromolecules have a weight-average molecular weight ten times higher than the soluble ones.

  5. The emissions of monoaromatic hydrocarbons from small polymeric toys placed in chocolate food products.

    PubMed

    Marć, Mariusz; Formela, Krzysztof; Klein, Marek; Namieśnik, Jacek; Zabiegała, Bożena

    2015-10-15

    The article presents findings on the emissions of selected monoaromatic hydrocarbons from children's toys placed in chocolate food products. The emission test system involved the application of a new type of microscale stationary emission chamber, μ-CTE™ 250. To determine the type of polymer used in the manufacture of the tested toys, Fourier transform infrared spectroscopy and thermogravimetric analysis coupled with differential scanning calorimetry were used. It was found that the tested toy components or whole toys (figurines) are made of two main types of polymers: polyamide and acrylonitrile-butadiene-styrene copolymer. A total of 52 small polymeric toys were studied. The average emissions of selected monoaromatic hydrocarbons from the studied toys made of polyamide were as follows: benzene: 0.45 ± 0.33 ng/g; toluene: 3.3 ± 2.6 ng/g; ethylbenzene: 1.4 ± 1.4 ng/g; p,m-xylene: 2.5 ± 4.5 ng/g; and styrene: 8.2 ± 9.9 ng/g. For the studied toys made of acrylonitrile-butadiene-styrene copolymer, the average emissions of benzene, toluene, ethylbenzene, p,m-xylene and styrene were 0.31 ± 0.29 ng/g, 2.5 ± 1.4 ng/g, 4.6 ± 8.9 ng/g, 1.4 ± 1.1 ng/g and 36 ± 44 ng/g, respectively. Copyright © 2015. Published by Elsevier B.V.

  6. Translation, adaptation and validation of the American short form Patient Activation Measure (PAM13) in a Danish version.

    PubMed

    Maindal, Helle Terkildsen; Sokolowski, Ineta; Vedsted, Peter

    2009-06-29

    The Patient Activation Measure (PAM) assesses patient knowledge, skill, and confidence for self-management. This study validates the Danish translation of the 13-item Patient Activation Measure (PAM13) in a Danish population with dysglycaemia. 358 people with screen-detected dysglycaemia participating in a primary care health education study responded to PAM13. The PAM13 was translated into Danish by a standardised forward-backward translation. Data quality was assessed by mean, median, item response, missing values, floor and ceiling effects, internal consistency (Cronbach's alpha and average inter-item correlation) and item-rest correlations. Scale properties were assessed with Rasch rating scale models. The item response was high, with a small number of missing values (0.8-4.2%). The floor effect was small (range 0.6-3.6%), but the ceiling effect was above 15% for all items (range 18.6-62.7%). The alpha coefficient was 0.89 and the average inter-item correlation 0.38. The Danish version formed a unidimensional, probabilistic Guttman-like scale explaining 43.2% of the variance. We did, however, find a different item sequence compared with the original scale. A Danish version of PAM13 with acceptable validity and reliability is now available. Further development should focus on single items, response categories in relation to ceiling effects, and further validation of reproducibility and responsiveness.

  7. Theory of prokaryotic genome evolution.

    PubMed

    Sela, Itamar; Wolf, Yuri I; Koonin, Eugene V

    2016-10-11

    Bacteria and archaea typically possess small genomes that are tightly packed with protein-coding genes. The compactness of prokaryotic genomes is commonly perceived as evidence of adaptive genome streamlining caused by strong purifying selection in large microbial populations. In such populations, even the small cost incurred by nonfunctional DNA because of extra energy and time expenditure is thought to be sufficient for this extra genetic material to be eliminated by selection. However, contrary to the predictions of this model, there exists a consistent, positive correlation between the strength of selection at the protein sequence level, measured as the ratio of nonsynonymous to synonymous substitution rates, and microbial genome size. Here, by fitting the genome size distributions in multiple groups of prokaryotes to predictions of mathematical models of population evolution, we show that only models in which acquisition of additional genes is, on average, slightly beneficial yield a good fit to genomic data. These results suggest that the number of genes in prokaryotic genomes reflects the equilibrium between the benefit of additional genes that diminishes as the genome grows and deletion bias (i.e., the rate of deletion of genetic material being slightly greater than the rate of acquisition). Thus, new genes acquired by microbial genomes, on average, appear to be adaptive. The tight spacing of protein-coding genes likely results from a combination of the deletion bias and purifying selection that efficiently eliminates nonfunctional, noncoding sequences.

  8. Shading correction algorithm for cone-beam CT in radiotherapy: extensive clinical validation of image quality improvement

    NASA Astrophysics Data System (ADS)

    Joshi, K. D.; Marchant, T. E.; Moore, C. J.

    2017-03-01

    A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719-33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
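The two quality metrics described (the mean over selected same-tissue pixels, and non-uniformity as the median absolute deviation of per-region means) can be sketched as follows. The image, region masks, and function names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def region_stats(image, regions):
    """Mean over all selected same-tissue pixels, and non-uniformity as
    the median absolute deviation of the per-region mean values."""
    region_means = np.array([image[m].mean() for m in regions])
    mean_value = np.concatenate([image[m] for m in regions]).mean()
    non_uniformity = np.median(np.abs(region_means - np.median(region_means)))
    return mean_value, non_uniformity

# toy "CBCT slice" with a left-right shading gradient, in HU-like units
img = np.tile(np.linspace(-80.0, -40.0, 64), (64, 1))
m1 = np.zeros_like(img, dtype=bool); m1[10:20, 5:15] = True   # fat region 1
m2 = np.zeros_like(img, dtype=bool); m2[30:40, 50:60] = True  # fat region 2
mean_hu, nonunif = region_stats(img, [m1, m2])
print(f"mean {mean_hu:.1f} HU, non-uniformity {nonunif:.1f}")
# on a shading-free image the non-uniformity drops to zero
print(region_stats(np.full_like(img, -60.0), [m1, m2])[1])
```

Because shading artefacts shift the same tissue to different values in different parts of the image, the spread of per-region means (and hence this non-uniformity metric) falls as the shading correction improves the image.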

  9. Size effect on atomic structure in low-dimensional Cu-Zr amorphous systems.

    PubMed

    Zhang, W B; Liu, J; Lu, S H; Zhang, H; Wang, H; Wang, X D; Cao, Q P; Zhang, D X; Jiang, J Z

    2017-08-04

    The size effect on the atomic structure of a Cu64Zr36 amorphous system, including zero-dimensional small-size amorphous particles (SSAPs) and two-dimensional small-size amorphous films (SSAFs) together with the bulk sample, was investigated by molecular dynamics simulations. We revealed that sample size strongly affects the local atomic structure in both Cu64Zr36 SSAPs and SSAFs, which are composed of core and shell (surface) components. Compared with the core component, the shell component of SSAPs has a lower average coordination number and average bond length, a higher degree of ordering, and a lower packing density due to the segregation of Cu atoms on the shell of Cu64Zr36 SSAPs. These structural differences in SSAPs of various sizes result in different glass transition temperatures: the glass transition temperature for the shell component is found to be 577 K, much lower than the 910 K of the core component. We further extended the size-effect analysis of structure and glass transition temperature to Cu64Zr36 SSAFs, and revealed that Tg decreases as the SSAFs become thinner due to the following factors: different dynamic motion (mean square displacement), different densities of the core and surface, and Cu segregation on the surface of the SSAFs. The results obtained here differ from those for the size effect on the atomic structure of nanometer-sized crystalline metallic alloys.

  10. The Effects of Lightning NO(x) Production during the July 21 EULINOX Storm studied with a 3-D Cloud-scale Chemical Transport Model

    NASA Technical Reports Server (NTRS)

    Ott, Lesley E.; Pickering, Kenneth E.; Stenchikov, Georgiy L.; Huntrieser, Heidi; Schumann, Ulrich

    2006-01-01

    The July 21, 1998 thunderstorm observed during the European Lightning Nitrogen Oxides Project (EULINOX) was simulated using the three-dimensional Goddard Cumulus Ensemble (GCE) model. The simulation successfully reproduced a number of observed storm features including the splitting of the original cell into a southern cell, which developed supercell characteristics, and a northern cell, which became multicellular. Output from the GCE simulation was used to drive an offline cloud-scale chemical transport model which calculates tracer transport and includes a parameterization of lightning NO(x) production which uses observed flash rates as input. Estimates of lightning NO(x) production were deduced by assuming various values of production per intracloud and per cloud-to-ground flash and comparing the results with in-cloud aircraft observations. The assumption that both types of flashes produce 360 moles of NO per flash on average compared most favorably with column mass and probability distribution functions calculated from observations. This assumed production per flash corresponds to a global annual lightning NOx source of 7 Tg N per yr. Chemical reactions were included in the model to evaluate the impact of lightning NO(x) on ozone. During the storm, the inclusion of lightning NOx in the model results in a small loss of ozone (on average less than 4 ppbv) at all model levels. Simulations of the chemical environment in the 24 hours following the storm show on average a small increase in the net production of ozone at most levels resulting from lightning NO(x), maximizing at approximately 5 ppbv per day at 5.5 km. Between 8 and 10.5 km, lightning NO(x) causes decreased net ozone production.
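
    The step from 360 mol NO per flash to a global source of roughly 7 Tg N per year is back-of-envelope arithmetic. The flash rate below is an assumed round number of the order of satellite-based climatologies, not a figure from the study:

```python
MOL_NO_PER_FLASH = 360          # mol NO per flash (value inferred in the study)
MOLAR_MASS_N = 14.0             # g/mol, nitrogen
GLOBAL_FLASH_RATE = 44.0        # flashes per second (assumed round number)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

grams_n_per_year = MOL_NO_PER_FLASH * MOLAR_MASS_N * GLOBAL_FLASH_RATE * SECONDS_PER_YEAR
tg_n_per_year = grams_n_per_year / 1e12   # 1 Tg = 1e12 g; comes out near 7 Tg N per yr
```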

  11. Scented guide ropes as a method to enhance brown treesnake (Boiga irregularis) trap capture success on Guam

    USGS Publications Warehouse

    Mason, L.C.; Savidge, J.A.; Rodda, G.H.; Yackel Adams, A.A.

    2011-01-01

    Current methods for controlling the invasive Brown Treesnake (Boiga irregularis) on Guam include a modified minnow trap with a live mouse lure. We investigated the effects on capture success of augmenting these traps with scented guide ropes leading to trap entrances. Initial screening of scent preferences was based on time spent in scented and unscented arms of a Y-maze. Preferences of large and small snakes were scored for six different prey scents (live and carrion gecko, skink, and mouse). Large snakes spent more time in the maze arm scented with live gecko and carrion gecko, whereas small snakes spent more time in the arm scented with carrion mouse and carrion gecko. After the laboratory study, a pilot trapping session was conducted in the field using three treatments (live mouse-scented ropes, carrion gecko-scented ropes, and carrion mouse-scented ropes) and two controls (traps with unscented guide ropes and those with no ropes attached). Contrary to laboratory results, live mouse-scented ropes were most effective. We conducted a second trapping session using live mouse-scented ropes as well as the two controls used in the pilot study. For snakes of below-average to average condition, the number of captures for traps with live mouse-scented ropes was higher than for traps with no ropes. However, for snakes of above-average condition, there were no differences in capture rates between trap treatments. Overall, treatment effects were weaker than latent individual heterogeneity and the influence of snake body size, with large snakes trapped more readily. © 2011 Society for the Study of Amphibians and Reptiles.

  12. 3D-HST+CANDELS: The Evolution of the Galaxy Size-Mass Distribution since z = 3

    NASA Astrophysics Data System (ADS)

    van der Wel, A.; Franx, M.; van Dokkum, P. G.; Skelton, R. E.; Momcheva, I. G.; Whitaker, K. E.; Brammer, G. B.; Bell, E. F.; Rix, H.-W.; Wuyts, S.; Ferguson, H. C.; Holden, B. P.; Barro, G.; Koekemoer, A. M.; Chang, Yu-Yen; McGrath, E. J.; Häussler, B.; Dekel, A.; Behroozi, P.; Fumagalli, M.; Leja, J.; Lundgren, B. F.; Maseda, M. V.; Nelson, E. J.; Wake, D. A.; Patel, S. G.; Labbé, I.; Faber, S. M.; Grogin, N. A.; Kocevski, D. D.

    2014-06-01

    Spectroscopic+photometric redshifts, stellar mass estimates, and rest-frame colors from the 3D-HST survey are combined with structural parameter measurements from CANDELS imaging to determine the galaxy size-mass distribution over the redshift range 0 < z < 3. Separating early- and late-type galaxies on the basis of star-formation activity, we confirm that early-type galaxies are on average smaller than late-type galaxies at all redshifts, and we find a significantly different rate of average size evolution at fixed galaxy mass, with fast evolution for the early-type population, R_eff ∝ (1 + z)^(-1.48), and moderate evolution for the late-type population, R_eff ∝ (1 + z)^(-0.75). The large sample size and dynamic range in both galaxy mass and redshift, in combination with the high fidelity of our measurements due to the extensive use of spectroscopic data, not only fortify previous results but also enable us to probe beyond simple average galaxy size measurements. At all redshifts the slope of the size-mass relation is shallow, R_eff ∝ M_*^(0.22), for late-type galaxies with stellar mass >3 × 10^9 M_⊙, and steep, R_eff ∝ M_*^(0.75), for early-type galaxies with stellar mass >2 × 10^10 M_⊙. The intrinsic scatter is ≲0.2 dex for all galaxy types and redshifts. For late-type galaxies, the logarithmic size distribution is not symmetric but is skewed toward small sizes: at all redshifts and masses, a tail of small late-type galaxies exists that overlaps in size with the early-type galaxy population. The number density of massive (~10^11 M_⊙), compact (R_eff < 2 kpc) early-type galaxies increases from z = 3 to z = 1.5-2 and then strongly decreases at later cosmic times.
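
    The two size-evolution laws can be compared directly. A minimal sketch using the exponents quoted above:

```python
def size_ratio(z, alpha):
    """Ratio of effective radius at redshift z to its z = 0 value,
    assuming R_eff proportional to (1 + z)**alpha at fixed mass."""
    return (1.0 + z) ** alpha

# Exponents quoted above: early types evolve faster than late types.
early = size_ratio(2.0, -1.48)   # early-type galaxies at z = 2
late = size_ratio(2.0, -0.75)    # late-type galaxies at z = 2
```

    At z = 2 an early-type galaxy of fixed mass is about a fifth of its z = 0 size, while a late-type galaxy is a bit under half.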

  13. Effects of lightning NOx production during the 21 July European Lightning Nitrogen Oxides Project storm studied with a three-dimensional cloud-scale chemical transport model

    NASA Astrophysics Data System (ADS)

    Ott, Lesley E.; Pickering, Kenneth E.; Stenchikov, Georgiy L.; Huntrieser, Heidi; Schumann, Ulrich

    2007-03-01

    The 21 July 1998 thunderstorm observed during the European Lightning Nitrogen Oxides Project (EULINOX) project was simulated using the three-dimensional Goddard Cumulus Ensemble (GCE) model. The simulation successfully reproduced a number of observed storm features including the splitting of the original cell into a southern cell which developed supercell characteristics and a northern cell which became multicellular. Output from the GCE simulation was used to drive an offline cloud-scale chemical transport model which calculates tracer transport and includes a parameterization of lightning NOx production which uses observed flash rates as input. Estimates of lightning NOx production were deduced by assuming various values of production per intracloud and production per cloud-to-ground flash and comparing the results with in-cloud aircraft observations. The assumption that both types of flashes produce 360 moles of NO per flash on average compared most favorably with column mass and probability distribution functions calculated from observations. This assumed production per flash corresponds to a global annual lightning NOx source of 7 Tg N yr-1. Chemical reactions were included in the model to evaluate the impact of lightning NOx on ozone. During the storm, the inclusion of lightning NOx in the model results in a small loss of ozone (on average less than 4 ppbv) at all model levels. Simulations of the chemical environment in the 24 hours following the storm show on average a small increase in the net production of ozone at most levels resulting from lightning NOx, maximizing at approximately 5 ppbv day-1 at 5.5 km. Between 8 and 10.5 km, lightning NOx causes decreased net ozone production.

  14. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs.

    PubMed

    Dexter, Franklin; Abouleish, Amr E; Epstein, Richard H; Whitten, Charles W; Lubarsky, David A

    2003-10-01

    Potential benefits to reducing turnover times are both quantitative (e.g., complete more cases and reduce staffing costs) and qualitative (e.g., improve professional satisfaction). Analyses have shown the quantitative arguments to be unsound except for reducing staffing costs. We describe a methodology by which each surgical suite can use its own numbers to calculate its individual potential reduction in staffing costs from reducing its turnover times. Calculations estimate optimal allocated operating room (OR) time (based on maximizing OR efficiency) before and after reducing the maximum and average turnover times. At four academic tertiary hospitals, reductions in average turnover times of 3 to 9 min would result in 0.8% to 1.8% reductions in staffing cost. Reductions in average turnover times of 10 to 19 min would result in 2.5% to 4.0% reductions in staffing costs. These reductions in staffing cost are achieved predominantly by reducing allocated OR time, not by reducing the hours that staff work late. Heads of anesthesiology groups often serve on OR committees that are fixated on turnover times. Rather than having to argue based on scientific studies, this methodology provides the ability to show the specific quantitative effects (small decreases in staffing costs and allocated OR time) of reducing turnover time using a surgical suite's own data. Many anesthesiologists work at hospitals where surgeons and/or operating room (OR) committees focus repeatedly on turnover time reduction. We developed a methodology by which the reductions in staffing cost as a result of turnover time reduction can be calculated for each facility using its own data. Staffing cost reductions are generally very small and would be achieved predominantly by reducing allocated OR time to the surgeons.

  15. Estimation of genetic diversity in Gute sheep: pedigree and microsatellite analyses of an ancient Swedish breed.

    PubMed

    Rochus, Christina M; Johansson, Anna M

    2017-01-01

    Breeds with small population size are in danger of an increased inbreeding rate and loss of genetic diversity, which puts them at risk for extinction. In Sweden there are a number of local breeds, native breeds which have adapted to specific areas of Sweden, for which efforts are being made to keep them pure and healthy over time. One example of such a breed is the Swedish Gute sheep. The objective of this study was to estimate inbreeding and genetic diversity of Swedish Gute sheep. Three datasets were analysed: pedigree information for the whole population, pedigree information for 100 animals of the population, and microsatellite genotypes for 94 of the 100 animals. The average inbreeding coefficient for lambs born during a six-year period (2007-2012) did not increase over that period. The inbreeding calculated from the entire pedigree (0.038) and for a sample of the population (0.018) was very low. Sheep were more heterozygous at the microsatellite markers than expected (average multilocus heterozygosity and Ritland inbreeding estimates 1.01845 and -0.03931) and five of seven microsatellite markers were not in Hardy-Weinberg equilibrium due to heterozygosity excess. The total effective population size estimated from the pedigree information was 155.4 and the average harmonic mean effective population size estimated from microsatellites was 88.3. Pedigree and microsatellite estimates of inbreeding were consistent with a breeding program aimed at reducing inbreeding. Our results show that current breeding programs for the Swedish Gute sheep are consistent with efforts to keep this breed viable, and these programs are an example for other small local breeds of how to conserve breeds for the future.
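
    For context on the effective population sizes quoted above, the textbook link between a per-generation inbreeding rate ΔF and effective population size is Wright's relation Ne = 1/(2ΔF). The study itself used pedigree- and microsatellite-based estimators, so this is only an illustrative sketch:

```python
def effective_population_size(delta_f):
    """Wright's relation Ne = 1 / (2 * dF), where dF is the per-generation
    inbreeding rate. Illustrative only; not the study's estimator."""
    return 1.0 / (2.0 * delta_f)
```

    For example, an inbreeding rate of 0.5 % per generation corresponds to Ne = 100.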

  16. Water balance and soil losses in an irrigated catchment under conservation tillage in Southern Spain

    NASA Astrophysics Data System (ADS)

    Cid, Patricio; Mateos, Luciano; Taguas, Encarnación V.; Gómez-Macpherson, Helena

    2013-04-01

    Conservation tillage based on permanent beds with crop-residue retention and controlled traffic has recently been introduced in irrigated annual crops in Southern Spain as one way to improve water infiltration, reduce soil losses, and save energy. The water balance and soil losses in water runoff were monitored during 4 years in a 28-ha catchment within a production farm where this kind of soil conservation practice was established in 2004 for a maize-cotton-wheat rotation. The catchment average slope is 6 %. Soils are Typic Calcixerept and Typic Haploxerert. The measured water balance components include applied irrigation water, rainfall, and runoff. Runoff was measured at the outlet of the catchment by means of a hydrological station consisting of a long-throated flume, ultrasonic water level sensor, automatic water sampler, data logger and transmission system, weather station, and ancillary equipment. We present here results from three hydrological seasons (October to September): 2009-10, 2010-11, and 2011-12. In the first season the catchment was cropped with wheat, so the irrigation depth was small (25 mm); rainfall was above average (1103 mm) and the runoff coefficient was 26 %. In the 2010-11 season, the catchment was cropped with cotton; the irrigation depth was 503 mm, rainfall was 999 mm, and the seasonal runoff coefficient was 7 %. In the last season, the crop was maize, rainfall was below average (368 mm), irrigation was 590 mm, and the runoff coefficient was, as in the previous year, 7 %. Soil losses were very small: 0.05, 1.26, and 1.33 t per ha and year in the first, second, and third monitored seasons, respectively. A simple water balance model allowed simulation of evapotranspiration, deep percolation, and runoff. The Curve Number for the catchment was calibrated using the balance model.
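
    The seasonal runoff coefficients quoted above are the ratio of measured runoff to total water input (rainfall plus irrigation). A sketch, with a hypothetical runoff depth chosen only to reproduce the 7 % figure for 2010-11:

```python
def runoff_coefficient(runoff_mm, rainfall_mm, irrigation_mm):
    """Fraction of total seasonal water input (rainfall + irrigation)
    leaving the catchment as surface runoff; all depths in mm."""
    return runoff_mm / (rainfall_mm + irrigation_mm)

# 2010-11 season (cotton): rainfall 999 mm, irrigation 503 mm; the runoff
# depth here is hypothetical, chosen to match the reported 7 % coefficient.
rc = runoff_coefficient(105.14, 999, 503)
```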

  17. Developmental Changes in the Effect of Active Left and Right Head Rotation on Random Number Generation

    PubMed Central

    Sosson, Charlotte; Georges, Carrie; Guillaume, Mathieu; Schuller, Anne-Marie; Schiltz, Christine

    2018-01-01

    Numbers are thought to be spatially organized along a left-to-right horizontal axis with small/large numbers on its left/right respectively. Behavioral evidence for this mental number line (MNL) comes from studies showing that the reallocation of spatial attention by active left/right head rotation facilitated the generation of small/large numbers respectively. While spatial biases in random number generation (RNG) during active movement are well established in adults, comparable evidence in children is lacking and it remains unclear whether and how children’s access to the MNL is affected by active head rotation. To get a better understanding of the development of embodied number processing, we investigated the effect of active head rotation on the mean of generated numbers as well as the mean difference between each number and its immediately preceding response (the first order difference; FOD) not only in adults (n = 24), but also in 7- to 11-year-old elementary school children (n = 70). Since the sign and absolute value of FODs carry distinct information regarding spatial attention shifts along the MNL, namely their direction (left/right) and size (narrow/wide) respectively, we additionally assessed the influence of rotation on the total of negative and positive FODs regardless of their numerical values as well as on their absolute values. In line with previous studies, adults produced on average smaller numbers and generated smaller mean FODs during left than right rotation. More concretely, they produced more negative/positive FODs during left/right rotation respectively and the size of negative FODs was larger (in terms of absolute value) during left than right rotation. Importantly, as opposed to adults, no significant differences in RNG between left and right head rotations were observed in children. Potential explanations for such age-related changes in the effect of active head rotation on RNG are discussed. 
Altogether, the present study confirms that numerical processing is spatially grounded in adults and suggests that its embodied aspect undergoes significant developmental changes. PMID:29541048
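
    The first order differences (FODs) analysed above are simply the signed steps between consecutive generated numbers: the sign encodes direction on the mental number line and the absolute value the size of the attention shift. A minimal sketch with an illustrative sequence:

```python
def first_order_differences(seq):
    """Signed difference between each generated number and its predecessor.
    Negative = leftward/smaller step on the mental number line,
    positive = rightward/larger step."""
    return [b - a for a, b in zip(seq, seq[1:])]

# Illustrative sequence, not data from the study:
fods = first_order_differences([5, 3, 8, 2])   # [-2, 5, -6]
```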

  18. Analysis of turbulent transport and mixing in transitional Rayleigh–Taylor unstable flow using direct numerical simulation data

    DOE PAGES

    Schilling, Oleg; Mueschke, Nicholas J.

    2010-10-18

    Data from a 1152×760×1280 direct numerical simulation (DNS) of a transitional Rayleigh-Taylor mixing layer modeled after a small Atwood number water channel experiment is used to comprehensively investigate the structure of mean and turbulent transport and mixing. The simulation had physical parameters and initial conditions approximating those in the experiment. The budgets of the mean vertical momentum, heavy-fluid mass fraction, turbulent kinetic energy, turbulent kinetic energy dissipation rate, heavy-fluid mass fraction variance, and heavy-fluid mass fraction variance dissipation rate equations are constructed using Reynolds averaging applied to the DNS data. The relative importance of mean and turbulent production, turbulent dissipation and destruction, and turbulent transport are investigated as a function of Reynolds number and across the mixing layer to provide insight into the flow dynamics not presently available from experiments. The analysis of the budgets supports the assumption for small Atwood number, Rayleigh-Taylor-driven flows that the principal transport mechanisms are buoyancy production, turbulent production, turbulent dissipation, and turbulent diffusion (shear and mean field production are negligible). As the Reynolds number increases, the turbulent production in the turbulent kinetic energy dissipation rate equation becomes the dominant production term, while the buoyancy production plateaus. Distinctions between momentum and scalar transport are also noted, where the turbulent kinetic energy and its dissipation rate both grow in time and are peaked near the center plane of the mixing layer, while the heavy-fluid mass fraction variance and its dissipation rate initially grow and then begin to decrease as mixing progresses and reduces density fluctuations.
All terms in the transport equations generally grow or decay, with no qualitative change in their profile, except for the pressure flux contribution to the total turbulent kinetic energy flux, which changes sign early in time (a countergradient effect). The production-to-dissipation ratios corresponding to the turbulent kinetic energy and heavy-fluid mass fraction variance are large and vary strongly at small evolution times, decrease with time, and nearly asymptote as the flow enters a self-similar regime. The late-time turbulent kinetic energy production-to-dissipation ratio is larger than observed in shear-driven turbulent flows. The order of magnitude estimates of the terms in the transport equations are shown to be consistent with the DNS at late time, and also confirm both the dominant terms and their evolutionary behavior. Thus, these results are useful for identifying the dynamically important terms requiring closure, and assessing the accuracy of the predictions of Reynolds-averaged Navier-Stokes and large-eddy simulation models of turbulent transport and mixing in transitional Rayleigh-Taylor instability-generated flow.

  19. Release from the cone ribbon synapse under bright light conditions can be controlled by the opening of only a few Ca2+ channels

    PubMed Central

    Bartoletti, Theodore M.; Jackman, Skyler L.; Babai, Norbert; Mercer, Aaron J.; Kramer, Richard H.

    2011-01-01

    Light hyperpolarizes cone photoreceptors, causing synaptic voltage-gated Ca2+ channels to open infrequently. To understand neurotransmission under these conditions, we determined the number of L-type Ca2+ channel openings necessary for vesicle fusion at the cone ribbon synapse. Ca2+ currents (ICa) were activated in voltage-clamped cones, and excitatory postsynaptic currents (EPSCs) were recorded from horizontal cells in the salamander retina slice preparation. Ca2+ channel number and single-channel current amplitude were calculated by mean-variance analysis of ICa. Two different comparisons—one comparing average numbers of release events to average ICa amplitude and the other involving deconvolution of both EPSCs and simultaneously recorded cone ICa—suggested that fewer than three Ca2+ channel openings accompanied fusion of each vesicle at the peak of release during the first few milliseconds of stimulation. Opening fewer Ca2+ channels did not enhance fusion efficiency, suggesting that few unnecessary channel openings occurred during strong depolarization. We simulated release at the cone synapse, using empirically determined synaptic dimensions, vesicle pool size, Ca2+ dependence of release, Ca2+ channel number, and Ca2+ channel properties. The model replicated observations when a barrier was added to slow Ca2+ diffusion. Consistent with the presence of a diffusion barrier, dialyzing cones with diffusible Ca2+ buffers did not affect release efficiency. The tight clustering of Ca2+ channels, along with a high-Ca2+ affinity release mechanism and diffusion barrier, promotes a linear coupling between Ca2+ influx and vesicle fusion. This may improve detection of small light decrements when cones are hyperpolarized by bright light. PMID:21880934
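
    The mean-variance (nonstationary noise) analysis used above to estimate channel number and single-channel current fits the parabolic relation var(I) = i*I - I**2/N. A least-squares sketch with synthetic values; the helper name and numbers are illustrative, not the authors' code:

```python
import numpy as np

def mean_variance_fit(mean_i, var_i):
    """Least-squares fit of var = i*I - I**2 / N, returning the
    single-channel current i and channel count N."""
    mean_i = np.asarray(mean_i, dtype=float)
    design = np.column_stack([mean_i, -mean_i**2])
    coef, *_ = np.linalg.lstsq(design, np.asarray(var_i, dtype=float), rcond=None)
    i_single, inv_n = coef
    return i_single, 1.0 / inv_n
```

    With synthetic data generated from i = 0.5 pA and N = 100 channels, the fit recovers both parameters.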

  20. The Effect of Citalopram on Midbrain CRF Receptors 1 and 2 in a Primate Model of Stress-Induced Amenorrhea

    PubMed Central

    Senashova, Olga; Reddy, Arubala P.; Cameron, Judy L.; Bethea, Cynthia L.

    2012-01-01

    We have demonstrated marked differences in the neurobiology of the serotonin system between stress-sensitive (SS) and stress-resilient (SR) cynomolgus macaques characterized in a model of stress-induced amenorrhea, also called functional hypothalamic amenorrhea (FHA). Dysfunction of the serotonin system in SS monkeys suggested that administration of a selective serotonin reuptake inhibitor (SSRI) might correct FHA. This study examines the effect of escitalopram (CIT) administration to SS and SR monkeys on corticotrophin-releasing factor (CRF) receptor 1 (CRF-R1) and CRF receptor 2 (CRF-R2) gene expression in the serotonin cell body region of the midbrain dorsal raphe. CRF-R1 was not significantly different between groups. There was a significant effect of treatment and a significant interaction between treatment and stress sensitivity on the average CRF-R2-positive pixel area (P < .004 and P < .006, respectively) and on the average number of CRF-R2-positive cells (P < .023 and P < .025, respectively). CIT significantly increased CRF-R2-positive pixel area and cell number in the SS group (pixel area P < .001; cell number P < .01; Bonferroni) but not in the SR group. In summary, CIT administration tended to decrease CRF-R1, but the small animal number precluded significance. CIT administration significantly increased CRF-R2 only in SS animals. These data suggest that the administration of CIT reduces anxiogenic components and increases anxiolytic components of the CRF system in the midbrain serotonin network, which in turn leads to improved ovarian function. Moreover, these data raise the possibility that SSRIs may be effective in the treatment of stress-induced infertility. PMID:22412189

  1. Release from the cone ribbon synapse under bright light conditions can be controlled by the opening of only a few Ca(2+) channels.

    PubMed

    Bartoletti, Theodore M; Jackman, Skyler L; Babai, Norbert; Mercer, Aaron J; Kramer, Richard H; Thoreson, Wallace B

    2011-12-01

    Light hyperpolarizes cone photoreceptors, causing synaptic voltage-gated Ca(2+) channels to open infrequently. To understand neurotransmission under these conditions, we determined the number of L-type Ca(2+) channel openings necessary for vesicle fusion at the cone ribbon synapse. Ca(2+) currents (I(Ca)) were activated in voltage-clamped cones, and excitatory postsynaptic currents (EPSCs) were recorded from horizontal cells in the salamander retina slice preparation. Ca(2+) channel number and single-channel current amplitude were calculated by mean-variance analysis of I(Ca). Two different comparisons-one comparing average numbers of release events to average I(Ca) amplitude and the other involving deconvolution of both EPSCs and simultaneously recorded cone I(Ca)-suggested that fewer than three Ca(2+) channel openings accompanied fusion of each vesicle at the peak of release during the first few milliseconds of stimulation. Opening fewer Ca(2+) channels did not enhance fusion efficiency, suggesting that few unnecessary channel openings occurred during strong depolarization. We simulated release at the cone synapse, using empirically determined synaptic dimensions, vesicle pool size, Ca(2+) dependence of release, Ca(2+) channel number, and Ca(2+) channel properties. The model replicated observations when a barrier was added to slow Ca(2+) diffusion. Consistent with the presence of a diffusion barrier, dialyzing cones with diffusible Ca(2+) buffers did not affect release efficiency. The tight clustering of Ca(2+) channels, along with a high-Ca(2+) affinity release mechanism and diffusion barrier, promotes a linear coupling between Ca(2+) influx and vesicle fusion. This may improve detection of small light decrements when cones are hyperpolarized by bright light.

  2. Impact of orphan drugs on Latvian budget.

    PubMed

    Logviss, Konstantins; Krievins, Dainis; Purvina, Santa

    2016-05-11

    The number of orphan medicinal products on the market and the number of rare disease patients taking these usually expensive products are increasing. As a result, the budget impact of orphan drugs is growing. This factor, along with the cost-effectiveness of orphan drugs, is often considered in reimbursement decisions, directly affecting the accessibility of rare disease therapies. The current study aims to assess the budget impact of orphan drugs in Latvia. Our study covered a 5-year period, from 2010 to 2014. The impact of orphan drugs on the Latvian budget was estimated from the National Health Service's perspective. It was calculated in absolute values and relative to the total pharmaceutical market and the total drug reimbursement budget. A literature review was performed for comparison with other European countries. Orphan drug annual expenditure ranged between EUR 2.065 and 3.065 million, with a total 5-year expenditure of EUR 12.467 million. It constituted, on average, 0.84 % of the total pharmaceutical market and 2.14 % of the total drug reimbursement budget, respectively. Average annual per-patient expenditures varied widely, from EUR 1 534 to EUR 580 952. The most costly treatment was enzyme replacement therapy (Elaprase) for MPS II. Glivec had the highest share (34 %) of the total orphan drug expenditure. Oncological drugs represented more than half of the total orphan drug expenditure, followed by drugs for metabolic and endocrine conditions and medicines for cardiopulmonary diseases. Three indications: Ph+ CML, MPS II, and PAH accounted for nearly 90 % of the total orphan drug expenditure. The budget impact of orphan drugs in Latvia is very small. It increased slightly over the five-year period, due to slight increases in the number of patients and the number of orphan drugs reimbursed. The current Latvian drug reimbursement system is not sufficient for most orphan drugs.

  3. Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Singer, J.; Armstrong, J. T.

    2016-12-01

    Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Reference: Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v101, p1839-1853, 2016. Figure 1. Uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 um beam size and 4000 msec per pixel), for both off-peak and MAN background methods without (a), and with (b), the blank correction applied. We see precision significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
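
    The core of the MAN calibration is a regression of on-peak continuum intensity, measured on standards free of the element of interest, against their mean atomic number (Zbar), evaluated at the unknown's Zbar. A minimal sketch; the function name, fit degree, and values are illustrative, not the published calibration:

```python
import numpy as np

def man_background(zbar_std, counts_std, zbar_unknown, degree=1):
    """Fit continuum counts from element-free standards against their
    mean atomic number, then interpolate at the unknown's Zbar.
    Degree and values are illustrative."""
    fit = np.polynomial.Polynomial.fit(zbar_std, counts_std, degree)
    return float(fit(zbar_unknown))
```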

  4. Using state-issued identification cards for obesity tracking.

    PubMed

    Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P

    2015-01-01

    Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
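
    The underlying computation for each record is the standard body mass index from self-reported height and weight. A minimal sketch with an illustrative record, not data from the Oregon dataset:

```python
def bmi(weight_kg, height_m):
    """Body mass index; BMI >= 30 is the conventional obesity threshold."""
    return weight_kg / height_m**2

# Illustrative record (95 kg, 1.75 m), not from the study:
is_obese = bmi(95, 1.75) >= 30
```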

  5. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.
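
    On the noise side of the tradeoff above, equal-weight averaging of n redundant scan lines with independent, identically distributed noise improves the signal-to-noise ratio by a factor of sqrt(n). A sketch of just that relation; the paper's optimisation additionally trades this gain against resolution error:

```python
import math

def averaged_snr(snr_single, n_lines):
    """SNR after equal-weight averaging of n_lines redundant scan lines,
    assuming independent, identically distributed noise per line."""
    return snr_single * math.sqrt(n_lines)
```

    For example, averaging 4 overlapping lines doubles the SNR of a single line.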

  6. Plasma concentrations of acyl-ghrelin are associated with average daily gain and feeding behavior in grow-finish pigs.

    PubMed

    Lents, C A; Brown-Brandl, T M; Rohrer, G A; Oliver, W T; Freking, B A

    2016-04-01

    The objectives of this study were to determine the effect of sex, sire line, and litter size on concentrations of acyl-ghrelin and total ghrelin in plasma of grow-finish pigs and to understand the relationship of plasma concentrations of ghrelin with feeding behavior, average daily gain (ADG), and back fat in grow-finish swine. Yorkshire-Landrace crossbred dams were inseminated with semen from Yorkshire, Landrace, or Duroc sires. Within 24 h of birth, pigs were cross-fostered into litter sizes of normal (N; >12 pigs/litter) or small (S; ≤ 9 pigs/litter). At 8 wk of age, pigs (n = 240) were blocked by sire breed, sex, and litter size and assigned to pens (n = 6) containing commercial feeders modified with a system to monitor feeding behavior. Total time eating, number of daily meals, and duration of meals were recorded for each individual pig. Body weight was recorded every 4 wk. Back fat and loin eye area were recorded at the conclusion of the 12-wk feeding study. A blood sample was collected at week 7 of the study to quantify concentrations of acyl- and total ghrelin in plasma. Pigs from small litters weighed more (P < 0.05) and tended (P = 0.07) to be fatter than pigs from normal litters. Postnatal litter size did not affect ADG, feeding behavior, or concentrations of ghrelin in plasma during the grow-finish phase. Barrows spent more time eating (P < 0.001) than gilts, but the number of meals and concentrations of ghrelin did not differ with sex of the pig. Pigs from Duroc and Yorkshire sires had lesser (P < 0.0001) concentrations of acyl-ghrelin than pigs from Landrace sires, but plasma concentrations of total ghrelin were not affected by sire breed. Concentrations of acyl-ghrelin were positively correlated with the number of meals and negatively correlated with meal length and ADG (P < 0.05). 
A larger number of short-duration meals may indicate that pigs with greater concentrations of acyl-ghrelin consumed less total feed, which likely explains why they were leaner and grew more slowly. Acyl-ghrelin is involved in regulating feeding behavior in pigs, and measuring acyl-ghrelin is important when trying to understand the role of this hormone in swine physiology. Published by Elsevier Inc.

  7. MODELING GALACTIC EXTINCTION WITH DUST AND 'REAL' POLYCYCLIC AROMATIC HYDROCARBONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulas, Giacomo; Casu, Silvia; Cecchi-Pestellini, Cesare

    We investigate the remarkable apparent variety of galactic extinction curves by modeling extinction profiles with core-mantle grains and a collection of single polycyclic aromatic hydrocarbons. Our aim is to translate a synthetic description of dust into physically well-grounded building blocks through the analysis of a statistically relevant sample of different extinction curves. All different flavors of observed extinction curves, ranging from the average galactic extinction curve to virtually 'bumpless' profiles, can be described by the present model. We prove that a mixture of a relatively small number (54 species in 4 charge states each) of polycyclic aromatic hydrocarbons can reproduce the features of the extinction curve in the ultraviolet, dismissing an old objection to the contribution of polycyclic aromatic hydrocarbons to the interstellar extinction curve. Despite the large number of free parameters (at most the 54 × 4 column densities of each species in each ionization state included in the molecular ensemble, plus the 9 parameters defining the physical properties of classical particles), we can strongly constrain some physically relevant properties, such as the total number of C atoms in all species and the mean charge of the mixture. Such properties are found to be largely independent of the adopted dust model, whose variation produces effects that are orthogonal to those brought about by the molecular component. Finally, the fitting procedure, together with some physical sense, suggests (but does not require) the presence of an additional component of chemically different very small carbonaceous grains.

  8. Peripherally Inserted Central Catheter Placement with the Sonic Flashlight: Initial Clinical Trial by Nurses

    PubMed Central

    Wang, David; Amesur, Nikhil; Shukla, Gaurav; Bayless, Angela; Weiser, David; Scharl, Adam; Mockel, Derek; Banks, Christopher; Mandella, Bernadette; Klatzky, Roberta; Stetten, George

    2010-01-01

    Objective: We describe a case series comprising the first clinical trial by intravenous (IV) team nurses using the Sonic Flashlight for ultrasound guidance of peripherally inserted central catheter (PICC) placement. Methods: Two IV team nurses with more than 10 years' experience placing PICCs and 3 to 6 years' experience with ultrasound (US) attempted to place PICCs under US guidance in patients requiring long-term IV access. One of two methods of US guidance was used: conventional ultrasound (CUS) (60 patients) or a new device called the Sonic Flashlight (SF) (44 patients). The number of needle punctures required to gain IV access was recorded for each subject. Results: With both methods, 87% of the cases resulted in successful venous access on the first attempt. The average number of needle sticks per patient was 1.18 for SF-guided procedures, compared to 1.20 for CUS-guided procedures. No significant difference was found in the distribution of the number of attempts between the two methods. Anecdotal comments by the nurses indicated the comparative ease of use of the SF display, although the relatively small scale of the SF image compared to the CUS image was also noted. Conclusions: We have shown that the Sonic Flashlight is a safe and effective device for guidance of PICC placement in the hands of experienced IV team nurses. The advantage of placing the ultrasound image at its actual location must be balanced against the relatively small scale of the SF image. PMID:19389904

  9. 7 CFR 786.106 - Determination of losses incurred.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of the dairy operation's average number of cows in the dairy herd and actual commercial production... cow determined according to the following: (1) The average of annual marketed production during the base period calendar years of 2003 and 2004, divided by; (2) The average number of cows in the dairy...
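    The quoted rule reduces to simple arithmetic; the figures below are hypothetical, purely to illustrate it.

```python
# Sketch of the per-cow production rule quoted above: average the annual
# marketed production over the 2003-2004 base period, then divide by the
# average number of cows in the dairy herd. All figures are hypothetical.

def per_cow_production(marketed_2003, marketed_2004, avg_cows):
    avg_production = (marketed_2003 + marketed_2004) / 2
    return avg_production / avg_cows

# e.g., 1.9M and 2.1M lbs marketed over an average herd of 100 cows
rate = per_cow_production(1_900_000, 2_100_000, 100)  # 20,000 lbs per cow
```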

  10. Highly turbulent solutions of the Lagrangian-averaged Navier-Stokes alpha model and their large-eddy-simulation potential.

    PubMed

    Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick

    2007-11-01

    We compute solutions of the Lagrangian-averaged Navier-Stokes α (LANS-α) model for significantly higher Reynolds numbers (up to Re ≈ 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-α model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of ≈1300. Analysis of the third-order structure function scaling supports the predicted l³ scaling; it corresponds to a k⁻¹ scaling of the energy spectrum for scales smaller than α. The energy spectrum itself shows a different scaling, which goes as k⁺¹. This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-α model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l³ [E(k) ~ k⁻¹] scaling is subdominant to k⁺¹ in the energy spectrum, but the l³ scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We verify the prediction for the size of the LANS-α attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions of the LANS-α model, or for obtaining a formulation of large eddy simulation that is optimal in the context of the α models. The fully converged grid-independent LANS-α model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-α instead of the primitive equations.
Furthermore, the small-scale behavior of the LANS-α model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large α. These small-scale features, however, do not preclude the LANS-α model from correctly reproducing the intermittency properties of high-Reynolds-number flow.

  11. Irradiation-induced microchemical changes in highly irradiated 316 stainless steel

    NASA Astrophysics Data System (ADS)

    Fujii, K.; Fukuya, K.

    2016-02-01

    Cold-worked 316 stainless steel specimens irradiated to 74 dpa in a pressurized water reactor (PWR) were analyzed by atom probe tomography (APT) to extend knowledge of solute clusters and segregation at higher doses. The analyses confirmed that clusters enriched mainly in Ni-Si or Ni-Si-Mn were formed at high number density. The clusters were divided into three types based on their size and Mn content: small Ni-Si clusters (3-4 nm in diameter), and large Ni-Si and large Ni-Si-Mn clusters (8-10 nm in diameter). The total cluster number density was 7.7 × 10²³ m⁻³; the fraction of large clusters was almost 1/10 of the total density. The average composition (in at%) for small clusters was Fe, 54; Cr, 12; Mn, 1; Ni, 22; Si, 11; Mo, 1, and for large clusters it was Fe, 44; Cr, 9; Mn, 2; Ni, 29; Si, 14; Mo, 1. It is likely that some of the Ni-Si clusters correspond to γ′ phase precipitates, while the Ni-Si-Mn clusters were precursors of G-phase precipitates. The APT analyses at grain boundaries confirmed enrichment of Ni, Si, P and Cu and depletion of Fe, Cr, Mo and Mn. The segregation behavior was consistent with previous knowledge of radiation-induced segregation.

  12. [MORPHO-FUNCTIONAL CHARACTERISTIC OF THE THYROID GLAND AFTER ADMINISTRATION OF COPPER NANOPARTICLES].

    PubMed

    Polyakova, V S; Sizova, Ye A; Miroshnikov, S A; Notova, S V; Zavaleyeva, S M

    2015-01-01

    The study was conducted on 15 male Wistar rats, which were injected intramuscularly with copper nanoparticle lyosols at a dose of 2.0 mg/kg body weight once a week for 3 months. The thyroid gland was removed 7 days after the last injection and studied using histological, morphometric and immunohistochemical methods (demonstration of Ki-67 and caspase-3). Animals of the control group (n = 15) were administered distilled water at the same time intervals. It was found that after 1, 2, 3 and 4 injections of copper nanoparticles, the thyroid gland contained an increased number of large cyst-like follicles, while the average thyrocyte height and the volume of thyrocyte nuclei were reduced, indicating a decline in hormone production. After 12 injections of copper nanoparticles, the hyperplastic thyroid gland demonstrated small follicles lined with columnar epithelium that contained little or no colloid. The number of mitotically dividing thyrocytes was increased. Parafollicular cells demonstrated an apoptotic dominant. Morphological data suggest a goitrogenic effect of multiple doses of copper nanoparticles. The data obtained, which indicate the readiness of thyroid cells for programmed death and its possible suppression (absence of signs of thyrocyte apoptosis) at different stages of the experiment, confirm the modulating effect of copper on apoptosis.

  13. Phase Transition for the Maki-Thompson Rumour Model on a Small-World Network

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Pachon, Angelica; Rodriguez, Pablo M.; Tavani, Flavia

    2017-11-01

    We consider the Maki-Thompson model for the stochastic propagation of a rumour within a population. In this model the population is made up of "spreaders", "ignorants" and "stiflers"; a spreader attempts to pass the rumour to other individuals via pair-wise interactions: if the other individual is an ignorant, it becomes a spreader, while in the other two cases the initiating spreader turns into a stifler. In a finite population the process eventually reaches an equilibrium in which every individual is either a stifler or an ignorant. We extend the original hypothesis of a homogeneously mixed population by embedding the model in a small-world network, so that interactions occur only between nearest neighbours. This structure is realized starting from a k-regular ring and inserting, on average, c additional links, so that k and c are tuneable parameters of the population architecture. We prove that this system exhibits a transition between regimes of localization (where the final number of stiflers is at most logarithmic in the population size) and propagation (where the final number of stiflers grows algebraically with the population size) at a finite value of the network parameter c. A quantitative estimate for the critical value of c is obtained via extensive numerical simulations.
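    For illustration, here is a minimal simulation of the classic homogeneously mixed Maki-Thompson dynamics, the baseline that the paper embeds in a small-world network. Population size, seeds, and the mean-field figure of roughly 20% final ignorants are illustrative assumptions, not the paper's results.

```python
# Sketch of homogeneously mixed Maki-Thompson dynamics: a random spreader
# contacts a random other individual; an ignorant contact becomes a spreader,
# otherwise the initiating spreader turns into a stifler.
import random

def maki_thompson(n, seed=None):
    """Run one realization to absorption; return the final number of ignorants."""
    rng = random.Random(seed)
    state = ["I"] * n          # I = ignorant, S = spreader, R = stifler
    state[0] = "S"
    spreaders = {0}
    while spreaders:
        i = rng.choice(tuple(spreaders))
        j = rng.randrange(n - 1)
        if j >= i:             # uniform partner distinct from i
            j += 1
        if state[j] == "I":
            state[j] = "S"
            spreaders.add(j)
        else:                  # contact with a spreader or stifler
            state[i] = "R"
            spreaders.discard(i)
    return state.count("I")

# In the mean-field limit, roughly a fifth of the population (~0.203) never
# hears the rumour; averaging a few runs of n=500 should land near 100.
final_ignorants = sum(maki_thompson(500, seed=s) for s in range(20)) / 20
```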

  14. Protein domain assignment from the recurrence of locally similar structures

    PubMed Central

    Tai, Chin-Hsien; Sam, Vichetra; Gibrat, Jean-Francois; Garnier, Jean; Munson, Peter J.

    2010-01-01

    Domains are basic units of protein structure and essential for exploring protein fold space and structure evolution. With the structural genomics initiative, the number of protein structures in the Protein Data Bank (PDB) is increasing dramatically, and domain assignment needs to be done automatically. Most existing structural domain assignment programs define domains using the compactness of the domains and/or the number and strength of intra-domain versus inter-domain contacts. Here we present a different approach based on the recurrence of locally similar structural pieces (LSSPs) found by one-against-all structure comparisons with a dataset of 6,373 protein chains from the PDB. Residues of the query protein are clustered using LSSPs via three different procedures to define domains. This approach gives results comparable to several existing programs that use geometrical and other structural information explicitly. Remarkably, most of the proteins that contribute the LSSPs defining a domain do not themselves contain the domain of interest. This study shows that domains can be defined by a collection of relatively small locally similar structural pieces containing, on average, four secondary structure elements. In addition, it indicates that domains are indeed made of recurrent small structural pieces that are used to build protein structures of many different folds, as suggested by recent studies. PMID:21287617

  15. Structural convergence properties of amorphous InGaZnO4 from simulated liquid-quench methods.

    PubMed

    Buchanan, Jacob C; Fast, Dylan B; Hanken, Benjamin E; Mustard, Thomas J L; Laurita, Geneva; Chiang, Tsung-Han; Keszler, Douglas A; Subramanian, Mas A; Wager, John F; Dolgos, Michelle R; Rustad, James R; Cheong, Paul Ha-Yeon

    2017-11-14

    The study of structural properties of amorphous structures is complicated by the lack of long-range order and necessitates the use of both cutting-edge computer modeling and experimental techniques. With regard to the computer modeling, many questions on convergence arise when trying to assess the accuracy of a simulated system. What cell size maximizes the accuracy while remaining computationally efficient? More importantly, does averaging multiple smaller cells adequately describe features found in bulk amorphous materials? How small is too small? The aims of this work are: (1) to report a newly developed set of pair potentials for InGaZnO4 and (2) to explore the effects of structural parameters such as simulation cell size and cell number on the structural convergence of amorphous InGaZnO4. The total number of formula units considered over all runs is found to be the critical factor in convergence, as long as each cell contains a minimum of circa fifteen formula units. There is qualitative agreement between these simulations and X-ray total scattering data: peak trends and locations are consistently reproduced, while intensities are weaker. These new IGZO pair potentials are a valuable starting point for future structural refinement efforts.

  16. Towards a comprehensive knowledge of the star cluster population in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Piatti, A. E.

    2018-07-01

    The Small Magellanic Cloud (SMC) has recently been found to harbour an increase of more than 200 per cent in its known cluster population. Here, we provide solid evidence that this unprecedented number of clusters could be greatly overestimated. On the one hand, the fully automatic procedure used to identify such an enormous cluster candidate sample did not recover ∼50 per cent, on average, of the known relatively bright clusters located in the SMC main body. On the other hand, the number of new cluster candidates per unit time as a function of time is noticeably different from the intrinsic SMC cluster frequency (CF), which should not be the case if these new detections were genuine physical systems. We found additionally that the SMC CF varies spatially, in such a way that it resembles an outside-in process coupled with the effects of a relatively recent interaction with the Large Magellanic Cloud. By assuming that clusters and field stars share the same formation history, we showed for the first time that the cluster dissolution rate also depends on position in the galaxy. Cluster dissolution becomes faster as the concentration of galaxy mass increases or where external tidal forces are present.

  17. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity by assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip-flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries in which fluid properties may also vary longitudinally in the tube. The problem was solved recursively, starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients; these tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model's steady-state response to evaluate its regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6; beyond this point, molecular effects caused the model to become inaccurate.

  18. Patient, Physician and Organizational Influences on Variation in Antipsychotic Prescribing Behavior.

    PubMed

    Tang, Yan; Chang, Chung-Chou H; Lave, Judith R; Gellad, Walid F; Huskamp, Haiden A; Donohue, Julie M

    2016-03-01

    Physicians face a choice among multiple ingredients when prescribing drugs in many therapeutic categories. For conditions with considerable patient heterogeneity in treatment response, customizing treatment to individual patient needs and preferences may improve outcomes. Our objectives were to assess variation in the diversity of antipsychotic prescribing for mental health conditions, a necessary although not sufficient condition for personalizing treatment, and to identify patient caseload, physician, and organizational factors associated with the diversity of antipsychotic prescribing. Using 2011 data from Pennsylvania's Medicaid program, IMS Health's HCOS™ database, and the AMA Masterfile, we identified 764 psychiatrists who prescribed antipsychotics to ≥10 patients. We constructed three physician-level measures of the diversity/concentration of antipsychotic prescribing: number of ingredients prescribed, share of prescriptions for the most preferred ingredient, and the Herfindahl-Hirschman index (HHI). We used multiple membership linear mixed models to examine patient caseload, physician, and healthcare organizational predictors of physician concentration of antipsychotic prescribing. There was substantial variability in antipsychotic prescribing concentration among psychiatrists, with the number of ingredients ranging from 2 to 17, the share for the most preferred ingredient from 16% to 85%, and HHI from 1,088 to 7,270. On average, psychiatrist prescribing behavior was relatively diversified; however, 11% of psychiatrists wrote an average of 55% of their prescriptions for their most preferred ingredient. Female prescribers and those with smaller shares of disabled patients or patients with serious mental illness had more concentrated prescribing behavior on average. Antipsychotic prescribing by individual psychiatrists in a large state Medicaid program varied substantially across psychiatrists.
Our findings illustrate the importance of understanding physicians' prescribing behavior and indicate that even among specialties regularly prescribing a therapeutic category, some physicians rely heavily on a small number of agents. Health systems may need to offer educational interventions to clinicians in order to improve their ability to tailor treatment decisions to the needs of individual patients. Future studies should examine the impact of the diversity of antipsychotic prescribing to determine whether more diversified prescribing improves patient adherence and outcomes.
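    The three concentration measures described above are straightforward to compute. A sketch with a hypothetical psychiatrist's prescription counts follows; HHI is taken here in its usual convention as the sum of squared percentage shares, so it ranges up to 10,000 for a single-ingredient prescriber.

```python
# Sketch: physician-level concentration measures from prescription counts per
# antipsychotic ingredient. The counts and drug names are hypothetical.

def prescribing_concentration(counts):
    """Return (number of ingredients, top ingredient share, HHI)."""
    total = sum(counts.values())
    shares = [n / total for n in counts.values()]
    n_ingredients = len(counts)
    top_share = max(shares)                       # share for most preferred ingredient
    hhi = sum((100 * s) ** 2 for s in shares)     # sum of squared percentage shares
    return n_ingredients, top_share, hhi

counts = {"risperidone": 55, "olanzapine": 25, "quetiapine": 15, "aripiprazole": 5}
n_ing, top, hhi = prescribing_concentration(counts)
# 4 ingredients, a 55% top share, and HHI = 55^2 + 25^2 + 15^2 + 5^2 = 3900
```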

  19. Modeling Cancer Cell Growth Dynamics In vitro in Response to Antimitotic Drug Treatment

    PubMed Central

    Lorz, Alexander; Botesteanu, Dana-Adriana; Levy, Doron

    2017-01-01

    Investigating the role of intrinsic cell heterogeneity emerging from variations in cell-cycle parameters and apoptosis is a crucial step toward better informing drug administration. Antimitotic agents, widely used in chemotherapy, target exclusively proliferative cells and commonly induce a prolonged mitotic arrest followed by cell death via apoptosis. In this paper, we developed a physiologically motivated mathematical framework for describing cancer cell growth dynamics that incorporates the intrinsic heterogeneity in the time individual cells spend in the cell-cycle and apoptosis process. More precisely, our model comprises two age-structured partial differential equations for the proliferative and apoptotic cell compartments and one ordinary differential equation for the quiescent compartment. To reflect the intrinsic cell heterogeneity that governs the growth dynamics, proliferative and apoptotic cells are structured in “age,” i.e., the amount of time remaining to be spent in each respective compartment. In our model, we considered an antimitotic drug whose effect on the cellular dynamics is to induce mitotic arrest, extending the average cell-cycle length. The prolonged mitotic arrest induced by the drug can trigger apoptosis if the time a cell will spend in the cell cycle is greater than the mitotic arrest threshold. We studied the drug’s effect on the long-term cancer cell growth dynamics using different durations of prolonged mitotic arrest induced by the drug. Our numerical simulations suggest that at confluence and in the absence of the drug, quiescence is the long-term asymptotic behavior emerging from the cancer cell growth dynamics. This pattern is maintained in the presence of small increases in the average cell-cycle length. However, intermediate increases in cell-cycle length markedly decrease the total number of cells and can drive the cancer population to extinction. 
Intriguingly, a large “switch-on/switch-off” increase in the average cell-cycle length maintains an active cell population in the long term, with oscillating numbers of proliferative cells and a relatively constant quiescent cell number. PMID:28913178

  20. Multi-arm group sequential designs with a simultaneous stopping rule.

    PubMed

    Urach, S; Posch, M

    2016-12-30

    Multi-arm group sequential clinical trials are efficient designs for comparing multiple treatments to a control. They allow treatment effects to be tested already at interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: we consider simultaneous stopping, where the whole trial is stopped as soon as the null hypothesis of no treatment effect can be rejected for any of the arms, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, while the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive correspondingly improved Pocock- and O'Brien-type boundaries, as well as boundaries optimized to maximize the power or minimize the average sample number, and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
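    The qualitative trade-off between the two stopping rules can be illustrated by simulation. The sketch below is not the paper's method: it uses a single common critical value rather than proper group sequential boundaries, ignores the control arm's sample size, and all design numbers (effect size, stage size, critical value) are arbitrary assumptions.

```python
# Sketch: two treatment arms, one interim and one final analysis, with the same
# simulated z-statistics evaluated under both stopping rules. Simultaneous
# stopping halts the whole trial at the first interim rejection; separate
# stopping halts only the rejected arm. All parameters are illustrative.
import random

def simulate(n_trials=2000, delta=0.35, n_per_stage=50, c=2.2, seed=7):
    rng = random.Random(seed)
    asn_sim = asn_sep = rej_all_sim = rej_all_sep = 0
    for _ in range(n_trials):
        z = []  # per-arm (interim z, final z) from cumulative sums
        for _arm in range(2):
            s1 = sum(rng.gauss(delta, 1) for _ in range(n_per_stage))
            s2 = s1 + sum(rng.gauss(delta, 1) for _ in range(n_per_stage))
            z.append((s1 / n_per_stage ** 0.5, s2 / (2 * n_per_stage) ** 0.5))
        interim_rej = [zi[0] > c for zi in z]
        final_rej = [zi[0] > c or zi[1] > c for zi in z]
        if any(interim_rej):                      # simultaneous: stop everything
            asn_sim += 2 * n_per_stage
            rej_all_sim += all(interim_rej)
        else:
            asn_sim += 4 * n_per_stage
            rej_all_sim += all(final_rej)
        # separate: a rejected arm stops at interim, the other runs to the end
        asn_sep += sum(n_per_stage if r else 2 * n_per_stage for r in interim_rej)
        rej_all_sep += all(final_rej)
    return asn_sim / n_trials, asn_sep / n_trials, rej_all_sim, rej_all_sep

# Simultaneous stopping needs fewer subjects on average, but rejects *all*
# null hypotheses in at most as many trials as separate stopping.
asn_sim, asn_sep, rej_all_sim, rej_all_sep = simulate()
```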

  1. Information Flow Between Resting-State Networks.

    PubMed

    Diez, Ibai; Erramuzpe, Asier; Escudero, Iñaki; Mateos, Beatriz; Cabrera, Alberto; Marinazzo, Daniele; Sanz-Arigita, Ernesto J; Stramaglia, Sebastiano; Cortes Diaz, Jesus M

    2015-11-01

    The resting brain dynamics self-organize into a finite number of correlated patterns known as resting-state networks (RSNs). It is well known that techniques such as independent component analysis can separate the brain activity at rest into such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting-state magnetic resonance imaging. After hemodynamic response function blind deconvolution of all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce the dimensionality of each RSN and then computes the IF (estimated here in terms of transfer entropy) between the different RSNs while systematically increasing k, the number of principal components used in the calculation. When k=1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k≥1, our method calculates the k-dimensional multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension dependent, increasing from k=1 (i.e., the average voxel activity) up to a maximum occurring at k=5 and finally decaying to zero for k≥10. This suggests that a small number of components (close to five) is sufficient to describe the IF pattern between RSNs. Our method, which addresses differences in IF between RSNs for any generic data, can be used for group comparison in health or disease. To illustrate this, we calculated the inter-RSN IF in a data set of Alzheimer's disease (AD) and found that the most significant differences between AD and controls occurred for k=2, with AD additionally showing increased IF relative to controls. The spatial localization of the k=2 component within RSNs allows the characterization of IF differences between AD and controls.
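    Transfer entropy, the IF estimator named above, has a compact plug-in form for discrete signals. The sketch below uses synthetic binary series and a history length of 1; it is only meant to show the directionality the measure captures, not the paper's PCA-based multivariate computation on fMRI data.

```python
# Sketch: plug-in transfer entropy TE(x -> y) for discrete series, history 1:
# TE = sum over (y_next, y_prev, x_prev) of
#      p(y_next, y_prev, x_prev) * log2[ p(y_next|y_prev, x_prev) / p(y_next|y_prev) ]
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(x -> y) in bits, with history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_prev, x_prev)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_prev, x_prev)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_prev)
    singles_y = Counter(y[:-1])                     # y_prev
    n = len(y) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]
        p_cond_self = pairs_yy[(yn, yp)] / singles_y[yp]
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

rng = random.Random(1)
x = [rng.randrange(2) for _ in range(5000)]
y = [0] + x[:-1]                 # y copies x with a one-step lag
te_xy = transfer_entropy(x, y)   # near 1 bit: x fully predicts y's next value
te_yx = transfer_entropy(y, x)   # near 0: y adds nothing about x's future
```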

  2. Reoperations following proximal interphalangeal joint nonconstrained arthroplasties.

    PubMed

    Pritsch, Tamir; Rizzo, Marco

    2011-09-01

    To retrospectively analyze the reasons for reoperations following primary nonconstrained proximal interphalangeal (PIP) joint arthroplasty and review clinical outcomes in this group of patients with 1 or more reoperations. Between 2001 and 2009, 294 nonconstrained (203 pyrocarbon and 91 metal-plastic) PIP joint replacements were performed in our institution. A total of 76 fingers (59 patients) required reoperation (50 pyrocarbon and 26 metal-plastic). There were 40 women and 19 men with an average age of 51 years (range, 19-83 y). Primary diagnoses included osteoarthritis in 35, posttraumatic arthritis in 24, and inflammatory arthritis in 17 patients. There were 21 index, 27 middle, 18 ring, and 10 small fingers. The average number of reoperations per PIP joint was 1.6 (range, 1-4). A total of 45 joints had 1 reoperation, 19 had 2, 11 had 3, and 1 had 4. Extensor mechanism dysfunction was the most common reason for reoperation; it involved 51 of 76 fingers and was associated with Chamay or tendon-reflecting surgical approaches. Additional etiologies included component loosening in 17, collateral ligament failure in 10, and volar plate contracture in 8 cases. Inflammatory arthritis was associated with collateral ligament failure. Six fingers were eventually amputated, 9 had PIP joint arthrodeses, and 2 had resection arthroplasties. The arthrodesis and amputation rates correlated with the increased number of reoperations per finger. Clinically, most patients had no or mild pain at the most recent follow-up, and the PIP joint range-of-motion was not significantly different from preoperative values. Pain levels improved with longer follow-up. Reoperations following primary nonconstrained PIP joint arthroplasties are common. Extensor mechanism dysfunction was the most common reason for reoperation. The average reoperation rate was 1.6, and arthrodesis and amputation are associated with an increasing number of operations. 
Overall clinical outcomes demonstrated no significant change in range of motion, and most patients had mild or no pain. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  3. On the evolution of flow topology in turbulent Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2016-11-01

Small-scale dynamics lies at the heart of turbulence physics. It bears on many attributes of the flow: topology evolution, coherent structures, hairpin vortex dynamics, and the mechanism of the kinetic energy cascade. In this work, several dynamical aspects of the small-scale motions are studied numerically in the framework of Rayleigh-Bénard convection (RBC). Direct numerical simulations were carried out at two Rayleigh numbers, Ra = 10^8 and 10^10, inside an air-filled rectangular cell of unit aspect ratio and span-wise open-ended distance of π. As a main feature, the mean evolution of the invariants of the velocity gradient tensor (QG, RG) displays the well-known "teardrop" spiralling shape through the bulk region. There, the mean trajectories spiral inwards, revealing a periodic spin around the converging origin whose constant period is found to be proportional to the plume lifetime. This suggests that the thermal plumes participate in the coherent large-scale circulation and the turbulent wind created in the bulk; in particular, at the lower turbulent state the plumes elongate substantially and contribute to the large-scale eddies. Further small-scale properties common to many turbulent flows are also observed in RBC, for example the strong preferential alignment of vorticity with the intermediate eigenstrain vector, and the asymmetric alignment between vorticity and the vortex-stretching vector. It is deduced that in the hard-turbulence regime, local self-amplification of straining regions helps contract the vorticity worms and enhances the local vorticity/strain interactions that support the linear vortex-stretching contributions. In addition, the evolution of the invariants of the traceless part of the velocity-times-temperature gradient tensor is considered in order to determine the role of thermals in the fine-scale dynamics.
These new invariants combine kinetic and thermal gradient dynamics and directly indicate the evolution and lifetime of thermal plume structures. Applying the same approach, the mean rates of the new invariants show a symmetric cycling behaviour, decaying towards two skew-symmetric converging origins at the lower Ra number. The trajectories near the origins correspond to the hot and cold coherent plumes that carry the average large-scale heat flux near the sidewalls, and they exhibit a periodic spin period close to the plume lifetime. In the hard-turbulence case, the spiralling trajectories travel along shorter tracks, revealing the reduced lifetime of plumes under dissipative and mixing effects. The turbulent background kinetic derivatives become self-amplified, and the trajectories converge to a zero-valued origin, indicating that the plumes no longer contribute to the average coherent large-scale heat flux. These and other observations of the small-scale motions in RBC are discussed and may prove fruitful for modelling approaches to buoyancy-driven turbulence.
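The (QG, RG) analysis above reduces, at each point of the flow, to the two invariants of the traceless velocity gradient tensor. A minimal sketch of that pointwise computation (our own illustration, not the authors' code; the tensor below is random and is made traceless to mimic incompressibility):

```python
import numpy as np

def qr_invariants(grad_u):
    """Second and third invariants of a traceless velocity-gradient tensor.

    For incompressible flow (tr A = 0) the characteristic equation of A
    reduces to lambda^3 + Q*lambda + R = 0 with
        Q = -1/2 tr(A^2),   R = -1/3 tr(A^3).
    """
    a2 = grad_u @ grad_u
    q = -0.5 * np.trace(a2)
    r = -np.trace(a2 @ grad_u) / 3.0
    return q, r

rng = np.random.default_rng(1)
grad_u = rng.standard_normal((3, 3))
grad_u -= np.eye(3) * np.trace(grad_u) / 3.0  # enforce incompressibility
q, r = qr_invariants(grad_u)
```

Sampling (Q, R) pairs over the bulk and plotting their joint density is what produces the teardrop-shaped map discussed above.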

  4. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. Simulation of a gaseous non-reactive ethylene jet in an air crossflow captures physics essential to fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, since resolving the flow field accurately would then require prohibitively expensive grids. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, in which the resolved field adaptively influences the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30 degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and the variance of fuel concentration predictions.
For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  5. Study on characteristics of the aperture-averaging factor of atmospheric scintillation in terrestrial optical wireless communication

    NASA Astrophysics Data System (ADS)

    Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun

    2018-02-01

In order to thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication, and to provide references for engineering design and performance evaluation of optical systems employed in the atmosphere, we theoretically derive the general analytic expression for the aperture-averaging factor of atmospheric scintillation and numerically investigate its characteristics under different propagation conditions. The limitations of the commonly used approximate formula for the aperture-averaging factor are discussed; the results show that it is not applicable to small receiving apertures on non-uniform turbulence links. Numerical calculation shows that for small receiving apertures on such links the aperture-averaging factor follows an exponential decline model, and the general expression of this model is given. The model offers guidance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.
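For orientation, the kind of approximate formula the abstract critiques is the widely used Andrews-Phillips plane-wave weak-turbulence result for uniform links. A sketch of that textbook approximation (this is not the paper's new model, and the link parameters below are made-up values):

```python
import numpy as np

def aperture_averaging_factor(d, path_length, wavelength):
    """Plane-wave weak-turbulence approximation (Andrews & Phillips):
    A = [1 + 1.062 * k * d^2 / (4 L)]^(-7/6), with k = 2*pi/wavelength.
    A -> 1 for a point receiver and decreases as the aperture d grows."""
    k = 2.0 * np.pi / wavelength
    return (1.0 + 1.062 * k * d**2 / (4.0 * path_length)) ** (-7.0 / 6.0)

d = np.linspace(0.0, 0.3, 31)  # receiver aperture diameters, m (hypothetical)
a = aperture_averaging_factor(d, path_length=2000.0, wavelength=1.55e-6)
```

The scintillation index of the received power is the point-receiver value multiplied by this factor, which is why larger apertures suppress fading.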

  6. Food photography. I: The perception of food portion size from photographs.

    PubMed

    Nelson, M; Atkinson, M; Darbyshire, S

    1994-11-01

    Fifty-one male and female volunteers aged 18-90 years from a wide variety of social and occupational backgrounds completed 7284 assessments of portion size in relation to food photographs. Subjects were shown six portion sizes (two small, two medium and two large) for each of six foods, and asked to compare the amount on the plate in front of them to (a) a series of eight photographs showing weights of portions from the 5th to the 95th centile of portion size (British Adult Dietary Survey), or (b) a single photograph of the average (median) portion size. Photographs were prepared either in colour or in black and white, and in two different sizes. The order of presentation of foods; use of black and white or colour; the size of photographs; and presentation of eight or average photographs were each randomized independently. On average, the mean differences between the portion size presented and the estimate of portion size using the photographs varied from -8 to +6 g (-4 to +5%) for the series of eight photographs, and from -34 to -1 g (-23 to +9%) for the single average photograph. Large portion sizes tended to be underestimated more than medium or small portion sizes, especially when using the average photograph (from -79 to -14 g, -37 to -13%). Being female, 65 years and over, or retired, or seeing photographs in colour, were all associated with small but statistically significant overestimations of portion size. Having a body mass index > or = 30 kg/m2 was associated with an 8% underestimate of portion size. We conclude that use of a series of eight photographs is associated with relatively small errors in portion size perception, whereas use of an average photograph is consistently associated with substantial underestimation across a variety of foods.

  7. Poster - Thur Eve - 52: Clinical use of nanoDots: In-vivo dosimetry and treatment validation for stereotactic targets with VMAT techniques.

    PubMed

    Wierzbicki, W; Nicol, S; Furstoss, C; Brunet-Benkhoucha, M; Leduc, V

    2012-07-01

A newly acquired nanoDot In-Light system was compared with TLD-100 dosimeters to confirm the treatment dose in multiple cases: an electron eye treatment, H&N IMRT, and VMAT validation for small targets. (1) Eye tumour treatment with 9 MeV electrons: a dose of 1.8 Gy per fraction was prescribed to the 85% isodose. The average doses measured by three TLDs and three nanoDots were 1.90 and 1.97 Gy; both detectors overestimated the dose, by 2.9% and 6.7% respectively. (2) H&N IMRT treatment of skin cancer with 6 MV photons: the dose per fraction was 2.5 Gy. The average doses measured by two TLDs and two nanoDots were 2.48 and 2.56 Gy, which represent errors of -0.8% and 2.2%, respectively. (3) VMAT validation for small targets using an Agarose phantom, dose 15 Gy: a single-tumour brain treatment was delivered using two coplanar arcs to an Agarose phantom containing a large plastic insert holding 3 nanoDots and 4 TLDs. The difference between the average Pinnacle dose and the average dose of the corresponding detectors was -0.6% for nanoDots and -1.7% for TLDs. A two-tumour brain treatment was delivered using three non-coplanar arcs; small and large plastic inserts separated by 5 cm were used to validate the dose. The differences between the average Pinnacle dose and the average detector doses were as follows: small phantom, 0.7% for nanoDots and 0.3% for TLDs; large phantom, -1.9% for nanoDots and -0.6% for TLDs. In conclusion, nanoDot detectors are suitable for in-vivo dosimetry with photon and electron beams. © 2012 American Association of Physicists in Medicine.

  8. Icing Frequencies Experienced During Climb and Descent by Fighter-Interceptor Aircraft

    NASA Technical Reports Server (NTRS)

    Perkins, Porter J.

    1958-01-01

Data and analyses are presented on the relative frequencies of occurrence and severity of icing cloud layers encountered by jet aircraft in the climb and descent phases of flights to high altitudes. Fighter-interceptor aircraft operated by the Air Defense Command (USAF) at bases in the Duluth and Seattle areas collected the data with icing meters installed for a 1-year period. The project was part of an extensive program conducted by the NACA to collect icing cloud data for evaluating the icing problem relevant to routine operations. The average frequency of occurrence of icing was found to be about 5 percent of the number of climbs and descents during 1 year of operations. The icing encounters were predominantly in the low and middle cloud layers, decreasing above 15,000 feet to practically none above 25,000 feet. The greatest thickness of ice that would accumulate on any aircraft component (as indicated by the accretion on a small object) was measured with the icing meters. The ice thicknesses on a small sensing probe averaged less than 1/32 inch and did not exceed 1/2 inch. Such accumulations are relatively small when compared with those that can form during horizontal flight in icing clouds. The light accretions resulted from relatively steep angles of flight through generally thin cloud layers. Because of the limited statistical reliability of the results, an analysis was made using previous statistics on icing clouds below an altitude of 20,000 feet to determine the general icing severity probabilities. The calculations used adiabatic lifting as a basis to establish the liquid-water content. Probabilities of over-all ice accretions on a small object as a function of airspeed and rate of climb were computed from the derived water contents. These results were then combined with the probability of occurrence of icing in order to give the icing severity that can be expected for routine aircraft operations.

  9. PoLi: A Virtual Screening Pipeline Based On Template Pocket And Ligand Similarity

    PubMed Central

    Roy, Ambrish; Srinivasan, Bharath; Skolnick, Jeffrey

    2015-01-01

    Often in pharmaceutical research, the goal is to identify small molecules that can interact with and appropriately modify the biological behavior of a new protein target. Unfortunately, most proteins lack both known structures and small molecule binders, prerequisites of many virtual screening, VS, approaches. For such proteins, ligand homology modeling, LHM, that copies ligands from homologous and perhaps evolutionarily distant template proteins, has been shown to be a powerful VS approach to identify possible binding ligands. However, if we want to target a specific pocket for which there is no homologous holo template protein structure, then LHM will not work. To address this issue, in a new pocket based approach, PoLi, we generalize LHM by exploiting the fact that the number of distinct small molecule ligand binding pockets in proteins is small. PoLi identifies similar ligand binding pockets in a holo-template protein library, selectively copies relevant parts of template ligands and uses them for VS. In practice, PoLi is a hybrid structure and ligand based VS algorithm that integrates 2D fingerprint-based and 3D shape-based similarity metrics for improved virtual screening performance. On standard DUD and DUD-E benchmark databases, using modeled receptor structures, PoLi achieves an average enrichment factor of 13.4 and 9.6 respectively, in the top 1% of the screened library. In contrast, traditional docking based VS using AutoDock Vina and homology-based VS using FINDSITEfilt have an average enrichment of 1.6 (3.0) and 9.0 (7.9) on the DUD (DUD-E) sets respectively. Experimental validation of PoLi predictions on dihydrofolate reductase, DHFR, using differential scanning fluorimetry, DSF, identifies multiple ligands with diverse molecular scaffolds, thus demonstrating the advantage of PoLi over current state-of-the-art VS methods. PMID:26225536
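The enrichment factors quoted above (top 1% of the screened library) follow the standard definition: the hit rate among the top-ranked fraction divided by the hit rate of the whole library. A small sketch with synthetic scores (illustrative data only, not the DUD/DUD-E benchmarks):

```python
def enrichment_factor(scores, is_active, top_frac=0.01):
    """EF at top_frac: hit rate in the top-scoring fraction divided by the
    hit rate in the full library. EF = 1 means no better than random."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_top = max(1, int(round(top_frac * len(scores))))
    top_hits = sum(is_active[i] for i in order[:n_top])
    overall_rate = sum(is_active) / len(is_active)
    return (top_hits / n_top) / overall_rate

# 100 compounds, 10 actives; a perfect ranking puts an active at the very
# top, so EF@1% = (1/1) / (10/100) = 10
scores = list(range(100, 0, -1))
is_active = [1] * 10 + [0] * 90
ef = enrichment_factor(scores, is_active)
```

The maximum attainable EF at a fraction f is min(1/f, N/n_actives), which is why EF@1% values around 10-13 on these benchmarks indicate strong early recovery.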

  10. Queueing system analysis of multi server model at XYZ insurance company in Tasikmalaya city

    NASA Astrophysics Data System (ADS)

    Muhajir, Ahmad; Binatari, Nikenasih

    2017-08-01

Queueing theory, or waiting-line theory, deals with the process by which a customer arrives, queues to be served, is served, and leaves the service facility. Queues occur because of a mismatch between the number of customers to be served and the available number of servers, as for example at the XYZ insurance company in Tasikmalaya. This research aims to determine the characteristics of the queueing system and then to optimize the number of servers in terms of total cost. The results show that the queue can be represented by the (M/M/4):(GD/∞/∞) model, where arrivals are Poisson distributed and service times follow an exponential distribution. The probability that a customer-service desk is idle is 2.39% of the working time, the average number of customers in the queue is 3, the average number of customers in the system is 6, the average time a customer spends in the queue is 15.9979 minutes, the average time a customer spends in the system is 34.4141 minutes, and the average number of busy servers is 3. The optimized number of servers is 5, at which the operational cost reaches its minimum of Rp 4.323.
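The (M/M/4):(GD/∞/∞) figures above come from the standard multi-server Erlang formulas. A sketch of those formulas (the arrival rate λ and service rate μ are not given in the abstract, so the sanity check below uses the textbook M/M/1 special case rather than the company's data):

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Steady-state metrics of an (M/M/c):(GD/inf/inf) queue.

    lam: arrival rate, mu: per-server service rate, c: number of servers;
    requires lam < c * mu. Returns (p0, lq, l, wq, w): the empty-system
    probability, mean queue length, mean number in system, mean wait in
    queue, and mean time in system.
    """
    a = lam / mu                       # offered load (Erlangs)
    rho = a / c                        # per-server utilization
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1.0 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1.0 - rho) ** 2)
    l = lq + a                         # Little's law: L = Lq + lam/mu
    wq = lq / lam
    w = wq + 1.0 / mu
    return p0, lq, l, wq, w
```

For M/M/1 with lam = 1, mu = 2 this reduces to the familiar p0 = 1 - rho = 0.5 and Lq = rho^2/(1 - rho) = 0.5.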

  11. Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model

    NASA Astrophysics Data System (ADS)

    Vazifedan, Turaj; Shitan, Mahendran

Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the observed values are large, it is usual to model the series with a Gaussian Autoregressive Moving Average (ARMA) process. However, if the observed counts are small, an ARMA process is not appropriate; in such cases the data should be modelled with a Non-Negative Integer-valued Autoregressive (INAR) process, which is based on the binomial thinning operator. In this paper we illustrate the modelling of count data using the monthly number of poliomyelitis cases in the United States from January 1970 to December 1983. We applied the AR(1) model, a Poisson regression model, and the INAR(1) model, and assessed their suitability using the Index of Agreement (I.A.). We found the INAR(1) model most appropriate, both because it achieved the best I.A. and because it is the natural choice for count data.
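An INAR(1) process replaces the scalar multiplication of an ordinary AR(1) with binomial thinning, so every value stays a non-negative integer. A minimal simulation sketch (parameter values are illustrative, not fitted to the polio data):

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate X_t = alpha ∘ X_{t-1} + eps_t with eps_t ~ Poisson(lam).

    'alpha ∘ X' denotes binomial thinning: each of the X_{t-1} counts
    survives independently with probability alpha, which keeps the process
    integer-valued. The stationary mean is lam / (1 - alpha).
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))   # start near the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=20000)
```

With Poisson innovations the stationary distribution is itself Poisson with mean lam/(1 - alpha), here 4, which the long simulated series should reproduce.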

  12. The Temporal Evolution of the Nanostructure of a Model Ni-Al-Cr Superalloy

    NASA Technical Reports Server (NTRS)

    Sudbrack, Chantal K.; Yoon, Kevin E.; Noebe, Ronald D.; Seidman, David N.

    2004-01-01

    The early to the later stages of precipitation of ordered gamma'-precipitates (L1(sub 2)) in Ni-5.2 Al-14.2 Cr (at.%) are studied at 873 K. Precipitates with radii as small as 0.45 nm are characterized fully by three-dimensional atom-probe (3DAP) microscopy. Contrary to what is often assumed by theory or in models, the average precipitate composition is shown to evolve with time, such that solute concentrations decrease toward an equilibrium value given by the solvus lines. Power-law time dependencies of the number density, mean radius, and supersaturations of Al and Cr are discussed in light of theoretical predictions for Ostwald ripening.

  13. Test-to-Test Repeatability of Results From a Subsonic Wing-Body Configuration in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Pendergraft, Odis C., Jr.

    2000-01-01

    Results from three wind tunnel tests in the National Transonic Facility of a model of an advanced-technology, subsonic-transport wing-body configuration have been analyzed to assess the test-to-test repeatability of several aerodynamic parameters. The scatter, as measured by the prediction interval, in the longitudinal force and moment coefficients increases as the Mach number increases. Residual errors with and without the ESP tubes installed suggest a bias leading to lower drag with the tubes installed. Residual errors as well as average values of the longitudinal force and moment coefficients show that there are small bias errors between the different tests.

  14. Transient effects in ice nucleation of a water drop impacting onto a cold substrate

    NASA Astrophysics Data System (ADS)

    Schremb, Markus; Roisman, Ilia V.; Tropea, Cameron

    2017-02-01

    The impact of water drops onto a solid surface at subfreezing temperatures has been experimentally studied. Drop nucleation has been observed using a high-speed video system. The statistics of nucleation allows the estimation of the average number of nucleation sites per unit area of the wetted part of the substrate. We have discovered that the nucleation rate in the impacting drop is not constant. The observed significant increase of the nucleation rate at small times after impact t <50 ms can be explained by the generation of nanobubbles at early times of drop impact. These bubbles serve as additional nucleation sites and enhance the nucleation rate.
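One standard way to turn freezing statistics into a site density is to assume nucleation sites are Poisson-distributed over the wetted area, so that the probability a drop freezes is one minus the probability of containing no site. The paper's exact estimator may differ; this is a generic sketch with hypothetical numbers:

```python
import math

def site_density(frozen_fraction, wetted_area):
    """Mean number of nucleation sites per unit area, assuming sites are
    Poisson-distributed over the wetted patch: the probability that a drop
    contains at least one site is 1 - exp(-n_s * A), hence
    n_s = -ln(1 - frozen_fraction) / A."""
    return -math.log(1.0 - frozen_fraction) / wetted_area

# e.g. if a fraction 1 - exp(-2) ≈ 86.5% of drops wetting a unit area
# freeze, the inferred density is exactly 2 sites per unit area
n_s = site_density(1.0 - math.exp(-2.0), wetted_area=1.0)
```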

  15. Plasmids as stochastic model systems

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2003-05-01

Plasmids are self-replicating gene clusters present in on average 2-100 copies per bacterial cell. To reduce random fluctuations and thereby avoid extinction, they ubiquitously autoregulate their own synthesis using negative feedback loops. Here I use van Kampen's Ω-expansion for a two-dimensional model of negative feedback including plasmids and their replication inhibitors. This analytically summarizes the standard perspective on replication control, including the effects of sensitivity amplification, exponential time-delays and noisy signaling. I further review the two most common molecular sensitivity mechanisms: multistep control and cooperativity. Finally, I discuss more controversial sensitivity schemes, such as noise-enhanced sensitivity, the exploitation of small-number combinatorics and double-layered feedback loops to suppress noise in disordered environments.

  16. Transient effects in ice nucleation of a water drop impacting onto a cold substrate.

    PubMed

    Schremb, Markus; Roisman, Ilia V; Tropea, Cameron

    2017-02-01

    The impact of water drops onto a solid surface at subfreezing temperatures has been experimentally studied. Drop nucleation has been observed using a high-speed video system. The statistics of nucleation allows the estimation of the average number of nucleation sites per unit area of the wetted part of the substrate. We have discovered that the nucleation rate in the impacting drop is not constant. The observed significant increase of the nucleation rate at small times after impact t<50 ms can be explained by the generation of nanobubbles at early times of drop impact. These bubbles serve as additional nucleation sites and enhance the nucleation rate.

  17. Decline in sea snake abundance on a protected coral reef system in the New Caledonian Lagoon

    NASA Astrophysics Data System (ADS)

    Goiran, C.; Shine, R.

    2013-03-01

Monitoring results from a small reef (Ile aux Canards) near Noumea in the New Caledonian Lagoon reveal that numbers of turtle-headed sea snakes (Emydocephalus annulatus) have been in consistent decline over a 9-year period, with average daily counts of snakes decreasing from >6 to <2 over this period. Causal factors for the decline are unclear, because the site is a protected area used only for tourism. Our results suggest that wildlife management authorities should carefully monitor sea snake populations to check whether the declines now documented for New Caledonia and in nearby Australian waters also occur around the islands of the Indo-Pacific.

  18. Periodontal status and selected parameters of oral condition of Poles aged 65 to 74 years.

    PubMed

    Konopka, Tomasz; Dembowska, Elżbieta; Pietruska, Małgorzata; Dymalski, Paweł; Górska, Renata

    2015-01-01

The goal of this study was to evaluate periodontal health, by means of the CPI score, in inhabitants of big and small Polish cities aged 65 to 74, and to compare the results with previous Polish and European studies from the 21st century. The average number of natural teeth, the percentage of edentulous persons, the percentage of people with maintained oral function, and the prevalence of oral mucosal diseases were also evaluated, along with essential oral-health behaviours and the percentage of people treated with non-reimbursed or reimbursed services. Studies were conducted in 5 big cities (Warszawa, Szczecin, Wrocław, Białystok and Toruń) and in 4 towns (Oława, Police, Łobez and Ełk). Of the 7400 people aged 65 to 74 sampled, only 807 reported for the study. The oral examination assessed the CPI score, the number of natural teeth, and the prevalence of pathological lesions of the oral mucosa; answers to questions on selected attitudes and health-seeking behaviours related to oral health and the range of dental treatment were also analysed. The distribution of CPI codes in the whole group was as follows: CPI0, 1.2%; CPI1, 9.4%; CPI2, 16.6%; CPI3, 21.8%; CPI4, 19.7%; excluded from examination (1 tooth in a sextant, or edentulous), 31.3%. The state of the periodontium was worse in big cities and in men. The average number of teeth was 13.7 and was higher in big cities and in men. The percentage of edentulous persons was 28.9% and was higher in towns and in women. The percentage of people with maintained oral function was 25.15% and was higher in big cities and in men. The three most common oral pathologies were leukoplakia and leukokeratosis, found in 10.5% of those examined, candidiasis (5.82%) and lichen planus (2.2%).
The state of the periodontium of Poles aged 65 to 74 has not improved in the 21st century, but it also does not differ significantly from the European average. The average number of remaining teeth in Poles of this age has increased but remains below the European average; the prevalence of edentulism has decreased but is still among the highest in Europe. The percentage of people with maintained oral function is very low, so the need for prosthetic treatment and rehabilitation of masticatory ability remains high. Precancerous lesions of the oral cavity are quite common in this age group. Health-seeking behaviours related to oral health among older Poles are inadequate and result from a low level of knowledge of dental prophylaxis.

  19. Flight Test Results from the Rake Airflow Gage Experiment on the F-15B

    NASA Technical Reports Server (NTRS)

    Frederick, Michael; Ratnayake, Nalin

    2011-01-01

The results are described of the Rake Airflow Gage Experiment (RAGE), which was designed and fabricated to support the flight test of a new supersonic inlet design using Dryden's Propulsion Flight Test Fixture (PFTF) and F-15B testbed airplane (see figure). The PFTF is a unique pylon that was developed for flight-testing propulsion-related experiments such as inlets, nozzles, and combustors over a range of subsonic and supersonic flight conditions. The objective of the RAGE program was to quantify the local flowfield at the aerodynamic interface plane of the Channeled Centerbody Inlet Experiment (CCIE). The CCIE is a fixed representation of a conceptual mixed-compression supersonic inlet with a translating biconic centerbody. The primary goal of RAGE was to identify the relationship between free-stream and local Mach number in the low supersonic regime, with emphasis on the identification of the particular free-stream Mach number that produced a local Mach number of 1.5. Measurements of the local flow angularity, total pressure distortion, and dynamic pressure over the interface plane were also desired. The experimental data for the RAGE program were obtained during two separate research flights. During both flights, local flowfield data were obtained during straight and level acceleration segments out to steady-state test points. The data obtained from the two flights showed small variations in Mach number, flow angularity, and dynamic pressure across the interface plane at all flight conditions. The data show that a free-stream Mach number of 1.65 will produce the desired local Mach number of 1.5 for CCIE. The local total pressure distortion over the interface plane at this condition was approximately 1.5%. At this condition, there was an average of nearly 2° of downwash over the interface plane. This small amount of downwash is not expected to adversely affect the performance of the CCIE inlet.

  20. Relationship between neighbor number and vibrational spectra in disordered colloidal clusters with attractive interactions

    NASA Astrophysics Data System (ADS)

    Yunker, Peter J.; Zhang, Zexin; Gratale, Matthew; Chen, Ke; Yodh, A. G.

    2013-03-01

    We study connections between vibrational spectra and average nearest neighbor number in disordered clusters of colloidal particles with attractive interactions. Measurements of displacement covariances between particles in each cluster permit calculation of the stiffness matrix, which contains effective spring constants linking pairs of particles. From the cluster stiffness matrix, we derive vibrational properties of corresponding "shadow" glassy clusters, with the same geometric configuration and interactions as the "source" cluster but without damping. Here, we investigate the stiffness matrix to elucidate the origin of the correlations between the median frequency of cluster vibrational modes and average number of nearest neighbors in the cluster. We find that the mean confining stiffness of particles in a cluster, i.e., the ensemble-averaged sum of nearest neighbor spring constants, correlates strongly with average nearest neighbor number, and even more strongly with median frequency. Further, we find that the average oscillation frequency of an individual particle is set by the total stiffness of its nearest neighbor bonds; this average frequency increases as the square root of the nearest neighbor bond stiffness, in a manner similar to the simple harmonic oscillator.
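The route from measured displacement covariances to "shadow" vibrational modes can be sketched with the equipartition relation K = k_B T C^{-1} followed by an eigendecomposition of the stiffness matrix K. This is our reading of the standard covariance-matrix method, not the authors' particle-resolved code, and the synthetic data below are purely illustrative:

```python
import numpy as np

def shadow_mode_frequencies(displacements, kT=1.0, mass=1.0):
    """Mode frequencies of the undamped 'shadow' system.

    displacements: (n_samples, n_dof) array of deviations from the mean
    particle positions. Equipartition gives the stiffness matrix
    K = kT * C^{-1}, where C is the displacement covariance; the mode
    frequencies follow as omega_i = sqrt(lambda_i / mass) from the
    eigenvalues lambda_i of K.
    """
    c = np.cov(displacements, rowvar=False)
    k = kT * np.linalg.inv(c)
    eigvals = np.linalg.eigvalsh(k)
    return np.sqrt(np.clip(eigvals, 0.0, None) / mass)

# synthetic check: two uncoupled degrees of freedom with variances 1 and
# 1/4 correspond to spring constants 1 and 4, i.e. frequencies 1 and 2
rng = np.random.default_rng(2)
disp = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 0.25]],
                               size=100000)
freqs = np.sort(shadow_mode_frequencies(disp))
```

Stiffer confinement (larger summed neighbor spring constants) shrinks the displacement covariance and pushes the mode frequencies up, which is the correlation the abstract describes.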

  1. An optical wavefront sensor based on a double layer microlens array.

    PubMed

    Lin, Vinna; Wei, Hsiang-Chun; Hsieh, Hsin-Ta; Su, Guo-Dung John

    2011-01-01

In order to determine light aberrations, Shack-Hartmann optical wavefront sensors use microlens arrays (MLA) to divide the incident light into small parts and focus them onto image planes. In this paper, we present the design and fabrication of a long-focal-length MLA with various shapes and arrangements, based on a double-layer structure, for optical wavefront sensing applications. A longer focal length provides higher sensitivity in determining the average slope across each microlens for a given wavefront, while the spatial resolution of a wavefront sensor increases with the number of microlenses across the detector. To extend the focal length, we placed polydimethylsiloxane (PDMS) above the MLA on a glass substrate. Because of the small refractive index difference at the interface between the PDMS and the MLA (UV resin), the incident light is refracted less and focuses at a greater distance. Other specific focal lengths could also be realized by modifying the refractive index difference without changing the MLA size. Thus, the wavefront sensor achieves better sensitivity and higher spatial resolution.
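The focal-length extension follows from the paraxial formula for a plano-convex microlens immersed in a medium, f = R / (n_lens - n_medium): shrinking the index difference stretches f without changing the lens geometry. The radius and refractive indices below are typical values we assume for UV resin, air and PDMS, not the paper's measured ones:

```python
def focal_length(radius_of_curvature, n_lens, n_medium):
    """Paraxial focal length of a plano-convex microlens in a medium:
    f = R / (n_lens - n_medium). Smaller index contrast -> longer f."""
    return radius_of_curvature / (n_lens - n_medium)

r = 0.5e-3                                             # radius of curvature, m
f_air = focal_length(r, n_lens=1.52, n_medium=1.00)    # lens in air, ~0.96 mm
f_pdms = focal_length(r, n_lens=1.52, n_medium=1.41)   # under PDMS, ~4.5 mm
```

With these assumed indices the PDMS overcoat lengthens the focal length almost fivefold, which is the sensitivity gain the abstract describes.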

  2. Preferential partner selection in an evolutionary study of Prisoner's Dilemma.

    PubMed

    Ashlock, D; Smucker, M D; Stanley, E A; Tesfatsion, L

    1996-01-01

    Partner selection is an important process in many social interactions, permitting individuals to decrease the risks associated with cooperation. In large populations, defectors may escape punishment by roving from partner to partner, but defectors in smaller populations risk social isolation. We investigate these possibilities for an evolutionary Prisoner's Dilemma in which agents use expected payoffs to choose and refuse partners. In comparison to random or round-robin partner matching, we find that the average payoffs attained with preferential partner selection tend to be more narrowly confined to a few isolated payoff regions. Most ecologies evolve to essentially full cooperative behavior, but when agents are intolerant of defections, or when the costs of refusal and social isolation are small, we also see the emergence of wallflower ecologies in which all agents are socially isolated. Between these two extremes, we see the emergence of ecologies whose agents tend to engage in a small number of defections followed by cooperation thereafter. The latter ecologies exhibit a plethora of interesting social interaction patterns.
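The choice-and-refusal dynamic can be sketched in a few lines: each agent offers to play the partner with the highest expected payoff, and refuses offers from partners whose expected payoff has fallen below a tolerance, which is what isolates persistent defectors. This toy version (fixed strategies, running-average payoff estimates) is our own simplification of the evolutionary IPD setup, not the authors' model:

```python
# Prisoner's Dilemma payoffs: (my move, partner's move) -> my payoff
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_round(strategies, expected, tolerance=1.5, memory=0.7):
    """One round of choice-and-refusal matching.

    expected[i][j] is agent i's running estimate of the payoff from
    playing j. Each agent offers to its most promising partner; the
    partner refuses when its own estimate of the offerer is below
    `tolerance`, so exploited cooperators soon shun defectors."""
    n = len(strategies)
    for i in range(n):
        j = max((k for k in range(n) if k != i), key=lambda k: expected[i][k])
        if expected[j][i] < tolerance:
            continue                                   # offer refused
        pi = PAYOFF[(strategies[i], strategies[j])]
        pj = PAYOFF[(strategies[j], strategies[i])]
        expected[i][j] = memory * expected[i][j] + (1 - memory) * pi
        expected[j][i] = memory * expected[j][i] + (1 - memory) * pj

strategies = ["C", "C", "C", "D"]                      # one pure defector
expected = [[3.0] * 4 for _ in range(4)]               # optimistic priors
for _ in range(10):
    play_round(strategies, expected)
```

After a few rounds the cooperator the defector keeps targeting has an estimate of it below the tolerance, so the defector's offers are refused from then on, while cooperators continue playing among themselves.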

  3. Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains

    DOE PAGES

    Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John

    2015-08-18

This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.

  4. Olive Oil Tracer Particle Size Analysis for Optical Flow Investigations in a Gas Medium

    NASA Astrophysics Data System (ADS)

    Harris, Shaun; Smith, Barton

    2014-11-01

    Seed tracer particles must be large enough to scatter sufficient light while being sufficiently small to follow the flow. These requirements motivate a desire for control over the particle size. For gas measurements, it is common to use atomized oil droplets as tracer particles. A Laskin nozzle is a device for generating oil droplets in air by directing high-pressure air through small holes under an oil surface. The droplet diameter frequency distribution can be varied by altering the hole diameter, the number of holes, or the inlet pressure. We will present a systematic study of the effect of these three parameters on the resultant particle distribution as it leaves the Laskin nozzle. The study was repeated for cases where the particles moved through a typical jet facility before their size was measured. While the jet facility resulted in an elimination of larger particles, the average particle diameter could be varied by a factor of two at both the seeder exit and downstream of the jet facility.

  5. Dinosaur incubation periods directly determined from growth-line counts in embryonic teeth show reptilian-grade development

    PubMed Central

    Erickson, Gregory M.; Zelenitsky, Darla K.; Kay, David Ian; Norell, Mark A.

    2017-01-01

    Birds stand out from other egg-laying amniotes by producing relatively small numbers of large eggs with very short incubation periods (average 11–85 d). This aspect promotes high survivorship by limiting exposure to predation and environmental perturbation, allows for larger, more fit young, and facilitates rapid attainment of adult size. Birds are living dinosaurs; their rapid development has been considered to reflect the primitive dinosaurian condition. Here, nonavian dinosaurian incubation periods in both small and large ornithischian taxa are empirically determined through growth-line counts in embryonic teeth. Our results show unexpectedly slow incubation (2.8 and 5.8 mo) like those of outgroup reptiles. Developmental and physiological constraints would have rendered tooth formation and incubation inherently slow in other dinosaur lineages and basal birds. The capacity to determine incubation periods in extinct egg-laying amniotes has implications for dinosaurian embryology, life history strategies, and survivorship across the Cretaceous–Paleogene mass extinction event. PMID:28049837

  6. A time-series method for automated measurement of changes in mitotic and interphase duration from time-lapse movies.

    PubMed

    Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W

    2011-01-01

    Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.

  7. Counting polymers moving through a single ion channel

    NASA Astrophysics Data System (ADS)

    Bezrukov, Sergey M.; Vodyanoy, Igor; Parsegian, V. Adrian

    1994-07-01

    The change in conductance of a small electrolyte-filled capillary owing to the passage of sub-micrometre-sized particles has long been used for particle counting and sizing. A commercial device for such measurements, the Coulter counter, is able to detect particles of sizes down to several tenths of a micrometre (refs 1-3). Nuclepore technology (in which pores are etched particle tracks) has extended the lower limit of size detection to 60-nm particles by using a capillary of diameter 0.45 μm (ref. 4). Here we show that natural channel-forming peptides incorporated into a bilayer lipid membrane can be used to detect the passage of single molecules with gyration radii as small as 5-15 Å. From our experiments with alamethicin pores we infer both the average number and the diffusion coefficients of poly(ethylene glycol) molecules in the pore. Our approach provides a means of observing the statistics and mechanics of flexible polymers moving within the confines of precisely defined single-molecule structures.

  8. Carbon dioxide supersaturation in the surface waters of lakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, J.J.; Caraco, N.F.; Kling, G.W.

    1994-09-09

    Data on the partial pressure of carbon dioxide (CO₂) in the surface waters from a large number of lakes (1835) with a worldwide distribution show that only a small proportion of the 4665 samples analyzed (less than 10 percent) were within ±20 percent of equilibrium with the atmosphere and that most samples (87 percent) were supersaturated. The mean partial pressure of CO₂ averaged 1036 microatmospheres, about three times the value in the overlying atmosphere, indicating that lakes are sources rather than sinks of atmospheric CO₂. On a global scale, the potential efflux of CO₂ from lakes (about 0.14 × 10¹⁵ grams of carbon per year) is about half as large as riverine transport of organic plus inorganic carbon to the ocean. Lakes are a small but potentially important conduit for carbon from terrestrial sources to the atmospheric sink. 18 refs., 2 figs., 1 tab.
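
As a quick numerical illustration of the classification described above, the sketch below bins pCO₂ readings against an assumed atmospheric value of 350 microatmospheres (roughly the level of the study era); the sample values and the threshold constant are illustrative, not data from the study.

```python
# Hypothetical pCO2 readings in microatmospheres (illustrative values only).
ATM_PCO2 = 350.0  # assumed atmospheric pCO2 of the study era

samples = [900.0, 1100.0, 340.0, 2000.0, 700.0, 280.0]

# Samples within +/-20% of atmospheric equilibrium
near_equilibrium = [p for p in samples if abs(p - ATM_PCO2) / ATM_PCO2 <= 0.20]

# Supersaturated samples: above the +20% band
supersaturated = [p for p in samples if p > 1.20 * ATM_PCO2]

# Mean pCO2 and its ratio to the atmospheric value
# (the paper reports a mean of 1036 uatm, about three times atmospheric)
mean_pco2 = sum(samples) / len(samples)
ratio = mean_pco2 / ATM_PCO2
```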

  9. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA

    2012-07-03

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization-enhanced two-dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
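
The four steps above can be sketched numerically as follows. This is a minimal illustration assuming simple 2D count arrays and whole-image averages (the patent normalizes over the breast area of each detector); the function and variable names are hypothetical, not from the patent.

```python
import numpy as np

def enhanced_contrast_map(img_top, img_bottom):
    """Sketch of the four-step normalization: co-register the two opposing
    detector views, normalize each by its average count per pixel, and
    combine pixel-by-pixel into an enhanced contrast map. Illustrative
    only; not the patented implementation."""
    # Step 2: mirror one image so the two opposing views co-register
    co_registered = np.flipud(img_bottom)
    # Step 3: divide each image by its average count per pixel
    # (here approximated over the whole image rather than the breast area)
    norm_top = img_top / img_top.mean()
    norm_bottom = co_registered / co_registered.mean()
    # Step 4: pixel-by-pixel combination into the contrast map
    return norm_top * norm_bottom

# Uniform background with a small focal uptake seen from both heads:
top = np.ones((8, 8)); top[3, 4] = 3.0
bottom = np.ones((8, 8)); bottom[4, 4] = 3.0  # flips onto row 3 after flipud
cmap = enhanced_contrast_map(top, bottom)
```

Because both views reinforce at the co-registered lesion pixel while the background stays near 1, the lesion-to-background ratio in the combined map exceeds that of either single view.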

  10. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA

    2008-10-28

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization-enhanced two-dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.

  11. Dinosaur incubation periods directly determined from growth-line counts in embryonic teeth show reptilian-grade development.

    PubMed

    Erickson, Gregory M; Zelenitsky, Darla K; Kay, David Ian; Norell, Mark A

    2017-01-17

    Birds stand out from other egg-laying amniotes by producing relatively small numbers of large eggs with very short incubation periods (average 11-85 d). This aspect promotes high survivorship by limiting exposure to predation and environmental perturbation, allows for larger, more fit young, and facilitates rapid attainment of adult size. Birds are living dinosaurs; their rapid development has been considered to reflect the primitive dinosaurian condition. Here, nonavian dinosaurian incubation periods in both small and large ornithischian taxa are empirically determined through growth-line counts in embryonic teeth. Our results show unexpectedly slow incubation (2.8 and 5.8 mo) like those of outgroup reptiles. Developmental and physiological constraints would have rendered tooth formation and incubation inherently slow in other dinosaur lineages and basal birds. The capacity to determine incubation periods in extinct egg-laying amniotes has implications for dinosaurian embryology, life history strategies, and survivorship across the Cretaceous-Paleogene mass extinction event.

  12. Size dependent fragmentation of argon clusters in the soft x-ray ionization regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gisselbrecht, Mathieu; Lindgren, Andreas; Burmeister, Florian

    Photofragmentation of argon clusters of average size ranging from 10 up to 1000 atoms is studied using soft x-ray radiation below the 2p threshold and a multicoincidence mass spectroscopy technique. For small clusters (≈10), ionization induces fast fragmentation with neutral emission imparting a large amount of energy. While the primary dissociation takes place on a picosecond time scale, the fragments undergo slow degradation in the spectrometer on a microsecond time scale. For larger clusters (≥100) we believe that we observe the fragmentation pattern of multiply charged species on a time scale which lasts a few hundred nanoseconds. The reason for these slower processes is the large number of neutral atoms which act as an efficient cooling bath where the excess energy ('heat') dissipates among all degrees of freedom. Further degradation of the photoionic cluster in the spectrometer then takes place on the microsecond time scale, similar to small clusters.

  13. Relation between native ensembles and experimental structures of proteins

    PubMed Central

    Best, Robert B.; Lindorff-Larsen, Kresten; DePristo, Mark A.; Vendruscolo, Michele

    2006-01-01

    Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of “high-sequence similarity Protein Data Bank” (HSP) structures and consider the extent to which such ensembles represent the structural heterogeneity of the native state in solution. We find that different NMR measurements probing structure and dynamics of given proteins in solution, including order parameters, scalar couplings, and residual dipolar couplings, are remarkably well reproduced by their respective high-sequence similarity Protein Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest that even a modest number of structures of a protein determined under different conditions, or with small variations in sequence, capture a representative subset of the true native-state ensemble. PMID:16829580

  14. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
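
For reference, the depth-averaged shallow water equations mentioned above take the standard form (with water depth h, depth-averaged velocities (u, v), gravitational acceleration g, and bathymetry b(x, y)):

```latex
\begin{aligned}
h_t + (hu)_x + (hv)_y &= 0,\\
(hu)_t + \left(hu^2 + \tfrac{1}{2}gh^2\right)_x + (huv)_y &= -gh\,b_x,\\
(hv)_t + (huv)_x + \left(hv^2 + \tfrac{1}{2}gh^2\right)_y &= -gh\,b_y.
\end{aligned}
```

A 'well-balanced' scheme preserves the ocean-at-rest steady state (h + b = constant, u = v = 0) exactly at the discrete level, so that the tiny tsunami perturbations to this state are not swamped by numerical error at the source terms.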

  15. The Effect of Inhibitory Neuron on the Evolution Model of Higher-Order Coupling Neural Oscillator Population

    PubMed Central

    Qi, Yi; Wang, Rubin; Jiao, Xianfa; Du, Ying

    2014-01-01

    We proposed a higher-order coupling neural network model including the inhibitory neurons and examined the dynamical evolution of average number density and phase-neural coding under the spontaneous activity and external stimulating condition. The results indicated that an increase of inhibitory coupling strength will cause a decrease of average number density, whereas an increase of excitatory coupling strength will cause an increase of the stable amplitude of average number density. Whether the neural oscillator population is able to enter the new synchronous oscillation or not is determined by the excitatory and inhibitory coupling strengths. In the presence of external stimulation, the evolution of the average number density depends upon both the external stimulation and the coupling term; whichever term dominates determines the final evolution. PMID:24516505

  16. Symmetry-breaking bifurcations and enhanced mixing in microfluidic cross-slots

    NASA Astrophysics Data System (ADS)

    Poole, Rob; Haward, Simon; Oliveira, Paulo; Alves, Manuel

    2014-11-01

    We investigate, both experimentally and numerically, a new subcritical bifurcation phenomenon for a Newtonian fluid flowing through three-dimensional cross-slot geometries. At low Reynolds numbers the flow remains steady and symmetric. For the case of square inlets and outlets, at a critical Reynolds number of approximately 40 (based on average velocity) a pitchfork bifurcation is observed beyond which the unstable symmetrical solution is replaced by a pair of steady asymmetric solutions. Sensitivity of this critical Reynolds number to the initial conditions of the simulation, resulting in a small degree of hysteresis, suggests a subcritical instability. At higher flowrates the flow becomes unsteady. The effects of channel aspect ratio are investigated on the critical conditions and excellent agreement is found between three-dimensional finite volume simulations and flow visualisation experiments in microfluidic channels. Finally we suggest this new flow bifurcation could be an effective method of enhancing mixing in microfluidic channels as significant increases in mixing quality are observed beyond the bifurcation. This enhancement occurs at flowrates more than a factor of two smaller than those observed in the well-known T-channel micromixer.

  17. Effects of the computational domain on the secondary flow in turbulent plane Couette flow

    NASA Astrophysics Data System (ADS)

    Gai, Jie; Xia, Zhen-Hua; Cai, Qing-Dong

    2015-10-01

    A series of direct numerical simulations of the fully developed plane Couette flow at a Reynolds number of 6000 (based on the relative wall speed and half the channel height h) with different streamwise and spanwise lengths are conducted to investigate the effects of the computational box sizes on the secondary flow (SF). Our focus is on the number of counter-rotating vortex pairs and its relationship to the statistics of the mean flow and the SF in small and moderate computational box sizes. Our results show that the number of vortex pairs is sensitive to the computational box size, and so are the slope parameter, the rate of the turbulent kinetic energy contributed by the SF, and the ratio of the kinetic energy of the SF to the total kinetic energy. However, the averaged spanwise width of each counter-rotating vortex pair in the plane Couette flow is found, for the first time, to lie within 4(1 ± 0.25)h regardless of the domain size. Project supported by the National Natural Science Foundation of China (Grant Nos. 11221061, 11272013, and 11302006).

  18. The reliability and stability of visual working memory capacity.

    PubMed

    Xu, Z; Adam, K C S; Fang, X; Vogel, E K

    2018-04-01

    Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
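
The internal-consistency reliability reported above (α > .9) is conventionally Cronbach's alpha over a participants × items score matrix. A minimal sketch, assuming rows are participants and columns are trial bins of the change detection task; this is an illustrative helper, not the authors' analysis code.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a participants x items score matrix.

    scores: list of rows, one per participant; columns are items
    (e.g. trial bins of a change-detection task). Uses the standard
    formula alpha = k/(k-1) * (1 - sum(item variances)/variance of totals).
    """
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (each participant scores identically on all
# items) give alpha = 1
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```

The down-sampling procedure described in Experiment 1 amounts to recomputing such a reliability estimate on random subsets of trials and participants to tabulate how reliability degrades with smaller samples.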

  19. Landscape Simplification Constrains Adult Size in a Native Ground-Nesting Bee

    PubMed Central

    Renauld, Miles; Hutchinson, Alena; Loeb, Gregory; Poveda, Katja; Connelly, Heather

    2016-01-01

    Bees provide critical pollination services to 87% of angiosperm plants; however, the reliability of these services may become threatened as bee populations decline. Agricultural intensification, resulting in the simplification of environments at the landscape scale, greatly changes the quality and quantity of resources available for female bees to provision their offspring. These changes may alter or constrain the tradeoffs in maternal investment allocation between offspring size, number and sex required to maximize fitness. Here we investigate the relationship between landscape-scale agricultural intensification and the size and number of individuals within a wild ground-nesting bee species, Andrena nasonii. We show that agricultural intensification at the landscape scale was associated with a reduction in the average size of field-collected A. nasonii adults in highly agricultural landscapes but not with the number of individuals collected. Small females carried significantly smaller (40%) pollen loads than large females, which is likely to have consequences for subsequent offspring production and fitness. Thus, landscape simplification is likely to constrain allocation of resources to offspring through a reduction in the overall quantity, quality and distribution of resources. PMID:26943127

  20. Landscape Simplification Constrains Adult Size in a Native Ground-Nesting Bee.

    PubMed

    Renauld, Miles; Hutchinson, Alena; Loeb, Gregory; Poveda, Katja; Connelly, Heather

    2016-01-01

    Bees provide critical pollination services to 87% of angiosperm plants; however, the reliability of these services may become threatened as bee populations decline. Agricultural intensification, resulting in the simplification of environments at the landscape scale, greatly changes the quality and quantity of resources available for female bees to provision their offspring. These changes may alter or constrain the tradeoffs in maternal investment allocation between offspring size, number and sex required to maximize fitness. Here we investigate the relationship between landscape-scale agricultural intensification and the size and number of individuals within a wild ground-nesting bee species, Andrena nasonii. We show that agricultural intensification at the landscape scale was associated with a reduction in the average size of field-collected A. nasonii adults in highly agricultural landscapes but not with the number of individuals collected. Small females carried significantly smaller (40%) pollen loads than large females, which is likely to have consequences for subsequent offspring production and fitness. Thus, landscape simplification is likely to constrain allocation of resources to offspring through a reduction in the overall quantity, quality and distribution of resources.

  1. Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows

    NASA Astrophysics Data System (ADS)

    Mitroglou, N.; Lorenzi, M.; Santini, M.; Gavaises, M.

    2016-11-01

    The flow inside a purpose built enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only a small effect for the range of values tested. Overall, the study demonstrates that use of micro-CT can be a reliable tool for cavitation in nozzle orifices operating under nominal steady-state conditions.

  2. The state of Hawking radiation is non-classical

    NASA Astrophysics Data System (ADS)

    Brustein, Ram; Medved, A. J. M.; Zigdon, Yoav

    2018-01-01

    We show that the state of the Hawking radiation emitted from a large Schwarzschild black hole (BH) deviates significantly from a classical state, in spite of its apparent thermal nature. For this state, the occupation numbers of single modes of massless asymptotic fields, such as photons, gravitons and possibly neutrinos, are small and, as a result, their relative fluctuations are large. The occupation numbers of massive fields are much smaller and suppressed beyond even the expected Boltzmann suppression. It follows that this type of thermal state cannot be viewed as classical or even semiclassical. We substantiate this claim by showing that, in a state with low occupation numbers, physical observables have large quantum fluctuations and, as such, cannot be faithfully described by a mean-field or by a WKB-like semiclassical state. Since the evolution of the BH is unitary, our results imply that the state of the BH interior must also be non-classical when described in terms of the asymptotic fields. We show that such a non-classical interior cannot be described in terms of a semiclassical geometry, even though the average curvature is sub-Planckian.
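
The link between low occupation numbers and large relative fluctuations can be illustrated with the standard single-mode thermal (Bose-Einstein) statistics, for which Var(n) = n̄(n̄ + 1) and hence the relative fluctuation is sqrt(1 + 1/n̄). This is a textbook quantum-optics result used as illustration, not the paper's derivation.

```python
import math

def thermal_relative_fluctuation(n_bar):
    """Relative fluctuation sqrt(Var(n)) / n_bar for a single thermal mode,
    using the Bose-Einstein variance Var(n) = n_bar * (n_bar + 1)."""
    variance = n_bar * (n_bar + 1.0)
    return math.sqrt(variance) / n_bar

low = thermal_relative_fluctuation(0.01)    # sparse occupation: fluctuations dwarf the mean
high = thermal_relative_fluctuation(100.0)  # large occupation: near-classical, ratio -> 1
```

For n̄ of order 0.01, the fluctuation is roughly ten times the mean, which is the sense in which such a state cannot be captured by a mean-field description.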

  3. Porcelain-metal bonding: part I. Effects of repeated baking process.

    PubMed

    Nagasawa, S; Yoshida, T; Mizoguchi, H; Ito, M; Oshida, Y

    2001-01-01

    A plurality of repeated porcelain-baking procedures is normally practiced in order to achieve the final adjustment of metal-porcelain restorations. By increasing the number of baking cycles, undesired internal strain builds up, causing a reduction of mechanical properties and deterioration of color characteristics. However, such deterioration has not been studied extensively. In this study, the effects of the number of repeated baking cycles (up to 10) on baking shrinkage, surface roughness, bend strength, color changes and internal microstructure were investigated when opaque, body and enamel porcelains were individually applied or when a triple layer comprising these three porcelains was repeatedly applied. It was concluded that (1) the bend strengths increased with increasing baking cycles, (2) the average surface roughness decreased with an increasing number of baking procedures, (3) changes in color characteristics were very small in the Vintage Halo porcelain system, and (4) since the pores entrapped in the porcelain remained even with increasing baking cycles, it is recommended to remove the surface pores before forming the next layer when handling the highly viscous opaque porcelain.

  4. PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Xie Zongxia; Hu Qinghua

    The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties to describe the granules: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and a one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is the dominant parameter in classifying the granules, and two families of granules are derived: small granules with diameters smaller than 1.″44, and large granules with diameters larger than 1.″44. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and Doppler velocity in the upward direction of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average of continuum intensity increases with their diameters, while the averages of Doppler velocity, transverse, absolute longitudinal, and unsigned magnetic flux density decrease with their diameters; however, the mean properties of large granules are stable; (4) the intensity distributions of all granules and small granules do not satisfy a Gaussian distribution, while that of large granules almost agrees with a normal distribution with a peak at 1.04 I₀.

  5. Impact of e-liquid flavors on e-cigarette vaping behavior.

    PubMed

    St Helen, Gideon; Shahid, Marian; Chu, Sherman; Benowitz, Neal L

    2018-05-31

    The primary objective of this pilot study was to describe the impact of e-cigarette liquid flavors on experienced e-cigarette users' vaping behavior. 11 males and 3 females participated in a 3-day inpatient crossover study using e-cigarettes with strawberry, tobacco, and their usual brand e-liquid. Nicotine levels were nominally 18 mg/mL in the strawberry and tobacco e-liquids and ranged between 3-18 mg/mL in the usual brands. On each day, participants had access to the study e-cigarette (KangerTech mini ProTank 3, 1.5 Ohms, 3.7 V) and the assigned e-liquid during a 90-minute videotaped ad libitum session. Average puff duration was significantly longer when using the strawberry e-liquid (3.2 ± 1.3 s, mean ± SD) compared to the tobacco e-liquid (2.8 ± 1.1 s) but the average number of puffs was not significantly different (strawberry, 73 ± 35; tobacco, 69 ± 46). Compared to the strawberry- and tobacco-flavored e-liquids, average puff duration was significantly longer (4.3 ± 1.6 s) and the average number of puffs was significantly higher (106 ± 67 puffs) when participants used their usual brand of e-liquid. Participants generally puffed more frequently in small groups of puffs (1-5 puffs) with the strawberry compared to the tobacco e-liquid and more frequently in larger groups (>10 puffs) with their usual brand. The strength of the relationship between vaping topography and nicotine intake and exposure was not consistent across e-liquids. Vaping behavior changes across e-liquids and influences nicotine intake. Research is needed to understand the mechanisms that underlie these behavioral changes, including e-liquid pH and related sensory effects, subjective liking, and nicotine effects.

  6. The magnitude and colour of noise in genetic negative feedback systems.

    PubMed

    Voliotis, Margaritis; Bowsher, Clive G

    2012-08-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier: for transcriptional autorepression, it is frequently negligible.

  7. Ecology and population status of Northern Fulmars (Fulmarus glacialis) of the North Pacific

    USGS Publications Warehouse

    Hatch, Scott A.; Vermeer, Kees; Briggs, K.T.; Morgan, K.H.; Siegel-Causey, D.

    1993-01-01

    In the North Pacific, the breeding distribution of Northern Fulmars (Fulmarus glacialis) includes about equal numbers of very large colonies (50,000-500,000 individuals) and relatively small ones (5-5,000 individuals). The almost complete segregation of light and dark colour phases between adjacent colonies in the Bering Sea and Sea of Okhotsk suggests there is little gene flow among the major colonies. Annual productivity averaged 0.42 chicks per breeding pair in 10 years at one colony in the Gulf of Alaska; adult survival was 0.97 per year over five years at the same location. There is no clear indication of population change at either of two large colonies studied, but several small colonies in the western Aleutians and northern Gulf of Alaska have increased since the mid-1970s. Fulmars appear to have low vulnerability to oil pollution and drifting gill nets, but they are relatively heavy consumers of plastic debris. Introduced predators probably reduced fulmar populations in the past. Population monitoring is recommended for one or more of the large Pacific colonies and several of the smaller ones. Small colonies may provide early indications of changing population status.

  8. A Small World of Neuronal Synchrony

    PubMed Central

    Yu, Shan; Huang, Debin; Singer, Wolf

    2008-01-01

    A small-world network has been suggested to be an efficient solution for achieving both modular and global processing, a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy, which allowed us to infer the interactions from measured pairwise correlations and to estimate the strength of coupling from the degree of synchrony. Visual responses were recorded in the visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of “hubs” in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding. PMID:18400792
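The small-world signature described here, path lengths as short as a random network's combined with larger clustering, can be illustrated with a minimal pure-Python sketch. This is a generic Watts-Strogatz-style construction with illustrative sizes, not the authors' data or their maximum-entropy reconstruction:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node links to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Rewire each edge with probability p to a random new endpoint."""
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):          # snapshot: adj mutates below
            if j > i and rng.random() < p:
                choices = [v for v in range(n) if v != i and v not in adj[i]]
                if choices:
                    new = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        nb, k = list(nbrs), len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nb[b] in adj[nb[a]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path(adj):
    """Mean shortest-path length over reachable pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
lattice = ring_lattice(100, 4)
L_lat, C_lat = avg_path(lattice), clustering(lattice)
sw = rewire(ring_lattice(100, 4), 0.1, rng)
L_sw, C_sw = avg_path(sw), clustering(sw)
```

A few rewired shortcuts shorten the average path substantially while most of the lattice's clustering survives, which is exactly the small-world regime the abstract reports.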

  9. Average properties of bidisperse bubbly flows

    NASA Astrophysics Data System (ADS)

    Serrano-García, J. C.; Mendez-Díaz, S.; Zenit, R.

    2018-03-01

    Experiments were performed in a vertical channel to study the properties of a bubbly flow composed of two distinct bubble size species. Bubbles were produced using a capillary bank with tubes of two distinct inner diameters; the flow through each capillary size was regulated such that the proportion of large and small bubbles could be controlled. Using water and water-glycerin mixtures, a wide range of Reynolds and Weber numbers was investigated. The gas volume fraction ranged between 0.5% and 6%. Measurements of the mean bubble velocity of each species and the liquid velocity variance were obtained and contrasted with monodisperse flows of equivalent gas volume fraction. We found that bidispersity can induce a reduction of the mean bubble velocity of the large species; for the small species, the bubble velocity can be increased, decreased, or remain unaffected depending on the flow conditions. The liquid velocity variance of the bidisperse flows is, in general, bounded by the values of the small and large monodisperse cases; interestingly, in some cases, the liquid velocity fluctuations can be larger than in either monodisperse case. A simple model for the liquid agitation in bidisperse flows is proposed, with good agreement with the experimental measurements.

  10. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2014-01-01

    Objective Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250

  11. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity

    NASA Astrophysics Data System (ADS)

    Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-12-01

    Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.

  12. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2013-12-01

    Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
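The central idea of DataHigh, viewing a latent space of dimension greater than three through a continuum of 2-d projections, can be sketched in a few lines. The snippet below is an illustration with made-up dimensions and data, not DataHigh's implementation: it draws a random plane, orthonormalises its basis, and projects a fake latent trajectory onto it.

```python
import math
import random

def random_projection_2d(dim, rng):
    """Two orthonormal vectors spanning a random 2-d projection plane."""
    u = [rng.gauss(0, 1) for _ in range(dim)]
    v = [rng.gauss(0, 1) for _ in range(dim)]
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    dot = sum(a * b for a, b in zip(u, v))
    v = [b - dot * a for a, b in zip(u, v)]  # remove component along u
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    return u, v

def project(points, u, v):
    """Map each dim-d point to its 2-d coordinates in the (u, v) plane."""
    return [(sum(a * b for a, b in zip(p, u)),
             sum(a * b for a, b in zip(p, v))) for p in points]

rng = random.Random(42)
latent = [[rng.gauss(0, 1) for _ in range(10)] for _ in range(50)]  # fake trajectory
u, v = random_projection_2d(10, rng)
xy = project(latent, u, v)
```

Smoothly rotating the plane (interpolating between such bases) gives the continuum of views the tool navigates; any single fixed projection, as the abstract notes, can hide salient structure.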

  13. Counterintuitive effects of substrate roughness on PDCs

    NASA Astrophysics Data System (ADS)

    Andrews, B. J.; Manga, M.

    2012-12-01

    We model dilute pyroclastic density currents (PDCs) using scaled, warm, particle-laden density currents in a 6 m long, 0.6 m wide, 1.8 m tall air-filled tank. In this set of experiments, we run currents over substrates with characteristic roughness scales, hr, spanning ~3 orders of magnitude: smooth, 250 μm sandpaper, and 0.1-, 1-, 2-, 5-, and 10-cm hemispheres. As substrate roughness increases, runout distance increases until a critical roughness height, hrc, is reached; further increases in roughness height decrease runout. The critical roughness height appears to be 0.25-0.5 htb, the thickness of the turbulent lower layer of the density currents. The dependence of runout on hr is most likely the result of increased substrate roughness decreasing the average current velocity and converting that energy into increased turbulence intensity. Small values of hr thus result in increased runout, as sedimentation is inhibited by the increased turbulence intensity. At larger values of hr, current behavior is controlled by much larger decreases in average current velocity, even though sedimentation decreases. Scaling our experiments up to the size of real volcanic eruptions suggests that landscapes must have characteristic roughness hr > 10 m to reduce the runout of natural PDCs; smaller roughness scales can increase runout. Comparison of the relevant bulk dimensionless parameters (Reynolds number, densimetric and thermal Richardson numbers, excess buoyant thermal energy density) and turbulent parameters (Stokes and settling numbers) between our experiments and natural dilute PDCs indicates that we are accurately modeling at least the large-scale behaviors and dynamics of dilute PDCs.
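The bulk dimensionless numbers used for this kind of scaling comparison have standard definitions. As a small sketch with hypothetical laboratory values (the velocities, densities, and length scales below are assumptions, not the paper's measurements):

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U*L/nu: ratio of inertial to viscous forces."""
    return velocity * length / kinematic_viscosity

def densimetric_richardson(g, rho_current, rho_ambient, height, velocity):
    """Ri = g'*h/U^2, with reduced gravity g' = g*(rho_c - rho_a)/rho_a."""
    g_reduced = g * (rho_current - rho_ambient) / rho_ambient
    return g_reduced * height / velocity ** 2

# Hypothetical values for a laboratory current: U = 0.5 m/s, h = 0.2 m,
# kinematic viscosity of air ~1.5e-5 m^2/s, 1% excess density.
re = reynolds_number(0.5, 0.2, 1.5e-5)                        # ~6.7e3, turbulent
ri = densimetric_richardson(9.81, 1.225 * 1.01, 1.225, 0.2, 0.5)
```

Matching such numbers between the tank and a natural current is what justifies extrapolating the runout results to the field scale.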

  14. Relevance of Bose-Einstein condensation to the interference of two independent Bose gases

    NASA Astrophysics Data System (ADS)

    Iazzi, Mauro; Yuasa, Kazuya

    2011-03-01

    Interference of two independently prepared ideal Bose gases is discussed on the basis of the idea of measurement-induced interference. It is known that, even if the number of atoms in each gas is individually fixed and finite and the symmetry of the system is not broken, an interference pattern is observed on each single snapshot. The key role is played by the Hanbury Brown and Twiss effect, which leads to an oscillating pattern of the cloud of identical atoms. How essential, then, is Bose-Einstein condensation to the interference? In this work, we describe two ideal Bose gases trapped in two separate three-dimensional harmonic traps at a finite temperature T, using canonical ensembles (with fixed numbers of atoms). We compute the full statistics of the snapshot profiles of the expanding and overlapping gases released from the traps. We obtain a simple formula valid for finite T, which shows that the average fringe spectrum (average fringe contrast) is given by the purity of each gas. The purity is known to be a good measure of condensation, and the formula clarifies the relevance of condensation to the interference. The results for T=0, previously known in the literature, can be recovered from our analysis. The fluctuation of the interference spectrum is also studied, and it is shown that the fluctuation is vanishingly small only below the critical temperature Tc, meaning that an interference pattern is certainly observed on every snapshot below Tc. The fact that the number of atoms is fixed in the canonical ensemble is crucial to this vanishing fluctuation.

  15. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics, and chemistry. In this paper, motivated by the many real-life systems and artificial networks built from combinations of simpler and smaller components, we first discuss some useful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Second, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles, respectively, and our results suggest that it is well suited to such models. To demonstrate its wider practical applicability and theoretical significance, we study the enumeration method on some existing scale-free network models. We also construct a class of new models displaying the scale-free feature, that is, following a power-law degree distribution P(k) ~ k^(-γ), where γ is the degree exponent; based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. Finally, we calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, and we derive the number of spanning trees by means of the recursive method described in this paper, which shows that the method is convenient for studying these models.
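A standard, fully general way to count spanning trees of a finite graph is Kirchhoff's matrix-tree theorem: the count equals any cofactor of the graph Laplacian. The sketch below is the classical method, not the paper's recursive scheme, using exact rational arithmetic (nodes are assumed to be labelled 0..n-1 with no self-loops):

```python
from fractions import Fraction

def spanning_tree_count(adj):
    """Spanning-tree count via the matrix-tree theorem: the determinant
    of the Laplacian with one row/column (here the last) deleted."""
    n = len(adj)
    m = n - 1
    L = [[Fraction(0)] * m for _ in range(m)]
    for i in range(m):
        L[i][i] = Fraction(len(adj[i]))   # degree on the diagonal
        for j in adj[i]:
            if j < m:
                L[i][j] -= 1              # -1 per edge off the diagonal
    # Exact determinant by Gaussian elimination over the rationals.
    det = Fraction(1)
    for c in range(m):
        pivot = next((r for r in range(c, m) if L[r][c] != 0), None)
        if pivot is None:
            return 0                      # singular: graph is disconnected
        if pivot != c:
            L[c], L[pivot] = L[pivot], L[c]
            det = -det
        det *= L[c][c]
        for r in range(c + 1, m):
            f = L[r][c] / L[c][c]
            for k in range(c, m):
                L[r][k] -= f * L[c][k]
    return int(det)

# Complete graph K4: Cayley's formula gives n^(n-2) = 4^2 = 16 spanning trees.
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
n_k4 = spanning_tree_count(k4)
```

A cycle of length n should give exactly n spanning trees (remove any one edge), which makes a handy second check on the implementation.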

  16. Infants' Auditory Enumeration: Evidence for Analog Magnitudes in the Small Number Range

    ERIC Educational Resources Information Center

    vanMarle, Kristy; Wynn, Karen

    2009-01-01

    Vigorous debate surrounds the issue of whether infants use different representational mechanisms to discriminate small and large numbers. We report evidence for ratio-dependent performance in infants' discrimination of small numbers of auditory events, suggesting that infants can use analog magnitudes to represent small values, at least in the…

  17. A cohort cost analysis of lumbar laminectomy--current trends in surgeon and hospital fees distribution.

    PubMed

    Molina, Camilo A; Zadnik, Patricia L; Gokaslan, Ziya L; Witham, Timothy F; Bydon, Ali; Wolinsky, Jean-Paul; Sciubba, Daniel M

    2013-11-01

    Spine-related health-care expenditures accounted for $86 billion in 2005, a 65% increase from 1997. However, when adjusting for inflation, surgeons have seen decreased reimbursement rates over the last decade. To assess the contribution of surgeon fees to overall procedure cost, we reviewed the charges and reimbursements for a noninstrumented lumbar laminectomy and compared the amounts reimbursed to the hospital and to the surgeon at a major academic institution. Retrospective review of costs associated with lumbar laminectomies. Seventy-seven patients undergoing lumbar laminectomy for spinal stenosis throughout an 18-month period at a single academic medical center were included in this study. Outcome measures were cost and number of laminectomy levels. The reimbursement schedule of six academic spine surgeons was collected over 18 months for performed noninstrumented lumbar laminectomy procedures. Bills and collections by the hospital and surgeon professional fees were comparatively analyzed and substratified by number of laminectomy levels and patient insurance status. The unpaired two-sample Student t test was used for analysis of significant differences. During the 18-month period, patients underwent a lumbar laminectomy involving on average three levels and stayed in the hospital on average 3.5 days. Complications were uncommon (13%). Average professional fee billing for the surgeon was $6,889±$2,882, and collection was $1,848±$1,433 (28% overall, 30% for private insurance, and 23% for Medicare/Medicaid insurance). Average hospital billing for the inpatient hospital stay, minus professional fees from the surgeon, was $14,766±$7,729, and average collection on such bills was $13,391±$7,256 (92% overall, 91% for private insurance, and 85% for Medicare/Medicaid insurance). Based on this analysis, the proportion of overall costs allocated to professional fees for a noninstrumented lumbar laminectomy is small, whereas the proportion allocated to hospital costs is far greater. These findings suggest that the current focus on decreasing physician reimbursement as the principal cost-saving strategy will lead to minimal reimbursement for surgeons without a substantial drop in the overall cost of procedures performed. Copyright © 2013 Elsevier Inc. All rights reserved.
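The cost argument reduces to a few ratios. As a hedged sketch using the abstract's reported mean billings and collections (note that a ratio of these means need not equal the paper's reported mean of per-case rates, which is why the figures differ slightly from the quoted 28%):

```python
def reimbursement_summary(surgeon_billed, surgeon_collected,
                          hospital_billed, hospital_collected):
    """Collection rates and the surgeon's share of total collections."""
    return {
        "surgeon_rate": surgeon_collected / surgeon_billed,
        "hospital_rate": hospital_collected / hospital_billed,
        "surgeon_share": surgeon_collected
                         / (surgeon_collected + hospital_collected),
    }

# Mean values reported in the abstract, per noninstrumented laminectomy.
s = reimbursement_summary(surgeon_billed=6889, surgeon_collected=1848,
                          hospital_billed=14766, hospital_collected=13391)
```

The surgeon's share of total collections comes out near 12%, consistent with the conclusion that professional fees are a small fraction of the overall procedure cost.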

  18. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    PubMed

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
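The analytic/sequential strategy described for small sets can be sketched as an incremental running mean, which stores only the current estimate and a count. This is a generic illustration of sequential averaging, not the authors' computational model:

```python
def running_mean(stream):
    """Incremental average: m_k = m_{k-1} + (x_k - m_{k-1}) / k.

    Only the current estimate and an item count are kept, so the
    storage cost stays constant however long the sequence is.
    """
    m, k = 0.0, 0
    for x in stream:
        k += 1
        m += (x - m) / k
    return m

vals = [42, 17, 88, 63]
est = running_mean(vals)  # equals sum(vals) / len(vals) = 52.5
```

The update is numerically equivalent to summing and dividing, but it exposes why a serial, working-memory-mediated procedure becomes taxing at fast presentation rates: every item demands one update step.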

  19. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases

    PubMed Central

    Forbes, Jessica L.; Kim, Regina E. Y.; Paulsen, Jane S.; Johnson, Hans J.

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%. PMID:27536233

  20. An Open-Source Label Atlas Correction Tool and Preliminary Results on Huntington's Disease Whole-Brain MRI Atlases.

    PubMed

    Forbes, Jessica L; Kim, Regina E Y; Paulsen, Jane S; Johnson, Hans J

    2016-01-01

    The creation of high-quality medical imaging reference atlas datasets with consistent dense anatomical region labels is a challenging task. Reference atlases have many uses in medical image applications and are essential components of atlas-based segmentation tools commonly used for producing personalized anatomical measurements for individual subjects. The process of manual identification of anatomical regions by experts is regarded as a so-called gold standard; however, it is usually impractical because of the labor-intensive costs. Further, as the number of regions of interest increases, these manually created atlases often contain many small, inconsistently labeled or disconnected regions that need to be identified and corrected. This project proposes an efficient process to drastically reduce the time necessary for manual revision in order to improve atlas label quality. We introduce the LabelAtlasEditor tool, a SimpleITK-based open-source label atlas correction tool distributed within the image visualization software 3D Slicer. LabelAtlasEditor incorporates several 3D Slicer widgets into one consistent interface and provides label-specific correction tools, allowing for rapid identification, navigation, and modification of the small, disconnected erroneous labels within an atlas. The technical details for the implementation and performance of LabelAtlasEditor are demonstrated using an application of improving a set of 20 Huntington's Disease-specific multi-modal brain atlases. Additionally, we present the advantages and limitations of automatic atlas correction. After the correction of atlas inconsistencies and small, disconnected regions, the number of unidentified voxels for each dataset was reduced on average by 68.48%.
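The core correction target, small disconnected components inside a label map, can be illustrated with a pure-Python connected-component scan. This is a simplified 2-d analogue with a made-up toy atlas; LabelAtlasEditor itself operates on 3-d atlases through SimpleITK and 3D Slicer:

```python
from collections import deque

def small_components(label_img, threshold):
    """Connected components (4-connectivity) of a 2-d label image that
    are smaller than `threshold` pixels, returned as (label, pixels)."""
    rows, cols = len(label_img), len(label_img[0])
    seen = [[False] * cols for _ in range(rows)]
    small = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            label = label_img[r][c]
            comp, q = [], deque([(r, c)])
            seen[r][c] = True
            while q:                      # flood fill one component
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and label_img[ny][nx] == label):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if len(comp) < threshold:
                small.append((label, comp))
    return small

atlas = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 1, 2, 2],
         [1, 1, 2, 2]]  # the lone 3 is a disconnected stray voxel
strays = small_components(atlas, 3)
```

A correction tool would then reassign each flagged component, for example to the majority label of its neighbours, which is the kind of targeted fix that shrinks the unidentified-voxel count.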

  1. The evolution of the complex sensory and motor systems of the human brain

    PubMed Central

    Kaas, Jon H.

    2008-01-01

    Inferences about how the complex sensory and motor systems of the human brain evolved are based on the results of comparative studies of brain organization across a range of mammalian species, and evidence from the endocasts of fossil skulls of key extinct species. The endocasts of the skulls of early mammals indicate that they had small brains with little neocortex. Evidence from comparative studies of cortical organization from small-brained mammals of the six major branches of mammalian evolution supports the conclusion that the small neocortex of early mammals was divided into roughly 20–25 cortical areas, including primary and secondary sensory fields. In early primates, vision was the dominant sense, and cortical areas associated with vision in temporal and occipital cortex underwent a significant expansion. Comparative studies indicate that early primates had 10 or more visual areas, and somatosensory areas with expanded representations of the forepaw. Posterior parietal cortex was also expanded, with a caudal half dominated by visual inputs, and a rostral half dominated by somatosensory inputs with outputs to an array of seven or more motor and visuomotor areas of the frontal lobe. Somatosensory areas and posterior parietal cortex became further differentiated in early anthropoid primates. As larger brains evolved in early apes and in our hominin ancestors, the number of cortical areas increased to reach an estimated 200 or so in present day humans, and hemispheric specializations emerged. The large human brain grew primarily by increasing neuron number rather than increasing average neuron size. PMID:18331903

  2. Social structure, contraceptive availability, and Black fertility decline in Mmabatho / Mafikeng in the Northwest region.

    PubMed

    Amoateng, A Y

    1994-10-01

    Data were obtained from a survey of clients of family planning clinics in the city of Mmabatho/Mafikeng, Bophuthatswana, which was undertaken in 1992-93. 500 Black female clients were interviewed at four government and two family planning clinics between June 1992 and February 1993 regarding the practice of family planning and the formation and dissolution of families. There was a clear tendency for women to have small families, especially urban residents. They averaged 1.5 children. In addition, there was a drastic intergenerational fertility decline of 72.85% compared with their mothers. There was a positive association between age and fertility: women aged 34-52 had a mean of 3.85 children ever born, while the mean number of children was 0.50 for women aged 14-23. Education was negatively associated with fertility even when socioeconomic factors were controlled for, as women with more than middle school education had fewer children than their counterparts with education up to middle school. Women earning over R1000/month had an average of 0.75 fewer children than women who earned below R1000/month. Single, never married women had the fewest number of children ever born, followed by those who were cohabiting. Women who were widowed had the highest mean number of children ever born (3.50), followed by women currently in their first marriages (2.83) and those who were remarried (2.75). Women adhering to traditional African religions had the highest number of children ever born (2.20), followed by believers of the Zionist, Apostolic, and other Pentecostal churches. The knowledge of contraception was negatively associated with the number of children ever born. The injection (99.6%), the pill (95.4%), and the condom (90.1%) were the three most commonly known methods, while vaginal methods (20.6%) and the rhythm method (24.4%) were the least known. There is clear evidence that in African societies transformations are taking place in the family.

  3. [Effects of forest gap size on the architecture of Quercus variabilis seedlings on the south slope of Qinling Mountains, west China].

    PubMed

    Yu, Bi-yun; Zhang, Wen-hui; He, Ting; You, Jian-jian; Li, Gang

    2014-12-01

    A typical sampling method was used to survey the effects of forest gap size on the branch architecture, leaf characteristics, and their vertical distribution in Quercus variabilis seedlings growing in gaps of different sizes in a natural secondary Q. variabilis thinning forest on the south slope of the Qinling Mountains. The results showed that gap size significantly affected the diameter and crown area of Q. variabilis seedlings: gap size was positively correlated with diameter and negatively correlated with crown area, while it had no significant effect on seedling height, crown length, or crown ratio. The overall bifurcation ratio, stepwise bifurcation ratio, and ratio of branch diameters ranked as large gap > middle gap > small gap > understory. The first-order branches under gaps of different sizes were mainly concentrated in the middle and upper parts of the trunk; first-order branches with larger diameters were mainly distributed in the lower part of the trunk, and the angle of the first-order branches first increased and then declined with increasing seedling height. With increasing gap size, the leaf length, leaf width, and average leaf area of seedlings gradually declined, while the average number of leaves per plant and the relative total leaf number increased; the leaf length-width ratio remained stable, the relative leaf number was mainly distributed in the middle and upper parts of the trunk, and the change in leaf area index was consistent with that of the relative total leaf number. The diameters of middle-gap and large-gap seedlings did not differ significantly, though middle-gap seedlings tended to have larger diameters, suggesting that middle gaps would benefit seedling regeneration and high-quality timber cultivation. To promote the regeneration of Q. variabilis seedlings and to cultivate high-quality timber, appropriate thinning should be conducted to increase the number of middle-sized gaps in the management of Q. variabilis forest.

  4. Isozyme variation and linkage in six conifer species

    Treesearch

    M. Thompson Conkle

    1981-01-01

    Isozymes of female gametophyte tissue were analyzed for allelic variation in knobcone, lodgepole, loblolly, Jeffrey, and sugar pines and in Douglas-fir. Linkage was studied in the five pines. The average number of alleles and average heterozygosity per enzyme locus were estimated. Knobcone pine ranked lowest among the six species in number of alleles and average...
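The two statistics named here have standard estimators: the average number of alleles per locus, and the expected heterozygosity He = 1 - Σ p_i² at each locus, where p_i is the frequency of allele i. A small sketch with hypothetical allele counts (not Conkle's data):

```python
def expected_heterozygosity(allele_counts):
    """Expected heterozygosity at one locus: He = 1 - sum(p_i^2)."""
    total = sum(allele_counts)
    return 1.0 - sum((c / total) ** 2 for c in allele_counts)

def summarize(loci):
    """Average allele number and average He across a list of loci,
    each given as a list of per-allele counts."""
    n_alleles = sum(len(locus) for locus in loci) / len(loci)
    avg_he = sum(expected_heterozygosity(locus) for locus in loci) / len(loci)
    return n_alleles, avg_he

# Hypothetical allele counts observed at three enzyme loci.
loci = [[60, 40], [90, 8, 2], [100]]
a, he = summarize(loci)
```

A monomorphic locus (one allele) contributes zero heterozygosity, which is why species with many such loci, like the knobcone pine ranking described above, score low on both measures.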

  5. Small river plumes off the northeastern coast of the Black Sea under average climatic and flooding discharge conditions

    NASA Astrophysics Data System (ADS)

    Osadchiev, Alexander; Korshenko, Evgeniya

    2017-06-01

    This study focuses on the impact of discharges of small rivers on the delivery and fate of fluvial water and suspended matter in the northeastern part of the Black Sea under different local precipitation conditions. Several dozen mountainous rivers flow into the sea in the study region, and most of them, except for several of the largest, have little annual runoff and affect adjacent coastal waters to a limited extent under average climatic conditions. However, the discharges of these small rivers respond quickly to precipitation events and can significantly increase during and shortly after heavy rains, which are frequent in the area. The delivery and fate of fluvial water and terrigenous sediments in the study region under average climatic and rain-induced flooding conditions were explored and compared using in situ data, satellite imagery, and numerical modeling. It was shown that the point-source spread of continental discharge, dominated by several large rivers under average climatic conditions, can change to line-source discharge from numerous small rivers situated along the coast in response to heavy rains. The intense line-source runoff of water and suspended sediments forms a geostrophic alongshore current of turbid and freshened water, which induces intense transport of suspended and dissolved constituents discharged with river waters in a northwestern direction. Compared with average climatic discharge conditions, this process significantly influences water quality and causes an active sediment load over large segments of the narrow shelf in the northeastern part of the Black Sea.

  6. Using the NEMA NU 4 PET image quality phantom in multipinhole small-animal SPECT.

    PubMed

    Harteveld, Anita A; Meeuwis, Antoi P W; Disselhorst, Jonathan A; Slump, Cornelis H; Oyen, Wim J G; Boerman, Otto C; Visser, Eric P

    2011-10-01

    Several commercial small-animal SPECT scanners using multipinhole collimation are presently available. However, generally accepted standards to characterize the performance of these scanners do not exist. Whereas for small-animal PET the National Electrical Manufacturers Association (NEMA) NU 4 standards were defined in 2008, such standards are still lacking for small-animal SPECT. In this study, the image quality parameters associated with the NEMA NU 4 image quality phantom were determined for a small-animal multipinhole SPECT scanner. Multiple whole-body scans of the NEMA NU 4 image quality phantom of 1-h duration were performed in a U-SPECT-II scanner using (99m)Tc with activities ranging between 8.4 and 78.2 MBq. The collimator contained 75 pinholes of 1.0-mm diameter and had a bore diameter of 98 mm. Image quality parameters were determined as a function of average phantom activity, number of iterations, postreconstruction spatial filter, and scatter correction. In addition, a mouse was injected with (99m)Tc-hydroxymethylene diphosphonate and was euthanized 6.5 h after injection. Multiple whole-body scans of this mouse of 1-h duration were acquired for activities ranging between 3.29 and 52.7 MBq. An increase in the number of iterations was accompanied by an increase in the recovery coefficients for the small rods (RC(rod)), an increase in the noise in the uniform phantom region, and a decrease in spillover ratios for the cold-air- and water-filled scatter compartments (SOR(air) and SOR(wat)). Application of spatial filtering reduced image noise but lowered RC(rod). Filtering did not influence SOR(air) and SOR(wat). Scatter correction reduced SOR(air) and SOR(wat). The effect of total phantom activity was primarily seen in a reduction of image noise with increasing activity. RC(rod), SOR(air), and SOR(wat) were more or less constant as a function of phantom activity. The relation between acquisition and reconstruction settings and image quality was confirmed in the (99m)Tc-hydroxymethylene diphosphonate mouse scans. Although developed for small-animal PET, the NEMA NU 4 image quality phantom was found to be useful for small-animal SPECT as well, allowing for objective determination of image quality parameters and showing the trade-offs between several of these parameters on variation of acquisition and reconstruction settings.

  7. Immediate Small Number Perception: Evidence from a New Numerical Carry-Over Procedure

    ERIC Educational Resources Information Center

    Demeyere, Nele; Humphreys, Glyn W.

    2012-01-01

    Evidence is presented for the immediate apprehension of exact small quantities. Participants performed a quantification task (are the number of items greater or smaller than?), and carry-over effects were examined between numbers requiring the same response. Carry-over effects between small numbers were strongly affected by repeats of pattern and…

  8. Growth, chamber building rate and reproduction time of Palaeonummulites venosus under natural conditions.

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-04-01

    Investigations of Palaeonummulites venosus using the natural laboratory approach to determine chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e. frequency divided by sediment weight) based on chamber building rate and diameter increase rate both indicated continuous reproduction through the year with two peaks, the stronger in May/June, determined as the beginning of the summer generation (generation 1), and the weaker in November, determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, appears to be roughly one year according to both estimations, based on the chamber building rate and the diameter increase rate.

  9. Clinical services provided by staff pharmacists in a community hospital.

    PubMed

    Garrelts, J C; Smith, D F

    1990-09-01

    A program for developing staff pharmacists' clinical skills and documenting pharmacists' clinical interventions in a large community teaching hospital is described. A coordinator hired in 1984 to develop clinical pharmacy services began a didactic and experiential program for baccalaureate-level staff pharmacists. Fourteen educational modules are supplemented by journal and textbook articles and small-group discussions of clinical cases, and the clinical coordinator provides individual training on the patient-care units for each pharmacist. Monitoring of clinical pharmacy services began in June 1987; each intervention provided by a pharmacist is recorded on a specially designed form. A target-drug program is used to document cost avoidance achieved through clinical services. Information collected through these monitoring activities is used to educate the pharmacy staff, shared with the pharmacy and therapeutics committee, and used to monitor prescribing patterns of individual physicians. The data are used in the hospital's productivity-monitoring system. All pharmacists who were on staff in 1984 have completed the educational modules, and all new employees are in the process of completing them. Since monitoring began, the number of clinical interventions has averaged 2098 per month. Cost avoidance has averaged $9306 per month. Over a five-year period, the development of staff pharmacists' clinical services raised the level of professional practice, produced substantial cost avoidance, and increased the number of pharmacist interventions in medication use.

  10. Laser-induced speckle scatter patterns in Bacillus colonies

    PubMed Central

    Kim, Huisung; Singh, Atul K.; Bhunia, Arun K.; Bae, Euiwon

    2014-01-01

    Label-free bacterial colony phenotyping technology called BARDOT (Bacterial Rapid Detection using Optical scattering Technology) has provided successful classification of several different bacteria at the genus, species, and serovar level. Recent experiments with colonies of Bacillus species revealed strikingly different characteristics of elastic light scatter (ELS) patterns, which were comprised of random speckles, in contrast to other bacteria, whose patterns are dominated by concentric rings and spokes. Since this laser-based optical sensor interrogates the whole volume of the colony, 3-D information of micro- and macro-structures is encoded in the far-field scatter patterns. Here, we present a theoretical model explaining the underlying mechanism of speckle formation by colonies of Bacillus species. Except for Bacillus polymyxa, all Bacillus spp. produced random bright spots on the imaging plane, which presumably depend on the cellular and molecular organization and content within the colony. Our scatter model-based analysis revealed that colony spread resulting in variable surface roughness can modify the wavefront of the scatter field. As the center diameter of the Bacillus spp. colony grew from 500 to 900 μm, the average speckle area decreased two-fold and the number of small speckles increased seven-fold. In conclusion, as a Bacillus colony grows, the average speckle size in the scatter pattern decreases and the number of smaller speckles increases due to the swarming growth characteristics of bacteria within the colony. PMID:25352840

  11. Magnetothermal Convection of Water with the Presence or Absence of a Magnetic Force Acting on the Susceptibility Gradient.

    PubMed

    Maki, Syou

    2016-01-01

    Heat transfer of magnetothermal convection with the presence or absence of the magnetic force acting on the susceptibility gradient (fsc) was examined by three-dimensional numerical computations. Rayleigh-Benard thermal convection of water enclosed in a shallow cylindrical vessel (diameter-to-height ratio = 6.0) was adopted as the model, with a Prandtl number of 6.0 and a Rayleigh number of 7000. The momentum equations of convection were nondimensionalized, incorporating both the term of fsc and the term of the magnetic force acting on the magnetic field gradient (fb). All the computations resulted in axisymmetric steady rolls. The values of the averaged Nu, the averaged velocity components U, V, and W, and the isothermal distributions and flow patterns were almost completely the same, regardless of the presence or absence of the term of fsc. As a result, we found that the effect of fsc was extremely small, although much previous research emphasized this effect with paramagnetic solutions under an unsteady state. The magnitude of fsc depends not only on magnetic conditions (magnitudes of magnetic susceptibility and magnetic flux density), but also on the thermal properties of the solution (thermal conductivity, thermal diffusivity, and viscosity). Therefore the effect of fb becomes dominant in the magnetothermal convection. Active control over the density gradient with temperature will be required to advance heat transfer with the effect of fsc.

  12. Growth and Flowering Responses of Cut Chrysanthemum Grown under Restricted Root Volume to Irrigation Frequency

    PubMed Central

    Taweesak, Viyachai; Lee Abdullah, Thohirah; Hassan, Siti Aishah; Kamarulzaman, Nitty Hirawaty; Wan Yusoff, Wan Abdullah

    2014-01-01

    Influences of irrigation frequency on the growth and flowering of chrysanthemum grown under restricted root volume were tested. Chrysanthemum cuttings (Chrysanthemum morifolium “Reagan White”) were grown in seedling trays containing coconut peat in volumes of 73 and 140 cm3. Plants were irrigated by drip irrigation at frequencies of 4 (266 mL), 6 (400 mL), and 8 (533 mL) times/day to observe their growth and flowering performance. There was an interaction between irrigation frequency and substrate volume on plant height of chrysanthemum. Plants grown in 140 cm3 substrate and irrigated 6 times/day produced the tallest plants at 109.25 cm. Plants irrigated 6 and 8 times/day had a significantly higher level of phosphorus in their leaves than plants irrigated 4 times/day. The total leaf area, number of internodes, leaf length, and leaf width of chrysanthemums grown in 140 cm3 substrate were significantly higher than those grown in 73 cm3 substrate. The numbers of flowers were affected by both irrigation frequency and substrate volume. Chrysanthemums irrigated 8 times/day had an average of 19.56 flowers while those irrigated 4 times/day had an average of 16.63 flowers. Increasing irrigation frequency can improve the growth and flowering of chrysanthemums in small substrate volumes. PMID:25478586

  13. Growth, chamber building rate and reproduction time of Palaeonummulites venosus (Foraminifera) under natural conditions

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-12-01

    We investigated the symbiont-bearing benthic foraminifer Palaeonummulites venosus to determine the chamber building rate (CBR), test diameter increase rate (DIR), reproduction time and longevity using the 'natural laboratory' approach. This is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. Test measurements were taken using MicroCT. The shift of the mean and standard deviation of component parameters during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged CBR and DIR under natural conditions. The individual dates of birth were estimated using the inverse averaged CBR and the inverse DIR fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on both CBR and DIR revealed continuous reproduction throughout the year with two peaks, a stronger one in June determined as the onset of the summer generation (generation 1) and a weaker one in November determined as the onset of the winter generation (generation 2). This reproduction scheme explains the presence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, is approximately 1.5 yr, an estimation obtained by using both CBR and DIR.
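The Michaelis-Menten growth description and its inversion to an individual age (and hence birth date) can be sketched as follows. The saturating functional form follows the abstract, but the parameter values (asymptotic chamber number, half-saturation age) are illustrative assumptions, not values from the study.

```python
def michaelis_menten(t, n_max, k):
    """Chamber number as a saturating function of age t (days).
    n_max (asymptotic chamber number) and k (age at half of n_max)
    are hypothetical fit parameters, not values from the study."""
    return n_max * t / (k + t)

def age_from_chambers(n, n_max, k):
    """Inverse of the growth curve: estimate an individual's age from its
    chamber count n, which fixes its birth date relative to the sampling date."""
    return k * n / (n_max - n)

# Round trip with illustrative parameters n_max = 60 chambers, k = 120 days:
age = age_from_chambers(30.0, n_max=60.0, k=120.0)   # half of n_max -> 120.0 days
```

Subtracting the estimated age from each specimen's sampling date yields the birth-date distributions from which the reproduction peaks are read off.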

  14. Relationship between road traffic accidents and conflicts recorded by drive recorders.

    PubMed

    Lu, Guangquan; Cheng, Bo; Kuzumaki, Seigo; Mei, Bingsong

    2011-08-01

    Road traffic conflicts can be used to estimate the probability of accident occurrence, assess road safety, or evaluate road safety programs if the relationship between road traffic accidents and conflicts is known. To this end, we propose a model for the relationship between road traffic accidents and conflicts recorded by drive recorders (DRs). DRs were installed in 50 cars in Beijing to collect records of traffic conflicts. Data containing 1366 conflicts were collected in 193 days. The hourly distributions of conflicts and accidents were used to model the relationship between accidents and conflicts. To eliminate time series and base number effects, we defined and used 2 parameters: average annual number of accidents per 10,000 vehicles per hour and average number of conflicts per 10,000 vehicles per hour. A model was developed to describe the relationship between the two parameters. If A(i) = average annual number of accidents per 10,000 vehicles per hour at hour i, and E(i) = average number of conflicts per 10,000 vehicles per hour at hour i, the relationship can be expressed as [Formula in text] (α>0, β>0). The average number of traffic accidents increases as the number of conflicts rises, but the rate of increase decelerates as the number of conflicts increases further. The proposed model can describe the relationship between road traffic accidents and conflicts in a simple manner. According to our analysis, the model fits the present data.
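The formula itself is elided in the abstract above ("[Formula in text]"). Purely as a hypothetical illustration of the qualitative behaviour described, a curve that increases with the number of conflicts while its rate of increase decelerates, one saturating form with two positive parameters can be sketched; this specific functional form is an assumption, not the authors' model.

```python
import numpy as np

def accidents_per_hour(e, alpha, beta):
    """Hypothetical saturating relation between conflicts e and accidents:
    increasing in e, with a decelerating rate of increase (alpha, beta > 0).
    An illustrative stand-in, not the model from the paper."""
    return alpha * (1.0 - np.exp(-beta * e))

e = np.linspace(0.0, 50.0, 11)           # conflicts per 10,000 vehicles per hour
a = accidents_per_hour(e, alpha=5.0, beta=0.1)
# First differences of a are positive (monotone increase) and shrinking
# (concavity), matching the behaviour described in the abstract.
```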

  15. Food web structure and the evolution of ecological communities

    NASA Astrophysics Data System (ADS)

    Quince, Christopher; Higgs, Paul G.; McKane, Alan J.

    Simulations of the coevolution of many interacting species are performed using the Webworld model. The model has a realistic set of predator-prey equations that describe the population dynamics of the species for any structure of the food web. The equations account for competition between species for the same resources, and for the diet choice of predators between alternative prey according to an evolutionarily stable strategy. The set of species present undergoes long-term evolution due to speciation and extinction events. We summarize results obtained on the macro-evolutionary dynamics of speciations and extinctions, and on the statistical properties of the food webs that are generated by the model. Simulations begin from small numbers of species and build up to larger webs with relatively constant species number on average. The rates of origination and extinction of species are relatively high, but remain roughly balanced throughout the simulations. When a 'parent' species undergoes speciation, the 'child' species usually adds to the same trophic level as the parent. The chance of the child species surviving is significantly higher if the parent is on the second or third trophic level than if it is on the first level, most likely due to a wider choice of possible prey for species on higher levels. Addition of a new species sometimes causes extinction of existing species. The parent species has a high probability of extinction because it competes strongly with the new species. Non-parental competitors of the new species also have a significantly higher extinction probability than average, as do prey of the new species. Predators of the new species are less likely than average to become extinct.

  16. Unemployment rate as predictor of rate of psychiatric admission.

    PubMed

    Kammerling, R M; O'Connor, S

    1993-12-11

    To examine the relation between rates of psychiatric admissions and both the rate of unemployment and the underprivileged area score within small areas. Calculation of correlation coefficients and explanatory power by using data on psychiatric admissions from April 1990 to March 1992. Crude and age standardised rates were used based on all admissions and also on the number of people admitted regardless of the number of times each person was admitted. Sectors with an average population of 45,000 consisting of aggregations of neighbouring wards in Bristol and District Health Authority and electoral wards with an average population of 9400 in the city of Bristol. Unemployment rates explained 93% of the variation in the crude person based admission rates standardised for age for those aged under 65 in the sectors. Person based rates correlated more strongly with unemployment than did rates based on all separate admissions. Inclusion of people aged 65 and over weakened the relation. Within electoral wards unemployment rates explained only about 50-60% of the variation but were still more powerful than the underprivileged area score. There was a significant negative correlation between average length of stay and readmission rates--that is, sectors with short lengths of stay were more likely to have patients readmitted (r = -0.64, 95% confidence interval -0.25 to -0.85). Unemployment rates are an extremely powerful indicator of the rates of serious mental illness that will need treatment in hospital in those aged under 65. This should be considered in the process of resource allocation, particularly to fundholders in general practice, or people with serious mental illness living in areas of high unemployment could be considerably disadvantaged.
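The "variation explained" figures quoted throughout this abstract (93%, 50-60%) relate to the correlation coefficient simply as r squared; a minimal sketch with fabricated illustrative data:

```python
import numpy as np

def explained_variation(x, y):
    """Pearson correlation r and the share of variation 'explained' (r**2)."""
    r = float(np.corrcoef(x, y)[0, 1])
    return r, r * r

# Fabricated illustrative data: an exactly linear negative relation, loosely
# analogous to length of stay vs. readmission rate in the abstract.
stay = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
readmit = np.array([30.0, 26.0, 22.0, 18.0, 14.0])
r, r2 = explained_variation(stay, readmit)   # exact linear fit: r = -1.0, r2 = 1.0
```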

  17. [Poisonings with paracetamol, salicylates and dextromethorphan – problem evaluation based on data from Toxicological Laboratory and Poison Information Center in Krakow in 2010-2015].

    PubMed

    Gomółka, Ewa; Hydzik, Piotr; Szkolnicka, Beata

    The aim of this paper was to study the frequency of laboratory determinations and toxicological information requests related to the over-the-counter (OTC) drugs paracetamol (acetaminophen), salicylates and dextromethorphan. The research was based on data from the Toxicological Laboratory and Poison Information Center UJ CM in Krakow for the years 2010-2015. Paracetamol was determined on average 102 times a year; more than 50% (57 cases) were positive, confirming poisoning. The fewest paracetamol poisonings were noted in 2011 (35 cases), the most in 2015 (98 cases). Over the same period there were on average 40 salicylate measurements a year, of which fewer than 50% (15 cases) were positive. Dextromethorphan was confirmed on average in 31 patients a year, with a decrease in intoxications noted in 2013-2015. Paracetamol and dextromethorphan were most often the cause of poisoning in patients 13-18 years old, salicylates in those over 30 years. Among small children there were only a few paracetamol poisonings. Toxicological information data related to paracetamol, salicylates and dextromethorphan were similar to the laboratory data. The mean yearly numbers of drug poisoning information requests were 90 (paracetamol), 14 (salicylates) and 30 (dextromethorphan); the differences lay in the age distribution of patients. Acute OTC poisonings involved mainly paracetamol, young patients (13-18 years) and young adults (19-29 years). Salicylate poisoning information requests related mainly to adult patients (>30 years), while dextromethorphan was abused mainly by young patients (13-18 years). No salicylate or dextromethorphan poisonings were observed in children, but there were toxicological information requests about paracetamol and salicylate poisoning and overdose in children aged 1-6 years.


  18. Neurology training in sub-Saharan Africa: A survey of people in training from 19 countries.

    PubMed

    Mateen, Farrah J; Clark, Sarah J; Borzello, Mia; Kabore, Jean; Seidi, Osheik

    2016-06-01

    To provide a comprehensive understanding of neurology training from the sub-Saharan African perspective. A 40-question survey was distributed to attendees of the 7th annual sub-Saharan African neurology teaching course in Khartoum, Sudan (2015). Themes included the student body, faculty, curriculum, assessment and examinations, technology, and work hours and compensation. Of 19 responding countries, 10 had no formal neurology training programs; Burkina Faso, Cameroon, Republic of the Congo, and Mozambique had an adult neurology program; Ethiopia, Madagascar, Nigeria, Senegal, and South Africa had adult and pediatric neurology programs (training duration range = 3-6 years). There was a median of 2.5 full-time neurologists on the teaching faculty at the respondents' training institutions (neurologists on-faculty:in-country ratio = 0.48), with the lowest ratios in Sudan and Nigeria. Neurology was perceived to be a competitive specialty for entrance in 57% of countries, with 78% of respondents reporting a requisite entrance examination. Ninety-five percent had access to a personal smartphone, 62% used the Internet more than occasionally, and 60% had access to online neurology journals. The average number of weekly work hours was 51 (range = 40-75), and average monthly salary among those earning income was 1,191 USD (range = 285-3,560). Twenty percent of respondents reported paying for training. The most common barriers to neurology postgraduate education were few training programs and lack of training in neurodiagnostic tests. Among 17 reporting countries, there is an estimated average of 0.6 neurologists per million people. Neurology training programs in sub-Saharan Africa are relatively limited in number and have several unmet needs including a small cadre of faculty and an opportunity to standardize curricula and financing of programs. Ann Neurol 2016;79:871-881. © 2016 American Neurological Association.

  19. The effects of sweep numbers per average and protocol type on the accuracy of the p300-based concealed information test.

    PubMed

    Dietrich, Ariana B; Hu, Xiaoqing; Rosenfeld, J Peter

    2014-03-01

    In the first of two experiments, we compared the accuracy of the P300 concealed information test protocol as a function of the number of trials experienced by subjects and the ERP averages analyzed by investigators. Contrary to Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), we found no evidence that 100-trial-based averages are more accurate than 66- or 33-trial-based averages (all numbers led to accuracies of 84-94%). There was actually a trend favoring the lowest trial numbers. The second study compared the numbers of irrelevant stimuli recalled and recognized in the 3-stimulus protocol versus the complex trial protocol (Rosenfeld in Memory detection: theory and application of the concealed information test, Cambridge University Press, New York, pp 63-89, 2011). Again, in contrast to expectations from Farwell et al. (Cogn Neurodyn 6(2):115-154, 2012), there were no differences between protocols, although more irrelevant stimuli were recognized than recalled, and irrelevant 4-digit number group stimuli were neither recalled nor recognized as well as irrelevant city name stimuli. We therefore conclude that stimulus processing in the P300-based complex trial protocol, with no more than 33 sweep averages, is adequate to allow accurate detection of concealed information.

  20. Copy Counts

    ERIC Educational Resources Information Center

    Beaumont, Lee R.

    1970-01-01

    The level of difficulty of straight copy, which is used to measure typewriting speed, is influenced by syllable intensity (the average number of syllables per word), stroke intensity (average number of strokes per word), and high-frequency words. (CH)
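The two intensity measures named here can be computed mechanically from a copy text. The vowel-group syllable counter below is a naive heuristic of my own for illustration, not the scoring method used in typewriting research, and stroke intensity is approximated as characters per word.

```python
import re

def copy_counts(text):
    """Syllable intensity (average syllables per word) and stroke intensity
    (average keystrokes per word, here approximated as characters per word).
    The syllable count is a naive vowel-group heuristic: an assumption,
    not the measure from the article."""
    words = text.split()
    stroke_intensity = sum(len(w) for w in words) / len(words)
    syllable_intensity = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    ) / len(words)
    return syllable_intensity, stroke_intensity

si, sti = copy_counts("the cat sat on the mat")   # -> (1.0, 17/6)
```

Low values on both measures, as in the sample sentence, correspond to easier straight copy.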

  1. Compositional patterns in the genomes of unicellular eukaryotes.

    PubMed

    Costantini, Maria; Alvarez-Valin, Fernando; Costantini, Susan; Cammarano, Rosalia; Bernardi, Giorgio

    2013-11-05

    The genomes of multicellular eukaryotes are compartmentalized in mosaics of isochores, large and fairly homogeneous stretches of DNA that belong to a small number of families characterized by different average GC levels, different gene concentrations (which increase with GC), different chromatin structures, different replication timings in the cell cycle, and other properties. A question raised by these basic results concerns how far back in evolution the compartmentalized organization of eukaryotic genomes arose. In the present work we approached this problem by studying the compositional organization of the genomes of those unicellular eukaryotes for which full sequences are available, the sample used being representative. The average GC levels of the genomes of unicellular eukaryotes cover an extremely wide range (19%-60% GC), and although the compositional patterns of individual genomes are extremely different, all genomes tested show a compositional compartmentalization. The average GC range of the genomes of unicellular eukaryotes is very broad (as broad as that of prokaryotes), and individual compositional patterns range from very narrow to very complex. Neither feature is surprising for organisms that are very far from each other both in terms of phylogenetic distance and of environmental life conditions. Most importantly, all genomes tested, a representative sample of all supergroups of unicellular eukaryotes, are compositionally compartmentalized, a major difference from prokaryotes.
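The compartmentalization described above is typically detected by scanning GC level along the sequence. A toy sketch follows; the sequence and window size are illustrative only, whereas real isochore mapping works on homogeneous stretches on the order of hundreds of kilobases.

```python
def gc_percent(seq):
    """GC level of a DNA string, in percent."""
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def windowed_gc(seq, window):
    """GC% in consecutive non-overlapping windows: a toy stand-in for
    isochore mapping, which in reality uses >= 100 kb stretches."""
    return [gc_percent(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

# A mosaic of a GC-rich and a GC-poor stretch shows up as two plateaus:
mosaic = "GCGC" * 10 + "ATAT" * 10
profile = windowed_gc(mosaic, window=40)   # -> [100.0, 0.0]
```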

  2. Research on trend of warm-humid climate in Central Asia

    NASA Astrophysics Data System (ADS)

    Gong, Zhi; Peng, Dailiang; Wen, Jingyi; Cai, Zhanqing; Wang, Tiantian; Hu, Yuekai; Ma, Yaxin; Xu, Junfeng

    2017-07-01

    Central Asia is a typical arid area that is sensitive and vulnerable to climate change; at the same time, it is the core district of the Silk Road Economic Belt, so warm-humid climate change will affect the production and economic development of neighboring countries. Average annual precipitation, average annual temperature and evapotranspiration are important indexes for measuring climate change. In this paper, the annual precipitation, annual average temperature and evapotranspiration of every pixel in Central Asia are analyzed using long-time-series remote sensing data to determine the trend of warm and humid conditions. Finally, a model is used to analyze the distribution of warm-dry, warm-wet, cold-dry and cold-wet trends in Central Asia and the Xinjiang area. The results showed that most regions of Central Asia exhibited warm-humid or warm-dry trends, while only a small number of regions showed cold-dry trends. This work is of great significance for studying regional climate change, safeguarding ecological safety and improving the capacity to cope with climate change in the region, and it provides a scientific basis for formulating regional climate change programs.

  3. Parcellations and Hemispheric Asymmetries of Human Cerebral Cortex Analyzed on Surface-Based Atlases

    PubMed Central

    Glasser, Matthew F.; Dierker, Donna L.; Harwell, John; Coalson, Timothy

    2012-01-01

    We report on surface-based analyses that enhance our understanding of human cortical organization, including its convolutions and its parcellation into many distinct areas. The surface area of human neocortex averages 973 cm2 per hemisphere, based on cortical midthickness surfaces of 2 cohorts of subjects. We implemented a method to register individual subjects to a hybrid version of the FreeSurfer “fsaverage” atlas whose left and right hemispheres are in precise geographic correspondence. Cortical folding patterns in the resultant population-average “fs_LR” midthickness surfaces are remarkably similar in the left and right hemispheres, even in regions showing significant asymmetry in 3D position. Both hemispheres are equal in average surface area, but hotspots of surface area asymmetry are present in the Sylvian Fissure and elsewhere, together with a broad pattern of asymmetries that are significant though small in magnitude. Multiple cortical parcellation schemes registered to the human atlas provide valuable reference data sets for comparisons with other studies. Identified cortical areas vary in size by more than 2 orders of magnitude. The total number of human neocortical areas is estimated to be ∼150 to 200 areas per hemisphere, which is modestly larger than a recent estimate for the macaque. PMID:22047963

  4. Respiratory hospitalizations in association with fine PM and its ...

    EPA Pesticide Factsheets

    Despite observed geographic and temporal variation in particulate matter (PM)-related health morbidities, only a small number of epidemiologic studies have evaluated the relation between PM2.5 chemical constituents and respiratory disease. Most assessments are limited by inadequate spatial and temporal resolution of ambient PM measurements and/or by their approaches to examine the role of specific PM components on health outcomes. In a case-crossover analysis using daily average ambient PM2.5 total mass and species estimates derived from the Community Multiscale Air Quality (CMAQ) model and available observations, we examined the association between the chemical components of PM (including elemental and organic carbon, sulfate, nitrate, ammonium, and other remaining) and respiratory hospitalizations in New York State. We evaluated relationships between levels (low, medium, high) of PM constituent mass fractions, and assessed modification of the PM2.5–hospitalization association via models stratified by mass fractions of both primary and secondary PM components. In our results, average daily PM2.5 concentrations in New York State were generally lower than the 24-hr average National Ambient Air Quality Standard (NAAQS). Year-round analyses showed statistically significant positive associations between respiratory hospitalizations and PM2.5 total mass, sulfate, nitrate, and ammonium concentrations at multiple exposure lags (0.5–2.0% per interquartile range [IQR

  5. A digital boxcar integrator for IMS spectra

    NASA Technical Reports Server (NTRS)

    Cohen, Martin J.; Stimac, Robert M.; Wernlund, Roger F.; Parker, Donald C.

    1995-01-01

    When trying to detect or quantify a signal at or near the limit of detectability, it is invariably embedded in the noise. This statement is true for nearly all detectors of physical phenomena, and the limit of detectability, hopefully, occurs at very low signal-to-noise levels. This is particularly true of IMS (Ion Mobility Spectrometer) spectra, due to the low vapor pressure of several chemical compounds of great interest and the small currents associated with the ionic detection process. Gated Integrators and Boxcar Integrators or Averagers are designed to recover fast, repetitive analog signals. In a typical application, a time 'Gate' or 'Window' is generated, characterized by a set delay from a trigger or gate pulse and a certain width. A Gated Integrator amplifies and integrates the signal that is present during the time the gate is open, ignoring noise and interference that may be present at other times. Boxcar Integration refers to the practice of averaging the output of the Gated Integrator over many sweeps of the detector. Since any signal present during the gate will add linearly, while noise will add in a 'random walk' fashion as the square root of the number of sweeps, averaging N sweeps will improve the signal-to-noise ratio by a factor of the square root of N.
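The square-root-of-N argument can be checked numerically; the sweep count, noise level and gate position below are arbitrary illustrative choices, not parameters of the instrument described.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sweeps, n_points = 400, 256

# Repetitive signal: a small peak inside the "gate", buried in noise.
signal = np.zeros(n_points)
signal[100:110] = 1.0
sweeps = signal + rng.normal(0.0, 5.0, size=(n_sweeps, n_points))

# Boxcar averaging over N sweeps: the signal adds linearly while noise adds
# as a random walk, so SNR improves by roughly sqrt(N) = 20 here.
avg = sweeps.mean(axis=0)
noise_before = sweeps[0, :100].std()      # noise in a signal-free region
noise_after = avg[:100].std()
improvement = noise_before / noise_after  # ~ sqrt(400) = 20
```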

  6. Lagrangian-averaged model for magnetohydrodynamic turbulence and the absence of bottlenecks.

    PubMed

    Pietarila Graham, Jonathan; Mininni, Pablo D; Pouquet, Annick

    2009-07-01

    We demonstrate that, for the case of quasiequipartition between the velocity and the magnetic field, the Lagrangian-averaged magnetohydrodynamics (LAMHD) alpha model reproduces well both the large-scale and the small-scale properties of turbulent flows; in particular, it displays no increased (superfilter) bottleneck effect with its ensuing enhanced energy spectrum at the onset of the subfilter scales. This is in contrast to the case of the neutral fluid, in which the Lagrangian-averaged Navier-Stokes alpha model is somewhat limited in its applications because of the formation of spatial regions with no internal degrees of freedom and subsequent contamination of superfilter-scale spectral properties. We argue that, as the Lorentz force breaks the conservation of circulation and enables spectrally nonlocal energy transfer (associated with Alfvén waves), it is responsible for the absence of a viscous bottleneck in magnetohydrodynamics (MHD), as compared to the fluid case. As LAMHD preserves Alfvén waves and the circulation properties of MHD, there is also no (superfilter) bottleneck found in LAMHD, making this method capable of large reductions in required numerical degrees of freedom; specifically, we find a reduction factor of approximately 200 when compared to a direct numerical simulation on a large grid of 1536³ points at the same Reynolds number.

  7. Electromechanical, acoustical and thermodynamical characterization of a low-frequency sonotrode-type transducer in a small sonoreactor at different excitation levels and loading conditions.

    PubMed

    Petošić, Antonio; Horvat, Marko; Režek Jambrak, Anet

    2017-11-01

    The paper reports and compares the results of the electromechanical, acoustical and thermodynamical characterization of a low-frequency sonotrode-type ultrasonic device inside a small sonoreactor, immersed in three different loading media (water, juice and milk) and excited at different levels, both below and above the cavitation threshold. The electroacoustic efficiency factor determined at system resonance through electromechanical characterization in degassed water as the reference medium is 88.7% for the device in question. This efficiency can be reduced by up to a factor of three due to the existence of a complex sound field in the reactor in linear driving conditions below the cavitation threshold. The behaviour of the system is more stable at higher excitation levels than in linear operating conditions. During acoustical characterization, acoustic pressure is spatially averaged, both below and above the cavitation threshold. The standing-wave patterns inside the sonoreactor have a stronger influence on the variation of the spatially distributed RMS pressure in linear operating conditions: a variation of ±1.7 dB was obtained there, compared with ±1.4 dB in the highly nonlinear regime. The acoustic power in the sonoreactor was estimated from the magnitude of the averaged RMS pressure and from the reverberation time of the sonoreactor as a representation of the losses. The electroacoustic efficiency factors obtained through acoustical and electromechanical characterization are in very good agreement at low excitation levels. The irradiated acoustic power estimated in nonlinear conditions differs from the dissipated acoustic power determined with the calorimetric method by several orders of magnitude. The number of negative pressure peaks that represent transient cavitation decreases over time during longer treatments of a medium with high-power ultrasound, and it decreases faster when the medium and the vessel are allowed to heat up. Copyright © 2017 Elsevier B.V. All rights reserved.
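    The calorimetric method referred to above estimates dissipated acoustic power from the initial heating rate of the insonified medium via P = m · c_p · dT/dt. A minimal sketch of that relation (all numerical values are hypothetical, not taken from the study):

```python
def calorimetric_power(mass_kg, cp_j_per_kg_k, dT_dt_k_per_s):
    """Dissipated acoustic power from the calorimetric relation
    P = m * c_p * dT/dt (initial temperature-rise rate of the medium)."""
    return mass_kg * cp_j_per_kg_k * dT_dt_k_per_s

# Hypothetical example: 0.5 kg of water (c_p ≈ 4186 J/(kg·K)) warming
# at 0.02 K/s implies roughly 42 W of dissipated acoustic power.
p_watts = calorimetric_power(0.5, 4186.0, 0.02)
```

    In practice dT/dt is taken from the early, approximately linear part of the temperature record, before losses to the vessel and surroundings become significant.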

  8. On the structural properties of small-world networks with range-limited shortcut links

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Kulkarni, Rahul V.

    2013-12-01

    We explore a new variant of small-world networks (SWNs) in which an additional parameter, r, sets the length scale over which shortcuts are uniformly distributed. When r=0 we have an ordered network, whereas r=1 corresponds to the original Watts-Strogatz SWN model. These limited-range SWNs have degree distributions and scaling properties similar to those of the original SWN model. We observe the small-world phenomenon even for r≪1, indicating that global shortcuts are not necessary for the small-world effect. For limited-range SWNs, the average path length changes non-monotonically with system size, whereas for the original SWN model it increases monotonically. We propose an expression for the average path length of limited-range SWNs based on numerical simulations and analytical approximations.
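    The limited-range construction can be sketched as follows. This is a hedged reimplementation from the abstract's description (parameter choices, shortcut-placement details and the shortcut probability p are assumptions), not the authors' code:

```python
import random
from collections import deque

def limited_range_swn(n, k, p, r):
    """Ring lattice of n nodes, each linked to its k nearest neighbours on
    each side; with probability p per node-edge, add a shortcut whose ring
    distance is drawn uniformly from [1, r*n/2] (r=1 ~ global shortcuts)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    max_range = max(1, int(r * n / 2))
    for i in range(n):
        for _ in range(k):
            if random.random() < p:
                t = (i + random.randint(1, max_range)) % n
                if t != i:
                    adj[i].add(t)
                    adj[t].add(i)
    return adj

def average_path_length(adj):
    """Mean shortest-path length over all reachable ordered pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(1)
ordered = average_path_length(limited_range_swn(200, 2, 0.0, 0.0))   # pure lattice
rewired = average_path_length(limited_range_swn(200, 2, 0.1, 1.0))   # WS-like limit
# A few long-range shortcuts sharply reduce the average path length.
```

    Sweeping r between 0 and 1 in this sketch is one way to probe how far shortcut range can shrink before the small-world effect disappears.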

  9. Why do vulnerability cycles matter in financial networks?

    NASA Astrophysics Data System (ADS)

    Silva, Thiago Christiano; Tabak, Benjamin Miranda; Guerra, Solange Maria

    2017-04-01

    We compare two widely employed models that estimate systemic risk: DebtRank and Differential DebtRank. We show that not only network cyclicality but also the average vulnerability of banks are essential concepts that contribute to widening the gap between the systemic risk estimates of the two approaches. We find that the estimates coincide whenever the network has no cycles. When the network does contain cycles, however, the average vulnerability of banks must be inspected to estimate the underestimation gap. The gap is small, regardless of the network's cyclicality, when its average vulnerability is large; in contrast, the gap follows a quadratic behavior when the average vulnerability is small or intermediate. We corroborate these results with an econometric exercise and draw guidelines from both artificial and real-world financial networks.
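    For orientation, here is a minimal sketch of the original DebtRank propagation rule (after Battiston et al., as commonly described: each bank propagates distress only once), run on a hypothetical 3-bank impact matrix. Because the example network is an acyclic chain, this is exactly the no-cycle case in which the abstract finds the two estimators agree:

```python
def debtrank(W, shocked):
    """Distress levels after DebtRank-style propagation. W[i][j] is the
    relative impact of bank i's distress on bank j (hypothetical matrix);
    'shocked' lists the initially defaulted banks."""
    n = len(W)
    h = [0.0] * n            # cumulative distress per bank, capped at 1
    s = ['U'] * n            # U: undistressed, D: distressed, I: inactive
    for i in shocked:
        h[i], s[i] = 1.0, 'D'
    while 'D' in s:
        distressed = [i for i in range(n) if s[i] == 'D']
        new_h = h[:]
        for j in range(n):
            if s[j] != 'I':
                new_h[j] = min(1.0, h[j] + sum(W[i][j] * h[i] for i in distressed))
        for i in distressed:
            s[i] = 'I'       # each bank propagates its distress only once
        for j in range(n):
            if s[j] == 'U' and new_h[j] > 0.0:
                s[j] = 'D'
        h = new_h
    return h

# Acyclic chain: bank 0 impacts bank 1, bank 1 impacts bank 2.
h = debtrank([[0, 0.5, 0], [0, 0, 0.5], [0, 0, 0]], shocked=[0])
# → [1.0, 0.5, 0.25]
```

    On a network with cycles, distress can revisit banks through feedback loops; there the once-only rule above underestimates relative to iterative (Differential) variants, which is the gap the paper quantifies.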

  10. Age at marriage, fertility and infant-child mortality in a Lahore suburb.

    PubMed

    Afzal, M; Khan, Z; Chaudhry, N A

    1976-01-01

    An exploratory study of 700 ever-married women was conducted in a suburb of Lahore city in 1973 in an attempt to gather knowledge concerning the major determinants of fertility and family size. A retrospective sample survey was used for collecting the data. From the sample of 700, the average age at marriage was computed for females married during different periods of time. The mean age at marriage shows an increasing trend from decade to decade, and these increases were found to be statistically significant. Of the 700 women, 674 had actually been pregnant, and in this group there were 3691 pregnancies, an average of 5.5 pregnancies per woman. The total number of pregnancy wastages among these women was 174, or 47/1000 pregnancies. The total number of children ever born to these mothers was 3531, an average of 5.2 children per mother. The overall average number of children born alive to mothers whose age at marriage was under 15 is 5.4; for those married at the average age of 15, the number of children is 5; and for those with higher ages at marriage, the number ranges from 4.4 to 4.6. The marital age-specific fertility rate is slightly higher for the age group 15-19 but lower for ages 30 and above. The relationship between the cumulative number of pregnancies and the number of pregnancy wastages experienced shows that the pregnancies of mothers increase with more pregnancy wastages, but the average number for those who never experienced a loss was also high. This suggests that the use of contraception has not been sufficiently extensive to help limit the incidence of pregnancies. The patterns of cumulative fertility also suggest that the family planning program has done little to reduce fertility. A comparison of the average number of children ever born to the 700 women with that provided by the National Impact Survey shows little change over the 5 years intervening between the 2 studies.

  11. An allometric analysis of the number of muscle spindles in mammalian skeletal muscles

    PubMed Central

    Banks, R W

    2006-01-01

    An allometric analysis of the number of muscle spindles in relation to muscle mass in mammalian (mouse, rat, guinea-pig, cat, human) skeletal muscles is presented. It is shown that the trend to increasing number as muscle mass increases follows an isometric (length) relationship between species, whereas within a species, at least for the only essentially complete sample (human), the number of spindles scales, on average, with the square root rather than the cube root of muscle mass. An attempt is made to reconcile these apparently discrepant relationships. Use of the widely accepted spindle density (number of spindles per gram of muscle) as a measure of relative abundance of spindles in different muscles is shown to be grossly misleading. It is replaced with the residuals of the linear regression of ln spindle number against ln muscle mass. Significant differences in relative spindle abundance as measured by residuals were found between regional groups of muscles: the greatest abundance is in axial muscles, including those concerned with head position, whereas the least is in muscles of the shoulder girdle. No differences were found between large and small muscles operating in parallel, or between antigravity and non-antigravity muscles. For proximal vs. distal muscles, spindles were significantly less abundant in the hand than the arm, but there was no difference between the foot and the leg. PMID:16761976
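    The residual-based abundance measure described above can be sketched as follows. The muscle masses and spindle counts here are toy values invented for illustration (roughly following the square-root scaling, with the last muscle deliberately spindle-rich), not data from the study:

```python
import math

def abundance_residuals(masses_g, spindle_counts):
    """Residuals of the OLS regression of ln(spindle number) on
    ln(muscle mass). A positive residual marks a muscle with more
    spindles than its mass alone would predict."""
    x = [math.log(m) for m in masses_g]
    y = [math.log(c) for c in spindle_counts]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return [b - (intercept + slope * a) for a, b in zip(x, y)]

# Toy data: masses in grams; the 8 g muscle is made spindle-rich on purpose.
res = abundance_residuals([1.0, 2.0, 4.0, 8.0], [10, 14, 20, 40])
```

    Unlike spindle density, which falls systematically with muscle size under square-root scaling, these residuals are size-corrected by construction, which is the point of the substitution the abstract describes.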

  12. 13 CFR 121.106 - How does SBA calculate number of employees?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... average number of employees is used for each of the pay periods during which it has been in business. (4... includes the following principles: (1) The average number of employees of the concern is used (including... pay periods for the preceding completed 12 calendar months. (2) Part-time and temporary employees are...

  13. 13 CFR 121.106 - How does SBA calculate number of employees?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... average number of employees is used for each of the pay periods during which it has been in business. (4... includes the following principles: (1) The average number of employees of the concern is used (including... pay periods for the preceding completed 12 calendar months. (2) Part-time and temporary employees are...

  14. Intercomparison study and optical asphericity measurements of small ice particles in the CERN CLOUD experiment

    NASA Astrophysics Data System (ADS)

    Nichman, Leonid; Järvinen, Emma; Dorsey, James; Connolly, Paul; Duplissy, Jonathan; Fuchs, Claudia; Ignatius, Karoliina; Sengupta, Kamalika; Stratmann, Frank; Möhler, Ottmar; Schnaiter, Martin; Gallagher, Martin

    2017-09-01

    Optical probes are frequently used to detect microphysical cloud-particle properties such as liquid and ice phase, size and morphology. These properties can influence the angular light-scattering properties of cirrus clouds as well as the growth and accretion mechanisms of single cloud particles. In this study we compare four commonly used optical probes to examine their response to small cloud particles of different phase and asphericity. Cloud simulation experiments were conducted at the Cosmics Leaving OUtdoor Droplets (CLOUD) chamber at the European Organisation for Nuclear Research (CERN). The chamber was operated in a series of multi-step adiabatic expansions to produce growth and sublimation of ice particles at super- and subsaturated ice conditions and for initial temperatures of -30, -40 and -50 °C. The experiments were performed for ice cloud formation via homogeneous ice nucleation. We report the optical observations of small ice particles in simulations of deep convection and in situ cirrus. Ice crystal asphericity deduced from measurements of spatially resolved single-particle light-scattering patterns by the Particle Phase Discriminator mark 2 (PPD-2K, Karlsruhe edition) was compared with Cloud and Aerosol Spectrometer with Polarisation (CASPOL) measurements and with image roundness captured by the 3View Cloud Particle Imager (3V-CPI). Averaged-path light-scattering properties of the simulated ice clouds were measured using the Scattering Intensity Measurements for the Optical detectioN of icE (SIMONE), and single-particle scattering properties were measured by the CASPOL. We show the ambiguity of several optical measurements in determining the ice fraction of homogeneously frozen ice when sublimating quasi-spherical ice particles are present. Moreover, most of the instruments have difficulty producing reliable ice fractions when small aspherical ice particles are present, and none of the instruments can separate perfectly spherical ice particles from supercooled droplets. Correlation analysis of bulk averaged-path depolarisation measurements and single-particle measurements of these clouds showed higher R² values at high concentrations and small diameters, but these results require further confirmation. We find that none of these instruments was able to determine unambiguously the phase of the small particles. These results have implications for the interpretation of atmospheric measurements and for parametrisations in modelling, particularly for clouds with low particle number concentrations.

  15. Inferring Cirrus Size Distributions Through Satellite Remote Sensing and Microphysical Databases

    NASA Technical Reports Server (NTRS)

    Mitchell, David; D'Entremont, Robert P.; Lawson, R. Paul

    2010-01-01

    Since cirrus clouds have a substantial influence on the global energy balance that depends on their microphysical properties, climate models should strive to realistically characterize the cirrus ice particle size distribution (PSD), at least in a climatological sense. To date, airborne in situ measurements of the cirrus PSD have contained large uncertainties due to errors in measuring small ice crystals (D < 60 μm). This paper presents a method to remotely estimate the concentration of the small ice crystals relative to the larger ones using the 11- and 12-μm channels aboard several satellites. By understanding the underlying physics producing the emissivity difference between these channels, this emissivity difference can be used to infer the relative concentration of small ice crystals. This is facilitated by enlisting temperature-dependent characterizations of the PSD (i.e., PSD schemes) based on in situ measurements. An average cirrus emissivity relationship between 12 and 11 μm is developed here using the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument and is used to retrieve the PSD based on six different PSD schemes. The PSDs from the measurement-based PSD schemes are compared with corresponding retrieved PSDs to evaluate differences in small ice crystal concentrations. The retrieved PSDs generally had lower concentrations of small ice particles, with total number concentration independent of temperature. In addition, the temperature dependence of the PSD effective diameter De and fall speed Vf for these retrieved PSD schemes exhibited less variability relative to the unmodified PSD schemes. The reduced variability in the retrieved De and Vf was attributed to the lower concentrations of small ice crystals in the retrieved PSD.

  16. 40 CFR Table Jj-1 to Subpart Jj of... - Animal Population Threshold Level Below Which Facilities Are Not Required To Report Emissions...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Emissions Under Subpart JJ 1 2 Animal group Average annual animal population (Head) 3 Beef 29,300 Dairy 3... groups except dairy, the average annual animal population represents the total number of animals present at the facility. For dairy facilities, the average annual animal population represents the number of...

  17. 40 CFR Table Jj-1 to Subpart Jj of... - Animal Population Threshold Level Below Which Facilities Are Not Required To Report Emissions...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Emissions Under Subpart JJ 1,2 Animal group Average annual animal population (Head) 3 Beef 29,300 Dairy 3... groups except dairy, the average annual animal population represents the total number of animals present at the facility. For dairy facilities, the average annual animal population represents the number of...

  18. 76 FR 32069 - Safety Zone; Lorain Independence Day Fireworks, Black River, Lorain, OH

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ... grant or loan recipients, and will not raise any novel legal or policy issues. Small Entities Under the... economic impact on a substantial number of small entities. The term ``small entities'' comprises small... number of small entities. This rule will affect the following entities, some of which may be small...

  19. A Statistical Characterization of Reflection and Refraction in the Atmospheres of sub-Saturn Kepler Planet Candidates

    NASA Astrophysics Data System (ADS)

    Sheets, Holly A.; Deming, Drake; Arney, Giada; Meadows, Victoria

    2016-01-01

    We present the results of our method to detect small atmospheric signals in Kepler's close-in, sub-Saturn planet candidate light curves. We detect an average secondary eclipse for groups of super-Earth, Neptune-like, and other sub-Saturn-sized candidates by scaling and combining photometric data of the groups of candidates such that the eclipses add constructively. This greatly increases the signal-to-noise compared to combining eclipses for individual planets. We have modified our method for averaging short cadence light curves of multiple planet candidates (2014, ApJ, 794, 133), and have applied it to long cadence data, accounting for the broadening of the eclipse due to the 30 minute cadence. We then use the secondary eclipse depth to determine the average albedo for the group. In the short cadence data, we found that a group of close-in sub-Saturn candidates (1 to 6 Earth radii) was more reflective (geometric A ~ 0.22) than typical hot Jupiters (geometric A ~ 0.06 to 0.11: Demory 2014, ApJL, 789, L20). With the larger number of candidates available in long cadence, we improve the resolution in radius and consider groups of candidates with radii between 1 and 2, 2 and 4, and 4 and 6 Earth radii. We also modify our averaging technique to search for refracted light just before and after transit in the Kepler candidate light curves, as modelled by Misra and Meadows (2014, ApJL, 795, L14).
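    The step from an averaged eclipse depth to a geometric albedo uses the standard reflected-light relation, depth = A_g (R_p/a)², which neglects thermal emission. A minimal sketch of that conversion (the numerical values below are hypothetical, not the study's measurements):

```python
def geometric_albedo(depth_ppm, rp_over_a):
    """Geometric albedo implied by a secondary-eclipse depth, assuming the
    eclipse is pure reflected light: depth = A_g * (Rp / a)**2."""
    return depth_ppm * 1e-6 / rp_over_a ** 2

# Hypothetical hot-Jupiter-like numbers: a ~5 ppm eclipse with Rp/a ~ 0.0093
# implies a geometric albedo of roughly 0.06.
a_g = geometric_albedo(5.2, 0.0093)
```

    Because (R_p/a)² is tiny, small absolute errors in the averaged eclipse depth translate into large albedo uncertainties, which is why stacking many candidates to suppress noise matters for this measurement.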

  20. Ecological Implications of a Flower Size/Number Trade-Off in Tropical Forest Trees

    PubMed Central

    Kettle, Chris J.; Maycock, Colin R.; Ghazoul, Jaboury; Hollingsworth, Pete M.; Khoo, Eyen; Sukri, Rahayu Sukmaria Haji; Burslem, David F. R. P.

    2011-01-01

    Background In angiosperms, flower size commonly scales negatively with number. The ecological consequences of this trade-off for tropical trees remain poorly resolved, despite their potential importance for tropical forest conservation. We investigated the flower size/number trade-off and its implications for fecundity in a sample of tree species from the Dipterocarpaceae on Borneo. Methodology/Principal Findings We combined experimental exclusion of pollinators in 11 species with direct and indirect estimates of contemporary pollen dispersal in two study species, and with published estimates of pollen dispersal in a further three species, to explore the relationship between flower size, pollinator size and mean pollen dispersal distance. Maximum flower production was two orders of magnitude greater in small-flowered than large-flowered species of Dipterocarpaceae. In contrast, fruit production was unrelated to flower size and did not differ significantly among species. Small-flowered species had both smaller pollinators and lower mean pollination success than large-flowered species. Average pollen dispersal distances were lower, and the frequency of mating between related individuals higher, in a smaller-flowered species than in a larger-flowered confamilial. Our synthesis of pollen dispersal estimates across five species of dipterocarp suggests that pollen dispersal scales positively with flower size. Conclusions and Their Significance Trade-offs embedded in the relationship between flower size and pollination success contribute to a reduction in the variance of fecundity among species. It is therefore plausible that these processes could delay competitive exclusion and contribute to the maintenance of species coexistence in this ecologically and economically important family of tropical trees. These results have practical implications for tree species conservation and restoration. Seed collection from small-flowered species may be especially vulnerable to cryptic genetic erosion. Our findings also highlight the potential for differential vulnerability of tropical tree species to the deleterious consequences of forest fragmentation. PMID:21408110
