Science.gov

Sample records for large scale spreading

  1. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoglu, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in the chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was applied to produce an array of tests from a fixed set of constraints, and a coupled response model was developed. Supplementary tests were run outside the experimental design to vary additional parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward-propagating burns, produced rapid, acceleratory, turbulent flame spread. Pressure rise in the chamber increased as the amount of fuel burned increased, mainly because of the larger amount of heat generated and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward-propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. A steady-state pressure was achieved during downward flame spread as the pressure rose and plateaued, indicating that heat generation by the flame matched heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism involved mounting a heat exchanger directly above the burning sample, in the path of the plume, to act as a heat sink and dissipate the heat of combustion more efficiently. This proved an effective means of chamber overpressure mitigation for the tests producing the most total heat release and thus was determined to be a feasible mitigation

  2. Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading

    PubMed Central

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Background Zipf's law and Heaps' law are two representative scaling concepts that play a significant role in the study of complexity science. Their coexistence has motivated differing views of the dependence between the two scalings, a relationship that has yet to be clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law persists while strict Zipf's law disappears. These findings are illustrated with a scenario of large-scale spatial epidemic spreading, and empirical pandemic data support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread underlines the importance of performing targeted containment strategies early in a pandemic. PMID:21747932
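
    As an illustrative sketch of the two scalings (a generic rich-get-richer generator, not the paper's metapopulation model), the following computes a Heaps curve (distinct items versus sequence length) and a Zipf rank-frequency list for a synthetic event sequence; all parameter values are assumptions:

```python
import random
from collections import Counter

def simulate_sequence(n_events, rho=0.6, seed=1):
    """Toy rich-get-richer process: with probability rho introduce a
    brand-new item, otherwise repeat an already-seen one chosen in
    proportion to its past occurrences."""
    random.seed(seed)
    seq, next_id = [0], 1
    for _ in range(n_events - 1):
        if random.random() < rho:
            seq.append(next_id)
            next_id += 1
        else:
            seq.append(random.choice(seq))  # preferential repetition
    return seq

def heaps_curve(seq):
    """Heaps' law view: number of distinct items after t events."""
    seen, curve = set(), []
    for s in seq:
        seen.add(s)
        curve.append(len(seen))
    return curve

def zipf_ranks(seq):
    """Zipf's law view: occurrence counts sorted by rank."""
    return sorted(Counter(seq).values(), reverse=True)

seq = simulate_sequence(20000)
curve = heaps_curve(seq)
freqs = zipf_ranks(seq)
```

    Plotting `curve` and `freqs` on log-log axes exposes the approximate growth and rank-frequency behaviour that the abstract analyzes.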

  3. Large-scale organization of rat sensorimotor cortex based on a motif of large activation spreads.

    PubMed

    Frostig, Ron D; Xiong, Ying; Chen-Bee, Cynthia H; Kvasnák, Eugen; Stehberg, Jimmy

    2008-12-01

    Parcellation according to function (e.g., visual, somatosensory, auditory, motor) is considered a fundamental property of sensorimotor cortical organization, traditionally defined from cytoarchitectonics and mapping studies relying on peak evoked neuronal activity. In the adult rat, stimulation of single whiskers evokes peak activity at topographically appropriate locations within somatosensory cortex and provides an example of cortical functional specificity. Here, we show that single whisker stimulation also evokes symmetrical areas of suprathreshold and subthreshold neuronal activation that spread extensively away from peak activity, effectively ignoring cortical borders by spilling deeply into multiple cortical territories of different modalities (auditory, visual and motor), where they were blocked by localized neuronal activity blocker injections and thus ruled out as possibly caused by "volume conductance." These symmetrical activity spreads were supported by underlying border-crossing, long-range horizontal connections as confirmed with transection experiments and injections of anterograde neuronal tracer experiments. We found such large evoked activation spreads and their underlying connections regardless of whisker identity, cortical layer, or axis of recorded responses, thereby revealing a large scale nonspecific organization of sensorimotor cortex based on a motif of large symmetrical activation spreads. Because the large activation spreads and their underlying horizontal connections ignore anatomical borders between cortical modalities, sensorimotor cortex could therefore be viewed as a continuous entity rather than a collection of discrete, delineated unimodal regions, an organization that could coexist with established specificity of cortical organization and that could serve as a substrate for associative learning, direct multimodal integration and recovery of function after injury. PMID:19052219

  4. Large scale organization of rat sensorimotor cortex based on a motif of large activation spreads

    PubMed Central

    Frostig, Ron D.; Xiong, Ying; Chen-Bee, Cynthia H.; Kvašňák, Eugen; Stehberg, Jimmy

    2008-01-01

    Parcellation according to function (e.g., visual, somatosensory, auditory, motor) is considered a fundamental property of sensorimotor cortical organization, traditionally defined from cytoarchitectonics and mapping studies relying on peak evoked neuronal activity. In the adult rat, stimulation of single whiskers evokes peak activity at topographically appropriate locations within somatosensory cortex and provides an example of cortical functional specificity. Here, we show that single whisker stimulation also evokes symmetrical areas of supra- and sub-threshold neuronal activation that spread extensively away from peak activity, effectively ignoring cortical borders by spilling deeply into multiple cortical territories of different modalities (auditory, visual and motor), where they were blocked by localized neuronal activity blocker injections and thus ruled out as possibly due to ‘volume conductance’. These symmetrical activity spreads were supported by underlying border-crossing, long-range horizontal connections as confirmed with transection experiments and injections of anterograde neuronal tracer experiments. We found such large evoked activation spreads and their underlying connections irrespective of whisker identity, cortical layer, or axis of recorded responses, thereby revealing a large scale nonspecific organization of sensorimotor cortex based on a motif of large symmetrical activation spreads. Because the large activation spreads and their underlying horizontal connections ignore anatomical borders between cortical modalities, sensorimotor cortex could therefore be viewed as a continuous entity rather than a collection of discrete, delineated unimodal regions – an organization that could co-exist with established specificity of cortical organization and that could serve as a substrate for associative learning, direct multimodal integration and recovery of function following injury. PMID:19052219

  5. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work we report the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting scenarios of disease spread in Poland. Simple transportation rules were employed to mimic individuals' travels in dynamic route-changing schemes, allowing the infection to spread during a journey. The parameter space was checked for stable behaviour, especially with respect to variability in the effective infection transmission rate. Although the model is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenarios, characteristic of urban and rural areas respectively. This differentiates it from the results obtained in analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The epidemic scenarios from these agent-based simulations were compared with simple SIR models based on differential equations; both types of results displayed strong similarities. The pDYN software platform developed here is currently being used in the next stage of the project to study various epidemic mitigation strategies.
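
    The differential-equation SIR benchmark mentioned above has a standard form of coupled rate equations. A minimal forward-Euler sketch, with illustrative parameters that are not fitted to the Polish simulations:

```python
def sir_model(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate the classic SIR equations with forward Euler:
        dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I,
    where S, I, R are population fractions."""
    s, i = s0, i0
    traj = [(0.0, s, i, 1.0 - s - i)]
    for k in range(1, int(days / dt) + 1):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + ds * dt, i + di * dt
        traj.append((k * dt, s, i, 1.0 - s - i))
    return traj

# Illustrative parameters: R0 = beta/gamma = 2.0
traj = sir_model(beta=0.4, gamma=0.2, s0=0.999, i0=0.001, days=120)
peak_infected = max(t[2] for t in traj)
```

    Comparing such trajectories against agent-based output (e.g., peak timing and final attack rate) is the kind of cross-check the abstract reports.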

  6. Hurricane activity and the large-scale pattern of spread of an invasive plant species.

    PubMed

    Bhattarai, Ganesh P; Cronin, James T

    2014-01-01

    Disturbances are a primary facilitator of the growth and spread of invasive species. However, the effects of large-scale disturbances, such as hurricanes and tropical storms, on the broad geographic patterns of invasive species growth and spread have not been investigated. We used historical aerial imagery to determine the growth rate of invasive Phragmites australis patches in wetlands along the Atlantic and Gulf Coasts of the United States. These were relatively undisturbed wetlands where P. australis had room for unrestricted growth. Over the past several decades, invasive P. australis stands expanded in size by 6-35% per year. Based on tropical storm and hurricane activity over that same time period, we found that the frequency of hurricane-force winds explained 81% of the variation in P. australis growth over this broad geographic range. The expansion of P. australis stands was strongly and positively correlated with hurricane frequency. In light of the many climatic models that predict an increase in the frequency and intensity of hurricanes over the next century, these results suggest a strong link between climate change and species invasion and a challenging future ahead for the management of invasive species. PMID:24878928

  7. Hurricane Activity and the Large-Scale Pattern of Spread of an Invasive Plant Species

    PubMed Central

    Bhattarai, Ganesh P.; Cronin, James T.

    2014-01-01

    Disturbances are a primary facilitator of the growth and spread of invasive species. However, the effects of large-scale disturbances, such as hurricanes and tropical storms, on the broad geographic patterns of invasive species growth and spread have not been investigated. We used historical aerial imagery to determine the growth rate of invasive Phragmites australis patches in wetlands along the Atlantic and Gulf Coasts of the United States. These were relatively undisturbed wetlands where P. australis had room for unrestricted growth. Over the past several decades, invasive P. australis stands expanded in size by 6–35% per year. Based on tropical storm and hurricane activity over that same time period, we found that the frequency of hurricane-force winds explained 81% of the variation in P. australis growth over this broad geographic range. The expansion of P. australis stands was strongly and positively correlated with hurricane frequency. In light of the many climatic models that predict an increase in the frequency and intensity of hurricanes over the next century, these results suggest a strong link between climate change and species invasion and a challenging future ahead for the management of invasive species. PMID:24878928

  8. Oil droplets transport due to irregular waves: Development of large-scale spreading coefficients.

    PubMed

    Geng, Xiaolong; Boufadel, Michel C; Ozgokmen, Tamay; King, Thomas; Lee, Kenneth; Lu, Youyu; Zhao, Lin

    2016-03-15

    The movement of oil droplets due to waves and buoyancy was investigated by assuming an irregular sea state following a JONSWAP spectrum and four buoyancy values. A technique known as Wheeler stretching was used to model the movement of particles under the moving water surface. In each simulation, 500 particles were released and were tracked for a real time of 4.0 h. A Monte Carlo approach was used to obtain ensemble properties. It was found that small eddy diffusivities that decrease rapidly with depth generated the largest horizontal spreading of the plume. It was also found that large eddy diffusivities that decrease slowly with depth generated the smallest horizontal spreading coefficient of the plume. The increase in buoyancy resulted in a decrease in the horizontal spreading coefficient, which suggests that two-dimensional (horizontal) models that predict the transport of surface oil could be overestimating the spreading of oil. PMID:26795121
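
    The Monte Carlo particle-tracking idea can be sketched as a random walk with a depth-dependent eddy diffusivity and a constant buoyant rise. Everything below is an assumed toy setup (the wave kinematics and Wheeler stretching used in the study are omitted, and all parameter values are illustrative):

```python
import math
import random

def track_droplets(n=200, hours=4.0, dt=10.0, d0=0.01, decay=0.05, w_b=0.001):
    """Random-walk transport of n buoyant droplets.  The eddy
    diffusivity D(z) = d0 * exp(-decay * |z|) (m^2/s) decreases with
    depth; w_b (m/s) is a constant buoyant rise velocity.  Returns the
    final horizontal positions (m) and an apparent horizontal
    spreading coefficient Var(x) / (2 * T)."""
    random.seed(0)
    steps = int(hours * 3600 / dt)
    xs = []
    for _ in range(n):
        x, z = 0.0, -5.0                 # release 5 m below the surface
        for _ in range(steps):
            d = d0 * math.exp(-decay * abs(z))
            sigma = math.sqrt(2.0 * d * dt)
            x += random.gauss(0.0, sigma)
            z = min(z + w_b * dt + random.gauss(0.0, sigma), 0.0)
        xs.append(x)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / n
    return xs, var / (2.0 * hours * 3600.0)

xs, k_spread = track_droplets()
```

    Repeating such runs over an ensemble of diffusivity profiles and buoyancy values, as the study does, shows how the apparent spreading coefficient depends on those inputs.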

  9. Importance of Large-Scale Wave Structure to Equatorial Spread F

    NASA Astrophysics Data System (ADS)

    Tsunoda, R. T.

    2008-12-01

    There is mounting evidence that large-scale wave structure (LSWS) is a more direct precursor of equatorial spread F (ESF) than the post-sunset rise (PSSR) of the equatorial F layer. Unambiguous experimental evidence, though limited, comes from measurements by ALTAIR, a fully steerable incoherent-scatter radar; in situ measurements by low-altitude satellites in low-inclination orbits (AE-E, San Marco D); and total electron content measurements using satellites in low-inclination orbits. Less direct evidence is contained in seemingly extraneous traces in equatorial ionograms, which appear to be associated with LSWS and ESF. A demonstration that these traces are indeed a direct consequence of LSWS is pivotal, because it would allow the extensive existing database of equatorial ionograms to be used to argue conclusively that LSWS is a central player in ESF generation. Such a demonstration will be presented, together with a description of experiments proposed for the Pacific sector involving the C/NOFS satellite and of how they will substantially increase our understanding of LSWS and ESF.

  10. Condor equatorial Spread F campaign. Overview and results of the large-scale measurements

    SciTech Connect

    Kelley, M.C.; LaBelle, J.; Kudeki, E.; Fejer, B.G.; Basu, S.

    1986-05-01

    During the Condor campaign a number of instruments were set up in Peru to support the rocket experiments. This overview paper summarizes the main results on the macroscopic development of spread F as evidenced by data from backscatter radars, from scintillation observations, and from digital ionosonde measurements. At least two factors other than the classical gravitational Rayleigh-Taylor plasma instability process must operate to yield the longest-scale horizontal organization of spread F structures. The horizontal scale typical of plume separation distances can be explained by invoking the effect of a shear in the plasma flow, although detailed comparison with theory seems to require shear frequencies a bit higher than observations indicate. On the other hand, the largest-scale organization or modulation of the scattering layer cannot be explained by the shear theory and must be due to local time variations in the ionospheric drift or to gravity wave induced vertical motions. Using simultaneous rocket and radar data, it is hypothesized that rapid overhead height variations in the scattering region over Jicamarca are primarily spatial structures advecting overhead. The detailed rocket-radar comparison verified several other earlier results and speculations, particularly those made in the PLUMEX experiments.

  11. Condor equatorial spread F campaign: Overview and results of the large-scale measurements

    SciTech Connect

    Kelley, M.C.; LaBelle, J.; Kudeki, E.; Fejer, B.G.; Basu, S.; Basu, S.; Baker, K.D.; Hanuise, C.; Argo, P.; Woodman, R.F.; Swartz, W.E.; Farley, D.T.; Meriwether, J.W., Jr.

    1986-05-01

    During the Condor campaign a number of instruments were set up in Peru to support the rocket experiments. In this series of papers we report on the results of the experiments designed to study the equatorial F region. In this overview paper we summarize the main results as well as report upon the macroscopic developments of spread F as evidenced by data from backscatter radars, from scintillation observations, and from digital ionosonde measurements. In this latter regard, we argue here that at least two factors other than the classical gravitational Rayleigh-Taylor plasma instability process must operate to yield the longest-scale horizontal organization of spread F structures. The horizontal scale typical of plume separation distances can be explained by invoking the effect of a shear in the plasma flow, although detailed comparison with theory seems to require shear frequencies a bit higher than observations indicate. On the other hand, the largest-scale organization or modulation of the scattering layer cannot be explained by the shear theory and must be due to local time variations in the ionospheric drift or to gravity wave induced vertical motions. Using simultaneous rocket and radar data, we were also able to confirm the oft quoted hypothesis that rapid overhead height variations in the scattering region over Jicamarca are primarily spatial structures advecting overhead. The detailed rocket-radar comparison verified several other earlier results and speculations, particularly those made in the PLUMEX experiments.

  12. Message spreading in networks with stickiness and persistence: Large clustering does not always facilitate large-scale diffusion

    NASA Astrophysics Data System (ADS)

    Cui, Pengbi; Tang, Ming; Wu, Zhi-Xi

    2014-09-01

    Recent empirical studies have confirmed the key roles of complex contagion mechanisms such as memory, social reinforcement, and decay effects in information diffusion and behavior spreading. Inspired by this fact, we propose a new agent-based model that captures the joint action of the three mechanisms in information spreading by quantifying the complex contagion mechanisms as stickiness and persistence, and we carry out extensive simulations of the model on various networks. By numerical simulation as well as theoretical analysis, we find that the stickiness of the message determines the critical dynamics of message diffusion on tree-like networks, whereas persistence plays the decisive role on dense regular lattices. In either network, greater persistence can effectively make the message more invasive. Of particular interest, our results revise the previous understanding that messages spread more broadly in networks with large clustering: this turns out to hold only when the messages can inform a non-zero fraction of the population in the limit of large system size.
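
    The stickiness and persistence mechanisms can be illustrated with a toy agent-based spread on a random graph. This is a sketch under assumed mechanics (a per-contact transmission probability for stickiness, a fixed number of active rounds for persistence), not the authors' exact model:

```python
import random

def er_graph(n, p, seed=1):
    """Erdos-Renyi random graph as an adjacency list."""
    random.seed(seed)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def spread(adj, source, stickiness=0.3, persistence=3, seed=2):
    """Toy contagion: `stickiness` is the per-contact transmission
    probability; an informed node stays active for `persistence`
    rounds before it stops transmitting (decay).  Returns the set of
    nodes the message reached."""
    random.seed(seed)
    remaining = {source: persistence}    # node -> active rounds left
    while any(t > 0 for t in remaining.values()):
        for node in [u for u, t in remaining.items() if t > 0]:
            for nb in adj[node]:
                if nb not in remaining and random.random() < stickiness:
                    remaining[nb] = persistence
            remaining[node] -= 1
    return set(remaining)

adj = er_graph(200, 0.05)
reached = spread(adj, source=0)
```

    Varying `stickiness` and `persistence` across network topologies (trees, lattices, clustered graphs) is the comparison the abstract describes.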

  13. Satellite traces: An ionogram signature for large-scale wave structure and a precursor for equatorial spread F

    NASA Astrophysics Data System (ADS)

    Tsunoda, Roland T.

    2008-10-01

    Although the source that controls day-to-day variability in the occurrence of equatorial plasma structure (i.e., equatorial spread F, or ESF) remains to be identified, progress is being made. There is evidence that the appearance of large-scale wave structure (LSWS) in the bottomside F layer, around the time of its post-sunset rise (PSSR), is a more direct precursor of ESF than the PSSR itself. The bulk of the evidence, however, is in the form of "satellite" F traces in ionograms, which may be viewed as less than convincing, because these signatures have not been shown to be causally related to LSWS. In this paper, incoherent-scatter radar and ionosonde data, both collected on 24 July 1979 from the Kwajalein atoll, Marshall Islands, are used to show that this is indeed the case.

  14. Investigation on F layer height rise and equatorial spread F onset time: Signature of standing large-scale wave

    NASA Astrophysics Data System (ADS)

    Joshi, Lalit Mohan; Balwada, S.; Pant, T. K.; Sumod, S. G.

    2015-04-01

    Equatorial spread F observations recorded over Sriharikota have been divided into three categories based on ionograms. In the first category, the onset of equatorial spread F (ESF) was concurrent with the peak h'F time. In the second and third categories, the onset of ESF occurred with a delay of 30 min and of more than 30 min, respectively, relative to the peak h'F time. The average peak h'F in the first category was more than 35 km higher than in the second and third categories, and the peak vertical (upward) plasma drift was also higher in the first category. Assuming that F region irregularities originate at or before the time the F layer attains its peak height, the late onset of ESF indicates that the irregularities originated westward of Sriharikota. The fact that the peak h'F values were markedly different in the three categories indicates a zonal variation of the eastward electric field and of the postsunset height rise of the F layer. The relative magnitude of the F layer height rise in the three categories over Sriharikota was also found to differ significantly from that over Thumba, an equatorial (magnetic) station located ~360 km westward of the Sriharikota longitude. This scenario points toward the existence of a large-scale zonal standing wave in the F layer and its important role in the F region instability process. The results are discussed in light of the current understanding of large-scale wave structure.

  15. Large-Scale Mass Spectrometry Imaging Investigation of Consequences of Cortical Spreading Depression in a Transgenic Mouse Model of Migraine

    NASA Astrophysics Data System (ADS)

    Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M.; van Heiningen, Sandra H.; van Zeijl, Rene J.; Dijkstra, Jouke; Ferrari, Michel D.; Tolner, Else A.; McDonnell, Liam A.; van den Maagdenberg, Arn M. J. M.

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit an increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild-type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial CSD-associated changes in the brain. Among these, mass spectral features at 146 and 377 Da in the cortex, and at 1820 and 1834 Da in the thalamus, showed significant (t-test, P < 0.05) changes in the CSD-affected hemisphere of FHM1 R192Q mice. Our findings reveal CSD- and genotype-specific molecular changes in the brain of FHM1 transgenic mice that may further our understanding of the role of CSD in migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  16. Large-scale mass spectrometry imaging investigation of consequences of cortical spreading depression in a transgenic mouse model of migraine.

    PubMed

    Carreira, Ricardo J; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M; van Heiningen, Sandra H; van Zeijl, Rene J; Dijkstra, Jouke; Ferrari, Michel D; Tolner, Else A; McDonnell, Liam A; van den Maagdenberg, Arn M J M

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit an increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild-type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial CSD-associated changes in the brain. Among these, mass spectral features at 146 and 377 Da in the cortex, and at 1820 and 1834 Da in the thalamus, showed significant (t-test, P < 0.05) changes in the CSD-affected hemisphere of FHM1 R192Q mice. Our findings reveal CSD- and genotype-specific molecular changes in the brain of FHM1 transgenic mice that may further our understanding of the role of CSD in migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations. PMID:25877011

  17. Concurrent observations at the magnetic equator of small-scale irregularities and large-scale depletions associated with equatorial spread F

    NASA Astrophysics Data System (ADS)

    Hickey, Dustin A.; Martinis, Carlos R.; Rodrigues, Fabiano S.; Varney, Roger H.; Milla, Marco A.; Nicolls, Michael J.; Strømme, Anja; Arratia, Juan F.

    2015-12-01

    In 2014 an all-sky imager (ASI) and an Advanced Modular Incoherent Scatter Radar consisting of 14 panels (AMISR-14) were installed at the Jicamarca Radio Observatory. The ASI measures airglow depletions associated with large-scale equatorial spread F irregularities (10-500 km), while AMISR-14 detects small-scale irregularities (0.34 m). This study presents simultaneous observations of equatorial spread F (ESF) irregularities at 50-200 km scale sizes using the all-sky imager, at 3 m scale sizes using the JULIA (Jicamarca Unattended Long-term Investigations of the Ionosphere and Atmosphere) radar, and at 0.34 m scales using the AMISR-14 radar. We compare data from the three instruments on the night of 20-21 August 2014 by locating the radar scattering volume in the optical images. During this night no topside plumes were observed, and we compare only with bottomside ESF. AMISR-14 had five beams perpendicular to the magnetic field covering ~200 km in the east-west direction at 250 km altitude. Comparing the radar data with zenith ASI measurements, we found that most of the echoes occur on the western wall of the depletions, with fewer echoes observed on the eastern wall and in the center, contrary to previous comparisons of topside plumes that showed most of the echoes in the center of depleted regions. We attribute these differences to the occurrence of irregularities produced at submeter scales by the lower hybrid drift instability. Comparisons of the ASI observations with JULIA images show results similar to those found in the AMISR-14 and ASI comparison.

  18. South Atlantic Spreading Velocities and Time Scales

    NASA Astrophysics Data System (ADS)

    Clark, S. R.; Smethurst, M. A.; Bianchi, M. C.

    2013-12-01

    Plate reconstructions based on hierarchical spherical rotations have been around for many years. For the breakup of Pangea and Gondwana, these reconstructions draw on two major sources: magnetic isochrons, and geological evidence for the onset of rifting and the tightness of the fit between continents. The reconstructions imply spreading velocities, and it is the changes in these velocities that can be used to probe questions about the forces moving plates. To calculate the velocities correctly, however, the choice of geologic time scale matters, a point that is often ignored. In this talk, we focus on the South Atlantic and calculate the spreading velocity errors implied by the choice of time scale for three major intervals: the Cenozoic and Late Mesozoic, the Cretaceous Quiet Zone, and the Late Cretaceous to the Early Jurassic. In addition, we report the spreading velocities implied through these phases by the various available magnetic isochron-derived reconstructions and by the geological fits for South America and Africa used in large-scale global reconstructions as well as in recent papers. Finally, we highlight the implications of the choice of mantle reference frame for African plate velocities.

  19. Potential spread of highly pathogenic avian influenza H5N1 by wildfowl: dispersal ranges and rates determined from large-scale satellite telemetry

    USGS Publications Warehouse

    Gaidet, Nicolas; Cappelle, Julien; Takekawa, John Y.; Prosser, Diann J.; Iverson, Samuel A.; Douglas, David C.; Perry, William M.; Mundkur, Taej; Newman, Scott H.

    2010-01-01

    1. Migratory birds are major candidates for long-distance dispersal of zoonotic pathogens. In recent years, wildfowl have been suspected of contributing to the rapid geographic spread of the highly pathogenic avian influenza (HPAI) H5N1 virus. Experimental infection studies reveal that some wild ducks, geese and swans shed this virus asymptomatically and hence have the potential to spread it as they move. 2. We evaluate the dispersive potential of HPAI H5N1 viruses by wildfowl through an analysis of the movement range and movement rate of birds monitored by satellite telemetry in relation to the apparent asymptomatic infection duration (AID) measured in experimental studies. We analysed the first large-scale data set of wildfowl movements, including 228 birds from 19 species monitored by satellite telemetry in 2006–2009, over HPAI H5N1 affected regions of Asia, Europe and Africa. 3. Our results indicate that individual migratory wildfowl have the potential to disperse HPAI H5N1 over extensive distances, being able to perform movements of up to 2900 km within timeframes compatible with the duration of asymptomatic infection. 4. However, the likelihood of such virus dispersal over long distances by individual wildfowl is low: we estimate that for an individual migratory bird there are, on average, only 5–15 days per year when infection could result in the dispersal of HPAI H5N1 virus over 500 km. 5. Staging at stopover sites during migration is typically longer than the period of infection and viral shedding, preventing birds from dispersing a virus over several consecutive but interrupted long-distance movements. Intercontinental virus dispersion would therefore probably require relay transmission between a series of successively infected migratory birds. 6. Synthesis and applications. Our results provide a detailed quantitative assessment of the dispersive potential of HPAI H5N1 virus by selected migratory birds. Such dispersive potential rests on the

  20. Small-Scale Irregularities in Equatorial Spread-F

    NASA Astrophysics Data System (ADS)

    Dimant, Yakov; Oppenheim, Meers

    2014-10-01

Equatorial Spread-F is a spectacular plasma phenomenon that reshapes the nighttime ionosphere and disrupts GPS navigation and radio communication. Current computer models simulate the evolution of large-scale spread-F phenomena (from thousands of kilometers down to kilometer scales), but they do not explain what causes the meter-scale irregularities observed by radars and space-borne instruments. Our recent particle-in-cell (PIC) simulations of weakly collisional plasma have demonstrated that large-scale plasma density gradients and related electric fields may drive local plasma instabilities, although only for a limited set of parameters. Motivated by these PIC simulations, we have revisited the linear theory of this instability, employing a novel eigenmode analysis. This method identified eigenmode wave structures in regions with strong plasma density gradients. These wave structures are not linearly unstable, but neither are they damped. This means that small-scale fluctuations provided by an external source (e.g., by a nonlinear spectral cascade from longer-wavelength spread-F turbulence) can be resonantly amplified, which may explain radar observations without invoking linear instability. Work supported by NASA LWS Grant 10-LWSTRT10-0078.

  1. Multiscale analysis of spreading in a large communication network

    NASA Astrophysics Data System (ADS)

    Kivelä, Mikko; Pan, Raj Kumar; Kaski, Kimmo; Kertész, János; Saramäki, Jari; Karsai, Márton

    2012-03-01

    In temporal networks, both the topology of the underlying network and the timings of interaction events can be crucial in determining how a dynamic process mediated by the network unfolds. We have explored the limiting case of the speed of spreading in the SI model, set up such that an event between an infectious and a susceptible individual always transmits the infection. The speed of this process sets an upper bound for the speed of any dynamic process that is mediated through the interaction events of the network. With the help of temporal networks derived from large-scale time-stamped data on mobile phone calls, we extend earlier results that indicate the slowing-down effects of burstiness and temporal inhomogeneities. In such networks, links are not permanently active, but dynamic processes are mediated by recurrent events taking place on the links at specific points in time. We perform a multiscale analysis and pinpoint the importance of the timings of event sequences on individual links, their correlations with neighboring sequences, and the temporal pathways taken by the network-scale spreading process. This is achieved by studying empirically and analytically different characteristic relay times of links, relevant to the respective scales, and a set of temporal reference models that allow for removing selected time-domain correlations one by one. Our analysis shows that for the spreading velocity, time-domain inhomogeneities are as important as the network topology, which indicates the need to take time-domain information into account when studying spreading dynamics. In particular, results for the different characteristic relay times underline the importance of the burstiness of individual links.
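
The limiting-case SI dynamics described above, in which every infectious-susceptible contact transmits, reduces to an earliest-arrival computation over the time-stamped event list. A minimal sketch (our own illustration, not the authors' code):

```python
# Minimal sketch: deterministic SI spreading on a temporal network, where
# every contact event between an infectious and a susceptible node
# transmits. A single chronological sweep over the time-stamped events
# yields each node's earliest infection time, which bounds the speed of
# any process mediated by the same event sequence.

def si_earliest_infection(events, seed, t0=0):
    """events: iterable of (time, u, v) contact events; seed: initially
    infected node. Returns {node: earliest infection time}."""
    infected = {seed: t0}
    for t, u, v in sorted(events):          # process events in time order
        if t < t0:
            continue
        if u in infected and infected[u] <= t and v not in infected:
            infected[v] = t
        elif v in infected and infected[v] <= t and u not in infected:
            infected[u] = t
    return infected

# Example: a bursty link (a-b) followed by a relay to c.
events = [(1, "a", "b"), (2, "b", "c"), (5, "a", "c")]
print(si_earliest_infection(events, "a"))   # {'a': 0, 'b': 1, 'c': 2}
```

One chronological pass suffices because a node's recorded infection time never exceeds the time of the event that set it, so later events always see up-to-date infection states.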

  2. Wetting and spreading at the molecular scale

    NASA Technical Reports Server (NTRS)

    Koplik, Joel; Banavar, Jayanth R.

    1994-01-01

We have studied the microscopic aspects of the spreading of liquid drops on a solid surface by molecular dynamics simulations of coexisting three-phase Lennard-Jones systems of liquid, vapor and solid. We consider both spherically symmetric atoms and chain-like molecules, and a range of interaction strengths. As the attraction between liquid and solid increases, we observe a smooth transition between spreading regimes, from partial to complete to terraced wetting. In the terraced case, where distinct monomolecular layers spread with different velocities, the layers are ordered but not solid, with qualitative behavior resembling recent experimental findings but with interesting differences in the spreading rate.

  3. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  4. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many sciences and also for some social activities. The present paper discusses the characteristics of computing when it becomes "Large Scale" and the current state of the art for particular applications requiring such large, distributed resources and organization. High Energy Particle Physics (HEP) experiments are discussed in this respect; in particular, the Large Hadron Collider (LHC) experiments are analyzed. The Computing Models of the LHC experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on measurements of the performance and functionality of the LHC experiments' test deployments are discussed.

  5. Large-scale whole genome sequencing identifies country-wide spread of an emerging G9P[8] rotavirus strain in Hungary, 2012.

    PubMed

    Dóró, Renáta; Mihalov-Kovács, Eszter; Marton, Szilvia; László, Brigitta; Deák, Judit; Jakab, Ferenc; Juhász, Ágnes; Kisfali, Péter; Martella, Vito; Melegh, Béla; Molnár, Péter; Sántha, Ildikó; Schneider, Ferenc; Bányai, Krisztián

    2014-12-01

With the availability of rotavirus vaccines, routine strain surveillance has been launched or continued in many countries worldwide. In this study, relevant information is provided from Hungary in order to extend knowledge about circulating rotavirus strains. Direct sequencing of the RT-PCR products obtained with VP7 and VP4 gene-specific primer sets was utilized as the routine laboratory method. In addition, we explored the advantages of random-primed RT-PCR and semiconductor sequencing of the whole genome of selected strains. During the study year, 2012, we identified an increase in the prevalence of G9P[8] strains across the country. This genotype combination predominated in seven out of nine study sites (detection rates, 45-83%). In addition to G9P[8]s, epidemiologically major strains included genotypes G1P[8] (34.2%), G2P[4] (13.5%), and G4P[8] (7.4%), whereas unusual and rare strains were G3P[8] (1%), G2P[8] (0.5%), G1P[4] (0.2%), G3P[4] (0.2%), and G3P[9] (0.2%). Whole genome analysis of 125 Hungarian human rotaviruses identified nine major genotype constellations and uncovered both intra- and intergenogroup reassortment events in circulating strains. Intergenogroup reassortment resulted in several unusual genotype constellations, including mono-reassortant G1P[8] and G9P[8] strains whose genotype 1 (Wa-like) backbone gene constellations contained DS1-like NSP2 and VP3 genes, respectively, as well as a putative bovine-feline G3P[9] reassortant strain. The conserved genomic constellations of epidemiologically major genotypes suggested the clonal spread of the re-emerging G9P[8] genotype and several co-circulating strains (e.g., G1P[8] and G2P[4]) in many study sites during 2012. Of interest, medically important G2P[4] strains carried bovine-like VP1 and VP6 genes in their genotype constellation. No evidence for vaccine-associated selection, or interaction between wild-type and vaccine strains, was obtained.
In conclusion, this study reports the reemergence of G9P[8

  6. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  7. Large energy-spread beam diagnostics through quadrupole scans

    SciTech Connect

    Frederico, Joel; Adli, Erik; Hogan, Mark; Raubenheimer, Tor

    2012-12-21

The Facility for Advanced Accelerator and Experimental Tests (FACET) is a new user facility at the SLAC National Accelerator Laboratory, serving next-generation accelerator experiments. The 1.5% RMS energy spread of the FACET beam causes large chromatic aberrations in the beam optics. These aberrations require updates to the standard quadrupole-scan fits for the fits to remain accurate.

  8. Scale-free correlations in the geographical spreading of obesity

    NASA Astrophysics Data System (ADS)

    Gallos, Lazaros; Barttfeld, Pablo; Havlin, Shlomo; Sigman, Mariano; Makse, Hernan

    2012-02-01

    Obesity levels have been universally increasing. A crucial problem is to determine the influence of global and local drivers behind the obesity epidemic, to properly guide effective policies. Despite the numerous factors that affect the obesity evolution, we show a remarkable regularity expressed in a predictable pattern of spatial long-range correlations in the geographical spreading of obesity. We study the spatial clustering of obesity and a number of related health and economic indicators, and we use statistical physics methods to characterize the growth of the resulting clusters. The resulting scaling exponents allow us to broadly classify these indicators into two separate universality classes, weakly or strongly correlated. Weak correlations are found in generic human activity such as population distribution and the growth of the whole economy. Strong correlations are recovered, among others, for obesity, diabetes, and the food industry sectors associated with food consumption. Obesity turns out to be a global problem where local details are of little importance. The long-range correlations suggest influence that extends to large scales, hinting that the physical model of obesity clustering can be mapped to a long-range correlated percolation process.

  9. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
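
The combinatorial explosion faced by multi-hypothesis trackers can be made concrete with a standard counting argument (our own illustration, not taken from the report): in a single scan, each measurement is either a false alarm or assigned to at most one track, and each track receives at most one measurement.

```python
# A small illustration of the combinatorial explosion in multi-hypothesis
# tracking: count the single-scan data-association hypotheses when each of
# n_meas measurements is either a false alarm or assigned to at most one
# of n_tracks tracks (and each track gets at most one measurement).
from math import comb, perm

def association_hypotheses(n_tracks, n_meas):
    # choose k measurements to be target-originated, then assign them
    # to k distinct tracks (ordered assignment)
    return sum(comb(n_meas, k) * perm(n_tracks, k)
               for k in range(min(n_tracks, n_meas) + 1))

for n in (2, 5, 10, 20):
    print(n, association_hypotheses(n, n))   # super-exponential growth
```

Even before hypotheses accumulate across scans, the single-scan count for 10 tracks and 10 measurements already exceeds a million, which is why practical trackers must prune or gate hypotheses aggressively.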

  10. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  11. Transverse Gradient Undulators and FEL operating with large energy spread

    NASA Astrophysics Data System (ADS)

    Ciocci, F.; Dattoli, G.; Sabia, E.

    2015-12-01

    Undulators exhibiting a gradient of the field in the transverse direction have been proposed to mitigate the effects of the gain dilution in Free Electron Laser devices operating with large energy spread. The actual use of the device depends on the realization of a field distribution with quasi-vanishing quadrupolar terms in the tapering directions. We analyze the effect of a Transverse Gradient Undulator on the FEL operation and critically review the possibility of an appropriate field implementation.

  12. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.

  13. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  14. Large deviations of spread measures for Gaussian matrices

    NASA Astrophysics Data System (ADS)

    Deelan Cunden, Fabio; Vivo, Pierpaolo

    2016-04-01

For a large $n \times m$ Gaussian matrix, we compute the joint statistics, including large deviation tails, of generalized and total variance: the scaled log-determinant $H$ and trace $T$ of the corresponding $n \times n$ covariance matrix. Using a Coulomb gas technique, we find that the Laplace transform of their joint distribution $P_n(h,t)$ decays for large $n, m$ (with $c = m/n \geq 1$ fixed) as $\hat{P}_n(s,w) \approx \exp\left(-\beta n^2 J(s,w)\right)$, where $\beta$ is the Dyson index of the ensemble and $J(s,w)$ is a $\beta$-independent large deviation function, which we compute exactly for any $c$. The corresponding large deviation functions in real space are worked out and checked with extensive numerical simulations. The results are complemented with a finite $n, m$ treatment based on the Laguerre–Selberg integral. The statistics of atypically small log-determinants is shown to be driven by the split-off of the smallest eigenvalue, leading to an abrupt change in the large deviation speed.
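
The two spread measures are easy to sample numerically. The sketch below is our own illustrative Monte Carlo check (not the authors' code), using one common scaling convention for H; the paper's exact normalization may differ.

```python
# Illustrative Monte Carlo sketch: sample n x m Gaussian matrices, form the
# n x n sample covariance C = X X^T / m, and collect the "total variance"
# T = tr(C) and the "generalized variance" H = (1/n) log det(C). The
# large-deviation tails of (H, T) are what the paper characterizes
# analytically; here we only look at typical values.
import numpy as np

def sample_spread_measures(n, m, trials, seed=0):
    rng = np.random.default_rng(seed)
    H, T = [], []
    for _ in range(trials):
        X = rng.standard_normal((n, m))
        C = X @ X.T / m                      # sample covariance, c = m/n
        sign, logdet = np.linalg.slogdet(C)
        assert sign > 0                      # C is positive definite for m >= n
        H.append(logdet / n)                 # scaled log-determinant
        T.append(np.trace(C))
    return np.array(H), np.array(T)

H, T = sample_spread_measures(n=40, m=80, trials=200)
# tr(C) concentrates near n, since each diagonal entry averages m
# unit-variance squares divided by m.
print(T.mean() / 40)   # close to 1
```

For c = 2 the scaled log-determinant concentrates near the Marchenko-Pastur value (negative, since log det is concave), while atypical values in the tails occur with the exponentially small probabilities the paper computes.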

  15. Turbulence Spreading into Linearly Stable Zone and Transport Scaling

    SciTech Connect

    T.S. Hahm; P.H. Diamond; Z. Lin; K. Itoh; S.-I. Itoh

    2003-10-20

We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation, and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on rho* (= rho_i/a) into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region.
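
The abstract does not reproduce the model equation itself, so the sketch below uses a generic intensity equation with the same ingredients (linear growth/damping gamma(x), local nonlinear dissipation, and nonlinear spatial scattering of intensity); all coefficients and functional forms are illustrative assumptions, not the paper's.

```python
# Numerical sketch of turbulence-intensity front propagation into a
# linearly stable zone, for a generic model in the spirit of the abstract:
#   dI/dt = gamma(x) I - a I^2 + d/dx( d0 * I * dI/dx )
# with gamma > 0 in the unstable zone (x < 25) and gamma < 0 beyond it.
import numpy as np

def spread_front(nx=101, dx=0.5, dt=0.05, steps=4000,
                 gamma0=1.0, gamma_d=0.5, a=1.0, d0=1.0):
    x = np.arange(nx) * dx
    gamma = np.where(x < 25.0, gamma0, -gamma_d)   # unstable | stable zone
    I = np.where(x < 5.0, 0.5, 0.0)                # initial turbulence patch
    for _ in range(steps):
        # nonlinear flux F = -d0 * I * dI/dx at cell interfaces
        Iface = 0.5 * (I[1:] + I[:-1])
        F = -d0 * Iface * np.diff(I) / dx
        div = np.zeros_like(I)
        div[1:-1] = (F[1:] - F[:-1]) / dx
        I = I + dt * (gamma * I - a * I**2 - div)
        I = np.maximum(I, 0.0)                     # guard against undershoot
    return x, I

x, I = spread_front()
front = x[I > 1e-2].max()   # rightmost point with appreciable intensity
print(front)                # penetrates into the stable zone, then stalls
```

Consistent with the abstract's picture, the simulated front crosses into the stable zone only a finite distance before the outgoing flux of intensity is balanced by local damping, while the unstable zone saturates near I = gamma0/a.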

  16. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  17. Transient super-ballistic spreading of wave packets with large spreading exponents in some hybrid ordered-quasiperiodic lattices

    NASA Astrophysics Data System (ADS)

    Nguyen, Ba Phi; Ngo, Quang Minh; Kim, Kihong

    2016-02-01

We consider the spreading of an initially localized wave packet in one-dimensional hybrid ordered-quasiperiodic lattices. We consider two different kinds of quasiperiodic sequences, the Cantor and the period-doubling sequences. From numerical calculations based on the discrete Schrödinger equation, we demonstrate that hybrid ordered-quasiperiodic lattices can support the super-ballistic spreading of a wave packet with very large spreading exponents for certain transient time windows. Remarkably, in the case of the sublattice with the on-site potential obeying the period-doubling quasiperiodic sequence, we find that the super-ballistic exponent can be larger than six. We also point out that previous explanations of this phenomenon based on a generalized version of the point source model are incorrect.

  18. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

With computational approaches becoming ubiquitous, the growing impact of large-scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures.
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  19. Subaerial Seafloor Spreading in Iceland: Segment-Scale Processes and Analogs for Fast-Spreading Mid-Ocean Ridge Spreading Centers

    NASA Astrophysics Data System (ADS)

    Karson, Jeffrey; Varga, Robert; Siler, Drew; Horst, Andrew

    2010-05-01

    The nature of oceanic crust and spreading center processes are derived from direct observations of surface features and geophysics at active spreading centers as well as from deep crustal drilling, tectonic windows into the upper oceanic crust, and ophiolites. Integrating active spreading processes with deeply eroded crustal structures in Iceland provides an additional perspective on subsurface processes that are likely to be important at mid-ocean ridge spreading centers. Spreading in Iceland strongly resembles second-order segment-scale processes of the fast-spreading centers. Along axis, major processes including subsidence, magmatic construction, and hydrothermal activity vary systematically over tens of kilometers from segment centers to ends. Near spreading segment centers ("central volcanoes") subsidence and crustal thickening are greatest. The intrusion of high-level sill and cone sheet complexes and small gabbroic plutons contribute substantially to upper crustal thickening. Both magma supply and tectonic movements have a very strong vertical component. In contrast, near segment ends (fissure swarms in active spreading areas) subsidence is limited, most thickening occurs in the lava units and lateral dike injection is likely to dominate. In both Iceland and fast-spread crust, where the magma supply is relatively high, subaxial subsidence is the key process that controls the construction and modification of the crust during spreading. Seafloor studies on fast-spreading ridge show lava flows fed by dike intrusion events focused along a narrow (<1 km) axial region with very limited relief. However, subsurface structures reveal that axial lavas must subside hundreds of meters immediately beneath the axis as the overlying lava pile thickens. Similar relationships occur in Iceland but over a wider region of active magmatism (neovolcanic zone tens of kilometers wide) and building a much thicker upper crust (~5 km). 
For both cases, in order for the lava units to

  20. Modeling the coupled return-spread high frequency dynamics of large tick assets

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and bid-ask spread is almost always equal to one tick, display a dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models based on Nasdaq stocks and we show that this model reproduces remarkably well the statistical properties of real data.
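
A stripped-down version of this coupled dynamics can be simulated directly. In the sketch below (our own construction with illustrative parameters, not the calibrated Nasdaq model), a latent Markov chain over spread states selects the conditional distribution of each tick-level price change; the paper's additional logit regressions on past squared price changes are omitted for brevity.

```python
# Sketch of a Markov-switching model for large tick assets: a latent
# Markov chain over spread states (in ticks) drives the conditional
# distribution of price changes, coupling spread and price dynamics.
import random

# Hypothetical two-state spread process (1 tick typical, 2 ticks rare)
# and per-state price-change distributions; all numbers are illustrative.
SPREAD_TRANSITIONS = {1: [(1, 0.95), (2, 0.05)], 2: [(1, 0.60), (2, 0.40)]}
PRICE_CHANGES = {
    1: [(-1, 0.15), (0, 0.70), (1, 0.15)],   # narrow spread: mostly no move
    2: [(-2, 0.10), (-1, 0.20), (0, 0.40), (1, 0.20), (2, 0.10)],
}

def draw(dist, rng):
    """Sample a value from a list of (value, probability) pairs."""
    r, acc = rng.random(), 0.0
    for value, p in dist:
        acc += p
        if r < acc:
            return value
    return dist[-1][0]

def simulate(n, seed=42):
    rng = random.Random(seed)
    spread, changes = 1, []
    for _ in range(n):
        changes.append(draw(PRICE_CHANGES[spread], rng))
        spread = draw(SPREAD_TRANSITIONS[spread], rng)   # latent chain moves
    return changes

changes = simulate(10_000)
print(sum(c == 0 for c in changes) / len(changes))  # fraction of zero moves
```

Because wide-spread periods draw from a fatter-tailed distribution, runs of the rare state produce clustered volatility even in this toy version.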

  1. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
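
The multiplexor's exponential fan-out can be sketched in software. In the model below (our own illustration of the addressing scheme, not the authors' design files), 2·log2(n) control lines are arranged as complementary bit/not-bit pairs; pressurizing one line of each pair blocks every flow channel except the one whose binary address matches the unpressurized pattern.

```python
# Software model of a binary fluidic multiplexor: n flow channels are
# addressed with 2*log2(n) control lines. Line (i, v) blocks every
# channel whose i-th address bit equals v.
from math import log2

def multiplexer_lines(n_channels, target):
    """Return the control lines to pressurize so only `target` stays open."""
    bits = int(log2(n_channels))
    assert 2 ** bits == n_channels, "channel count must be a power of two"
    pressurized = []
    for i in range(bits):
        target_bit = (target >> i) & 1
        pressurized.append((i, 1 - target_bit))  # block the complement bit
    return pressurized

def open_channels(n_channels, pressurized):
    """Channels not blocked by any pressurized line."""
    blocked = {c for c in range(n_channels)
               for (i, v) in pressurized if (c >> i) & 1 == v}
    return [c for c in range(n_channels) if c not in blocked]

lines = multiplexer_lines(8, target=5)   # 8 channels need 3 control pairs
print(lines, open_channels(8, lines))    # only channel 5 remains open
```

The key scaling property is visible here: doubling the number of chambers adds only one more pair of control lines.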

  2. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.

  3. Modelling of paratuberculosis spread between dairy cattle farms at a regional scale.

    PubMed

    Beaunée, Gaël; Vergu, Elisabeta; Ezanno, Pauline

    2015-01-01

    Mycobacterium avium subsp. paratuberculosis (Map) causes Johne's disease, with large economic consequences for dairy cattle producers worldwide. Map spread between farms is mainly due to animal movements. Locally, herd size and management are expected to influence infection dynamics. To provide a better understanding of Map spread between dairy cattle farms at a regional scale, we describe the first spatio-temporal model accounting simultaneously for population and infection dynamics and indirect local transmission within dairy farms, and between-farm transmission through animal trade. This model is applied to Brittany, a French region characterized by a high density of dairy cattle, based on data on animal trade, herd size and farm management (birth, death, renewal, and culling) from 2005 to 2013 for 12,857 dairy farms. In all simulated scenarios, Map infection highly persisted at the metapopulation scale. The characteristics of initially infected farms strongly impacted the regional Map spread. Network-related features of incident farms influenced their ability to contaminate disease-free farms. At the herd level, we highlighted a balanced effect of the number of animals purchased: when large, it led to a high probability of farm infection but to a low persistence. This effect was reduced when prevalence in initially infected farms increased. Implications of our findings in the current enzootic situation are that the risk of infection quickly becomes high for farms buying more than three animals per year. Even in regions with a low proportion of infected farms, Map spread will not fade out spontaneously without the use of effective control strategies. PMID:26407894

  4. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consist of Voyager 1 images of Io, 800x800 arrays of picture elements, each of which can take on 256 possible brightness values. In analyzing these data, it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics, and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  5. Aerial dispersal and multiple-scale spread of epidemics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Disease spread has traditionally been described as a traveling wave of constant velocity. However, aerially dispersed pathogens capable of long distance dispersal (LDD) often have dispersal gradients with extended tails that could result in acceleration of the epidemic front over time and space. W...

  6. Damage spreading and opinion dynamics on scale-free networks

    NASA Astrophysics Data System (ADS)

    Fortunato, Santo

    2005-03-01

    We study damage spreading among the opinions of a system of agents, subjected to the dynamics of the Krause-Hegselmann consensus model. The damage consists in a sharp change of the opinion of one or more agents in the initial random opinion configuration, supposedly due to some external factors and/or events. This may help to understand for instance under which conditions special shocking events or targeted propaganda are able to influence the results of elections. For agents lying on the nodes of a Barabási-Albert network, there is a damage spreading transition at a low value εd of the confidence bound parameter. Interestingly, we find as well that there is some critical value εs above which the initial perturbation manages to propagate to all other agents.
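The two-replica damage-spreading protocol can be illustrated with a minimal Hegselmann-Krause sketch. Note the assumptions: the paper places agents on a Barabási-Albert network, whereas the fully mixed (all-to-all) variant below, with hypothetical parameter values, only demonstrates the measurement idea of perturbing one agent and tracking the divergence of the two copies.

```python
import numpy as np

def hk_step(x, eps):
    """Synchronous Hegselmann-Krause update: each agent adopts the mean
    opinion of all agents within confidence bound eps of its own opinion."""
    close = np.abs(x[:, None] - x[None, :]) <= eps
    return (close * x[None, :]).sum(axis=1) / close.sum(axis=1)

def total_damage(n=100, eps=0.2, steps=50, seed=0):
    """Shock one agent in a copy of the system and measure how far the
    two replicas have drifted apart after relaxation."""
    rng = np.random.default_rng(seed)
    a = rng.random(n)            # reference opinion configuration
    b = a.copy()
    b[0] = rng.random()          # the "damage": one agent's opinion changes
    for _ in range(steps):
        a, b = hk_step(a, eps), hk_step(b, eps)
    return float(np.abs(a - b).sum())

print(total_damage())            # 0.0 if the shock healed, > 0 if it spread
```

Sweeping `eps` in such a sketch is how one would locate the damage-spreading threshold the abstract refers to.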

  7. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environmental conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests.

  8. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  9. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  10. Scaling of gain with energy spread and energy in the PEP FEL

    SciTech Connect

    Fisher, A.S.

    1992-07-13

    The Sag Harbor paper on the PEP FEL discusses the scaling of various FEL parameters with energy spread σ_ε. I will repeat some of this material here and then examine the benefit of increasing the energy spread. How much energy spread can be achieved with damping wigglers is the next topic. Finally, I consider the dependence of gain and saturation length on beam energy and undulator field.

  12. The Initial Dispersal and Spread of an Intentional Invader at Three Spatial Scales

    PubMed Central

    Kristensen, Nadiah P.; De Barro, Paul J.; Schellhorn, Nancy A.

    2013-01-01

    The way an invasion progresses through space is a theme of interest common to invasion ecology and biological pest control. Models and mark-release studies of arthropods have been used extensively to extend and inform invasion processes of establishment and spread. However, the extremely common single-scale approach of monitoring initial spread leads to misinterpretation of rate and mode. Using the intentional release of a novel biological control agent, the parasitic wasp Eretmocerus hayati Zolnerowich & Rose (Hymenoptera: Aphelinidae), we studied its initial dispersal and spread at three different spatial scales: the local scale (tens of metres), field scale (hundreds of metres) and landscape scale (kilometres) around the release point. We fit models to each observed spread pattern at each spatial scale. We show that E. hayati exhibits stratified dispersal; moving further, faster and by a different mechanism than would have been concluded with a single local-scale post-release sampling design. In fact, interpretation of each scale independent of other scales gave three different models of dispersal, and three different impressions of the dominant dispersal mechanisms. Our findings demonstrate that using a single-scale approach may lead to quite erroneous conclusions, hence the necessity of using a multiple-scale hierarchical sampling design for inferring spread and the dominant dispersal mechanism of either human intended or unintended invasions. PMID:23671595

  13. Variation of dorsal horn cell dendritic spread with map scale.

    PubMed

    Brown, P B; Millecchia, R; Culberson, J L; Gladfelter, W; Covalt-Dunning, D

    1996-10-21

    Cells in laminae III, IV, and V of cat dorsal horn were injected with horseradish peroxidase or neurobiotin. Dorsal views of the dendritic domains were constructed in order to measure their lengths, widths, areas, and length/width ratios in the horizontal plane (the plane of the somatotopic map). Dendritic domain width and area in the horizontal plane were negatively correlated with fractional distance between the medial and lateral edges of the dorsal horn. These results are consistent with the hypothesis that dendritic domain width varies with map scale, which is maximal in the medial dorsal horn. This is similar to the variation in widths of primary afferent bouton distributions. The parallel variation of dorsal horn cell dendritic domain width and primary afferent bouton distribution width with map scale suggests that there is a causal relation between morphology and map scale in the dorsal horn representation of the hindlimb. This variation of adult morphology with map scale must reflect mechanisms responsible for the assembly of receptive fields. PMID:8906504

  14. Opinion Spreading with Mobility on Scale-Free Networks

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Liu, Jian-Guo; Wang, Bing-Hong; Zhou, Tao; Chen, Xing-Wen; Yao, Yu-Hua

    2008-02-01

    A continuum opinion dynamics model is presented based on two rules: the first considers the mobility of the individuals, and the second supposes that the individuals update their opinions independently. The results of the model indicate that the bounded confidence εc, separating consensus and incoherent states, of a scale-free network is much smaller than that of a lattice. If the system can reach the consensus state, the sum of all individuals' opinion change Oc(t) decreases quickly in an exponential form, while if it finally reaches the incoherent state, Oc(t) decreases slowly and shows a punctuated-equilibrium characteristic.

  15. Observation of artificial spread-F and large region ionization enhancement in an HF heating experiment at HAARP

    NASA Astrophysics Data System (ADS)

    Kuo, Spencer; Snyder, Arnold

    2010-04-01

    A large-scale ionospheric modification by HF heaters was explored via the HAARP digisonde operated in a fast mode. The results show that the ionogram virtual heights and the height spread of the ordinary-wave sounding echoes were changed significantly by the O-mode heater; the X-mode heater imposed no noticeable effect on the ionograms. The enhanced virtual height spread exceeds 40 km, more than 15% of the sounding echoes' average virtual height. The heater downshifted/upshifted the virtual height in the low/high frequency region around the heater frequency by as much as 15 and 7.5 km, respectively. The modifications continued to develop for more than 10 seconds after the heater was turned off, and the perturbed ionosphere took more than 60 seconds to recover. The modified electron density distribution indicates that the electron density and temperature increases exceed 10% and 25%, respectively, over a large altitude region (>30 km) from below to above the HF reflection height.

  16. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre 1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  17. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  18. Deciphering the impact of uncertainty on the accuracy of large wildfire spread simulations.

    PubMed

    Benali, Akli; Ervilha, Ana R; Sá, Ana C L; Fernandes, Paulo M; Pinto, Renata M S; Trigo, Ricardo M; Pereira, José M C

    2016-11-01

    Predicting wildfire spread is a challenging task fraught with uncertainties. 'Perfect' predictions are unfeasible since uncertainties will always be present. Improving fire spread predictions is important to reduce its negative environmental impacts. Here, we propose to understand, characterize, and quantify the impact of uncertainty in the accuracy of fire spread predictions for very large wildfires. We frame this work from the perspective of the major problems commonly faced by fire model users, namely the necessity of accounting for uncertainty in input data to produce reliable and useful fire spread predictions. Uncertainty in input variables was propagated throughout the modeling framework and its impact was evaluated by estimating the spatial discrepancy between simulated and satellite-observed fire progression data, for eight very large wildfires in Portugal. Results showed that uncertainties in wind speed and direction, fuel model assignment and typology, location and timing of ignitions, had a major impact on prediction accuracy. We argue that uncertainties in these variables should be integrated in future fire spread simulation approaches, and provide the necessary data for any fire model user to do so. PMID:27333574
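A standard way to propagate input uncertainty of this kind is Monte Carlo sampling: draw the uncertain inputs from assumed distributions, run the spread model on each draw, and summarize the resulting spread of outputs. A toy sketch only; the linear rate-of-spread model and all distributions below are illustrative assumptions, not the authors' framework, which used real fire spread simulators and satellite-observed fire progression data.

```python
import random
import statistics

def toy_spread_distance(wind_speed, fuel_factor, hours):
    """Toy rate-of-spread model (illustrative only): spread distance
    grows linearly with wind speed, scaled by a fuel factor."""
    return (0.5 + 0.1 * wind_speed) * fuel_factor * hours

random.seed(42)
samples = []
for _ in range(10_000):
    wind = random.gauss(20.0, 5.0)    # km/h, assumed wind uncertainty
    fuel = random.uniform(0.8, 1.2)   # assumed fuel-model uncertainty
    samples.append(toy_spread_distance(wind, fuel, hours=6.0))

print(round(statistics.mean(samples), 1), "km mean,",
      round(statistics.pstdev(samples), 1), "km standard deviation")
```

The output standard deviation is the quantity of interest here: it expresses how much prediction accuracy can degrade purely because the inputs are uncertain.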

  19. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.

  20. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function ξ(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large scale inhomogeneities. An application to several recent three dimensional data sets shows that despite large observational uncertainties over the relevant scales characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  1. Use of incomplete energy recovery for the energy compression of large energy spread charged particle beams

    DOEpatents

    Douglas, David R.; Benson, Stephen V.

    2007-01-23

    A method of energy recovery for RF-based linear charged particle accelerators that allows energy recovery without large relative momentum spread of the particle beam. The beam, with injection energy E_o and with its centroid at a phase offset f_o from the crest of the accelerating waveform, is first accelerated to an energy E_full; the beam energy is then recovered at a phase f_o + Df relative to the crest of the waveform, such that (E_full - E_o)(1 + cos(f_o + Df)) > dE/2, where dE is the full energy spread (dE/2 the full energy half spread) and Df is the waveform phase offset.
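The recovery condition above is easy to check numerically. A sketch with purely hypothetical energies and phases (none of the numbers below come from the patent): recovering too close to the decelerating trough, where 1 + cos(f_o + Df) is small, violates the inequality, while backing the phase off satisfies it.

```python
import math

def recovery_ok(E_full, E_o, f_o_deg, df_deg, dE):
    """Check (E_full - E_o) * (1 + cos(f_o + Df)) > dE / 2,
    with the phases given in degrees. All numbers are illustrative."""
    lhs = (E_full - E_o) * (1 + math.cos(math.radians(f_o_deg + df_deg)))
    return lhs > dE / 2

# Hypothetical beam: 10 -> 160 MeV, full energy spread dE = 15 MeV.
print(recovery_ok(160.0, 10.0, 170.0, 0.0, 15.0))    # False: too near trough
print(recovery_ok(160.0, 10.0, 170.0, -20.0, 15.0))  # True: phase backed off
```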

  2. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    The landslide disasters during Typhoon Morakot in 2009 caused heavy casualties. Because of the casualty numbers, the event is classified as a large-scale landslide, and it revealed that surveys of large-scale landslide potential had so far been insufficient. Large-scale landslide potential analysis indicates where attention should be focused, even though such areas are difficult to distinguish. Accordingly, the authors investigated the methods used in other countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The objects of analysis include areas susceptible to rock slides and dip slopes, together with the major landslide areas identified from historical records. Three levels of scale, from country down to slopeland, are found to be necessary: basin, catchment, and slope. In total, ten spots with high large-scale landslide potential were identified at the basin scale. The authors therefore focus on the catchment scale in this paper and employ a risk matrix to classify the potential. The protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities; the susceptibility ratio is based on data on major landslide areas and on dip-slope and rock-slide areas. In total, 1,040 catchments were assessed and classified into three levels, high, medium, and low, comprising 11%, 51%, and 38% of catchments, respectively. The result identifies catchments with a high proportion of protected objects or high large-scale landslide susceptibility, and provides base material for slopeland authorities when considering slopeland management and further investigation.

  3. Comparison of jet Mach number decay data with a correlation and jet spreading contours for a large variety of nozzles

    NASA Technical Reports Server (NTRS)

    Groesbeck, D. E.; Huff, R. G.; Vonglahn, U. H.

    1977-01-01

    Small-scale circular, noncircular, single- and multi-element nozzles with flow areas as large as 122 sq cm were tested with cold airflow at exit Mach numbers from 0.28 to 1.15. The effects of multi-element nozzle shape and element spacing on jet Mach number decay were studied in an effort to reduce the noise caused by jet impingement on externally blown flap (EBF) STOL aircraft. The jet Mach number decay data are well represented by empirical relations. Jet spreading and Mach number decay contours are presented for all configurations tested.

  4. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems, such as power networks, communication networks and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from those reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  5. Jet noise generated by large-scale coherent motion

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1991-01-01

    The noise generated by large scale turbulence structures and instability waves of jets is discussed. Emphasis is placed on supersonic jets with moderate to high Reynolds numbers. This is because it is in these jets that unambiguous experimental and theoretical evidence is found indicating that large turbulence structures and instability waves are directly responsible for generating the dominant part of the noise. For subsonic jets similar large turbulence structures and instability waves do play a crucial role in the dynamics, spread, and mixing of the jet fluid. However, at subsonic convection speeds, they do not appear to be efficient noise generators. Many investigators believe that the dominant noise source of subsonic jets is, in fact, the small scale turbulence. This belief has not yet received universal acceptance. The issues involved are complicated and are not easy to resolve.

  6. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  7. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  8. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  9. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  10. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w = Av requires on the order of n rather than the usual n^2 floating point operations.
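ARPACK is also callable from Python through SciPy, whose `scipy.sparse.linalg.eigsh` routine wraps its symmetric (Lanczos) driver. A small sketch on a sparse 1-D Laplacian, whose eigenvalues are known in closed form, so the result can be verified:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh  # SciPy's wrapper around ARPACK

# Sparse symmetric tridiagonal matrix: the 1-D discrete Laplacian.
n = 300
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Ask ARPACK for the 3 largest eigenvalues; internally only
# matrix-vector products with A are performed.
vals = eigsh(A, k=3, which="LA", return_eigenvectors=False)

# Exact spectrum of this matrix: 2 - 2*cos(k*pi/(n+1)), k = 1..n.
exact = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.sort(vals))          # agrees with np.sort(exact)[-3:]
```

Only the matrix-vector product matters to ARPACK, which is why the sparse (CSR) storage suffices even for very large n.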

  11. An epidemic spreading model on adaptive scale-free networks with feedback mechanism

    NASA Astrophysics Data System (ADS)

    Li, Tao; Liu, Xiongding; Wu, Jie; Wan, Chen; Guan, Zhi-Hong; Wang, Yuanmei

    2016-05-01

    A SIRS epidemic model with a feedback mechanism on adaptive scale-free networks is presented. Using mean-field theory, the spreading dynamics of the epidemic are studied in detail. The basic reproductive number and the equilibria are derived. Theoretical results indicate that the basic reproductive number depends significantly on the topology of the underlying networks, and the existence of equilibria is determined by the basic reproductive number. The global stability of the disease-free equilibrium and the permanence of the epidemic are proved in detail. The feedback mechanism cannot change the basic reproductive number, but it can reduce the endemic level and weaken the epidemic spreading. Numerical simulations confirm the analytical results.
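The qualitative setup can be sketched in a few lines: build a scale-free contact network by preferential attachment and run stochastic SIRS dynamics on it. This is a simplified illustration only; the adaptive rewiring and feedback mechanism of the paper are omitted, and all rates below are hypothetical.

```python
import random

def ba_graph(n, m, seed=0):
    """Barabási-Albert preferential-attachment graph as an adjacency dict."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    repeated = list(range(m))              # node ids repeated by degree
    for v in range(m, n):
        targets = set()
        while len(targets) < m:            # m distinct, degree-biased picks
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
            repeated += [v, t]
    return adj

def sirs_step(adj, state, beta, gamma, delta, rng):
    """One synchronous SIRS step: S->I with prob. beta per infected
    neighbour, I->R with prob. gamma, R->S with prob. delta."""
    new = dict(state)
    for v, neighbours in adj.items():
        if state[v] == "S":
            for u in neighbours:
                if state[u] == "I" and rng.random() < beta:
                    new[v] = "I"
                    break
        elif state[v] == "I" and rng.random() < gamma:
            new[v] = "R"
        elif state[v] == "R" and rng.random() < delta:
            new[v] = "S"
    return new

rng = random.Random(1)
adj = ba_graph(500, 3)
state = {v: "S" for v in adj}
for v in rng.sample(sorted(adj), 5):       # seed a few infections
    state[v] = "I"
for _ in range(200):
    state = sirs_step(adj, state, beta=0.05, gamma=0.2, delta=0.05, rng=rng)
print(sum(s == "I" for s in state.values()), "infected of", len(state))
```

Varying `beta` in such a simulation, and comparing networks with different degree distributions, is the numerical counterpart of the mean-field dependence on topology discussed in the abstract.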

  12. MASW on the standard seismic prospective scale using full spread recording

    NASA Astrophysics Data System (ADS)

    Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej

    2015-04-01

    The Multichannel Analysis of Surface Waves (MASW) is a seismic survey method that uses the dispersion curve of surface waves to describe the stiffness of the near surface. It is used mainly at geotechnical engineering scales, with total spread lengths of 5-450 m and spread offsets of 1-100 m; a hammer serves as the seismic source in such surveys. The standard MASW procedure is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to obtain the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to extend this engineering method to a much larger scale, using a standard prospecting spread 20 km long with 4.5 Hz vertical-component geophones. Standard vibroseis and explosive sources are used. Acquisition was conducted on the full spread during each shot. The seismic data used for this analysis were acquired during the Braniewo 2014 project in northern Poland. Results obtained with the standard MASW procedure show that the method can also be applied at this much larger scale; the main additional requirement of this methodology is a much stronger seismic source.

  13. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  14. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals. First, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters. And second, to begin to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  15. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  17. Large-scale spatial population databases in infectious disease research.

    PubMed

    Linard, Catherine; Tatem, Andrew J

    2012-01-01

    Modelling studies on the spatial distribution and spread of infectious diseases are becoming increasingly detailed and sophisticated, with global risk mapping and epidemic modelling studies now popular. Yet, in deriving estimates of populations at risk of disease, these spatial models must rely on existing global and regional datasets on population distribution, which are often based on outdated and coarse-resolution data. Moreover, a variety of different methods have been used to model population distribution at large spatial scales. In this review we describe the main global gridded population datasets that are freely available to health researchers, compare their construction methods, and highlight the uncertainties inherent in these population datasets. We review their application in past studies on disease risk and dynamics, and discuss how the choice of dataset can affect results. Moreover, we highlight how the lack of contemporary, detailed and reliable data on human population distribution in low-income countries is proving a barrier to obtaining accurate large-scale estimates of population at risk and constructing reliable models of disease spread, and suggest research directions required to further reduce these barriers. PMID:22433126

  18. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  19. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  20. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  1. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  2. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  3. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  4. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using a selected host with the necessary posttranslational modifications is one of the key successes in modern biotechnology. This methodology allows the industrial production of proteins that would otherwise be produced only in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as simultaneous separation and concentration of the target protein is achieved, with subsequent removal and recycling of the polymer. The ease of scale-up, combined with the high partition coefficients obtained, allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and the methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass transfer coefficients, hydrodynamic parameters of hold-up, drop size and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297
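    The recovery achievable in aqueous two-phase extraction follows from a simple mass balance over the two phases. As a minimal sketch (a textbook relation, not the authors' design equations), the fraction of protein reporting to the top phase can be written in terms of the partition coefficient K = C_top/C_bottom and the phase volume ratio R = V_top/V_bottom:

    ```python
    def top_phase_yield(partition_coeff: float, volume_ratio: float) -> float:
        """Fraction of product recovered in the top phase of an ATPS:
        Y = K*R / (1 + K*R), from a single-stage mass balance."""
        kr = partition_coeff * volume_ratio
        return kr / (1.0 + kr)
    ```

    With K = 10 and equal phase volumes, about 91% of the protein reports to the top phase; increasing the volume ratio pushes the single-stage yield higher, which is why the high partition coefficients mentioned above make large-scale operation attractive.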

  5. Multi-scale influence of vapor pressure deficit on fire ignition and spread in boreal forest ecosystems

    NASA Astrophysics Data System (ADS)

    Sedano, F.; Randerson, J. T.

    2014-07-01

    Climate-driven changes in the fire regime within boreal forest ecosystems are likely to have important effects on carbon cycling and species composition. In the context of improving fire management options and developing more realistic scenarios of future change, it is important to understand how meteorology regulates different aspects of fire dynamics, including ignition, daily fire spread, and cumulative annual burned area. Here we combined Moderate-Resolution Imaging Spectroradiometer (MODIS) active fires (MCD14ML), MODIS imagery (MOD13A1) and ancillary historic fire perimeter information to produce a data set of daily fire spread maps for Alaska during 2002-2011. This approach provided a spatial and temporally continuous representation of fire progression and a precise identification of ignition and extinction locations and dates for each wildfire. The fire-spread maps were analyzed with daily vapor pressure deficit (VPD) observations from the North American Regional Reanalysis (NARR) and lightning strikes from the Alaska Lightning Detection Network (ALDN). We found a significant relationship between daily VPD and likelihood that a lightning strike would develop into a fire ignition. In the first week after ignition, above average VPD increased the probability that fires would grow to large or very large sizes. Strong relationships also were identified between VPD and burned area at several levels of temporal and spatial aggregation. As a consequence of regional coherence in meteorology, ignition, daily fire spread, and fire extinction events were often synchronized across different fires in interior Alaska. At a regional scale, the sum of positive VPD anomalies during the fire season was positively correlated with annual burned area during the NARR era (1979-2011; R2 = 0.45). Some of the largest fires we mapped had slow initial growth, indicating opportunities may exist for suppression efforts to adaptively manage these forests for climate change. 
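    VPD itself is a simple function of air temperature and relative humidity. A minimal sketch using the common Tetens approximation for saturation vapor pressure (the study derives daily VPD from NARR fields, and its exact formulation may differ):

    ```python
    import math

    def saturation_vapor_pressure_kpa(temp_c: float) -> float:
        """Tetens approximation for saturation vapor pressure (kPa)."""
        return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

    def vapor_pressure_deficit_kpa(temp_c: float, rel_humidity_pct: float) -> float:
        """VPD = saturation vapor pressure minus actual vapor pressure."""
        es = saturation_vapor_pressure_kpa(temp_c)
        return es * (1.0 - rel_humidity_pct / 100.0)
    ```

    A hot, dry fire-weather day (30 °C, 50% RH) gives a VPD of roughly 2.1 kPa, while saturated air gives zero; it is positive anomalies in this quantity that the study correlates with ignition and burned area.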

  6. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  7. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
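    A "limited" fractal dimension of the kind quoted above can be estimated from a point catalogue by box counting: count occupied cells N(ε) at a range of cell sizes and fit the slope of log N(ε) against log(1/ε). The sketch below is a generic estimator under that standard definition, not the correlation-function analysis of this paper:

    ```python
    import math

    def box_counting_dimension(points, epsilons):
        """Estimate a fractal dimension as the slope of
        log N(eps) versus log(1/eps), where N(eps) is the number
        of occupied boxes of side eps covering the point set."""
        logs = []
        for eps in epsilons:
            occupied = {tuple(int(c // eps) for c in p) for p in points}
            logs.append((math.log(1.0 / eps), math.log(len(occupied))))
        n = len(logs)
        mx = sum(x for x, _ in logs) / n
        my = sum(y for _, y in logs) / n
        sxy = sum((x - mx) * (y - my) for x, y in logs)
        sxx = sum((x - mx) ** 2 for x, _ in logs)
        return sxy / sxx  # least-squares slope = dimension estimate
    ```

    Points sampled along a line return a dimension near 1; a D of about 1.2 for the galaxy distribution, as above, sits between line-like and sheet-like (D = 2) geometry, consistent with growth from two-dimensional seeds.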

  8. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  9. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  10. Large-scale polarimetry of large optical galaxies

    NASA Astrophysics Data System (ADS)

    Sholomitskii, G. B.; Maslov, I. A.; Vitrichenko, E. A.

    1999-11-01

    We present preliminary results of wide-field visual CCD polarimetry for large optical galaxies through a concentric multisector radial-tangential polaroid analyzer mounted at the intermediate focus of a Zeiss-1000 telescope. The mean degree of tangential polarization in a 13-arcmin field, which was determined by processing images with imprinted "orthogonal" sectors, ranges from several percent (M 82) and 0.51% (the spirals M 51, M 81) to lower values for elliptical galaxies (M 49, M 87). It is emphasized that the parameters of large-scale polarization can be properly determined by using physical models for galaxies; inclination and azimuthal dependences of the degree of polarization are given for spirals.

  11. Modelling the spread of sexually transmitted diseases on scale-free networks

    NASA Astrophysics Data System (ADS)

    Liu, Mao-Xing; Ruan, Jiong

    2009-06-01

    In this paper a new model for the spread of sexually transmitted diseases (STDs) is presented. The dynamic behaviors of the model on a heterogenous scale-free (SF) network are considered, where the absence of a threshold on the SF network is demonstrated, and the stability of the disease-free equilibrium is obtained. Three immunization strategies, uniform immunization, proportional immunization and targeted immunization, are applied in this model. Analytical and simulated results are given to show that the proportional immunization strategy in the model is effective on SF networks.
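    The absence of a threshold on SF networks follows from the heterogeneous mean-field result that the epidemic threshold is λ_c = ⟨k⟩/⟨k²⟩, which vanishes as the degree cutoff grows when the degree exponent is 3 or less. A sketch of that standard calculation for a truncated power-law degree distribution (illustrative of the general mechanism, not this paper's specific STD model):

    ```python
    def powerlaw_threshold(gamma: float, k_min: int, k_max: int) -> float:
        """Heterogeneous mean-field epidemic threshold lambda_c = <k>/<k^2>
        for a degree distribution P(k) ~ k^-gamma on k_min <= k <= k_max."""
        ks = range(k_min, k_max + 1)
        norm = sum(k ** -gamma for k in ks)
        mean_k = sum(k ** (1 - gamma) for k in ks) / norm
        mean_k2 = sum(k ** (2 - gamma) for k in ks) / norm
        return mean_k / mean_k2
    ```

    As k_max grows, ⟨k²⟩ diverges (for gamma ≤ 3) and the threshold shrinks toward zero. Targeted immunization works precisely by removing the high-degree tail, which restores a finite effective threshold; uniform immunization leaves the tail intact, which is why it performs poorly on SF networks.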

  12. Multi-scale model of epidemic fade-out: Will local extirpation events inhibit the spread of white-nose syndrome?

    PubMed

    O'Reagan, Suzanne M; Magori, Krisztian; Pulliam, J Tomlin; Zokan, Marcus A; Kaul, RajReni B; Barton, Heather D; Drake, John M

    2015-04-01

    White-nose syndrome (WNS) is an emerging infectious disease that has resulted in severe declines of its hibernating bat hosts in North America. The ongoing epidemic of white-nose syndrome is a multi-scale phenomenon because it causes hibernaculum-level extirpations, while simultaneously spreading over larger spatial scales. We investigate a neglected topic in ecological epidemiology: how local pathogen-driven extirpations impact large-scale pathogen spread. Previous studies have identified risk factors for propagation of WNS over hibernaculum and landscape scales but none of these have tested the hypothesis that separation of spatial scales and disease-induced mortality at the hibernaculum level might slow or halt its spread. To test this hypothesis, we developed a mechanistic multi-scale model parameterized using white-nose syndrome county and site incidence data that connects hibernaculum-level susceptible-infectious-removed (SIR) epidemiology to the county-scale contagion process. Our key result is that hibernaculum-level extirpations will not inhibit county-scale spread of WNS. We show that over 80% of counties of the contiguous USA are likely to become infected before the current epidemic is over and that geometry of habitat connectivity is such that host refuges are exceedingly rare. The macroscale spatiotemporal infection pattern that emerges from local SIR epidemiological processes falls within a narrow spectrum of possible outcomes, suggesting that recolonization, rescue effects, and multi-host complexities at local scales are not important to forward propagation of WNS at large spatial scales. If effective control measures are not implemented, precipitous declines in bat populations are likely, particularly in cave-dense regions that constitute the main geographic corridors of the USA, a serious concern for bat conservation. PMID:26214909
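    The hibernaculum-level layer of such a model is a standard SIR system in which "removal" lumps together recovery and disease-induced mortality. A minimal Euler-integrated sketch with hypothetical parameters (not the fitted model of the paper):

    ```python
    def sir_epidemic(beta: float, gamma: float, s0: float = 0.999,
                     i0: float = 0.001, dt: float = 0.01, steps: int = 100000):
        """SIR dynamics within a single hibernaculum, as host fractions.
        beta: transmission rate; gamma: removal rate (recovery + mortality)."""
        s, i = s0, i0
        for _ in range(steps):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s += ds * dt
            i += di * dt
        return s, i, 1.0 - s - i
    ```

    With beta/gamma = 5, essentially the whole colony ends up removed before the local epidemic fades out, i.e., the hibernaculum is effectively extirpated; in the multi-scale model the county-level contagion process has, by then, already seeded neighbouring sites, which is why local extirpation fails to halt spread.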

  13. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  14. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Aamir, Ali; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  15. Exploring scale-up, spread, and sustainability: an instrumental case study tracing an innovation to enhance dysphagia care

    PubMed Central

    2013-01-01

    Background Adoption, adaptation, scale-up, spread, and sustainability are ill-defined, undertheorised, and little-researched implementation science concepts. An instrumental case study will track the adoption and adaptation, or not, of a locally developed innovation about dysphagia as a patient safety issue. The case study will examine a conceptual framework with a continuum of spread comprising hierarchical control or ‘making it happen’, participatory adaptation or ‘help it happen’, and facilitated evolution or ‘let it happen’. Methods This case study is a prospective, longitudinal design using mixed methods. The fifteen-month (October 2012 to December 2013) instrumental case study is set in large, healthcare organisation in England. The innovation refers to introducing a nationally recognised, inter-disciplinary dysphagia competency framework to guide workforce development about fundamental aspects of care. Adoption and adaptation will be examined at an organisational level and along two, contrasting care pathways: stroke and fractured neck of femur. A number of educational interventions will be deployed, including training a cadre of trainers to cascade the essentials of dysphagia management and developing a Dysphagia Toolkit as a learning resource. Mixed methods will be used to investigate scale-up, spread, and sustainability in acute and community settings. A purposive sample of senior managers and clinical leaders will be interviewed to identify path dependency or the context specific particularities of implementation. A pre- and post-evaluation, using mealtime observations and a survey, will investigate the learning effect on staff adherence to patient specific dysphagia recommendations and attitudes towards dysphagia, respectively. Official documents and an ethnographic field journal allow critical junctures, temporal aspects and confounding factors to be explored. Discussion Researching spread and sustainability presents methodological and

  16. Precision Measurement of Large Scale Structure

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2001-01-01

    The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.

  17. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  18. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376

  19. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Marriage, Tobias; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    Some of the most compelling inflation models predict a background of primordial gravitational waves (PGW) detectable by their imprint of a curl-like "B-mode" pattern in the polarization of the Cosmic Microwave Background (CMB). The Cosmology Large Angular Scale Surveyor (CLASS) is a novel array of telescopes to measure the B-mode signature of the PGW. By targeting the largest angular scales (>2°) with a multifrequency array, novel polarization modulation and detectors optimized for both control of systematics and sensitivity, CLASS sets itself apart in the field of CMB polarization surveys and opens an exciting new discovery space for the PGW and inflation. This poster presents an overview of the CLASS project.

  20. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  1. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  2. Scaling relations for large Martian valleys

    NASA Astrophysics Data System (ADS)

    Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.

    2009-02-01

    The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remains controversial. We analyzed scaling relations for the 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each because these relations are identified as either theoretically predicted or robust terrestrial empiricisms for graded precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy those empiricisms. On Mars, the width-area scaling exponent b ranges from -0.7 to 4.7, in contrast with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent θ ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; and the length-area, or Hack's, exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point source discharges provide terrestrial analogs best suited to describe the other large Martian valleys.
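    Each of these exponents is the slope of a straight-line fit in log-log space; for instance, b comes from regressing log channel width on log drainage area. A generic sketch of that estimator (the abstract does not specify the authors' exact fitting procedure):

    ```python
    import math

    def scaling_exponent(areas, widths):
        """Least-squares slope of log(width) versus log(area), i.e. the
        exponent b in W ~ A^b. The same recipe applies to the slope-area
        and Hack's length-area relations."""
        xs = [math.log(a) for a in areas]
        ys = [math.log(w) for w in widths]
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        return sxy / sxx
    ```

    A synthetic channel obeying W = 2 A^0.5 returns b = 0.5, inside the terrestrial 0.3-0.6 band; the Martian exponents quoted above scatter far outside it.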

  3. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve an excellent efficiency for applications in statistical mechanics, particle dynamics and networks analysis by using suitable memory access patterns and mechanisms like CUDA streams, profiling tools, etc. Similar concepts and techniques may be applied also to other problems like the solution of Partial Differential Equations.

  4. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-ray energies, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for progress in the field. Non-detections at these photon energies have already allowed us to draw important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation and CR acceleration.

  5. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  6. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  7. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures can exhibit band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method was used to calculate the band structures of the proposed metamaterials.

  8. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process that produces a diverse repertoire of cellular glycans, which are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The results uncovered yield experimentally testable predictions for the glycosylation process and can be applied to the engineering of therapeutic glycoproteins.

  9. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.; Rouvreau, Sebastien; Toth, Balazs; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  10. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  11. Taking Teacher Learning to Scale: Sharing Knowledge and Spreading Ideas across Geographies

    ERIC Educational Resources Information Center

    Klein, Emily J.; Jaffe-Walter, Reva; Riordan, Megan

    2016-01-01

    This research reports data from case studies of three intermediary organizations facing the challenge of scaling up teacher learning. The turn of the century launched scaling-up efforts of all three intermediaries, growing from intimate groups, where founding teachers and staff were key supports for teacher learning, to large multistate…

  12. The spread of computer viruses over a reduced scale-free network

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-02-01

    Due to the high dimensionality of an epidemic model of computer viruses over a general scale-free network, it is difficult to make a close study of its dynamics. In particular, it is extremely difficult, if not impossible, to prove the global stability of its viral equilibrium, if any. To overcome this difficulty, we suggest simplifying a general scale-free network by partitioning all of its nodes into two classes, higher-degree nodes and lower-degree nodes, and then equating the degrees of all higher-degree nodes and of all lower-degree nodes, respectively, yielding a reduced scale-free network. We then propose an epidemic model of computer viruses over a reduced scale-free network. A theoretical analysis reveals that the proposed model is bound to have a globally stable viral equilibrium, implying that any attempt to eradicate network viruses would prove unavailing. As a result, the next best thing we can do is to restrain virus prevalence. Based on an analysis of the impact of different model parameters on virus prevalence, some practicable measures are recommended to contain virus spreading. The work in this paper adequately justifies the idea of reduced scale-free networks.
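    The two-class reduction described above can be sketched as a degree-based mean-field SIS model in which all higher-degree nodes share one degree and all lower-degree nodes another. The following Python sketch (parameter values are illustrative, not taken from the paper) integrates the two coupled equations with a forward-Euler step; above the epidemic threshold the prevalence settles at a positive, stable viral equilibrium rather than dying out:

    ```python
    def simulate_reduced_sis(k_hi=20.0, k_lo=4.0, p_hi=0.1,
                             beta=0.05, gamma=0.2, dt=0.01, steps=20000):
        """Degree-based mean-field SIS dynamics on a two-class ("reduced")
        scale-free network: every high-degree node has degree k_hi and
        every low-degree node has degree k_lo.  All values illustrative."""
        p_lo = 1.0 - p_hi
        k_mean = p_hi * k_hi + p_lo * k_lo
        i_hi = i_lo = 0.01                      # initial infected fractions
        for _ in range(steps):
            # probability that a randomly chosen edge points to an infected node
            theta = (p_hi * k_hi * i_hi + p_lo * k_lo * i_lo) / k_mean
            d_hi = beta * k_hi * (1.0 - i_hi) * theta - gamma * i_hi
            d_lo = beta * k_lo * (1.0 - i_lo) * theta - gamma * i_lo
            i_hi += dt * d_hi
            i_lo += dt * d_lo
        return i_hi, i_lo

    # With these parameters the infection persists at a positive equilibrium,
    # with higher prevalence among the high-degree nodes.
    i_hi, i_lo = simulate_reduced_sis()
    ```

    The qualitative point matches the abstract: once the equilibrium is positive and stable, containment (lowering prevalence), not eradication, is the realistic goal.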

  13. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.

  14. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  15. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  16. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus is effectively a very long polymer packed into a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA must remain largely unentangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, which scales as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability that two monomers a linear distance s apart meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  17. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  18. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-02-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  19. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available online for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803

  20. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  1. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  2. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  3. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β″-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  4. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering the wavelength-dependent concentration factor, absorption, and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893

  5. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as ``nuclear pasta'' are expected to naturally exist in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  6. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
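    The validation step mentioned above, checking analytic design sensitivities against an overall finite-difference calculation, can be sketched generically. Here a central-difference derivative is compared with a hand-derived gradient for a toy response function (the names are illustrative; MSC/NASTRAN's actual interfaces are not shown):

    ```python
    def central_difference(f, x, h=1e-6):
        """Central finite-difference approximation of df/dx."""
        return (f(x + h) - f(x - h)) / (2.0 * h)

    def check_sensitivity(f, analytic_grad, x, tol=1e-4):
        """Compare an analytic sensitivity with its finite-difference
        estimate, as is commonly done to validate gradient code."""
        fd = central_difference(f, x)
        return abs(fd - analytic_grad(x)) < tol

    # Toy "response" of a single design variable x (e.g. a weight-like
    # quantity), with its hand-derived sensitivity.
    response = lambda x: x ** 2 + 3.0 * x
    d_response = lambda x: 2.0 * x + 3.0

    ok = check_sensitivity(response, d_response, x=2.0)
    ```

    The same idea scales to many design variables by perturbing one grid coordinate at a time, which is exactly why analytic sensitivities are preferred for large models: they avoid one reanalysis per variable.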

  7. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays, digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  8. Self-synchronization for spread spectrum audio watermarks after time scale modification

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Sharma, Gaurav

    2014-02-01

    De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical in classical communications, watermarking techniques such as spread spectrum (SS) can perform poorly. Conversely, specialized synchronization solutions can be challenging to analyze/optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments to find a sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse desynchronization before detection. Without these extra synchronization steps, analysis for the proposed scheme builds on classical SS concepts and allows characterizing the relationship between the size of the search space (number of detection alignment tests) and intrinsic robustness (continuous search-space region covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is also demonstrated for a practical audio watermarking scheme developed in the proposed framework.
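    A minimal sketch of the classical additive spread-spectrum scheme that such synchronization methods build on: a key-seeded ±1 chip sequence is added to the host signal, and blind detection correlates the received block with the regenerated chips, with no reference to the unmodified host. All names and parameter values here are illustrative, not the paper's scheme:

    ```python
    import random

    def regen_chips(key, n):
        """Key-seeded pseudo-random +/-1 chip sequence."""
        rng = random.Random(key)
        return [rng.choice((-1.0, 1.0)) for _ in range(n)]

    def embed_ss(host, key, alpha=0.1):
        """Additive spread-spectrum embedding: host + alpha * chips."""
        chips = regen_chips(key, len(host))
        return [s + alpha * c for s, c in zip(host, chips)]

    def detect_ss(received, key):
        """Blind correlation detector: a large positive statistic means
        the watermark for this key is present."""
        chips = regen_chips(key, len(received))
        return sum(s * c for s, c in zip(received, chips)) / len(received)

    random.seed(1)
    host = [random.gauss(0.0, 1.0) for _ in range(8192)]
    marked = embed_ss(host, key=42)
    stat_right = detect_ss(marked, key=42)   # concentrates near alpha
    stat_wrong = detect_ss(marked, key=7)    # concentrates near zero
    ```

    The detector above assumes perfect alignment; an insertion or deletion of even a few samples decorrelates the chips, which is precisely the failure mode the block-alignment search in the paper is designed to repair.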

  9. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  10. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
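    The full-space KKT system at the heart of an SQP iteration can be sketched on a toy equality-constrained problem. This sketch uses the exact Hessian of the Lagrangian rather than the reduced-Hessian quasi-Newton approximation the paper develops, and the problem and starting point are invented for illustration.

    ```python
    import numpy as np

    # Toy problem: minimize x^2 + y^2 subject to x*y = 1.
    # Known solution: (1, 1) with multiplier lambda = -2.
    def grad_f(x):  return 2.0 * x
    def hess_f(x):  return 2.0 * np.eye(2)
    def c(x):       return np.array([x[0] * x[1] - 1.0])
    def jac_c(x):   return np.array([[x[1], x[0]]])
    def hess_c(x):  return np.array([[0.0, 1.0], [1.0, 0.0]])

    x, lam = np.array([1.5, 0.8]), np.array([-1.0])

    for _ in range(10):
        W = hess_f(x) + lam[0] * hess_c(x)   # Hessian of the Lagrangian
        A = jac_c(x)
        # QP subproblem's KKT system: solves for the step p and the new
        # multiplier estimate in one linear solve.
        K = np.block([[W, A.T], [A, np.zeros((1, 1))]])
        rhs = np.concatenate([-grad_f(x), -c(x)])
        sol = np.linalg.solve(K, rhs)
        x, lam = x + sol[:2], sol[2:]
    ```

    Large-scale variants such as the one described above avoid forming W explicitly, maintaining only a quasi-Newton approximation of its projection onto the null space of A.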

  11. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150°C, coming quite close to typical operating conditions up to 125°C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
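    The practical impact of the lower activation energy can be illustrated with the temperature term of the standard Arrhenius lifetime model used in electromigration extrapolation. Only the two Ea values come from the abstract; the stress temperature below is a hypothetical example.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius_af(ea_ev, t_stress_c, t_use_c):
        """Temperature acceleration factor exp(Ea/k * (1/T_use - 1/T_stress))."""
        t_stress, t_use = t_stress_c + 273.15, t_use_c + 273.15
        return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

    # Hypothetical stress test at 300 C, extrapolated to 125 C operation:
    af_low  = arrhenius_af(0.83, 300.0, 125.0)   # down-flow early-mode Ea
    af_high = arrhenius_af(0.90, 300.0, 125.0)   # Cu/SiCN interface-diffusion Ea

    # A lower Ea yields a smaller acceleration factor, i.e. less lifetime
    # margin at use conditions than the 0.90 eV value would suggest.
    ```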

  12. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipoles. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, tau. (c) 2014 COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  13. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, the modeling of structural dynamic response, and the design of innovative structural configurations. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  14. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, the modeling of structural dynamic response, and the design of innovative structural configurations. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  15. The French Connection: The First Large Population-Based Contact Survey in France Relevant for the Spread of Infectious Diseases

    PubMed Central

    Béraud, Guillaume; Kazmercziak, Sabine; Beutels, Philippe; Levy-Bruhl, Daniel; Lenne, Xavier; Mielcarek, Nathalie; Yazdanpanah, Yazdan; Boëlle, Pierre-Yves; Hens, Niel; Dervaux, Benoit

    2015-01-01

    Background: Empirical social contact patterns are essential to understand the spread of infectious diseases. To date, no such data existed for France. Although infectious diseases are frequently seasonal, the temporal variation of contact patterns has not been documented hitherto. Methods: COMES-F is the first French large-scale population survey, carried out over 3 different periods (February-March, April, April-May) with some participants common to the first and the last period. Participants described their contacts for 2 consecutive days, and reported separately on professional contacts when these typically exceeded 20 per day. Results: 2033 participants reported 38,881 contacts (weighted median [first quartile-third quartile]: 8 [5–14] per day), and 54,378 contacts with supplementary professional contacts included (9 [5–17]). Unlike age, gender, household size, holidays, weekends, and occupation, the period of the year had little influence on the number of contacts or the mixing patterns. Contact patterns were highly assortative with age, irrespective of the location of the contact, and with gender, with women having 8% more contacts than men. Although most contacts occurred at home and at school, the inclusion of professional contacts modified the structure of the mixing patterns. Holidays and weekends dramatically reduced the number of contacts and, as proxies for school closure, reduced R0 by 33% and 28%, respectively. Thus, school closures could have an important impact on the spread of close-contact infections in France. Conclusions: Despite no clear evidence for temporal variation, trends suggest that more studies are needed. Age and gender were found to be important determinants of the mixing patterns. Gender differences in mixing patterns might help explain gender differences in the epidemiology of infectious diseases. PMID:26176549
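    The link between contact reduction and R0 can be sketched with a next-generation-style argument: R0 scales with the dominant eigenvalue of the age-structured contact matrix, so removing school contacts lowers that eigenvalue. The 3-group matrices below are invented for illustration and are not the COMES-F estimates.

    ```python
    import numpy as np

    # Hypothetical mean daily contacts between child/adult/elderly groups.
    regular = np.array([[9.0, 3.0, 1.0],
                        [3.0, 5.0, 2.0],
                        [1.0, 2.0, 3.0]])
    holiday = regular.copy()
    holiday[0, 0] = 2.0   # school closure removes most child-child contacts

    def r0_scale(contact_matrix):
        """R0 is proportional to the dominant eigenvalue of the contact matrix."""
        return max(abs(np.linalg.eigvals(contact_matrix)))

    # Relative R0 reduction implied by the drop in child-child contacts.
    reduction = 1.0 - r0_scale(holiday) / r0_scale(regular)
    ```

    Even though only one cell of the matrix changes, the dominant eigenvalue drops by roughly a quarter in this toy setup, which is the mechanism behind the holiday/weekend R0 reductions reported in the survey.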

  16. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations, the relationship between galaxy and matter overdensities (bias) is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model, galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias), conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than the present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects of nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these

  17. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  18. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environments of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes, recently developed by us, with implicit solvent and soft-core potentials, we simulated large-scale red-blood-cell bilayers with dimensions ˜ 10-1 μm^2, with an explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  19. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  20. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
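    The kind of forecast described here can be sketched with a toy Fisher matrix: for observables with independent Gaussian errors, F_ab = Σ_i (∂o_i/∂θ_a)(∂o_i/∂θ_b)/σ_i², and the marginalized 1-σ error on a parameter is sqrt((F⁻¹)_aa). The two-parameter model and error bars below are hypothetical stand-ins, not the survey configuration of the paper.

    ```python
    import numpy as np

    x = np.linspace(0.1, 1.0, 50)
    sigma = 0.05 * np.ones_like(x)            # assumed measurement errors

    # Hypothetical two-parameter model o(x) = p0*x + p1*x**2;
    # partial derivatives evaluated at the fiducial point.
    d_p0, d_p1 = x, x**2

    derivs = np.stack([d_p0, d_p1])
    F = (derivs / sigma**2) @ derivs.T        # 2x2 Fisher matrix
    cov = np.linalg.inv(F)

    marg_err_p0 = np.sqrt(cov[0, 0])          # marginalized over p1
    cond_err_p0 = 1.0 / np.sqrt(F[0, 0])      # p1 held fixed
    ```

    Marginalizing over correlated parameters inflates the error bar relative to holding them fixed, which is why neglected terms (like cosmic magnification in the abstract) can bias the inferred curvature rather than merely widening its error.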

  1. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture, and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human brain, includes high-quality atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging, as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  2. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show that up to 300-550 million people could be fed by crops grown on the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced on the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested on the acquired land could ensure food security to the local populations.

  3. Genome Scale Evolution of Myxoma Virus Reveals Host-Pathogen Adaptation and Rapid Geographic Spread

    PubMed Central

    Kerr, Peter J.; Rogers, Matthew B.; Fitch, Adam; DePasse, Jay V.; Cattadori, Isabella M.; Twaddle, Alan C.; Hudson, Peter J.; Tscharke, David C.; Read, Andrew F.; Holmes, Edward C.

    2013-01-01

    The evolutionary interplay between myxoma virus (MYXV) and the European rabbit (Oryctolagus cuniculus) following release of the virus in Australia in 1950 as a biological control is a classic example of host-pathogen coevolution. We present a detailed genomic and phylogeographic analysis of 30 strains of MYXV, including the Australian progenitor strain Standard Laboratory Strain (SLS), 24 Australian viruses isolated from 1951 to 1999, and three isolates from the early radiation in Britain from 1954 and 1955. We show that in Australia MYXV has spread rapidly on a spatial scale, with multiple lineages cocirculating within individual localities, and that both highly virulent and attenuated viruses were still present in the field through the 1990s. In addition, the detection of closely related virus lineages at sites 1,000 km apart suggests that MYXV moves freely in geographic space, with mosquitoes, fleas, and rabbit migration all providing means of transport. Strikingly, despite multiple introductions, all modern viruses appear to be ultimately derived from the original introductions of SLS. The rapidity of MYXV evolution was also apparent at the genomic scale, with gene duplications documented in a number of viruses. Duplication of potential virulence genes may be important in increasing the expression of virulence proteins and provides the basis for the evolution of novel functions. Mutations leading to loss of open reading frames were surprisingly frequent and in some cases may explain attenuation, but no common mutations that correlated with virulence or attenuation were identified. PMID:24067966

  4. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  5. Design advanced for large-scale, economic, floating LNG plant

    SciTech Connect

    Naklie, M.M.

    1997-06-30

    A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure, concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl. Both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

  6. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  7. Autonomic Computing Paradigm For Large Scale Scientific And Engineering Applications

    NASA Astrophysics Data System (ADS)

    Hariri, S.; Yang, J.; Zhang, Y.

    2005-12-01

    Large-scale distributed scientific applications are highly adaptive and heterogeneous in terms of their computational requirements. The computational complexity associated with each computational region or domain varies continuously and dramatically, both in space and time, throughout the whole life cycle of the application execution. Furthermore, the underlying distributed computing environment is similarly complex and dynamic in the availabilities and capacities of the computing resources. Together, these challenges make the current paradigms, which are based on passive components and static compositions, ineffectual. The Autonomic Computing paradigm is an approach that efficiently addresses the complexity and dynamism of large-scale scientific and engineering applications and realizes the self-management of these applications. In this presentation, we present an Autonomic Runtime Manager (ARM) that supports the development of autonomic applications. The ARM includes two modules: an online monitoring and analysis module and an autonomic planning and scheduling module. The ARM behaves as a closed-loop control system that dynamically controls and manages the execution of the applications at runtime. It regularly senses the state changes of both the applications and the underlying computing resources. It then uses this runtime information and prior knowledge about the application behavior and its physics to identify the appropriate solution methods as well as the required computing and storage resources. Consequently, this approach enables us to develop autonomic applications, which are capable of self-management and self-optimization. We have developed and implemented the autonomic computing paradigms for several large-scale applications such as wildfire simulations, simulations of flow through variably saturated geologic formations, and life sciences. The distributed wildfire simulation models the wildfire spread behavior by considering such factors as fuel

  8. Geometric origin of scaling in large traffic networks.

    PubMed

    Popović, Marko; Štefančić, Hrvoje; Zlatić, Vinko

    2012-11-16

    Large-scale traffic networks are an indispensable part of contemporary human mobility and international trade. Networks of airport travel and cargo ship movements are invaluable for the understanding of human mobility patterns [R. Guimera et al., Proc. Natl. Acad. Sci. U.S.A. 102, 7794 (2005)], epidemic spreading [V. Colizza et al., Proc. Natl. Acad. Sci. U.S.A. 103, 2015 (2006)], global trade [International Maritime Organization, http://www.imo.org/], and the spread of invasive species [G. M. Ruiz et al., Nature (London) 408, 49 (2000)]. Different studies [M. Barthelemy, Phys. Rep. 499, 1 (2011)] point to the universal character of some of the exponents measured in such networks. Here we show that the exponents which relate (i) the strength of nodes to their degree and (ii) the weights of links to the degrees of the nodes that they connect have a geometric origin. We present a simple, robust model which exhibits the observed power laws and relates the exponents to the dimensionality of the 2D space in which traffic networks are embedded. We show that the relation between weight strength and degree is s(k) ~ k^(3/2), the relation between distance strength and degree is s_d(k) ~ k^(3/2), and the relation between the weight of a link and the degrees of the linked nodes is w_ij ~ (k_i k_j)^(1/2) on a planar 2D surface. We further analyze the influence of spherical geometry, relevant for the whole planet, on the exact values of these exponents. Our model predicts that these exponents should be found in future studies of port networks and imposes constraints on more refined models of port networks. PMID:23215527
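    Exponents such as s(k) ~ k^(3/2) are typically measured as the slope of a log-log regression of strength against degree. A minimal sketch with synthetic data (invented amplitude and noise level, not measurements from any real network):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic strength-vs-degree data following the predicted s(k) ~ k^(3/2),
    # with multiplicative lognormal scatter.
    k = np.arange(2, 200)
    s = 0.7 * k**1.5 * np.exp(rng.normal(0.0, 0.05, k.size))

    # The exponent is the slope of the log-log fit.
    slope, intercept = np.polyfit(np.log(k), np.log(s), 1)
    ```

    With clean power-law data the fitted slope recovers the 3/2 exponent; in real port-network data, binning and heavy-tailed degree distributions make the fit more delicate.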

  9. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms, and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms, and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection-diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
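The direct sensitivity idea for a steady-state problem can be shown on a toy linear model. This is a sketch under stated assumptions (a 2x2 system A u = b(p) with b(p) = p*c; `solve2` is a stand-in for a real solver), not the methods of the report:

```python
# Sketch of a direct (forward) sensitivity computation for a steady-state
# linear model A u = b(p) with b(p) = p * c: the sensitivity du/dp solves
# the same system with right-hand side db/dp = c. Toy 2x2 example.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

A = [[4.0, 1.0], [1.0, 3.0]]
c = [1.0, 2.0]
p = 2.0

u = solve2(A, [p * ci for ci in c])   # state at parameter p
dudp = solve2(A, c)                   # direct sensitivity du/dp

# Finite-difference check: u(p + h) ~ u(p) + h * du/dp
h = 1e-6
u_h = solve2(A, [(p + h) * ci for ci in c])
fd = [(uh - ui) / h for uh, ui in zip(u_h, u)]
print(all(abs(f - d) < 1e-6 for f, d in zip(fd, dudp)))  # True
```

For many parameters the adjoint approach solves one transposed system instead of one forward solve per parameter, which is the efficiency argument made above.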

  10. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
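The solver-scaling problem described above (Krylov iteration counts growing with problem size, hence the need for ILU/AMG preconditioning) can be demonstrated in miniature. A pure-Python sketch with unpreconditioned conjugate gradient on the 1D Laplacian, an assumption-laden stand-in for the authors' 3D Stokes systems:

```python
# Illustration of why preconditioning matters at scale: unpreconditioned
# Krylov (CG) iterations on a 1D Poisson system grow with problem size.

def cg_iters(n, tol=1e-8):
    """CG on the tridiagonal 1D Laplacian (-1, 2, -1) with b = ones;
    returns the iterations needed to reduce the residual by tol."""
    def matvec(x):
        y = [0.0] * n
        for i in range(n):
            y[i] = 2.0 * x[i]
            if i > 0:
                y[i] -= x[i - 1]
            if i < n - 1:
                y[i] -= x[i + 1]
        return y

    b = [1.0] * n
    x = [0.0] * n
    r = b[:]
    p = r[:]
    rr = sum(ri * ri for ri in r)
    r0 = rr ** 0.5
    for it in range(1, 10 * n):
        Ap = matvec(p)
        alpha = rr / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new ** 0.5 <= tol * r0:
            return it
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return 10 * n

print(cg_iters(50), cg_iters(200))  # iteration count grows with n
```

The condition number of this operator grows like n^2, so the iteration count climbs with resolution; ILU, direct solvers (MUMPS), and AMG are the standard escalation path the abstract walks through.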

  11. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
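Complete synchronization of two Boolean networks means their states agree after coupling, for every initial condition. A toy sketch with hypothetical 2-node update rules and a master-slave coupling (not the paper's aggregation algorithm) checks this definition exhaustively:

```python
from itertools import product

# Toy sketch of complete synchronization of two coupled Boolean networks:
# a 2-node drive network and a response network that is driven by the
# drive's state (master-slave coupling). Update rules are hypothetical.

def step(x):
    """Drive network: x1' = x1 XOR x2, x2' = x1 AND x2."""
    return (x[0] ^ x[1], x[0] & x[1])

def response_step(x, y):
    """Response update under coupling: the slave is slaved to the
    master's dynamics, so its next state is computed from x."""
    return step(x)

def synchronizes():
    # Complete synchronization: from every pair of initial states, drive
    # and response agree after one coupled step.
    for x0, y0 in product(product((0, 1), repeat=2), repeat=2):
        if step(x0) != response_step(x0, y0):
            return False
    return True

print(synchronizes())  # True for this master-slave coupling
```

For large-scale networks such exhaustive state-space checks are infeasible, which is exactly why the paper's aggregation algorithm decomposes the network first.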

  12. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be built. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors, and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and able to support ongoing assembly (assemblable) to arrive at the assembly-complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft while continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and the lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) The specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) The approach to stage specifications (each stage consists of the launched module added to the current on-orbit spacecraft); specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) The verification approach, which, due to the schedule constraints, is primarily analysis supported by testing; specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned: where can we improve this complex system design and integration task?

  13. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N^2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N^2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N^2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N^2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N^2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
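The monochromatic AllNN problem above can be illustrated with a naive O(N^2) reference and a simple uniform-grid pruning scheme, a toy stand-in (not the paper's multitree algorithms) for how spatial partitioning avoids most distance comparisons:

```python
import random

# AllNN sketch: for each point, find its nearest neighbor in the same set.
# naive_allnn is the O(N^2) reference; grid_allnn buckets points into a
# uniform grid and expands the search ring until the best distance is
# provably inside the scanned cells.

def naive_allnn(pts):
    out = []
    for i, p in enumerate(pts):
        best, bj = float("inf"), -1
        for j, q in enumerate(pts):
            if i == j:
                continue
            d = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
            if d < best:
                best, bj = d, j
        out.append(bj)
    return out

def grid_allnn(pts, cell=0.1):
    buckets = {}
    for j, q in enumerate(pts):
        buckets.setdefault((int(q[0] / cell), int(q[1] / cell)), []).append(j)
    out = []
    for i, p in enumerate(pts):
        ci, cj = int(p[0] / cell), int(p[1] / cell)
        best, bj, ring = float("inf"), -1, 1
        # any point within (ring-1)*cell of p lies in the scanned cells,
        # so stop once best <= ((ring-1)*cell)^2
        while bj < 0 or best > ((ring - 1) * cell) ** 2:
            for a in range(ci - ring, ci + ring + 1):
                for b in range(cj - ring, cj + ring + 1):
                    for j in buckets.get((a, b), []):
                        if j == i:
                            continue
                        q = pts[j]
                        d = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                        if d < best:
                            best, bj = d, j
            ring += 1
        out.append(bj)
    return out

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(200)]
print(naive_allnn(pts) == grid_allnn(pts))  # True
```

Tree-based methods (kd-trees, and the multitree generalizations of the paper) apply the same prune-by-distance-bound idea adaptively and hierarchically.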

  14. The Role of Postoperative Radiotherapy for Large Nerve Perineural Spread of Cancer of the Head and Neck.

    PubMed

    Gorayski, Peter; Foote, Matthew; Porceddu, Sandro; Poulsen, Michael

    2016-04-01

    Large nerve perineural spread (LNPNS) is an uncommon but serious sequela of cutaneous and salivary gland malignancies arising in the head and neck. This distinct clinical entity is caused by malignant cell spread along the course of larger (named) cranial nerves in a bidirectional pattern, toward the origins of the nerve in the brainstem and/or its most distal branches residing in the dermis. Untreated, LNPNS causes multiple cranial neuropathies that significantly impact quality of life and is ultimately fatal. Curative treatment involves en bloc surgical resection of all known involved sites of gross disease followed by risk-adapted postoperative radiotherapy (PORT) to improve local control. We review the evidence for contemporary practice and outline the processes involved in the delivery of PORT using the zonal anatomical classification. PMID:27123394

  15. Scale-space point spread function based framework to boost infrared target detection algorithms

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2016-07-01

    Small target detection is one of the major concerns in the development of infrared surveillance systems. Detection algorithms based on Gaussian target modeling have attracted the most attention from researchers in this field. However, the lack of accurate target modeling limits the performance of this type of infrared small target detection algorithm. In this paper, the signal-to-clutter ratio (SCR) improvement mechanism of the matched filter is described in detail, and the effect of the point spread function (PSF) on the intensity and spatial distribution of the target pixels is clarified comprehensively. Next, a new parametric model for small infrared targets is developed based on the PSF of the imaging system, which can be considered a matched filter. Based on this model, a new framework to boost model-based infrared target detection algorithms is presented. To show the performance of this new framework, the proposed model is adopted in the Laplacian scale-space algorithm, which is well known in the small infrared target detection field. Simulation results show that the proposed framework has better detection performance than the Gaussian one and improves the overall performance of the IRST system. Analyzed quantitatively, the proposed algorithm based on this new framework shows at least 20% improvement in the output SCR values in comparison with the Laplacian of Gaussian (LoG) algorithm.
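The SCR-improvement mechanism of a matched filter can be shown in one dimension. This is a sketch with assumed parameters (Gaussian PSF of width 2 pixels, white clutter of known standard deviation), not the paper's imaging model:

```python
import math, random

# 1D sketch of matched-filter SCR gain: a point target blurred by the
# system PSF (modeled Gaussian here) sits in white clutter; correlating
# with the PSF raises the signal-to-clutter ratio.

random.seed(3)
n, sigma, amp = 101, 2.0, 1.0
psf = [math.exp(-0.5 * ((i - n // 2) / sigma) ** 2) for i in range(n)]
norm = sum(psf)
psf = [v / norm for v in psf]            # unit-area PSF

clutter_sd = 0.05
scene = [amp * psf[i] + random.gauss(0.0, clutter_sd) for i in range(n)]

def scr(signal, noise_sd):
    return max(signal) / noise_sd

# Matched filtering = correlation with the (symmetric) PSF kernel.
kernel = psf[n // 2 - 6 : n // 2 + 7]    # 13-tap truncated PSF
filtered = [
    sum(kernel[k] * scene[i + k - 6] for k in range(13))
    for i in range(6, n - 6)
]
# White clutter is attenuated by the kernel's L2 norm after filtering.
out_sd = clutter_sd * math.sqrt(sum(k * k for k in kernel))
print(scr(filtered, out_sd) > scr(scene, clutter_sd))  # True
```

The paper's point is that using the true PSF as the template, rather than a generic Gaussian, is what maximizes this gain.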

  16. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  17. CFD model for large hazardous dense cloud spread predictions, with particular reference to Bhopal disaster

    NASA Astrophysics Data System (ADS)

    Mishra, Kirti Bhushan

    2015-09-01

    A volumetric-source-based CFD (Computational Fluid Dynamics) model for estimating the wind- and gravity-driven spread of an elevated release of a dense hazardous cloud on a flat terrain, without and with obstacles, is demonstrated. The model considers the development of a worst-case scenario similar to the one that occurred at Bhopal. Fully developed clouds of a dense gas having different densities, under an ABL (Atmospheric Boundary Layer) with calm ground wind conditions, are first obtained. These clouds are then allowed to spread under an ABL with different ground wind speeds and gravity conditions. The developed model is validated against a grid-independence study, fluid dynamical evidence, post-disaster facts, the downwind MIC (Methyl Isocyanate) concentrations estimated by earlier models, and experiments on dense plume trajectories. It is shown that in the case of an active dispersion under calm wind conditions the lateral spread would prevail over the downwind spread. The presence of a dense medium behaves like a weak porous medium and initiates turbulence at much smaller downwind distances than would normally occur without the dense medium. The safety distances from toxic exposures of MIC are predicted by specifying an isosurface of a minimum concentration above the ground surface. Discrepancies in near-field predictions still exist. However, the far-field predictions agree well with data published before.

  18. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  19. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale k_m for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale k_m, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  20. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  1. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  2. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  3. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  4. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  5. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  6. Spreading of Polymer Films at the Molecular Scale: Conformation, Orientation, and Fractionation.

    NASA Astrophysics Data System (ADS)

    Barrett, Michael; Nese, Alper; Matyjaszewski, Krzysztof; Sheiko, Sergei

    2009-03-01

    Previously, we have reported that comb-like polymer macromolecules undergo a plug flow with an insignificant contribution of molecular diffusion (Phys. Rev. Lett. 93, 206103, 2004). It was also suggested that the composition of the flowing polymer melt was the same both inside the fluid reservoir (drop) and in the precursor film. This work called into question the macroscopic picture of polymer spreading. Through molecular imaging by AFM, we observe that macromolecules spread at different velocities depending on their size. We show that flow causes the molecules to align perpendicular to the flow direction. We have also identified specific molecular conformations, such as hairpins, that become more abundant in spreading films. Lastly, we demonstrate that chain entanglements hinder permeation of long macromolecules from the drop to the precursor film. These findings shed light on the molecular mechanism of spreading of polymer melts on natural, i.e. heterogeneous, substrates.

  7. Light propagation and large-scale inhomogeneities

    SciTech Connect

    Brouzakis, Nikolaos; Tetradis, Nikolaos; Tzavara, Eleftheria E-mail: ntetrad@phys.uoa.gr

    2008-04-15

    We consider the effect on the propagation of light of inhomogeneities with sizes of order 10 Mpc or larger. The Universe is approximated through a variation of the Swiss-cheese model. The spherical inhomogeneities are void-like, with central underdensities surrounded by compensating overdense shells. We study the propagation of light in this background, assuming that the source and the observer occupy random positions, so that each beam travels through several inhomogeneities at random angles. The distribution of luminosity distances for sources with the same redshift is asymmetric, with a peak at a value larger than the average one. The width of the distribution and the location of the maximum increase with increasing redshift and length scale of the inhomogeneities. We compute the induced dispersion and bias of cosmological parameters derived from the supernova data. They are too small to explain the perceived acceleration without dark energy, even when the length scale of the inhomogeneities is comparable to the horizon distance. Moreover, the dispersion and bias induced by gravitational lensing at the scales of galaxies or clusters of galaxies are larger by at least an order of magnitude.

  8. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for the implementations are the CRAY-2S/4-128 and the Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which the dominant singular values and corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
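The core of these iterative SVD methods can be shown at its simplest: power iteration on A^T A recovers the largest singular triplet of a matrix stored in sparse triple form. A sketch on a tiny hand-built matrix (the Lanczos and subspace methods of the abstract are more sophisticated relatives of this):

```python
import math, random

# Largest singular value of a sparse matrix via power iteration on A^T A.
# The matrix is stored as (row, col, value) triples, as in the sparse
# term-document setting. Here A = [[2,0,1],[0,3,0],[1,0,2]], whose
# largest singular value is 3.

A = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0), (2, 0, 1.0), (2, 2, 2.0)]
m = n = 3

def Ax(x):
    y = [0.0] * m
    for i, j, v in A:
        y[i] += v * x[j]
    return y

def ATy(y):
    z = [0.0] * n
    for i, j, v in A:
        z[j] += v * y[i]
    return z

random.seed(0)
v = [random.random() for _ in range(n)]
for _ in range(200):
    w = ATy(Ax(v))                        # one step of power iteration
    nrm = math.sqrt(sum(c * c for c in w))
    v = [c / nrm for c in w]              # v -> dominant right-singular vector

sigma = math.sqrt(sum(c * c for c in Ax(v)))  # largest singular value
print(round(sigma, 6))  # 3.0
```

Lanczos bidiagonalization builds a whole Krylov basis per matrix-vector product, so it delivers several triplets at once rather than one.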

  9. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of the coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  10. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  11. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  12. Large-Scale Organizational Performance Improvement.

    ERIC Educational Resources Information Center

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  13. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with the appropriate trade-off between simulation scalability and accuracy. The approach taken focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus and tree, as well as hierarchical combinations, such as to simulate network-on-chip and network-on-node, are supported. Network traffic congestion modeling is omitted to gain simulation scalability by reducing simulation accuracy.
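A latency/bandwidth network model of the kind described reduces, per message, to hops * latency + size / bandwidth, with hop counts taken from the topology. A minimal sketch with hypothetical parameters (not xSim's actual model or API):

```python
# Sketch of a latency/bandwidth network model: message cost is
# hops * latency + size / bandwidth. Parameters are hypothetical.

LATENCY = 1e-6        # seconds per hop
BANDWIDTH = 1e9       # bytes per second

def ring_hops(src, dst, nodes):
    d = abs(src - dst)
    return min(d, nodes - d)             # shortest way around the ring

def star_hops(src, dst):
    return 1 if 0 in (src, dst) else 2   # through the hub (node 0)

def msg_time(hops, size_bytes):
    return hops * LATENCY + size_bytes / BANDWIDTH

# A 1 MB message between opposite nodes of a 64-node ring vs. a star.
ring_t = msg_time(ring_hops(0, 32, 64), 1_000_000)
star_t = msg_time(star_hops(5, 9), 1_000_000)
print(ring_t > star_t)  # True: 32 hops of latency vs. 2
```

Omitting congestion, as the abstract notes, is what keeps such a model cheap: each message is costed independently of all other traffic.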

  14. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776
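The pairwise objective behind linear rankSVM can be sketched on toy data. This is an illustration of the model only, using plain subgradient descent over all preference pairs; the efficient algorithms the letter studies avoid exactly this per-pair loop:

```python
import random

# Toy sketch of linear rankSVM's pairwise squared hinge loss:
#   0.5*||w||^2 + C * sum_{(i,j)} max(0, 1 - w.(x_i - x_j))^2
# over preference pairs (i preferred over j), minimized by gradient descent.

random.seed(0)
# Relevance should grow with feature 0; feature 1 is noise.
X = [[float(i), random.random()] for i in range(8)]
pairs = [(i, j) for i in range(8) for j in range(8) if X[i][0] > X[j][0]]

w = [0.0, 0.0]
C, lr = 1.0, 0.01
for _ in range(300):
    grad = list(w)                         # gradient of 0.5 * ||w||^2
    for i, j in pairs:
        d = [a - b for a, b in zip(X[i], X[j])]
        margin = sum(wk * dk for wk, dk in zip(w, d))
        if margin < 1.0:                   # squared hinge: -2C(1 - m) d
            grad = [g - 2.0 * C * (1.0 - margin) * dk
                    for g, dk in zip(grad, d)]
    w = [wk - lr * g for wk, g in zip(w, grad)]

scores = [sum(wk * xk for wk, xk in zip(w, x)) for x in X]
print(scores == sorted(scores))  # learned scores respect the preferences
```

With N documents the number of pairs is O(N^2); order-statistics tricks reduce the gradient computation to O(N log N), which is the efficiency theme of the letter.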

  15. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    SciTech Connect

    Lenormand, R.; Thiele, M.R.

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, it is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as a function of D, H and M. The main result is that this approach leads to a time-dependent pseudo relative permeability. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  16. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

    In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges, made available by the WebBase project at Stanford[CITE]. We report our experimental findings on the topological properties of such graphs, including the number of bipartite cores and the distributions of degree, PageRank values, and strongly connected components.
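The PageRank values measured in studies like this are, in essence, computed by power iteration over the link graph. A toy sketch on a three-page graph (the graph, damping factor, and dangling-node handling are illustrative choices, not the paper's implementation):

```python
def pagerank(adj, d=0.85, iters=100):
    """Power iteration for PageRank on an adjacency list {node: [out-neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = d * pr[v] / len(out)
                for u in out:
                    new[u] += share
            else:  # dangling page: spread its rank uniformly
                for u in nodes:
                    new[u] += d * pr[v] / n
        pr = new
    return pr

# tiny web graph: page 'c' is linked to by both others
pr = pagerank({'a': ['c'], 'b': ['c'], 'c': ['a']})
```

On a 200M-page crawl the same iteration is run over sparse adjacency data on disk; the toy version only shows the fixed point being approximated.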

  17. Infrasonic observations of large scale HE events

    SciTech Connect

    Whitaker, R.W.; Mutschlecner, J.P.; Davidson, M.B.; Noel, S.D.

    1990-01-01

    The Los Alamos Infrasound Program has been operating since about mid-1982, making routine measurements of low frequency atmospheric acoustic propagation. Generally, we work between 0.1 Hz and 10 Hz; however, much of our work is concerned with the narrower range of 0.5 to 5.0 Hz. Two permanent stations, St. George, UT, and Los Alamos, NM, have been operational since 1983, collecting data 24 hours a day. This discussion will concentrate on measurements of large, high explosive (HE) events at ranges of 250 km to 5330 km. Because the equipment is well suited for mobile deployments, temporary observing sites can easily be established for special events. The measurements in this report are from our permanent sites, as well as from various temporary sites. In this short report we will not give detailed data from all sites for all events, but rather will present a few observations that are typical of the full data set. The Defense Nuclear Agency (DNA) sponsors these large explosive tests as part of their program to study airblast effects. A wide variety of experiments are fielded near the explosive by numerous Department of Defense (DOD) services and agencies. This measurement program is independent of that work; use is made of these tests as energetic known sources which can be measured at large distances. Ammonium nitrate and fuel oil (ANFO) is the specific explosive used by DNA in these tests. 6 refs., 6 figs.

  18. Large scale surface heat fluxes. [through oceans

    NASA Technical Reports Server (NTRS)

    Sarachik, E. S.

    1984-01-01

    The heat flux through the ocean surface, Q, is the sum of the net radiation at the surface, the latent heat flux into the atmosphere, and the sensible heat flux into the atmosphere (all fluxes positive upwards). A review is presented of the geographical distribution of Q and its constituents, and the current accuracy of measuring Q by ground based measurements (both directly and by 'bulk formulae') is assessed. The relation of Q to changes of oceanic heat content, heat flux, and SST is examined and for each of these processes, the accuracy needed for Q is discussed. The needed accuracy for Q varies from process to process, varies geographically, and varies with the time and space scale considered.
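The "bulk formulae" mentioned above estimate the turbulent flux components of Q from routine surface observations (wind speed and air-sea differences of temperature and humidity). A hedged sketch with typical, purely illustrative values for the transfer coefficients and constants:

```python
RHO_AIR = 1.2       # air density, kg m^-3 (illustrative)
CP_AIR = 1004.0     # specific heat of air, J kg^-1 K^-1
LV = 2.5e6          # latent heat of vaporization, J kg^-1
C_H = C_E = 1.3e-3  # bulk transfer coefficients (typical open-ocean order)

def sensible_heat_flux(U, T_sea, T_air):
    """Upward sensible heat flux (W m^-2): rho * cp * C_H * U * (Ts - Ta)."""
    return RHO_AIR * CP_AIR * C_H * U * (T_sea - T_air)

def latent_heat_flux(U, q_sea, q_air):
    """Upward latent heat flux (W m^-2): rho * Lv * C_E * U * (qs - qa)."""
    return RHO_AIR * LV * C_E * U * (q_sea - q_air)

# warm ocean under cooler, drier air: both fluxes are upward (positive)
Qs = sensible_heat_flux(U=8.0, T_sea=28.0, T_air=26.5)
Ql = latent_heat_flux(U=8.0, q_sea=0.024, q_air=0.017)
```

The sensitivity of Q to the poorly known coefficients C_H and C_E is one reason the review stresses the limited accuracy of ground-based estimates.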

  19. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily of canonical flows: zero-pressure-gradient boundary layers and flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies dominate, carrying significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  20. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  1. Compensation of large ion energy spreads by multigap grid reflectors in time-of-flight mass spectrometers

    NASA Astrophysics Data System (ADS)

    Pilyugin, I. I.

    2016-03-01

    The problem of compensation of the initial ion energy spread by a multigap grid reflector of a time-of-flight mass spectrometer is considered. It is shown mathematically that the problem can be reduced to an analysis of the properties of catastrophes A_n under the additional condition of positive geometrical gaps of the reflector. Examples of reflector designs corresponding to catastrophes A_2 and A_3 are analyzed. The advantage of a three-gap reflector over a two-gap reflector in compensating a large energy spread of ions, for the same device resolution, is demonstrated. The application of the three-gap reflector improves the sensitivity of the time-of-flight mass spectrometer. The results of calculations are confirmed experimentally.

  2. Large-scale GW software development

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.

  3. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  4. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. 
A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  5. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  6. Large-Scale Structures of Planetary Systems

    NASA Astrophysics Data System (ADS)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  7. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.

  8. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  9. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  10. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  11. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  12. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper describes a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures, and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline-based) method. The data-generation toolbox is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
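The original toolbox is in R; the Python sketch below only mimics the general recipe of user-controlled synthetic responses (random per-analyte sensitivities, plus additive drift and Gaussian noise). All names and parameter values are illustrative, not the package's API.

```python
import random

def synthetic_response(n_sensors, concentrations, drift_rate=0.001,
                       noise_sd=0.02, seed=0):
    """Generate a synthetic sensor-array time series.

    Each sensor gets a random linear sensitivity per analyte; a slow
    additive drift and Gaussian noise are superimposed, mimicking the
    user-controlled trends described in the abstract.
    """
    rng = random.Random(seed)
    n_analytes = len(concentrations[0])
    sens = [[rng.uniform(0.5, 1.5) for _ in range(n_analytes)]
            for _ in range(n_sensors)]
    series = []
    for t, conc in enumerate(concentrations):
        row = []
        for s in range(n_sensors):
            signal = sum(sens[s][a] * conc[a] for a in range(n_analytes))
            row.append(signal + drift_rate * t + rng.gauss(0.0, noise_sd))
        series.append(row)
    return series

# 17 sensors, 3 analytes, 5 time steps -- mirroring the array in the abstract
data = synthetic_response(17, [[1.0, 0.5, 0.2]] * 5)
```

Spline-based nonlinear sensitivities, as offered by the real toolbox, would replace the linear `sum(...)` term.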

  13. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault- tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems is used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  14. Software for large scale tracking studies

    SciTech Connect

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references.

  15. Links between small-scale dynamics and large-scale averages and its implication to large-scale hydrology

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2012-04-01

    Changes to the hydrological cycle under a changing climate challenge our understanding of the interaction between hydrology and climate at various spatial and temporal scales. Traditional understanding of the climate-hydrology interaction was developed under a stationary climate and may not adequately summarize the interactions in a transient state when the climate is changing; for instance, opposite long-term temporal trends of precipitation and discharge have been observed in parts of the world, as a result of significant warming and the nonlinear nature of the climate and hydrology system. The patterns of internal climate variability, ranging from monthly to multi-centennial time scales, largely determine the past and present climate. The response of these patterns of variability to human-induced climate change will determine much of the regional nature of climate change in the future. Therefore, understanding the basic patterns of variability is of vital importance for climate and hydrological modelers. This work showed that at the scale of large river basins or sub-continents, the temporal variation of climatic variables ranging from daily to inter-annual could be well represented by multiple sets, each consisting of a limited number of points (when observations are used) or pixels (when gridded datasets are used), covering a small portion of the total domain area. Combined with hydrological response units, which divide the heterogeneity of the land surface into a limited number of categories according to similarity in hydrological behavior, one could describe the climate-hydrology interaction and changes over a large domain with multiple small subsets of the domain area. Those points or pixels represent different patterns of the climate-hydrology interaction, and each contributes uniquely to the averaged dynamics of the entire domain. Statistical methods were developed to identify the minimum number of points or
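The idea that a few well-chosen points can reproduce a large-domain average can be illustrated with a toy greedy selection; this is purely an illustration of the concept, not the author's statistical method.

```python
def pick_representatives(series, k):
    """Greedily choose k series whose average best tracks the domain mean.

    series: list of equally long time series (e.g. per-pixel precipitation).
    """
    n = len(series[0])
    domain_mean = [sum(s[t] for s in series) / len(series) for t in range(n)]

    def rmse(subset):
        avg = [sum(s[t] for s in subset) / len(subset) for t in range(n)]
        return (sum((a - d) ** 2 for a, d in zip(avg, domain_mean)) / n) ** 0.5

    chosen = []
    remaining = list(series)
    for _ in range(k):
        # pick the series that most reduces the mismatch when added
        best = min(remaining, key=lambda s: rmse(chosen + [s]))
        chosen.append(best)
        remaining.remove(best)
    return chosen, rmse(chosen)

# four synthetic "pixels"; a small subset already tracks the domain mean
pixels = [[1, 2, 3], [3, 2, 1], [2, 2, 2], [10, 0, 5]]
subset, err = pick_representatives(pixels, k=2)
```

The error of the chosen subset's average against the full-domain mean quantifies how well the small set "represents" the large scale.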

  16. Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids

    SciTech Connect

    Santhi, Nandakishore; Pan, Feng

    2010-10-19

    Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show as computationally tractable. A polynomial-time algorithm will be presented for solving this problem.
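A common heuristic for budget-constrained placement problems of this kind is greedy coverage maximization; the sketch below is a generic illustration on a toy graph, not the authors' polynomial-time algorithm, and all names are assumptions.

```python
def greedy_placement(edges, budget):
    """Greedily pick nodes that monitor the most yet-uncovered edges."""
    chosen, covered = set(), set()
    nodes = {v for e in edges for v in e}
    for _ in range(budget):
        best, gain = None, 0
        for v in nodes - chosen:
            g = sum(1 for e in edges if v in e and e not in covered)
            if g > gain:
                best, gain = v, g
        if best is None:  # nothing left to gain
            break
        chosen.add(best)
        covered |= {e for e in edges if best in e}
    return chosen, covered

# small grid-like network; two sensors suffice to watch every line
edges = [('a', 'b'), ('b', 'c'), ('b', 'd'), ('d', 'e'), ('d', 'f')]
sensors, monitored = greedy_placement(edges, budget=2)
```

For submodular coverage objectives like this one, the greedy rule carries a classical (1 - 1/e) approximation guarantee, which is one reason such heuristics are popular on large grids.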

  17. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  19. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) their role as sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area: the area under the receiver operating characteristic curve of the landslide distribution probability in the new area is 0.699, and the distribution probability could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
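Logistic-regression susceptibility mapping and its AUC validation can be sketched end to end on toy data; the single "slope" covariate, the training scheme, and the data below are all illustrative, not the study's actual predictors or dataset.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression (w[0] is the bias)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy covariate: steeper cells (larger x) tend to host landslides (y = 1)
X = [[0.1], [0.2], [0.3], [0.6], [0.7], [0.9]]
y = [0, 0, 1, 0, 1, 1]
w = fit_logistic(X, y)
scores = [predict(w, xi) for xi in X]
```

An AUC of 0.699, as reported for the new area, would mean a randomly chosen landslide cell outranks a randomly chosen non-landslide cell about 70% of the time.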

  20. Dynamic scaling and large scale effects in turbulence in compressible stratified fluid

    NASA Astrophysics Data System (ADS)

    Pharasi, Hirdesh K.; Bhattacharjee, Jayanta K.

    2016-01-01

    We consider the propagation of sound in a turbulent fluid which is confined between two horizontal parallel plates, maintained at different temperatures. In the homogeneous fluid, Staroselsky et al. had predicted a divergent sound speed at large length scales. Here we find a divergent sound speed and a vanishing expansion coefficient at large length scales. Dispersion relation and the question of scale invariance at large distance scales lead to these results.

  1. Observational investigation of the possible correlation between medium-scale TIDs and mid-latitude spread F

    NASA Astrophysics Data System (ADS)

    Yu, Shimei; Xiao, Zuo; Aa, Ercha; Hao, Yongqiang; Zhang, Donghe

    2016-08-01

    Global navigation satellite system (GNSS) receivers at two stations in China, CHAN (43.83°N, 125.27°E) and URUM (43.70°N, 87.60°E), are used to retrieve the total electron content (TEC) from 2001 to 2012 and to detect medium-scale traveling ionospheric disturbances (MSTIDs) above the two stations. At either station, MSTID occurrence is found to depend on season and solar cycle, being lower during equinoctial seasons and periods of low solar activity, in accordance with the general morphological features of mid-latitude spread F (MSF). More interestingly, a significant longitudinal difference exists between the two stations, in that both MSTIDs and MSF are more prevalent at CHAN than at URUM over the 12 years. These results imply a strong connection between MSTIDs and MSF, and provide new observational evidence that gravity waves (GWs), manifested here as MSTIDs, might play an important role in triggering MSF in the overhead ionosphere. Since GWs at mid-latitude mostly originate in the lower atmosphere, we infer that the atmospheric background at CHAN is more favorable for generating GWs. This is plausible considering the very different surface meteorological conditions of the two stations over such a large longitudinal span (∼38°): CHAN is near the coast of the western Pacific Ocean, URUM lies near the center of the Eurasian continent, and the two sit in different climate zones. We therefore suggest that surface meteorological conditions are one of the significant factors to be considered in explaining and modeling MSF formation and variations.
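    A common way to pull MSTID signatures out of GNSS TEC is to remove a slowly varying background and inspect the residual perturbation. A toy sketch with synthetic data; the window length, wave period, and amplitudes are illustrative assumptions, not the paper's processing parameters:

```python
import numpy as np

# Hypothetical 2-hour TEC series sampled every 30 s: a slow trend plus a
# 20-minute MSTID-like oscillation of 0.3 TECU amplitude.
t = np.arange(0, 7200, 30.0)                  # seconds
trend = 15.0 + 2.0 * t / 7200.0               # TECU, slowly varying background
wave = 0.3 * np.sin(2 * np.pi * t / 1200.0)   # 20-min-period disturbance
tec = trend + wave

# Detrend with a ~30-min running mean to isolate the MSTID band.
win = int(1800 / 30)
kernel = np.ones(win) / win
background = np.convolve(tec, kernel, mode="same")
dtec = tec - background

# Half peak-to-peak perturbation in the central, edge-effect-free portion.
core = dtec[win:-win]
amplitude = 0.5 * (core.max() - core.min())
```

    The recovered amplitude is close to the injected 0.3 TECU (the running mean passes part of the wave, so it is not exact), which is the kind of residual-TEC signal used to flag MSTID activity.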

  2. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  3. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  4. A comparison between coherent and noncoherent mobile systems in large Doppler shift, delay spread, and C/I environment

    NASA Technical Reports Server (NTRS)

    Feher, Kamilo

    1993-01-01

    The performance and implementation complexity of coherent and noncoherent QPSK and GMSK modulation/demodulation techniques in a complex mobile satellite systems environment, including large Doppler shift, delay spread, and low C/I, are compared. We demonstrate that for large f(sub d)T(sub b) products, where f(sub d) is the Doppler shift and T(sub b) is the bit duration, noncoherent (discriminator detector or differential demodulation) systems have a lower BER floor than their coherent counterparts. For significant delay spreads, e.g., tau(sub rms) greater than 0.4 T(sub b), and low C/I, coherent systems outperform noncoherent systems. However, the synchronization time of coherent systems is longer than that of noncoherent systems. Spectral efficiency, overall capacity, and related hardware complexity issues of these systems are also analyzed. We demonstrate that coherent systems have a simpler overall architecture (IF filter implementation cost versus carrier recovery) and are more robust in an RF frequency drift environment. Additionally, the prediction tools, computer simulations, and analysis of coherent systems are simpler. The threshold or capture effect in a low-C/I interference environment is critical for noncoherent discriminator-based systems. We conclude with a comparison of hardware architectures of coherent and noncoherent systems, including recent trends in commercial VLSI technology and direct baseband-to-RF transmit and RF-to-baseband (0-IF) receiver implementation strategies.
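    The decision variables in this record are dimensionless: the Doppler-bit-duration product f(sub d)T(sub b) and the normalized delay spread tau(sub rms)/T(sub b). A quick back-of-envelope computation with hypothetical link parameters (not values from the paper) shows how they are formed:

```python
# Normalized Doppler and delay spread for an illustrative mobile-satellite link.
c = 3.0e8            # speed of light, m/s
fc = 1.6e9           # L-band carrier frequency, Hz (hypothetical)
v = 30.0             # terminal speed, m/s (~108 km/h, hypothetical)
Rb = 4800.0          # bit rate, bit/s (hypothetical)

fd = v * fc / c      # maximum Doppler shift, Hz  -> 160 Hz here
Tb = 1.0 / Rb        # bit duration, s
fdTb = fd * Tb       # large fd*Tb favors noncoherent detection (per the abstract)

tau_rms = 100e-6     # rms delay spread, s (hypothetical)
norm_delay = tau_rms / Tb   # > 0.4 Tb favors coherent detection (per the abstract)
```

    Here norm_delay works out to 0.48, i.e., just past the 0.4 T(sub b) threshold the abstract cites for coherent systems to win.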

  5. Large-scale ocean circulation-cloud interactions reduce the pace of transient climate change

    NASA Astrophysics Data System (ADS)

    Trossman, D. S.; Palter, J. B.; Merlis, T. M.; Huang, Y.; Xia, Y.

    2016-04-01

    Changes to the large-scale oceanic circulation are thought to slow the pace of transient climate change due, in part, to their influence on radiative feedbacks. Here we evaluate the interactions between CO2-forced perturbations to the large-scale ocean circulation and the radiative cloud feedback in a climate model. Both the change of the ocean circulation and the radiative cloud feedback strongly influence the magnitude and spatial pattern of surface and ocean warming. Changes in the ocean circulation reduce the amount of transient global warming caused by the radiative cloud feedback by helping to maintain low cloud coverage in the face of global warming. The radiative cloud feedback is key in affecting atmospheric meridional heat transport changes and is the dominant radiative feedback mechanism that responds to ocean circulation change. Uncertainty in the simulated ocean circulation changes due to CO2 forcing may contribute a large share of the spread in the radiative cloud feedback among climate models.

  6. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    Abstract EGU2009-233: nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence in the ionosphere. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge

  7. Strip layer method for analysis of the three-dimensional stresses and spread of large cylindrical shell rolling

    NASA Astrophysics Data System (ADS)

    Liu, Hongmin; Chen, Suwen; Peng, Yan; Sun, Jianliang

    2015-01-01

    Because the traditional forging process has many problems, such as low efficiency and high consumption of material and energy, large cylindrical shell rolling is introduced. Large cylindrical shell rolling is a typical rotary forming technology in which the upper and lower rolls have different radii and speeds. To quickly predict the three-dimensional stresses and eliminate the fishtail defect, an improved strip layer method is developed, in which the asymmetry of the upper and lower rolls, non-uniform deformation and stress, as well as the asymmetrical spread on the end surface are considered. The deformation zone is divided into a number of layers and strips along the thickness and width, respectively. The transverse displacement model is constructed by a polynomial function in order to greatly increase the computation speed. From the metal plastic mechanics principle, the three-dimensional stress models are established. The genetic algorithm is used for optimization calculation in an industrial experiment example. The results show that the rolling pressure and the normal stress, upper friction stress, and lower friction stress distributions are not similar to those of general plate rolling. There are two relative maximum values in the rolling pressure distribution. The upper and lower longitudinal friction stresses change direction near the upper and lower neutral points, respectively. The fishtail profile of spread on the end surface is predicted satisfactorily, and the predicted reduction could help eliminate the fishtail defect. The large cylindrical shell rolling example illustrates that the rapidly acquired calculation results are in good agreement with the finite element simulation and experimental values of a previous study. A highly effective and reliable three-dimensional simulation method is thus proposed for large cylindrical shell rolling and other asymmetrical rolling processes.

  8. Large-scale convective instability in an electroconducting medium with small-scale helicity

    SciTech Connect

    Kopp, M. I.; Tur, A. V.; Yanovsky, V. V.

    2015-04-15

    A large-scale instability occurring in a stratified conducting medium with small-scale helicity of the velocity field and magnetic fields is detected using an asymptotic many-scale method. Such a helicity is sustained by small external sources for small Reynolds numbers. Two regimes of instability with zero and nonzero frequencies are detected. The criteria for the occurrence of large-scale instability in such a medium are formulated.

  9. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number, ReM. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the ReM-dependent quenching of B̄/B_rms. These results are obtained from 1024³ DNS with magnetic Prandtl numbers of PrM = 0.1 and 10. For PrM = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For PrM = 10, B̄/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  10. Large scale suppression of scalar power on a spatial condensation

    NASA Astrophysics Data System (ADS)

    Kouwn, Seyen; Kwon, O.-Kab; Oh, Phillial

    2015-03-01

    We consider a deformed single-field inflation model in terms of three SO(3) symmetric moduli fields. We find that spatially linear solutions for the moduli fields induce a phase transition during the early stage of the inflation and the suppression of the scalar power spectrum at large scales. This suppression can be an origin of the anomalies observed for large-scale perturbation modes in cosmological observations.

  11. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

    The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  12. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  13. Large-scale V/STOL testing. [in wind tunnels

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1977-01-01

    Several facets of large-scale testing of V/STOL aircraft configurations are discussed, with particular emphasis on test experience in the Ames 40- by 80-foot wind tunnel. Examples of powered-lift test programs are presented in order to illustrate tradeoffs confronting the planner of V/STOL test programs. It is indicated that large-scale V/STOL wind-tunnel testing can sometimes compete with small-scale testing in the effort required (overall test time) and program costs, because a number of different tests can be conducted with a single large-scale model where several small-scale models would be required. The benefits of higher (up to full-scale) Reynolds numbers, more detailed configuration simulation, and a greater number and variety of onboard measurements increase rapidly with scale. Planning must be more detailed at large scale in order to balance the increased costs, as the number of measurements and model configuration variables grows, against the benefit of the larger amount of information coming out of one test.

  14. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  15. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has scaling been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing, and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability, and scaling concepts that went into the development of those large motors.

  16. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  17. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects. PMID:26551120

  18. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGESBeta

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  19. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    SciTech Connect

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  20. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  1. Large-scale ER-damper for seismic protection

    NASA Astrophysics Data System (ADS)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled-up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  2. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    PubMed

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  3. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  4. Effect of high velocity, large amplitude stimuli on the spread of Depolarization in S1 “Barrel” Cortex

    PubMed Central

    Davis, Douglas J.; Sachdev, Robert; Pieribone, Vincent A.

    2013-01-01

    We examined the effect of large, controlled whisker movements, delivered at a high speed, on the amplitude and spread of depolarization in the anesthetized mouse barrel cortex. The stimulus speed was varied between 1500 and 6000 degrees per second, and the extent of movement was varied between 4 and 16 degrees. The rate of rise of the response was linearly related to the rate of rise of the stimulus. The initial spatial extent of cortical activation was also related to the rate of rise of the stimulus: that is, the faster the stimulus onset, the faster the rate of rise of the response and the larger the extent of cortex activated initially. The spatial extent of the response and the rate of rise of the response were not correlated with changes in the deflection amplitude. However, slower, longer-lasting stimuli produced an Off response, making the actual extent of activation larger for the slowest-rising stimuli. These results indicate that the spread of cortical activation depends on stimulus features. PMID:22150170

  5. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  6. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  7. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  8. Genetic signals of origin, spread, and introgression in a large sample of maize landraces.

    PubMed

    van Heerwaarden, Joost; Doebley, John; Briggs, William H; Glaubitz, Jeffrey C; Goodman, Major M; de Jesus Sanchez Gonzalez, Jose; Ross-Ibarra, Jeffrey

    2011-01-18

    The last two decades have seen important advances in our knowledge of maize domestication, thanks in part to the contributions of genetic data. Genetic studies have provided firm evidence that maize was domesticated from Balsas teosinte (Zea mays subspecies parviglumis), a wild relative that is endemic to the mid- to lowland regions of southwestern Mexico. An interesting paradox remains, however: Maize cultivars that are most closely related to Balsas teosinte are found mainly in the Mexican highlands where subspecies parviglumis does not grow. Genetic data thus point to primary diffusion of domesticated maize from the highlands rather than from the region of initial domestication. Recent archeological evidence for early lowland cultivation has been consistent with the genetics of domestication, leaving the issue of the ancestral position of highland maize unresolved. We used a new SNP dataset scored in a large number of accessions of both teosinte and maize to take a second look at the geography of the earliest cultivated maize. We found that gene flow between maize and its wild relatives meaningfully impacts our inference of geographic origins. By analyzing differentiation from inferred ancestral gene frequencies, we obtained results that are fully consistent with current ecological, archeological, and genetic data concerning the geography of early maize cultivation. PMID:21189301
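    The key analysis step in this record, measuring how far each population has differentiated from inferred ancestral allele frequencies, can be sketched with synthetic SNP data. The frequencies and the particular drift-style statistic below are illustrative assumptions, not the paper's exact estimator or dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SNP allele frequencies: an inferred ancestral population and
# two descendant populations that drifted from it by different amounts.
n_snps = 1000
ancestral = rng.uniform(0.05, 0.95, n_snps)
drift_a = np.clip(ancestral + rng.normal(0.0, 0.02, n_snps), 0.0, 1.0)
drift_b = np.clip(ancestral + rng.normal(0.0, 0.10, n_snps), 0.0, 1.0)

def differentiation(p_pop, p_anc):
    """Mean squared frequency shift from the ancestor, standardized by
    ancestral heterozygosity (a simple drift-style measure)."""
    return np.mean((p_pop - p_anc) ** 2 / (p_anc * (1.0 - p_anc)))

d_a = differentiation(drift_a, ancestral)
d_b = differentiation(drift_b, ancestral)
# The population with more drift shows greater differentiation (d_b > d_a),
# which is the kind of signal used to rank candidate source regions.
```

    Ranking populations by such a differentiation statistic, rather than by raw similarity to teosinte, is how gene flow from wild relatives can be prevented from distorting the inferred geography of origin.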

  9. Genetic signals of origin, spread, and introgression in a large sample of maize landraces

    PubMed Central

    van Heerwaarden, Joost; Doebley, John; Briggs, William H.; Glaubitz, Jeffrey C.; Goodman, Major M.; de Jesus Sanchez Gonzalez, Jose; Ross-Ibarra, Jeffrey

    2011-01-01

    The last two decades have seen important advances in our knowledge of maize domestication, thanks in part to the contributions of genetic data. Genetic studies have provided firm evidence that maize was domesticated from Balsas teosinte (Zea mays subspecies parviglumis), a wild relative that is endemic to the mid- to lowland regions of southwestern Mexico. An interesting paradox remains, however: Maize cultivars that are most closely related to Balsas teosinte are found mainly in the Mexican highlands where subspecies parviglumis does not grow. Genetic data thus point to primary diffusion of domesticated maize from the highlands rather than from the region of initial domestication. Recent archeological evidence for early lowland cultivation has been consistent with the genetics of domestication, leaving the issue of the ancestral position of highland maize unresolved. We used a new SNP dataset scored in a large number of accessions of both teosinte and maize to take a second look at the geography of the earliest cultivated maize. We found that gene flow between maize and its wild relatives meaningfully impacts our inference of geographic origins. By analyzing differentiation from inferred ancestral gene frequencies, we obtained results that are fully consistent with current ecological, archeological, and genetic data concerning the geography of early maize cultivation. PMID:21189301

  10. Unsaturated Hydraulic Conductivity for Evaporation in Large scale Heterogeneous Soils

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2014-12-01

    In this study we aim to provide practical guidelines on how the commonly used simple averaging schemes (arithmetic, geometric, or harmonic mean) perform in simulating evaporation in a large-scale heterogeneous landscape. Previous studies on hydraulic property upscaling, focusing on steady-state flux exchanges, illustrated that an effective hydraulic property is usually more difficult to define for evaporation. This study focuses on upscaling hydraulic properties of large-scale transient evaporation dynamics using the idea of the stream-tube approach. Specifically, the two main objectives are to determine (1) whether the three simple averaging schemes (i.e., arithmetic, geometric, and harmonic means) of hydraulic parameters are appropriate for representing large-scale evaporation processes, and (2) how the applicability of these simple averaging schemes depends on the time scale of the evaporation processes in heterogeneous soils. Multiple realizations of local evaporation processes are carried out using the HYDRUS-1D computational code (Simunek et al., 1998). The three averaging schemes of soil hydraulic parameters are used to simulate the cumulative flux exchange, which is then compared with the large-scale average cumulative flux. The sensitivity of the relative errors to the time frame of the evaporation processes is also discussed.
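    The three averaging schemes being compared are easy to state exactly. A minimal sketch with hypothetical conductivity values; note the fixed ordering harmonic ≤ geometric ≤ arithmetic, which is one reason the choice of scheme matters for flux predictions:

```python
import numpy as np

# Hypothetical saturated hydraulic conductivities (cm/day) of soil columns
# in a heterogeneous landscape (stream-tube picture: one value per column).
K = np.array([5.0, 12.0, 0.8, 30.0, 2.5])

arithmetic = K.mean()                    # favors the most conductive columns
geometric = np.exp(np.log(K).mean())     # common compromise for log-normal K
harmonic = K.size / np.sum(1.0 / K)      # dominated by the least conductive

# By the AM-GM-HM inequality: harmonic <= geometric <= arithmetic.
```

    For these values the three means are roughly 2.5, 5.1, and 10.1 cm/day, a fourfold spread from a single parameter set, which is why the study asks when each scheme is appropriate for transient evaporation.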

  11. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  12. Large-scale flow generation in turbulent convection

    PubMed Central

    Krishnamurti, Ruby; Howard, Louis N.

    1981-01-01

In a horizontal layer of fluid heated from below and cooled from above, cellular convection with horizontal length scale comparable to the layer depth occurs for small enough values of the Rayleigh number. As the Rayleigh number is increased, cellular flow disappears and is replaced by a random array of transient plumes. Upon further increase, these plumes drift in one direction near the bottom and in the opposite direction near the top of the layer with the axes of plumes tilted in such a way that horizontal momentum is transported upward via the Reynolds stress. With the onset of this large-scale flow, the largest scale of motion has increased from that comparable to the layer depth to a scale comparable to the layer width. The conditions for occurrence and determination of the direction of this large-scale circulation are described. PMID:16592996
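The control parameter here, the Rayleigh number, has a standard definition; a minimal sketch follows (the fluid properties are illustrative textbook values for water, not taken from the paper):

```python
def rayleigh_number(g, alpha, dT, d, nu, kappa):
    """Ra = g * alpha * dT * d^3 / (nu * kappa):
    buoyancy forcing relative to viscous and thermal diffusion."""
    return g * alpha * dT * d ** 3 / (nu * kappa)

# Illustrative: a 5 cm deep water layer with a 1 K temperature difference
Ra = rayleigh_number(g=9.81, alpha=2.1e-4, dT=1.0, d=0.05,
                     nu=1.0e-6, kappa=1.4e-7)

# Cellular convection onsets near the critical value Ra_c ~ 1708
convecting = Ra > 1708
```

Even this modest configuration sits far above the critical value, in the regime where the cellular pattern described in the abstract gives way to transient plumes.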

  13. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

The capability of Earth observation for large-, global-scale natural phenomena needs to be improved, and new observing platforms are needed. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large scale geoscience phenomena, including large scale atmospheric change, large scale ocean change, large scale land surface dynamic change, solid earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  14. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
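The low-rank kernel approximation via prototypes can be sketched with a Nyström-style construction. This is a hedged illustration only: random subsampling stands in for the paper's actual prototype selection, and the RBF kernel, sizes, and bandwidth are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                   # full data set (n points)
P = X[rng.choice(200, size=20, replace=False)]  # prototype vectors (random stand-in)

K = rbf_kernel(X, X)                  # full n x n kernel (what we avoid at scale)
C = rbf_kernel(X, P)                  # n x m cross-kernel
W = rbf_kernel(P, P)                  # m x m prototype kernel
K_lowrank = C @ np.linalg.pinv(W) @ C.T  # Nystrom-style low-rank approximation

rel_err = np.linalg.norm(K - K_lowrank) / np.linalg.norm(K)
```

At scale one never forms the full kernel K; working through the m prototypes keeps both the graph regularizer and the model representation at O(nm) cost rather than O(n^2).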

  15. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  16. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  17. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance.

    PubMed

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro; Lee, Sang Hee

    2015-10-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which were designed through elaborate optimization processes. To verify the ability of this large-scale bla detection method (large-scaleblaFinder), assays were performed on previously reported bacterial control isolates/strains. To confirm the applicability of the large-scaleblaFinder, the assays were performed on unreported clinical isolates. With perfect specificity and sensitivity in 189 control isolates/strains and 403 clinical isolates, the large-scaleblaFinder detected almost all clinically available bla genes. Notably, the large-scaleblaFinder detected 24 additional unreported bla genes in the isolates/strains that were previously studied, suggesting that previous methods detecting only limited types of bla genes can miss unexpected bla genes existing in pathogenic bacteria, and our method has the ability to detect almost all bla genes existing in a clinical isolate. The ability of large-scaleblaFinder to detect bla genes on a large scale enables prompt application to the detection of almost all bla genes present in bacterial pathogens. The widespread use of the large-scaleblaFinder in the future will provide an important aid for monitoring the emergence and dissemination of bla genes and minimizing the spread of resistant bacteria. PMID:26169415

  18. Fast and Accurate Large-Scale Detection of β-Lactamase Genes Conferring Antibiotic Resistance

    PubMed Central

    Lee, Jae Jin; Lee, Jung Hun; Kwon, Dae Beom; Jeon, Jeong Ho; Park, Kwang Seung; Lee, Chang-Ro

    2015-01-01

    Fast detection of β-lactamase (bla) genes allows improved surveillance studies and infection control measures, which can minimize the spread of antibiotic resistance. Although several molecular diagnostic methods have been developed to detect limited bla gene types, these methods have significant limitations, such as their failure to detect almost all clinically available bla genes. We developed a fast and accurate molecular method to overcome these limitations using 62 primer pairs, which were designed through elaborate optimization processes. To verify the ability of this large-scale bla detection method (large-scaleblaFinder), assays were performed on previously reported bacterial control isolates/strains. To confirm the applicability of the large-scaleblaFinder, the assays were performed on unreported clinical isolates. With perfect specificity and sensitivity in 189 control isolates/strains and 403 clinical isolates, the large-scaleblaFinder detected almost all clinically available bla genes. Notably, the large-scaleblaFinder detected 24 additional unreported bla genes in the isolates/strains that were previously studied, suggesting that previous methods detecting only limited types of bla genes can miss unexpected bla genes existing in pathogenic bacteria, and our method has the ability to detect almost all bla genes existing in a clinical isolate. The ability of large-scaleblaFinder to detect bla genes on a large scale enables prompt application to the detection of almost all bla genes present in bacterial pathogens. The widespread use of the large-scaleblaFinder in the future will provide an important aid for monitoring the emergence and dissemination of bla genes and minimizing the spread of resistant bacteria. PMID:26169415

  19. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy in achieving reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
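The idea of over-driving and back-driving commands can be sketched against a first-order lag model of the damper's voltage response. All gains, limits, and the time constant below are illustrative assumptions, not the paper's values:

```python
def overdriven_command(v_target, v_now, gain=3.0, v_min=0.0, v_max=12.0):
    """Command an amplified step toward the target (over-driving upward,
    back-driving downward) to compensate the device's lag, then saturate."""
    u = v_now + gain * (v_target - v_now)
    return min(max(u, v_min), v_max)

def simulate(controller, v_target=5.0, tau=0.1, dt=0.001, steps=200):
    """First-order lag model of the device voltage: dv/dt = (u - v) / tau."""
    v = 0.0
    for _ in range(steps):
        u = controller(v_target, v)
        v += dt * (u - v) / tau
    return v

plain = simulate(lambda target, v: target)  # command the target voltage directly
boosted = simulate(overdriven_command)      # over-driven command
```

Within the same 0.2 s window, the over-driven command drives the lagging device much closer to the target voltage than commanding the target directly, which is the essence of the approach.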

  20. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  1. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property in the event of accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data cover much smaller sizes and scales than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  2. Bothrops snake myotoxins induce a large efflux of ATP and potassium with spreading of cell damage and pain.

    PubMed

    Cintra-Francischinelli, Mariana; Caccin, Paola; Chiavegato, Angela; Pizzo, Paola; Carmignoto, Giorgio; Angulo, Yamileth; Lomonte, Bruno; Gutiérrez, José María; Montecucco, Cesare

    2010-08-10

    Myotoxins play a major role in the pathogenesis of the envenomations caused by snake bites in large parts of the world where this is a very relevant public health problem. We show here that two myotoxins that are major constituents of the venom of Bothrops asper, a deadly snake present in Latin America, induce the release of large amounts of K(+) and ATP from skeletal muscle. We also show that the released ATP amplifies the effect of the myotoxins, acting as a "danger signal," which spreads and causes further damage by acting on purinergic receptors. In addition, the release of ATP and K(+) well accounts for the pain reaction characteristic of these envenomations. As Bothrops asper myotoxins are representative of a large family of snake myotoxins with phospholipase A(2) structure, these findings are expected to be of general significance for snake bite envenomation. Moreover, they suggest potential therapeutic approaches for limiting the extent of muscle tissue damage based on antipurinergic drugs. PMID:20660736

  3. Effects of small scale energy injection on large scales in turbulent reaction flows

    NASA Astrophysics Data System (ADS)

    Xuan, Yuan

    2014-11-01

Turbulence generates eddies of various length scales. In turbulent non-reacting flows, most of the kinetic energy is contained in large scale turbulent structures and dissipated at small scales. This energy cascade from large scales to small scales provides the foundation for many turbulence models, especially Large Eddy Simulations. In turbulent reacting flows, however, chemical energy is converted locally to heat and therefore deposits energy at the smallest scales. As such, effects of small scale energy injection due to combustion on large scale turbulent motion may become important. These effects are investigated in the case of auto-ignition under homogeneous isotropic turbulence. The impact of small scale heat release is examined by comparing various turbulent statistics (e.g., the energy spectrum, two-point correlation functions, and structure functions) in the reacting case to the non-reacting case. Emphasis is placed on identifying the turbulent quantities that best reflect such small-large scale interactions.
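One of the turbulent statistics mentioned, the energy spectrum, can be illustrated in a minimal one-dimensional form (a sketch only; real analyses of this kind use 3-D velocity fields and spherical shell averaging):

```python
import numpy as np

def energy_spectrum_1d(u):
    """Kinetic-energy spectrum E(k) of a 1-D periodic velocity signal,
    computed from the normalized discrete Fourier transform."""
    uhat = np.fft.rfft(u) / len(u)
    return 0.5 * np.abs(uhat) ** 2

# Synthetic 'velocity' signal dominated by wavenumber k = 5
N = 64
x = np.arange(N)
u = np.sin(2 * np.pi * 5 * x / N)
E = energy_spectrum_1d(u)
peak_k = int(np.argmax(E))  # -> 5
```

Comparing such spectra between reacting and non-reacting cases is one way to see whether small-scale heat release redistributes energy across scales.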

  4. Response of Tradewind Cumuli to Large-Scale Processes.

    NASA Astrophysics Data System (ADS)

    Soong, S.-T.; Ogura, Y.

    1980-09-01

The two-dimensional slab-symmetric numerical cloud model used by Soong and Ogura (1973) for studying the evolution of an isolated cumulus cloud is extended to investigate the statistical properties of cumulus clouds which would be generated under a given large-scale forcing composed of the horizontal advection of temperature and water vapor mixing ratio, vertical velocity, sea surface temperature and radiative cooling. Random disturbances of small amplitude are introduced into the model at low levels to provide random motion for cloud formation. The model is applied to a case of suppressed weather conditions during BOMEX for the period 22-23 June 1969 when a nearly steady state prevailed. The composited temperature and mixing ratio profiles of these two days are used as initial conditions and the time-independent large-scale forcing terms estimated from the observations are applied to the model. The result of numerical integration shows that a number of small clouds start developing after 1 h. Some of them decay quickly, but some of them develop and reach the tradewind inversion. After a few hours of simulation, the vertical profiles of the horizontally averaged temperature and moisture are found to deviate only slightly from the observed profiles, indicating that the large-scale effect and the feedback effects of clouds on temperature and mixing ratio reach an equilibrium state. The three major components of the cloud feedback effect, i.e., condensation, evaporation and vertical fluxes associated with the clouds, are determined from the model output. The vertical profiles of vertical heat and moisture fluxes in the subcloud layer in the model are found to be in general agreement with the observations. Sensitivity tests of the model are made for different magnitudes of the large-scale vertical velocity. The most striking result is that the temperature and humidity in the cloud layer below the inversion do not change significantly in spite of a relatively large

  5. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  6. Emergence of large cliques in random scale-free networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Marsili, Matteo

    2006-05-01

In a network, cliques are fully connected subgraphs that reveal the tight communities present in it. Cliques of size c > 3 are present in random Erdős–Rényi graphs only in the limit of diverging average connectivity. Starting from the finding that real scale-free graphs have large cliques, we study the clique number in uncorrelated scale-free networks, finding both upper and lower bounds. Interestingly, we find that in scale-free networks large cliques appear also when the average degree is finite, i.e. even for networks with power law degree distribution exponents γ ∈ (2, 3). Moreover, as long as γ < 3, scale-free networks have a maximal clique whose size diverges with the system size.

  7. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  8. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy or, correspondingly, the gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights into the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models, and these will in turn be related to assumptions used to parameterize convection in large-scale models.
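The moist static energy appealed to here has a standard definition; a minimal sketch with textbook constants follows (the parcel values are illustrative, not from the study):

```python
def moist_static_energy(T, z, q, cp=1004.0, g=9.81, Lv=2.5e6):
    """h = cp*T + g*z + Lv*q, in J/kg: the quantity whose budget variance
    is analyzed in the study. Constants are standard textbook values
    (specific heat of dry air, gravity, latent heat of vaporization)."""
    return cp * T + g * z + Lv * q

# Illustrative tropical boundary-layer air parcel:
# T in K, z in m, q as specific humidity in kg/kg
h = moist_static_energy(T=300.0, z=1000.0, q=0.015)  # ~3.49e5 J/kg
```

Because radiation, humidity, and surface fluxes each enter one of the three terms, budget variance in h collects all the drivers of convection in a single scalar.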

  9. Large-scale current systems in the dayside Venus ionosphere

    NASA Technical Reports Server (NTRS)

    Luhmann, J. G.; Elphic, R. C.; Brace, L. H.

    1981-01-01

    The occasional observation of large-scale horizontal magnetic fields within the dayside ionosphere of Venus by the flux gate magnetometer on the Pioneer Venus orbiter suggests the presence of large-scale current systems. Using the measured altitude profiles of the magnetic field and the electron density and temperature, together with the previously reported neutral atmosphere density and composition, it is found that the local ionosphere can be described at these times by a simple steady state model which treats the unobserved quantities, such as the electric field, as parameters. When the model is appropriate, the altitude profiles of the ion and electron velocities and the currents along the satellite trajectory can be inferred. These results elucidate the configurations and sources of the ionospheric current systems which produce the observed large-scale magnetic fields, and in particular illustrate the effect of ion-neutral coupling in the determination of the current system at low altitudes.

  10. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.