Science.gov

Sample records for large scale calcium

  1. Calcium-bismuth electrodes for large-scale energy storage (liquid metal batteries)

    SciTech Connect

    Kim, H; Boysen, DA; Ouchi, T; Sadoway, DR

    2013-11-01

    Calcium is an attractive electrode material for use in grid-scale electrochemical energy storage due to its low electronegativity, earth abundance, and low cost. The feasibility of combining a liquid Ca-Bi positive electrode with a molten salt electrolyte for use in liquid metal batteries at 500-700 degrees C was investigated. Exhibiting excellent reversibility up to current densities of 200 mA cm(-2), the calcium-bismuth liquid alloy system is a promising positive electrode candidate for liquid metal batteries. The measurement of a low self-discharge current suggests that the solubility of calcium metal in molten salt electrolytes can be sufficiently suppressed to yield high coulombic efficiencies (>98%). The mechanisms giving rise to Ca-Bi electrode overpotentials were investigated in terms of the associated charge transfer and mass transport resistances. The formation of low-density Ca11Bi10 intermetallics at the electrode-electrolyte interface limited the calcium deposition rate capability of the electrodes; however, the co-deposition of barium into bismuth from barium-containing molten salts suppressed Ca-Bi intermetallic formation, thereby improving the discharge capacity.
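
    The coulombic efficiency quoted above is simply the charge recovered on discharge divided by the charge supplied on charge. A minimal sketch of that ratio; the charge quantities below are hypothetical, chosen only to illustrate the calculation, and are not from the paper:

      # Coulombic efficiency = charge out on discharge / charge in on charge.
      q_charge = 0.200 * 3600      # A*s supplied while charging (hypothetical 200 mA for 1 h)
      q_discharge = 0.197 * 3600   # A*s recovered on discharge (hypothetical)
      print(f"coulombic efficiency = {q_discharge / q_charge:.1%}")   # ~98.5%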

  2. Zolpidem Reduces Hippocampal Neuronal Activity in Freely Behaving Mice: A Large Scale Calcium Imaging Study with Miniaturized Fluorescence Microscope

    PubMed Central

    Berdyyeva, Tamara; Otte, Stephani; Aluisio, Leah; Ziv, Yaniv; Burns, Laurie D.; Dugovic, Christine; Yun, Sujin; Ghosh, Kunal K.; Schnitzer, Mark J.; Lovenberg, Timothy; Bonaventure, Pascal

    2014-01-01

    Therapeutic drugs for cognitive and psychiatric disorders are often characterized by their molecular mechanism of action. Here we demonstrate a new approach to elucidate drug action on large-scale neuronal activity by tracking somatic calcium dynamics in hundreds of CA1 hippocampal neurons of pharmacologically manipulated behaving mice. We used an adeno-associated viral vector to express the calcium sensor GCaMP3 in CA1 pyramidal cells under control of the CaMKII promoter and a miniaturized microscope to observe cellular dynamics. We visualized these dynamics with and without a systemic administration of Zolpidem, a GABAA agonist that is the most commonly prescribed drug for the treatment of insomnia in the United States. Despite growing concerns about the potential adverse effects of Zolpidem on memory and cognition, it remained unclear whether Zolpidem alters neuronal activity in the hippocampus, a brain area critical for cognition and memory. Zolpidem, when delivered at a dose known to induce and prolong sleep, strongly suppressed CA1 calcium signaling. The rate of calcium transients after Zolpidem administration was significantly lower compared to vehicle treatment. To factor out the contribution of changes in locomotor or physiological conditions following Zolpidem treatment, we compared the cellular activity across comparable epochs matched by locomotor and physiological assessments. This analysis revealed significantly depressive effects of Zolpidem regardless of the animal’s state. Individual hippocampal CA1 pyramidal cells differed in their responses to Zolpidem with the majority (∼65%) significantly decreasing the rate of calcium transients, and a small subset (3%) showing an unexpected and significant increase. By linking molecular mechanisms with the dynamics of neural circuitry and behavioral states, this approach has the potential to contribute substantially to the development of new therapeutics for the treatment of CNS disorders. PMID:25372144

  3. Large-scale expression of recombinant cardiac sodium-calcium exchange in insect larvae.

    PubMed

    Hale, C C; Zimmerschied, J A; Bliler, S; Price, E M

    1999-02-01

    Recombinant bovine cardiac sodium-calcium exchanger (NCX1) in a baculovirus construct was used to infect cabbage looper larvae (Trichoplusia ni). Infected larvae were homogenized and larval membrane vesicles were purified. Western blot analysis indicated the presence of recombinant NCX1 protein in vesicles from infected larvae but not in controls. Vesicles from infected larvae expressed high levels of NCX1 activity (1.7 nmol Ca2+/mg protein/s), while vesicles from control larvae had no activity. NCX1 in larval vesicles was bidirectional. Kinetic analysis yielded a Vmax of 3.6 nmol Ca2+/mg protein/s and a Km for Ca of 4.2 microM. NCX1 activity was inhibited by the exchange inhibitory peptide with an IC50 of 4 microM. These data demonstrate a novel and efficient method for the expression of large amounts of active recombinant NCX1 protein that has general application for the expression and analysis of recombinant membrane proteins. PMID:10024479
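
    The Vmax and Km reported above define a standard Michaelis-Menten rate law; a minimal sketch, assuming simple saturation kinetics apply, of how those parameters translate into a transport rate at a given Ca2+ concentration:

      # Michaelis-Menten transport rate from the Vmax and Km quoted in the abstract.
      def ncx1_rate(ca_um, vmax=3.6, km=4.2):
          """Ca2+ flux (nmol Ca2+/mg protein/s) at a Ca2+ concentration given in uM."""
          return vmax * ca_um / (km + ca_um)

      print(ncx1_rate(4.2))   # at [Ca2+] = Km the rate is half of Vmax, i.e. 1.8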

  4. Large-scale geographical variation in eggshell metal and calcium content in a passerine bird (Ficedula hypoleuca).

    PubMed

    Ruuskanen, Suvi; Laaksonen, Toni; Morales, Judith; Moreno, Juan; Mateo, Rafael; Belskii, Eugen; Bushuev, Andrey; Järvinen, Antero; Kerimov, Anvar; Krams, Indrikis; Morosinotto, Chiara; Mänd, Raivo; Orell, Markku; Qvarnström, Anna; Slate, Fred; Tilgar, Vallo; Visser, Marcel E; Winkel, Wolfgang; Zang, Herwig; Eeva, Tapio

    2014-03-01

    Birds have been used as bioindicators of pollutants such as toxic metals. Levels of pollutants in eggs are especially interesting, as developing birds are more sensitive to the detrimental effects of pollutants than adults. Only a few studies have monitored intraspecific, large-scale variation in metal pollution across a species' breeding range. We studied large-scale geographic variation in metal levels in the eggs of a small passerine, the pied flycatcher (Ficedula hypoleuca), sampled from 15 populations across Europe. We measured 10 eggshell elements (As, Cd, Cr, Cu, Ni, Pb, Zn, Se, Sr, and Ca) and several shell characteristics (mass, thickness, porosity, and color). We found significant variation among populations in eggshell metal levels for all metals except copper. Eggshell lead, zinc, and chromium levels decreased from central Europe to the north, in line with the gradient in pollution levels over Europe, suggesting that eggshells can be used as an indicator of pollution levels. Eggshell lead levels were also correlated with soil lead levels and pH. Most of the metals were not correlated with eggshell characteristics, with the exception of shell mass, or with breeding success, which may suggest that birds can cope well with current background exposure levels across Europe. PMID:24234761

  5. Large-scale imaging of subcellular calcium dynamics of cortical neurons with G-CaMP6-actin.

    PubMed

    Kobayashi, Chiaki; Ohkura, Masamichi; Nakai, Junichi; Matsuki, Norio; Ikegaya, Yuji; Sasaki, Takuya

    2014-05-01

    Understanding the information processing performed by a single neuron requires the monitoring of physiological dynamics from a variety of subcellular compartments including dendrites and axons. In this study, we showed that the expression of a fusion protein, consisting of a Ca²⁺ indicator protein (G-CaMP6) and a cytoskeleton protein (actin), enabled large-scale recording of Ca²⁺ dynamics from hundreds of postsynaptic spines and presynaptic boutons in a cortical pyramidal cell. At dendritic spines, G-CaMP6-actin had the potential to detect localized Ca²⁺ activity triggered by subthreshold synaptic inputs. Back-propagating action potentials reliably induced Ca²⁺ fluorescent increases in all spines. At axonal boutons, G-CaMP6-actin reported action potential trains propagating along axonal collaterals. The detectability of G-CaMP6-actin should contribute toward a deeper understanding of neural network architecture and dynamics at the level of individual synapses. PMID:24468806

  6. Large-Scale Fluorescence Calcium-Imaging Methods for Studies of Long-Term Memory in Behaving Mammals.

    PubMed

    Jercog, Pablo; Rogerson, Thomas; Schnitzer, Mark J

    2016-01-01

    During long-term memory formation, cellular and molecular processes reshape how individual neurons respond to specific patterns of synaptic input. It remains poorly understood how such changes impact information processing across networks of mammalian neurons. To observe how networks encode, store, and retrieve information, neuroscientists must track the dynamics of large ensembles of individual cells in behaving animals, over timescales commensurate with long-term memory. Fluorescence Ca(2+)-imaging techniques can monitor hundreds of neurons in behaving mice, opening exciting avenues for studies of learning and memory at the network level. Genetically encoded Ca(2+) indicators allow neurons to be targeted by genetic type or connectivity. Chronic animal preparations permit repeated imaging of neural Ca(2+) dynamics over multiple weeks. Together, these capabilities should enable unprecedented analyses of how ensemble neural codes evolve throughout memory processing and provide new insights into how memories are organized in the brain. PMID:27048190

  7. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  8. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results from measurements of the performance and functionality during the LHC Experiments' testing are also discussed.

  9. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  10. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
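
    The combinatorial explosion mentioned above can be made concrete with a simplified count: in a single scan, each of M measurements may be assigned to at most one of T existing tracks or left unassigned (a false alarm). The sketch below counts only these single-scan association hypotheses, ignoring track birth and deletion, so it understates the real growth:

      from math import comb, perm

      def association_hypotheses(num_tracks, num_measurements):
          """Ways to assign measurements to distinct tracks, allowing unassigned measurements."""
          return sum(comb(num_measurements, j) * perm(num_tracks, j)
                     for j in range(min(num_tracks, num_measurements) + 1))

      for t in (5, 10, 20):
          print(t, association_hypotheses(t, t))   # grows faster than exponentially in t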

  11. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
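
    The "rough number" above is straightforward arithmetic: the required vehicle-update rate equals the number of simultaneously simulated travellers divided by the simulation time step, times the factor by which the run must outpace real time. A back-of-envelope sketch; the one-second time step and the speed-up factor are assumptions for illustration, not values from the paper:

      # Back-of-envelope estimate of the required vehicle-update rate.
      travellers = 1_000_000   # ca. 1 million travellers (Los Angeles, from the abstract)
      time_step_s = 1.0        # assumed microsimulation time step
      speedup = 1.0            # 1.0 = exactly real time; forecasting needs > 1
      print(travellers / time_step_s * speedup)   # 1e6 updates/s; "much higher" once speedup > 1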

  12. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
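
    The report's chosen algorithm is an external (exterior) penalty function method. The sketch below is a generic illustration of that idea, not the BIGDOT implementation: constraint violations are penalized quadratically with a weight that is increased between unconstrained sub-optimizations.

      # Generic exterior penalty sketch: minimize f(x) s.t. g(x) <= 0
      # by minimizing f(x) + r * sum(max(0, g(x))^2) for increasing r.
      import numpy as np
      from scipy.optimize import minimize

      f = lambda x: (x[0] - 3.0)**2 + (x[1] + 1.0)**2     # objective
      g = lambda x: np.array([x[0] + x[1] - 1.0])          # constraint g(x) <= 0

      def penalized(x, r):
          return f(x) + r * np.sum(np.maximum(0.0, g(x))**2)

      x = np.zeros(2)
      for r in (1.0, 10.0, 100.0, 1000.0):                 # increasing penalty weight
          x = minimize(penalized, x, args=(r,)).x
      print(x)                                             # approaches the constrained optimum (2.5, -1.5)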

  13. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  14. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  15. Calcium carbonate scale control, effect of material and inhibitors.

    PubMed

    Macadam, J; Parsons, S A

    2004-01-01

    This paper focuses on developing a reproducible method for reducing calcium carbonate scale formation on heated surfaces, where scaling can cause serious problems. It is known that calcium carbonate precipitation is sensitive to impurity ions, such as iron and zinc, even at trace concentration levels. In this paper two sets of experiments are reported. The first experiments were undertaken to investigate the effect of zinc, copper and iron dosing on CaCO3 nucleation and precipitation. Results from the experiments showed that the most effective inhibitor of CaCO3 precipitation was zinc, and the effect was linked to dose level and temperature. Copper and iron had little effect on precipitation in the dose range investigated. The second trial was undertaken to translate the precipitation data to scale formation. These tests were undertaken at 70 degrees C. A 5 mg L⁻¹ zinc dose reduced scale formation by 35%. The effect of iron on the calcium carbonate scaling rate was not significant. The physical nature of the material on which the scale forms also influences the scaling. The scaling experiment was therefore also used to investigate the effect of different surface materials (stainless steel, copper and aluminium) on CaCO3 scale formation; the copper surface scaled the most. PMID:14982176

  16. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms, as I will illustrate using some recently developed algorithms. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond; a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines; tools to analyze the large volume of data obtained from such simulations; and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational science.

  17. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
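
    The fluidic multiplexor described above works like a binary address decoder: with chambers numbered in binary, each address bit needs one "bit-high" and one "bit-low" control line, so on the order of 2·log2(n) control inputs can address n chambers. A minimal sketch of that scaling (an illustration of the combinatorial idea, not the paper's specific valve layout):

      from math import ceil, log2

      def multiplexor_control_lines(n_chambers):
          """Control inputs for a binary multiplexor addressing n chambers (2 per address bit)."""
          return 2 * ceil(log2(n_chambers))

      for n in (16, 256, 1024):
          print(n, multiplexor_control_lines(n))   # 16 -> 8, 256 -> 16, 1024 -> 20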

  18. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
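
    Because the mirrors rely on parallel-plate electrostatic actuators, the textbook attractive force F = eps0·A·V²/(2·d²) sets the basic trade between pad area, electrode gap, and drive voltage. A minimal sketch with hypothetical dimensions and voltage (illustrative values, not the paper's):

      # Parallel-plate electrostatic force: F = eps0 * A * V^2 / (2 * d^2)
      eps0 = 8.854e-12            # vacuum permittivity, F/m
      area = (1e-3)**2            # hypothetical 1 mm x 1 mm actuator pad, m^2
      gap = 10e-6                 # hypothetical 10 um electrode gap, m
      voltage = 100.0             # hypothetical drive voltage, V
      force = eps0 * area * voltage**2 / (2 * gap**2)
      print(f"{force * 1e6:.0f} uN")   # ~443 uN for these assumed values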

  19. Unexpectedly large charge radii of neutron-rich calcium isotopes

    NASA Astrophysics Data System (ADS)

    Garcia Ruiz, R. F.; Bissell, M. L.; Blaum, K.; Ekström, A.; Frömmgen, N.; Hagen, G.; Hammen, M.; Hebeler, K.; Holt, J. D.; Jansen, G. R.; Kowalska, M.; Kreim, K.; Nazarewicz, W.; Neugart, R.; Neyens, G.; Nörtershäuser, W.; Papenbrock, T.; Papuga, J.; Schwenk, A.; Simonis, J.; Wendt, K. A.; Yordanov, D. T.

    2016-06-01

    Despite being a complex many-body system, the atomic nucleus exhibits simple structures for certain 'magic' numbers of protons and neutrons. The calcium chain in particular is both unique and puzzling: evidence of doubly magic features is known in 40,48Ca and has recently been suggested in two radioactive isotopes, 52,54Ca. Although many properties of experimentally known calcium isotopes have been successfully described by nuclear theory, it is still a challenge to predict the evolution of their charge radii. Here we present the first measurements of the charge radii of 49,51,52Ca, obtained from laser spectroscopy experiments at ISOLDE, CERN. The experimental results are complemented by state-of-the-art theoretical calculations. The large and unexpected increase of the size of the neutron-rich calcium isotopes beyond N = 28 challenges the doubly magic nature of 52Ca and opens new intriguing questions on the evolution of nuclear sizes away from stability, which are of importance for our understanding of neutron-rich atomic nuclei.

  20. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  1. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  2. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  3. Calcium absorption in rat large intestine in vivo: availability of dietary calcium

    SciTech Connect

    Ammann, P.; Rizzoli, R.; Fleisch, H.

    1986-07-01

    Calcium absorption in the large intestine of the rat was investigated in vivo. After a single injection of ⁴⁵CaCl₂ into the cecum, 26.0 +/- 2.5% (mean +/- SE, n = 9) of the ⁴⁵CaCl₂ injected disappeared. This absorption was modulated by 1,25-dihydroxyvitamin D3, increased to 64.0 +/- 4.2% under a low-Ca diet, and increased under low-Pi diet. In contrast, when the difference of nonradioactive Ca in the cecal content and the feces was measured, only 4.1 +/- 4.6% (not significant) was absorbed. Secretion of intravenously injected ⁴⁵Ca into the lumen was small and not altered by any of the conditions tested. When cecum contents were placed into duodenal tied loops, 14 +/- 6.2% were absorbed in situ when ⁴⁵Ca was given orally, whereas when ⁴⁵Ca was directly added to the content 35.6 +/- 4.6% were absorbed (P less than 0.02). These results indicate that the large intestine has an important vitamin D-dependent Ca absorptive system detectable if ⁴⁵Ca is injected into the cecum. However, it is not effective in vivo because the Ca arriving in the large intestine appears to be no longer in an absorbable form.

  4. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  5. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  6. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large-scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations using the full MHD equations.
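
    For orientation, the α-effect referred to above enters through the standard mean-field induction equation (textbook form, not the specific closure model of this work), in which α drives growth of the mean field while the microscopic resistivity η limits how fast magnetic helicity can change:

      \frac{\partial \langle\mathbf{B}\rangle}{\partial t}
        = \nabla \times \left( \langle\mathbf{v}\rangle \times \langle\mathbf{B}\rangle
          + \alpha \langle\mathbf{B}\rangle
          - \eta_{\mathrm{t}} \nabla \times \langle\mathbf{B}\rangle \right)
        + \eta \nabla^{2} \langle\mathbf{B}\rangle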

  7. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function ξ(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h⁻¹ Mpc)³, provides a statistical test for the existence of large-scale inhomogeneities. An application to several recent three-dimensional data sets shows that despite large observational uncertainties over the relevant scales, characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  8. The large-scale landslide risk classification in catchment scale

    NASA Astrophysics Data System (ADS)

    Liu, Che-Hsin; Wu, Tingyeh; Chen, Lien-Kuang; Lin, Sheng-Chi

    2013-04-01

    The landslide disasters caused heavy casualties during Typhoon Morakot in 2009. These events are defined as large-scale landslides because of the casualty numbers, and they also show that surveys of large-scale landslide potential remain insufficient. Large-scale landslide potential analysis indicates where attention should be focused, even though such areas are difficult to distinguish. Accordingly, the authors investigated the methods used in different countries, such as Hong Kong, Italy, Japan and Switzerland, to clarify the assessment methodology. The objects studied include places susceptible to rock slides and dip slopes, as well as the major landslide areas defined from historical records. Three levels of scale, from country to slopeland, are necessary: basin, catchment, and slope scales. In total, ten spots were classified with high large-scale landslide potential at the basin scale. The authors therefore focused on the catchment scale and employed a risk matrix to classify the potential in this paper. The protected objects and the large-scale landslide susceptibility ratio are the two main indexes used to classify large-scale landslide risk. The protected objects are constructions and transportation facilities. The large-scale landslide susceptibility ratio is based on data for major landslide areas and for dip slope and rock slide areas. In total, 1,040 catchments are considered and are classified into three levels: high, medium, and low. The proportions of the high, medium, and low levels are 11%, 51%, and 38%, respectively. This result identifies the catchments with a high proportion of protected objects or high large-scale landslide susceptibility. The conclusions can serve as base material for slopeland authorities when considering slopeland management and further investigation.

  9. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  10. Calcium

    MedlinePlus

    ... of calcium dietary supplements are carbonate and citrate. Calcium carbonate is inexpensive, but is absorbed best when taken ... antacid products, such as Tums® and Rolaids®, contain calcium carbonate. Each pill or chew provides 200–400 mg ...

  11. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V.; Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  12. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  13. Biogenic Calcium Phosphate Transformation in Soils over Millennium Time Scales

    SciTech Connect

    Sato, S.; Neves, E; Solomon, D; Liang, B; Lehmann, J

    2009-01-01

    Changes in bioavailability of phosphorus (P) during pedogenesis and ecosystem development have been shown for geogenic calcium phosphate (Ca-P). However, very little is known about long-term changes of biogenic Ca-P in soil. Long-term transformation characteristics of biogenic Ca-P were examined using anthropogenic soils along a chronosequence from centennial to millennial time scales. Phosphorus fractionation of Anthrosols resulted in overall consistency with the Walker and Syers model of geogenic Ca-P transformation during pedogenesis. The biogenic Ca-P (e.g., animal and fish bones) disappeared to 3% of total P within the first ca. 2,000 years of soil development. This change concurred with increases in P adsorbed on metal-oxide surfaces, organic P, and occluded P at different pedogenic times. Phosphorus K-edge X-ray absorption near-edge structure (XANES) spectroscopy revealed that the crystalline and therefore thermodynamically most stable biogenic Ca-P was transformed into more soluble forms of Ca-P over time. While crystalline hydroxyapatite (34% of total P) dominated Ca-P species after about 600-1,000 years, β-tricalcium phosphate increased to 16% of total P after 900-1,100 years, after which both Ca-P species disappeared. Iron-associated P was observable concurrently with the Ca-P disappearance. Soluble P and organic P determined by XANES remained relatively constant (58-65%) across the time scale studied. In conclusion, the disappearance of crystalline biogenic Ca-P on a time scale of a few thousand years appears to be ten times faster than that of geogenic Ca-P.

  14. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  15. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than the usual order n² floating point operations.
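
    ARPACK is also wrapped by higher-level libraries; for example, SciPy's scipy.sparse.linalg.eigs and eigsh call ARPACK under the hood. A minimal sketch of the typical use case, a few extremal eigenpairs of a large sparse symmetric matrix:

      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh   # ARPACK-based symmetric/Hermitian eigensolver

      n = 10_000
      # Sparse tridiagonal 1-D Laplacian: a "structured" matrix where A @ v costs order n.
      A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

      # Six eigenvalues of largest magnitude and their eigenvectors.
      vals, vecs = eigsh(A, k=6, which="LM")
      print(vals)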

  16. The influence of temperature on the fluorine and calcium composition of fish scales.

    PubMed

    Gauldie, R W; Coote, G; West, I F; Radtke, R L

    1990-01-01

    Proton microprobe studies of the scales of the kahawai (Arripis trutta) and the snapper (Chrysophrys auratus) showed non-linear changes in the fluorine to calcium ratio that increase with increasing temperature, although the two species showed different temperature sensitivities. Fluorine and calcium levels vary within years from summer to winter by up to 1000%, making them good markers of seasonal events. PMID:18620324

  17. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h⁻¹ Mpc with 10 h⁻¹ Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.
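
    The clumping factor used to fold sub-grid structure into the radiative transfer is conventionally defined as C = <n²>/<n>² over a cell. A minimal sketch of that estimate (the definition is the standard one; the density field below is random toy data, not simulation output):

      import numpy as np

      def clumping_factor(density):
          """C = <n^2> / <n>^2 for density samples within one coarse cell."""
          return np.mean(density**2) / np.mean(density)**2

      rng = np.random.default_rng(0)
      toy_density = rng.lognormal(mean=0.0, sigma=1.0, size=64**3)   # toy sub-cell samples
      print(clumping_factor(toy_density))   # > 1 for any inhomogeneous field (~e here)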

  18. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals. First, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters. And second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  19. Calcium carbonate scaling kinetics determined from radiotracer experiments with calcium-47

    SciTech Connect

    Turner, C.W.; Smith, D.W.

    1998-02-01

    The deposition of calcium carbonate is one of the principal modes of fouling of the heat-transfer surface of a fresh-water-cooled heat exchanger. The deposition rate of calcium carbonate on a heat-transfer surface has been measured using a calcium-47 radiotracer and compared to the measured rate of thermal fouling. The crystalline phase of calcium carbonate that precipitates depends on the degree of supersaturation at the heat-transfer surface, with aragonite precipitating at higher supersaturations and calcite precipitating at lower supersaturations. Whereas the mass deposition rates were constant with time, the thermal fouling rates decreased throughout the course of each experiment as a result of densification of the deposit. It is proposed that the densification was driven by the temperature gradient across the deposit together with the retrograde solubility of calcium carbonate. The temperature dependence of the deposition rate yielded an activation energy of 79 ± 4 kJ/mol for the precipitation of calcium carbonate on a heat-transfer surface.
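
    The activation energy quoted above comes from the Arrhenius temperature dependence of the deposition rate, k = A·exp(-Ea/RT), i.e. Ea is obtained from the slope of ln k against 1/T. A minimal sketch of that extraction; the two rate/temperature pairs below are hypothetical, only the 79 kJ/mol figure is from the abstract:

      import numpy as np

      R = 8.314                        # gas constant, J/(mol K)
      T = np.array([330.0, 350.0])     # hypothetical surface temperatures, K
      k = np.array([1.0, 5.0])         # hypothetical relative deposition rates

      # Arrhenius: ln k = ln A - Ea/(R*T)  =>  Ea = -R * slope of ln k vs 1/T
      slope = (np.log(k[1]) - np.log(k[0])) / (1.0 / T[1] - 1.0 / T[0])
      print(f"Ea = {-R * slope / 1000:.0f} kJ/mol")   # ~77 kJ/mol for these assumed values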

  20. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  1. Calcium isotope fractionation in groundwater: Molecular scale processes influencing field scale behavior

    NASA Astrophysics Data System (ADS)

    Druhan, Jennifer L.; Steefel, Carl I.; Williams, Kenneth H.; DePaolo, Donald J.

    2013-10-01

    It is the purpose of this study to demonstrate that the molecular scale reaction mechanisms describing calcite precipitation and calcium isotope fractionation under highly controlled laboratory conditions also reproduce field scale measurements of δ44Ca in groundwater systems. We present data collected from an aquifer during active carbonate mineral precipitation and develop a reactive transport model capturing the observed chemical and isotopic variations. Carbonate mineral precipitation and associated fluid δ44Ca data were measured in multiple clogged well bores during organic carbon-amended biogenic reduction of a uranium-contaminated aquifer in western Colorado, USA. Secondary mineral formation induced by carbonate alkalinity generated during the biostimulation process led to substantial permeability reduction in multiple electron-donor injection wells at the field site. These conditions resulted in removal of aqueous calcium from a background concentration of 6 mM to <1 mM, while δ44Ca enrichment ranged from 1‰ to greater than 2.5‰. The relationship between aqueous calcium removal and isotopic enrichment did not conform to Rayleigh model behavior. Explicit treatment of the individual isotopes of calcium within the CrunchFlow reactive transport code demonstrates that the system did not achieve isotopic reequilibration over the time scale of sample collection. Measured fluid δ44Ca values are accurately reproduced by a linear rate law when the Ca2+:CO32- activity ratio remains substantially greater than unity. Variation in the measured δ44Ca between wells is shown to originate from a difference in carbonate alkalinity generated in each well bore. The influence of the fluid Ca2+:CO32- ratio on the precipitation rate and δ44Ca is modeled by coupling the CrunchFlow reactive transport code to an ion-by-ion growth model. This study presents the first coupled ion-by-ion and reactive transport model for isotopic enrichment and demonstrates that reproducing field-scale
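
    For reference, the Rayleigh behavior that the data do not follow would tie the fluid δ44Ca directly to the fraction f of dissolved Ca remaining, δ_fluid ≈ δ_0 + ε·ln(f), with ε = 1000·(α - 1) for the solid-fluid fractionation. A minimal sketch; the ε value below is an assumed, typical calcite-fluid fractionation, not a number from this study:

      import math

      def rayleigh_delta44ca(delta0_permil, eps_permil, f_remaining):
          """Fluid δ44Ca predicted by Rayleigh fractionation after removing (1 - f) of the Ca."""
          return delta0_permil + eps_permil * math.log(f_remaining)

      # Removing Ca from 6 mM to 1 mM (f ~ 1/6) with an assumed eps of -1.5 permil
      # would give ~+2.7 permil of enrichment if Rayleigh behavior held.
      print(rayleigh_delta44ca(0.0, -1.5, 1.0 / 6.0))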

  2. Calcium

    MedlinePlus

    ... body stores more than 99 percent of its calcium in the bones and teeth to help make and keep them ... in the foods you eat. Foods rich in calcium include Dairy products such as milk, cheese, and yogurt Leafy, green vegetables Fish with soft bones that you eat, such as canned sardines and ...

  3. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high technology agency was managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  4. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  5. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  6. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  7. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  8. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  9. The influence of scale inhibitors on calcium oxalate

    SciTech Connect

    Gill, J.S.

    1999-11-01

    Precipitation of calcium oxalate is a common occurrence in mammalian urinary tract deposits and in various industrial processes such as paper making, brewery fermentation, sugar evaporation, and tannin concentration. Between pH 3.5 and 4.5 the driving force for calcium oxalate precipitation increases almost threefold. It is a complicated process to predict both the nature of a deposit and at which stage of a multi-effect evaporator a particular mineral will deposit, as this depends on temperature, pH, total solids, and the kinetics of mineralization. It is quite a challenge to inhibit calcium oxalate precipitation in the pH range of 4-6. Al³⁺ ions provide excellent threshold inhibition in this pH range and can be used to augment traditional inhibitors such as polyphosphates and polycarboxylates.
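
    The precipitation "driving force" referred to above is conventionally quantified as a saturation index with respect to the calcium oxalate solubility product (a standard definition, not notation taken from this paper); its rise between pH 3.5 and 4.5 plausibly reflects the increase in free oxalate activity as hydrogen oxalate deprotonates:

      \mathrm{SI} = \log_{10}\frac{\{\mathrm{Ca^{2+}}\}\,\{\mathrm{C_2O_4^{2-}}\}}{K_{\mathrm{sp}}}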

  10. Calcium

    MedlinePlus

    ... milligrams) of calcium each day. Get it from: Dairy products. Low-fat milk, yogurt, cheese, and cottage ... lactase that helps digest the sugar (lactose) in dairy products, and may have gas, bloating, cramps, or ...

  11. Large-scale extraction of proteins.

    PubMed

    Cunha, Teresa; Aires-Barros, Raquel

    2002-01-01

    The production of foreign proteins using selected hosts with the necessary posttranslational modifications is one of the key successes in modern biotechnology. This methodology allows the industrial production of proteins that are otherwise produced in small quantities. However, the separation and purification of these proteins from the fermentation media constitutes a major bottleneck for the widespread commercialization of recombinant proteins. The major production costs (50-90%) for a typical biological product reside in the purification strategy. There is a need for efficient, effective, and economic large-scale bioseparation techniques to achieve high purity and high recovery while maintaining the biological activity of the molecule. Aqueous two-phase systems (ATPS) allow process integration, as separation and concentration of the target protein are achieved simultaneously, with subsequent removal and recycling of the polymer. The ease of scale-up combined with the high partition coefficients obtained allows their potential application in large-scale downstream processing of proteins produced by fermentation. The equipment and the methodology for aqueous two-phase extraction of proteins on a large scale using mixer-settler and column contactors are described. The operation of the columns, either stagewise or differential, is summarized. A brief description of the methods used to account for mass transfer coefficients, the hydrodynamic parameters of hold-up, drop size, and velocity, back mixing in the phases, and flooding performance, required for column design, is also provided. PMID:11876297
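
    For a single extraction stage, the partition coefficient K = C_top/C_bottom and the phase-volume ratio fix the recovery by simple mass balance; a minimal sketch of that relation (generic ATPS algebra, not numbers from this chapter):

      def top_phase_yield(k_partition, v_top, v_bottom):
          """Fraction of total protein recovered in the top phase of one extraction stage."""
          # Mass balance: Y = C_t*V_t / (C_t*V_t + C_b*V_b), with K = C_t / C_b.
          return 1.0 / (1.0 + v_bottom / (k_partition * v_top))

      print(top_phase_yield(k_partition=10.0, v_top=1.0, v_bottom=1.0))   # ~0.91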

  12. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  13. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
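
    For reference, a fractal point distribution of dimension D obeys the standard scaling relations below (textbook relations, not results from this paper); with D ≈ 1.2 the implied correlation-function slope is γ = 3 - D ≈ 1.8, the canonical value for galaxies:

      N(<r) \propto r^{D}, \qquad \xi(r) \propto r^{-(3-D)} = r^{-\gamma}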

  14. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  15. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  16. Large-scale polarimetry of large optical galaxies

    NASA Astrophysics Data System (ADS)

    Sholomitskii, G. B.; Maslov, I. A.; Vitrichenko, E. A.

    1999-11-01

    We present preliminary results of wide-field visual CCD polarimetry for large optical galaxies through a concentric multisector radial-tangential polaroid analyzer mounted at the intermediate focus of a Zeiss-1000 telescope. The mean degree of tangential polarization in a 13-arcmin field, which was determined by processing images with imprinted "orthogonal" sectors, ranges from several percent (M 82) and 0.51% (the spirals M 51 and M 81) down to lower values for elliptical galaxies (M 49, M 87). It is emphasized that the parameters of large-scale polarization can be properly determined by using physical models for galaxies; inclination and azimuthal dependences of the degree of polarization are given for spirals.

  17. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    PubMed Central

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-01-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance. PMID:27001915

  18. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage

    NASA Astrophysics Data System (ADS)

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L.; Sadoway, Donald R.

    2016-03-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance.

  19. Calcium-based multi-element chemistry for grid-scale electrochemical energy storage.

    PubMed

    Ouchi, Takanari; Kim, Hojong; Spatocco, Brian L; Sadoway, Donald R

    2016-01-01

    Calcium is an attractive material for the negative electrode in a rechargeable battery due to its low electronegativity (high cell voltage), double valence, earth abundance and low cost; however, the use of calcium has historically eluded researchers due to its high melting temperature, high reactivity and unfavorably high solubility in molten salts. Here we demonstrate a long-cycle-life calcium-metal-based rechargeable battery for grid-scale energy storage. By deploying a multi-cation binary electrolyte in concert with an alloyed negative electrode, calcium solubility in the electrolyte is suppressed and operating temperature is reduced. These chemical mitigation strategies also engage another element in energy storage reactions resulting in a multi-element battery. These initial results demonstrate how the synergistic effects of deploying multiple chemical mitigation strategies coupled with the relaxation of the requirement of a single itinerant ion can unlock calcium-based chemistries and produce a battery with enhanced performance. PMID:27001915

  20. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular when the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  1. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four-telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic-variance-limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  2. Precision Measurement of Large Scale Structure

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2001-01-01

    The purpose of this grant was to develop and to start to apply new precision methods for measuring the power spectrum and redshift distortions from the anticipated new generation of large redshift surveys. A highlight of work completed during the award period was the application of the new methods developed by the PI to measure the real space power spectrum and redshift distortions of the IRAS PSCz survey, published in January 2000. New features of the measurement include: (1) measurement of power over an unprecedentedly broad range of scales, 4.5 decades in wavenumber, from 0.01 to 300 h/Mpc; (2) at linear scales, not one but three power spectra are measured, the galaxy-galaxy, galaxy-velocity, and velocity-velocity power spectra; (3) at linear scales each of the three power spectra is decorrelated within itself, and disentangled from the other two power spectra (the situation is analogous to disentangling scalar and tensor modes in the Cosmic Microwave Background); and (4) at nonlinear scales the measurement extracts not only the real space power spectrum, but also the full line-of-sight pairwise velocity distribution in redshift space.

  3. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow, large-length-scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for the usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies an inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find a Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates energy accumulation at larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around the zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in the zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  4. Estimation of large-scale dimension densities.

    PubMed

    Raab, C; Kurths, J

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor. PMID:11461376
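
    For orientation, the sketch below shows a plain correlation-sum dimension estimate of the Grassberger-Procaccia type in Python; it deliberately omits the boundary-effect normalization that is the paper's actual contribution, so it only illustrates the baseline quantity being corrected.

    # Generic correlation-dimension estimate (NOT the boundary-corrected
    # estimator of the paper): slope of log C(r) versus log r, where C(r)
    # is the fraction of point pairs closer than r.
    import numpy as np

    def correlation_dimension(points, radii):
        n = len(points)
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        pair_d = dists[np.triu_indices(n, k=1)]
        c = np.array([(pair_d < r).mean() for r in radii])
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
        return slope

    # Example: points on a noisy circle should give a dimension close to 1.
    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
    pts = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((1000, 2))
    print(correlation_dimension(pts, radii=np.logspace(-1.3, -0.3, 10)))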

  5. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Marriage, Tobias; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Eimer, J.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    Some of the most compelling inflation models predict a background of primordial gravitational waves (PGW) detectable by their imprint of a curl-like "B-mode" pattern in the polarization of the Cosmic Microwave Background (CMB). The Cosmology Large Angular Scale Surveyor (CLASS) is a novel array of telescopes to measure the B-mode signature of the PGW. By targeting the largest angular scales (>2°) with a multifrequency array, novel polarization modulation and detectors optimized for both control of systematics and sensitivity, CLASS sets itself apart in the field of CMB polarization surveys and opens an exciting new discovery space for the PGW and inflation. This poster presents an overview of the CLASS project.

  6. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg², the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg². The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  7. Estimation of large-scale dimension densities

    NASA Astrophysics Data System (ADS)

    Raab, Corinna; Kurths, Jürgen

    2001-07-01

    We propose a technique to calculate large-scale dimension densities in both higher-dimensional spatio-temporal systems and low-dimensional systems from only a few data points, where known methods usually have an unsatisfactory scaling behavior. This is mainly due to boundary and finite-size effects. With our rather simple method, we normalize boundary effects and get a significant correction of the dimension estimate. This straightforward approach is based on rather general assumptions. So even weak coherent structures obtained from small spatial couplings can be detected with this method, which is impossible by using the Lyapunov-dimension density. We demonstrate the efficiency of our technique for coupled logistic maps, coupled tent maps, the Lorenz attractor, and the Roessler attractor.

  8. Scaling relations for large Martian valleys

    NASA Astrophysics Data System (ADS)

    Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.

    2009-02-01

    The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remain controversial. We analyzed scaling relations for 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each because these relations are identified as either theoretically predicted or robust terrestrial empiricisms for graded precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy those empiricisms. On Mars, the width-area scaling exponent b ranges from -0.7 to 4.7, in contrast with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent θ ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; and the length-area (Hack's law) exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point source discharges provide terrestrial analogs best suited to describe the other large Martian valleys.
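
    For readers unfamiliar with how such exponents are obtained, the sketch below fits the three power laws (W proportional to A^b, S proportional to A^-θ, L proportional to A^n) by least squares in log-log space; the arrays are invented placeholder measurements, not MOLA/THEMIS data.

    # Power-law exponents from log-log regression (hypothetical data).
    import numpy as np

    def power_law_exponent(x, y):
        """Fit y ~ x**m and return m (the slope in log-log space)."""
        m, _ = np.polyfit(np.log10(x), np.log10(y), 1)
        return m

    area   = np.array([1e2, 1e3, 1e4, 1e5, 1e6])    # contributing area, km^2 (hypothetical)
    width  = np.array([0.4, 0.9, 2.0, 4.5, 10.0])   # valley width, km (hypothetical)
    slope  = np.array([0.02, 0.012, 0.007, 0.004, 0.0025])
    length = np.array([12.0, 40.0, 130.0, 420.0, 1350.0])  # along-valley length, km

    b     = power_law_exponent(area, width)    # width-area exponent
    theta = -power_law_exponent(area, slope)   # slope-area exponent (S ~ A**-theta)
    n     = power_law_exponent(area, length)   # Hack's exponent (L ~ A**n)
    print(f"b = {b:.2f}, theta = {theta:.2f}, n = {n:.2f}")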

  9. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

    Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large-scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics, and network analysis by using suitable memory access patterns and mechanisms such as CUDA streams and profiling tools. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.
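
    As a generic illustration of the stream-overlap idea (not the authors' codes), the following sketch uses CuPy, assuming a CUDA-capable GPU and the cupy package are available; two independent pieces of work are issued on separate CUDA streams so the device is free to schedule them concurrently.

    # Minimal sketch: independent GPU work on two CUDA streams via CuPy.
    import cupy as cp

    a = cp.random.rand(2048, 2048).astype(cp.float32)
    b = cp.random.rand(2048, 2048).astype(cp.float32)

    s1 = cp.cuda.Stream(non_blocking=True)
    s2 = cp.cuda.Stream(non_blocking=True)

    with s1:                 # matrix product issued on stream 1
        c1 = a @ b
    with s2:                 # independent FFT issued on stream 2
        c2 = cp.fft.fft2(a)

    s1.synchronize()
    s2.synchronize()
    print(float(c1.sum()), float(cp.abs(c2).sum()))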

  10. Nonthermal Components in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Miniati, Francesco

    2004-12-01

    I address the issue of nonthermal processes in the large-scale structure of the universe. After reviewing the properties of cosmic shocks and their role as particle accelerators, I discuss the main observational results, from radio to γ-rays, and describe the processes that are thought to be responsible for the observed nonthermal emissions. Finally, I emphasize the important role of γ-ray astronomy for progress in the field. Non-detections at these photon energies have already allowed important conclusions. Future observations will tell us more about the physics of the intracluster medium, shock dissipation, and cosmic-ray acceleration.

  11. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  12. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  13. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used to calculate the band structures of the proposed metamaterials.

  14. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  15. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.

  16. Large-scale Globally Propagating Coronal Waves

    NASA Astrophysics Data System (ADS)

    Warmuth, Alexander

    2015-09-01

    Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous spaced-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  17. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
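
    For context, the convergence measures described above are built on the standard linear-theory relation between the peculiar velocity induced at the origin and the surrounding density field (a textbook expression, quoted here for orientation rather than taken from the paper):

    \mathbf{v}(R) \;=\; \frac{H_{0}\, f(\Omega)}{4\pi} \int_{r<R} \delta(\mathbf{r})\, \frac{\hat{\mathbf{r}}}{r^{2}}\, d^{3}r ,

    so that testing how v(R) converges as R grows probes how much large-scale power each model puts into the density fluctuations.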

  18. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

    Chromatin fiber in the interphase nucleus is effectively a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, which scales as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with Hi-C data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.
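
    The scaling statements above can be collected compactly as follows (symbols as used in the abstract):

    R_g \sim N^{1/3}, \qquad N_{\mathrm{contact}} \sim N^{\beta}\ \ (\beta \approx 0.95), \qquad P(s) \sim s^{-\gamma}, \quad \gamma = 2 - \beta \gtrsim 1 .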

  19. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  20. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-02-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  1. Oxidative Regulation of Large Conductance Calcium-Activated Potassium Channels

    PubMed Central

    Tang, Xiang D.; Daggett, Heather; Hanner, Markus; Garcia, Maria L.; McManus, Owen B.; Brot, Nathan; Weissbach, Herbert; Heinemann, Stefan H.; Hoshi, Toshinori

    2001-01-01

    Reactive oxygen/nitrogen species are readily generated in vivo, playing roles in many physiological and pathological conditions, such as Alzheimer's disease and Parkinson's disease, by oxidatively modifying various proteins. Previous studies indicate that large conductance Ca2+-activated K+ channels (BKCa or Slo) are subject to redox regulation. However, conflicting results exist whether oxidation increases or decreases the channel activity. We used chloramine-T, which preferentially oxidizes methionine, to examine the functional consequences of methionine oxidation in the cloned human Slo (hSlo) channel expressed in mammalian cells. In the virtual absence of Ca2+, the oxidant shifted the steady-state macroscopic conductance to a more negative direction and slowed deactivation. The results obtained suggest that oxidation enhances specific voltage-dependent opening transitions and slows the rate-limiting closing transition. Enhancement of the hSlo activity was partially reversed by the enzyme peptide methionine sulfoxide reductase, suggesting that the upregulation is mediated by methionine oxidation. In contrast, hydrogen peroxide and cysteine-specific reagents, DTNB, MTSEA, and PCMB, decreased the channel activity. Chloramine-T was much less effective when concurrently applied with the K+ channel blocker TEA, which is consistent with the possibility that the target methionine lies within the channel pore. Regulation of the Slo channel by methionine oxidation may represent an important link between cellular electrical excitability and metabolism. PMID:11222629

  2. Large-scale databases of proper names.

    PubMed

    Conley, P; Burgess, C; Hage, D

    1999-05-01

    Few tools for research in proper names have been available--specifically, there is no large-scale corpus of proper names. Two corpora of proper names were constructed, one based on U.S. phone book listings, the other derived from a database of Usenet text. Name frequencies from both corpora were compared with human subjects' reaction times (RTs) to the proper names in a naming task. Regression analysis showed that the Usenet frequencies contributed to predictions of human RT, whereas phone book frequencies did not. In addition, semantic neighborhood density measures derived from the HAL corpus were compared with the subjects' RTs and found to be a better predictor of RT than was frequency in either corpus. These new corpora are freely available on line for download. Potentials for these corpora range from using the names as stimuli in experiments to using the corpus data in software applications. PMID:10495803
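
    A minimal sketch of the kind of regression reported is given below, assuming made-up reaction times and corpus counts (none of the numbers come from the study): naming RT is regressed on log frequency from the two corpora, and the fitted slopes indicate which frequency measure carries predictive weight.

    # Illustrative multiple regression of naming RT on two log-frequency
    # predictors (placeholder data, not the published measurements).
    import numpy as np

    rt            = np.array([520., 610., 575., 640., 555., 598., 630., 560.])   # ms
    log_usenet    = np.log10(np.array([900., 40., 210., 15., 480., 120., 30., 650.]))
    log_phonebook = np.log10(np.array([300., 80., 150., 60., 220., 100., 90., 400.]))

    X = np.column_stack([np.ones_like(rt), log_usenet, log_phonebook])
    coef, residuals, rank, sv = np.linalg.lstsq(X, rt, rcond=None)
    print("intercept, Usenet slope, phone-book slope:", np.round(coef, 1))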

  3. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  4. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of management of large scale systems are broadly covered here. Due to the focus and application of research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  5. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  6. Batteries for Large Scale Energy Storage

    SciTech Connect

    Soloveichik, Grigorii L.

    2011-07-15

    In recent years, with the deployment of renewable energy sources, advances in electrified transportation, and development in smart grids, the markets for large-scale stationary energy storage have grown rapidly. Electrochemical energy storage methods are strong candidate solutions due to their high energy density, flexibility, and scalability. This review provides an overview of mature and emerging technologies for secondary and redox flow batteries. New developments in the chemistry of secondary and flow batteries as well as regenerative fuel cells are also considered. Advantages and disadvantages of current and prospective electrochemical energy storage options are discussed. The most promising technologies in the short term are high-temperature sodium batteries with β”-alumina electrolyte, lithium-ion batteries, and flow batteries. Regenerative fuel cells and lithium metal batteries with high energy density require further research to become practical.

  7. Large scale water lens for solar concentration.

    PubMed

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large-scale water lenses for solar concentration were investigated. These lenses were built from readily available materials: normal tap water and hyper-elastic linear low-density polyethylene foil. With the lenses exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with raytracing software based on the lens shape. We achieved a good match between experimental and theoretical data by considering a wavelength-dependent concentration factor, absorption, and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation. PMID:26072893
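
    As a rough orientation only, such a water lens can be approximated as a thin plano-convex lens, for which f = R/(n - 1); the real foil lens is thick and its curvature depends on the water load, so the numbers below (hypothetical curvature radii) are merely indicative.

    # Thin plano-convex approximation of a water lens (illustrative only).
    N_WATER = 1.33  # refractive index of water in the visible

    def plano_convex_focal_length(radius_of_curvature_m, n=N_WATER):
        return radius_of_curvature_m / (n - 1.0)

    for R in (0.5, 1.0, 2.0):  # hypothetical curvature radii in metres
        print(f"R = {R:.1f} m  ->  f = {plano_convex_focal_length(R):.2f} m")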

  8. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries, collectively referred to as "nuclear pasta," are expected to exist naturally in the crust of neutron stars and in supernova matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large-scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm⁻³, proton fractions 0.05

  9. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  10. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  11. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

    Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  12. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
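
    To make the SQP idea concrete, the toy sketch below performs full Newton-SQP steps for a small equality-constrained problem by solving the dense KKT system directly; it is only a didactic illustration and does not use the sparse reduced-Hessian quasi-Newton machinery that is the subject of the report.

    # Toy dense SQP iteration for: minimize f(x) subject to c(x) = 0.
    import numpy as np

    def sqp_step(x, grad_f, hess_f, c, jac_c):
        g, H = grad_f(x), hess_f(x)
        cv, A = c(x), jac_c(x)
        m = len(cv)
        KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])  # KKT matrix of the QP subproblem
        rhs = -np.concatenate([g, cv])
        sol = np.linalg.solve(KKT, rhs)
        return x + sol[:len(x)]        # discard the multipliers sol[len(x):]

    # Example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0 (solution: 0.5, 0.5).
    grad_f = lambda x: 2.0 * x
    hess_f = lambda x: 2.0 * np.eye(2)
    c      = lambda x: np.array([x[0] + x[1] - 1.0])
    jac_c  = lambda x: np.array([[1.0, 1.0]])

    x = np.array([2.0, -1.0])
    for _ in range(3):
        x = sqp_step(x, grad_f, hess_f, c, jac_c)
    print(x)   # -> approximately [0.5, 0.5]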

  13. Large-Scale Statistics for Cu Electromigration

    NASA Astrophysics Data System (ADS)

    Hauschildt, M.; Gall, M.; Hernandez, R.

    2009-06-01

    Even after the successful introduction of Cu-based metallization, the electromigration failure risk has remained one of the important reliability concerns for advanced process technologies. The observation of strong bimodality for the electron up-flow direction in dual-inlaid Cu interconnects has added complexity, but is now widely accepted. The failure voids can occur both within the via ("early" mode) or within the trench ("late" mode). More recently, bimodality has been reported also in down-flow electromigration, leading to very short lifetimes due to small, slit-shaped voids under vias. For a more thorough investigation of these early failure phenomena, specific test structures were designed based on the Wheatstone Bridge technique. The use of these structures enabled an increase of the tested sample size close to 675,000, allowing a direct analysis of electromigration failure mechanisms at the single-digit ppm regime. Results indicate that down-flow electromigration exhibits bimodality at very small percentage levels, not readily identifiable with standard testing methods. The activation energy for the down-flow early failure mechanism was determined to be 0.83±0.02 eV. Within the small error bounds of this large-scale statistical experiment, this value is deemed to be significantly lower than the usually reported activation energy of 0.90 eV for electromigration-induced diffusion along Cu/SiCN interfaces. Due to the advantages of the Wheatstone Bridge technique, we were also able to expand the experimental temperature range down to 150°C, coming quite close to typical operating conditions up to 125°C. As a result of the lowered activation energy, we conclude that the down-flow early failure mode may control the chip lifetime at operating conditions. The slit-like character of the early failure void morphology also raises concerns about the validity of the Blech-effect for this mechanism. A very small amount of Cu depletion may cause failure even before a
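
    To illustrate how the reported activation energy enters lifetime extrapolation, the sketch below evaluates only the Arrhenius (thermal) term of a Black-type acceleration factor between a 150°C stress condition and 125°C operation, using Ea = 0.83 eV; the current-density term and the failure-time distribution are omitted, so the result is indicative only.

    # Thermal acceleration factor between stress and use temperatures,
    # using only the Arrhenius term of a Black-type lifetime model with
    # the activation energy reported for the down-flow early mode.
    import math

    K_BOLTZMANN_EV = 8.617e-5   # eV/K

    def thermal_acceleration(ea_ev, t_stress_c, t_use_c):
        t_stress = t_stress_c + 273.15
        t_use = t_use_c + 273.15
        return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

    print(thermal_acceleration(0.83, t_stress_c=150.0, t_use_c=125.0))  # roughly 4x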

  14. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low multipoles (low ℓ). Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ. (c) (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.

  15. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  16. Large-scale wind turbine structures

    NASA Astrophysics Data System (ADS)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbines (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  17. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  18. Gravity and large-scale nonlocal bias

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Scoccimarro, Román; Sheth, Ravi K.

    2012-04-01

    For Gaussian primordial fluctuations the relationship between galaxy and matter overdensities, bias, is most often assumed to be local at the time of observation in the large-scale limit. This hypothesis is, however, unstable under time evolution; we provide proofs under several (increasingly realistic) sets of assumptions. In the simplest toy model galaxies are created locally and linearly biased at a single formation time, and subsequently move with the dark matter (no velocity bias), conserving their comoving number density (no merging). We show that, after this formation time, the bias becomes unavoidably nonlocal and nonlinear at large scales. We identify the nonlocal gravitationally induced fields in which the galaxy overdensity can be expanded, showing that they can be constructed out of the invariants of the deformation tensor (Galileons), the main signature of which is a quadrupole field in second-order perturbation theory. In addition, we show that this result persists if we include an arbitrary evolution of the comoving number density of tracers. We then include velocity bias, and show that new contributions appear; these are related to the breaking of Galilean invariance of the bias relation, a dipole field being the signature at second order. We test these predictions by studying the dependence of halo overdensities in cells of fixed dark matter density: measurements in simulations show that departures from the mean bias relation are strongly correlated with the nonlocal gravitationally induced fields identified by our formalism, suggesting that the halo distribution at the present time is indeed more closely related to the mass distribution at an earlier rather than present time. However, the nonlocality seen in the simulations is not fully captured by assuming local bias in Lagrangian space. The effects of nonlocal bias seen in the simulations are most important for the most biased halos, as expected from our predictions. Accounting for these
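
    A schematic form of the expansion described above, with the quadrupole-generating Galileon term written explicitly, is sketched below in LaTeX. The coefficient names (b_1, b_2, γ_2) and the normalized potential Φ with ∇²Φ = δ are conventional notation assumed here, not symbols defined in this abstract.

```latex
% Schematic second-order galaxy bias expansion with a nonlocal (Galileon) term.
% Assumed conventions: \delta is the matter overdensity, \Phi the normalized
% potential with \nabla^2\Phi = \delta, and b_1, b_2, \gamma_2 are bias parameters.
\begin{equation}
  \delta_g \simeq b_1\,\delta + \frac{b_2}{2}\,\delta^2 + \gamma_2\,\mathcal{G}_2(\Phi),
  \qquad
  \mathcal{G}_2(\Phi) \equiv (\nabla_i\nabla_j\Phi)^2 - (\nabla^2\Phi)^2 .
\end{equation}
```

    At second order in perturbation theory the G_2 term carries the quadrupole signature mentioned above; breaking Galilean invariance through velocity bias adds a dipole-type contribution.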

  19. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed-form solutions are difficult or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
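
    As a toy illustration of the population-generation pattern described above (attributes drawn from distributions, some of them correlated, plus a relationship graph), here is a minimal Python sketch. The attribute names, distributions, and link probability are hypothetical and are not taken from the Hats Simulator.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    height_cm: float
    weight_kg: float
    morale: float
    contacts: set = field(default_factory=set)

def generate_population(n, link_prob=0.01, seed=0):
    """Draw agent attributes from simple distributions and add a sparse
    random relationship graph (Erdos-Renyi style)."""
    rng = random.Random(seed)
    agents = []
    for _ in range(n):
        height = rng.gauss(170, 10)
        # correlated attribute: weight depends on height plus noise
        weight = 0.5 * height - 30 + rng.gauss(0, 5)
        agents.append(Agent(height, weight, morale=rng.random()))
    # relationships: each unordered pair is linked with a small probability
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < link_prob:
                agents[i].contacts.add(j)
                agents[j].contacts.add(i)
    return agents

population = generate_population(500)
```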

  20. Rapid, high-temperature, field test method for evaluation of geothermal calcium carbonate scale inhibitors

    SciTech Connect

    Asperger, R.G.

    1986-09-01

    A new test method is described that allows the rapid field testing of calcium carbonate scale inhibitors at 500°F (260°C). The method evolved from use of a full-flow test loop on a well with a mass flow rate of about 1 x 10^6 lbm/hr (126 kg/s). It is a simple, effective way to evaluate the effectiveness of inhibitors under field conditions. Five commercial formulations were chosen for field evaluation on the basis of nonflowing, laboratory screening tests at 500°F (260°C). Four of these formulations from different suppliers controlled calcium carbonate scale deposition as measured by the test method. Two of these could dislodge recently deposited scale that had not age-hardened. Performance-profile diagrams, which were measured for these four effective inhibitors, show the concentration interrelationship between brine calcium and inhibitor concentrations at which the formulations will and will not stop scale formation in the test apparatus. With these diagrams, one formulation was chosen for testing on the full-flow brine line. The composition was tested for 6 weeks and showed a dramatic decrease in the scaling occurring at the flow-control valve. This scaling was about to force a shutdown of a major, long-term flow test being done for reservoir economic evaluations. The inhibitor stopped the scaling, and the test was performed without interruption.

  1. BENCH-SCALE EVALUATION OF CALCIUM SORBENTS FOR ACID GAS EMISSION CONTROL

    EPA Science Inventory

    Calcium sorbents for acid gas emission control were evaluated for effectiveness in removing SO2/HCl and SO2/NO from simulated incinerator and boiler flue gases. All tests were conducted in a bench-scale reactor (fixed-bed) simulating fabric filter conditions in an acid gas remova...

  2. Large Scale Computer Simulation of Erythrocyte Membranes

    NASA Astrophysics Data System (ADS)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environments of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi-two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model of self-assembled lipid membranes with implicit solvent and soft-core potentials, recently developed by us, we simulated large-scale red-blood-cell bilayers with dimensions of ~10^-1 μm^2, with an explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  3. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  4. Curvature constraints from large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
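
    For readers unfamiliar with the forecasting machinery, the sketch below shows a generic Fisher-matrix forecast in Python for a toy angular power spectrum. The model, covariance, and parameter names are placeholders for illustration, not the CLASS-based relativistic number-count spectra analyzed in the paper.

```python
import numpy as np

def model_spectrum(theta, ell):
    """Toy angular power spectrum; stands in for a Boltzmann-code C_ell(theta)."""
    amplitude, tilt = theta
    return amplitude * (ell / 100.0) ** tilt

def fisher_matrix(theta, ell, cov, step=1e-4):
    """F_ij = dC/dtheta_i . Cov^-1 . dC/dtheta_j, with finite-difference derivatives."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = step
        derivs.append((model_spectrum(theta + dp, ell) -
                       model_spectrum(theta - dp, ell)) / (2 * step))
    d = np.array(derivs)
    return d @ np.linalg.inv(cov) @ d.T

ell = np.arange(2, 500)
fiducial = (1.0, 0.96)
# Gaussian, full-sky cosmic-variance covariance (illustrative assumption)
cov = np.diag(2.0 / (2 * ell + 1) * model_spectrum(fiducial, ell) ** 2)
F = fisher_matrix(fiducial, ell, cov)
sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts
```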

  5. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  6. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  7. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  8. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  9. Scaling-law equilibria for calcium in canopy-type models of the solar chromosphere

    NASA Technical Reports Server (NTRS)

    Jones, H. P.

    1982-01-01

    Scaling laws for resonance line formation are used to obtain approximate excitation and ionization equilibria for a three-level model of singly ionized calcium. The method has been developed for and is applied to the study of magnetograph response in the 8542 A infrared triplet line to magnetostatic canopies which schematically model diffuse, nearly horizontal fields in the low solar chromosphere. For this application, the method is shown to be efficient and semi-quantitative, and the results indicate the type and range of effects on calcium-line radiation which result from reduced gas pressure inside the magnetic regions.

  10. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first
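
    The contrast between direct and adjoint sensitivities summarized above can be seen on a tiny steady-state problem. The sketch below is a generic numpy illustration under assumed notation (A(p)u = f with objective J = g·u); it is not code from the report.

```python
import numpy as np

# Steady-state model A(p) u = f with objective J(u) = g . u.
# Direct/finite-difference sensitivities need one extra solve per parameter;
# the adjoint approach needs a single extra solve regardless of parameter count.

def A(p):
    # toy parameterized operator (hypothetical stand-in for a discretized PDE)
    return np.array([[2.0 + p[0], -1.0],
                     [-1.0, 2.0 + p[1]]])

f = np.array([1.0, 0.0])
g = np.array([0.0, 1.0])
p = np.array([0.1, 0.3])

u = np.linalg.solve(A(p), f)        # forward solve
lam = np.linalg.solve(A(p).T, g)    # single adjoint solve

# dA/dp_k for this toy operator
dA = [np.array([[1.0, 0.0], [0.0, 0.0]]),
      np.array([[0.0, 0.0], [0.0, 1.0]])]

# adjoint sensitivity: dJ/dp_k = -lam^T (dA/dp_k) u  (f independent of p here)
grad_adjoint = np.array([-lam @ (dAk @ u) for dAk in dA])

# finite-difference check of the same gradient
eps = 1e-6
grad_fd = np.array([
    (g @ np.linalg.solve(A(p + eps * np.eye(2)[k]), f) - g @ u) / eps
    for k in range(2)])
```

    The adjoint route needs one extra linear solve regardless of the number of parameters, whereas the direct or finite-difference route needs one per parameter, which is the usual motivation for adjoints in large-scale optimization.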

  11. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetical and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian fem model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented Additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, resulting in a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom for up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented Algebraic Multigrid type methods (AMG) from the ML library [Sala, 2006]. Since multigrid methods are most effective for single parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
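
    For orientation, the following minimal SciPy sketch shows the general pattern of Krylov (GMRES) iteration with an incomplete-LU preconditioner on a sparse test operator. The 2-D Laplacian and the fill parameters are illustrative stand-ins, not the authors' finite-element Stokes system or solver settings.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D Laplacian built as a Kronecker sum of 1-D stencils; this is only a
# sparse test operator standing in for the assembled mantle-convection system.
n = 100
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU preconditioner; fill_factor controls the allowed fill-in.
ilu = spla.spilu(A, fill_factor=10, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

# Preconditioned GMRES (Krylov) solve.
x, info = spla.gmres(A, b, M=M, restart=50, maxiter=1000)
print("converged" if info == 0 else f"GMRES stopped with info={info}")
```

    Because ILU acts locally, its effectiveness typically degrades as the problem grows, consistent with the behavior reported above; multigrid-type preconditioners are the usual remedy at larger scales.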

  12. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  13. International space station. Large scale integration approach

    NASA Astrophysics Data System (ADS)

    Cohen, Brad

    The International Space Station is the most complex large scale integration program in development today. The approach developed for specification, subsystem development, and verification lays a firm basis on which future programs of this nature can be based. The International Space Station is composed of many critical items, hardware and software, built by numerous International Partners, NASA Institutions, and U.S. Contractors and is launched over a period of five years. Each launch creates a unique configuration that must be safe, survivable, operable, and support ongoing assembly (assemblable) to arrive at the assembly complete configuration in 2003. The approach to integrating each of the modules into a viable spacecraft and continuing the assembly is a challenge in itself. Added to this challenge are the severe schedule constraints and lack of an "Iron Bird", which prevents assembly and checkout of each on-orbit configuration prior to launch. This paper will focus on the following areas: 1) Specification development process, explaining how the requirements and specifications were derived using a modular concept driven by launch vehicle capability. Each module is composed of components of subsystems versus completed subsystems. 2) Approach to stage (each stage consists of the launched module added to the current on-orbit spacecraft) specifications. Specifically, how each launched module and stage ensures support of the current and future elements of the assembly. 3) Verification approach, which, due to the schedule constraints, is primarily analysis supported by testing. Specifically, how the interfaces are ensured to mate and function on-orbit when they cannot be mated before launch. 4) Lessons learned. Where can we improve this complex system design and integration task?

  14. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases, mainly because of the larger amount of heat generation and, to a much smaller extent, because of the increase in the number of moles of gas. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to the surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means of chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  15. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N^2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N^2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N^2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N^2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N^2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
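
    As a concrete example of the tree-based acceleration that multitree methods generalize, the sketch below uses SciPy's cKDTree for the AllNN and pair-counting subroutines on random mock catalogs. This single-tree/dual-tree usage is a simplified stand-in, not the multitree algorithms of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

# Mock 3-D catalogs standing in for the "query" and "reference" datasets.
rng = np.random.default_rng(0)
query = rng.uniform(0.0, 100.0, size=(100_000, 3))
reference = rng.uniform(0.0, 100.0, size=(100_000, 3))

tree = cKDTree(reference)            # O(N log N) build
dists, idx = tree.query(query, k=5)  # AllNN: k nearest neighbours per query point

# A 2-point-style statistic: number of cross pairs within separation r,
# computed with the dual-tree pair counter.
pairs_within_r = cKDTree(query).count_neighbors(tree, r=1.0)
```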

  16. Nonsurgical management of large periapical lesion in mature and immature teeth using different calcium hydroxide formulations: case series.

    PubMed

    Kumar, G Vinay; Hegde, Reshma S; Moogi, Prashant P; Prashant, B R; Patil, Basanagouda

    2013-01-01

    This case series evaluates the effectiveness of different calcium hydroxide formulations with various vehicles in management of large periapical lesion in mature and immature teeth. This will help clinicians to make informed judgments about which formulations of calcium hydroxide should be used for specific endodontic procedures. PMID:24858773

  17. Validating Large Scale Networks Using Temporary Local Scale Networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  18. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  19. Dynamic and static calcium gradients inside large snail (Helix aspersa) neurones detected with calcium-sensitive microelectrodes

    PubMed Central

    Thomas, Roger C.; Postma, Marten

    2007-01-01

    We have used quartz Ca2+-sensitive microelectrodes (CASMs) in large voltage-clamped snail neurones to investigate the inward spread of Ca2+ after a brief depolarisation. Both steady state and [Ca2+]i transients changed with depth of penetration. When the CASM tip was within 20 μm of the far side of the cell the [Ca2+]i transient time to peak was 4.4 ± 0.5 s, rising to 14.7 ± 0.7 s at a distance of 80 μm. We estimate that the Ca2+ transients travelled centripetally at an average speed of 6 μm s−1 and decreased in size by half over a distance of about 45 μm. Cyclopiazonic acid had little effect on the size and time to peak of Ca2+ transients but slowed their recovery significantly. This suggests that the endoplasmic reticulum curtails rather than reinforces the transients. Injecting the calcium buffer BAPTA made the Ca2+ transients more uniform in size and increased their times to peak and rates of recovery near the membrane. We have developed a computational model for the transients, which includes diffusion, uptake and Ca2+ extrusion. Good fits were obtained with a rather large apparent diffusion coefficient of about 90 ± 20 μm2 s−1. This may assist fast recovery by extrusion. PMID:16962659

  20. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale k_m for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale k_m, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  1. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  2. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  3. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  4. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  5. Calcium Causes Multimerization of the Large Adhesin LapF and Modulates Biofilm Formation by Pseudomonas putida

    PubMed Central

    Martínez-Gil, Marta; Romero, Diego; Kolter, Roberto

    2012-01-01

    LapF is a large secreted protein involved in microcolony formation and biofilm maturation in Pseudomonas putida. Its C-terminal domain shows the characteristics of proteins secreted through a type I secretion system and includes a predicted calcium binding motif. We provide experimental evidence of specific binding of Ca2+ to the purified C-terminal domain of LapF (CLapF). Calcium promotes the formation of large aggregates, which disappear in the presence of the calcium chelator EGTA. Immunolocalization of LapF also shows the tendency of this protein to accumulate in vivo in certain extracellular regions. These findings, along with results showing that calcium influences biofilm formation, lead us to propose a model in which P. putida cells interact with each other via LapF in a calcium-dependent manner during the development of biofilms. PMID:23042991

  6. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  7. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  8. Light propagation and large-scale inhomogeneities

    SciTech Connect

    Brouzakis, Nikolaos; Tetradis, Nikolaos; Tzavara, Eleftheria E-mail: ntetrad@phys.uoa.gr

    2008-04-15

    We consider the effect on the propagation of light of inhomogeneities with sizes of order 10 Mpc or larger. The Universe is approximated through a variation of the Swiss-cheese model. The spherical inhomogeneities are void-like, with central underdensities surrounded by compensating overdense shells. We study the propagation of light in this background, assuming that the source and the observer occupy random positions, so that each beam travels through several inhomogeneities at random angles. The distribution of luminosity distances for sources with the same redshift is asymmetric, with a peak at a value larger than the average one. The width of the distribution and the location of the maximum increase with increasing redshift and length scale of the inhomogeneities. We compute the induced dispersion and bias of cosmological parameters derived from the supernova data. They are too small to explain the perceived acceleration without dark energy, even when the length scale of the inhomogeneities is comparable to the horizon distance. Moreover, the dispersion and bias induced by gravitational lensing at the scales of galaxies or clusters of galaxies are larger by at least an order of magnitude.

  9. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
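
    A minimal modern analogue of the Lanczos-based computation described above can be run with SciPy's sparse SVD routine. The random sparse matrix below merely stands in for the term-document or seismic Jacobian matrices mentioned in the abstract.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

# Random sparse matrix standing in for a term-document or Jacobian matrix.
A = sp.random(5000, 2000, density=1e-3, format="csr", random_state=0)

# k largest singular triplets via an implicitly restarted Lanczos-type method.
k = 20
u, s, vt = svds(A, k=k)
order = np.argsort(s)[::-1]          # svds returns singular values in ascending order
u, s, vt = u[:, order], s[order], vt[order]

# Action of the rank-k pseudo-inverse, as used in tomography-style inverse problems.
def pinv_apply(b):
    return vt.T @ ((u.T @ b) / s)

projected = A @ pinv_apply(np.ones(A.shape[0]))  # ~ projection onto the top-k subspace
```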

  10. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of M- and X-class strength as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  11. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo.

    PubMed

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  12. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  13. Large-Conductance Calcium-Activated Potassium Channels in Glomerulus: From Cell Signal Integration to Disease

    PubMed Central

    Tao, Jie; Lan, Zhen; Wang, Yunman; Hei, Hongya; Tian, Lulu; Pan, Wanma; Zhang, Xuemei; Peng, Wen

    2016-01-01

    Large-conductance calcium-activated potassium (BK) channels are currently considered as vital players in a variety of renal physiological processes. In podocytes, BK channels become active in response to stimuli that increase local cytosolic Ca2+, possibly secondary to activation of slit diaphragm TRPC6 channels by chemical or mechanical stimuli. Insulin increases filtration barrier permeability through mobilization of BK channels. In mesangial cells, BK channels co-expressed with β1 subunits act as a major component of the counteractive response to contraction in order to regulate glomerular filtration. This review aims to highlight recent discoveries on the localization, physiological and pathological roles of BK channels in glomerulus.

  14. Large-Conductance Calcium-Activated Potassium Channels in Glomerulus: From Cell Signal Integration to Disease.

    PubMed

    Tao, Jie; Lan, Zhen; Wang, Yunman; Hei, Hongya; Tian, Lulu; Pan, Wanma; Zhang, Xuemei; Peng, Wen

    2016-01-01

    Large-conductance calcium-activated potassium (BK) channels are currently considered as vital players in a variety of renal physiological processes. In podocytes, BK channels become active in response to stimuli that increase local cytosolic Ca(2+), possibly secondary to activation of slit diaphragm TRPC6 channels by chemical or mechanical stimuli. Insulin increases filtration barrier permeability through mobilization of BK channels. In mesangial cells, BK channels co-expressed with β1 subunits act as a major component of the counteractive response to contraction in order to regulate glomerular filtration. This review aims to highlight recent discoveries on the localization, physiological and pathological roles of BK channels in glomerulus. PMID:27445840

  15. Probes of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Suto, Yasushi; Gorski, Krzysztof; Juszkiewicz, Roman; Silk, Joseph

    1988-01-01

    A general formalism is developed which shows that the gravitational instability theory for the origin of the large-scale structure of the universe is now capable of critically confronting observational results on cosmic background radiation angular anisotropies, large-scale bulk motions, and large-scale clumpiness in the galaxy counts. The results indicate that presently advocated cosmological models will have considerable difficulty in simultaneously explaining the observational results.

  16. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  17. Large-Scale Organizational Performance Improvement.

    ERIC Educational Resources Information Center

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  18. Simulation of Large-Scale HPC Architectures

    SciTech Connect

    Jones, Ian S; Engelmann, Christian

    2011-01-01

    The Extreme-scale Simulator (xSim) is a recently developed performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads. It allows observing parallel application performance properties in a simulated extreme-scale HPC system to further assist in HPC hardware and application software co-design on the road toward multi-petascale and exascale computing. This paper presents a newly implemented network model for the xSim performance investigation toolkit that is capable of providing simulation support for a variety of HPC network architectures with the appropriate trade-off between simulation scalability and accuracy. The approach taken focuses on a scalable distributed solution with latency and bandwidth restrictions for the simulated network. Different network architectures, such as star, ring, mesh, torus, twisted torus and tree, as well as hierarchical combinations, such as to simulate network-on-chip and network-on-node, are supported. Network traffic congestion modeling is omitted to gain simulation scalability by reducing simulation accuracy.
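
    To illustrate the kind of latency/bandwidth abstraction described above, here is a minimal Python sketch of a per-message cost on a torus topology. The hop-based cost model and the parameter values are assumptions for illustration, not the actual xSim network model.

```python
def torus_hops(src, dst, dims):
    """Minimal hop count between two nodes on a k-dimensional torus with wraparound."""
    return sum(min(abs(a - b), d - abs(a - b)) for a, b, d in zip(src, dst, dims))

def message_time(src, dst, dims, size_bytes, latency_s=1e-6, bandwidth_Bps=10e9):
    """Simple latency/bandwidth cost: each hop adds the link latency, and the
    payload is serialised once at the link bandwidth (no congestion modeled)."""
    return torus_hops(src, dst, dims) * latency_s + size_bytes / bandwidth_Bps

# 3-D torus of 16x16x16 nodes, 1 MiB message between two distant nodes
t = message_time((0, 0, 0), (8, 8, 8), (16, 16, 16), size_bytes=1 << 20)
```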

  19. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful to quickly produce a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776
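
    For orientation, the sketch below implements the standard pairwise squared-hinge objective that linear rankSVM methods optimize, using plain gradient descent on a tiny synthetic dataset. It is a pedagogical stand-in, not the efficient algorithm or the public tool developed by the authors.

```python
import numpy as np

def rank_svm(X, pairs, C=1.0, lr=0.01, epochs=200):
    """Minimise 0.5*||w||^2 + C * sum_{(i,j) in P} max(0, 1 - w.(x_i - x_j))^2,
    where (i, j) in P means item i should rank above item j."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        diffs = X[pairs[:, 0]] - X[pairs[:, 1]]   # x_i - x_j for every preference pair
        margins = 1.0 - diffs @ w
        active = margins > 0                      # pairs that violate the margin
        grad = w - 2.0 * C * diffs[active].T @ margins[active]
        w -= lr * grad
    return w

# Tiny synthetic example: relevance from the first feature, pairs = (higher, lower).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = (X[:, 0] > 0).astype(int)
pairs = np.array([(i, j) for i in range(40) for j in range(40) if y[i] > y[j]])
w = rank_svm(X, pairs)
```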

  20. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

    In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges made available by the WebBase project at Stanford[CITE]. We report our experimental findings on the topological properties of such graphs, such as the number of bipartite cores and the distribution of degree, PageRank values and strongly connected components.
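
    Since the abstract reports distributions of PageRank values, a minimal power-iteration PageRank in Python is sketched below as a reference for how such values are computed. The damping factor, convergence settings, and toy graph are illustrative assumptions, not details of the WebBase analysis.

```python
import numpy as np
import scipy.sparse as sp

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank on a sparse adjacency matrix (row = source page)."""
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    dangling = out_deg == 0
    # row-stochastic transition matrix; dangling pages handled separately
    inv_deg = np.divide(1.0, out_deg, out=np.zeros_like(out_deg, dtype=float),
                        where=~dangling)
    P = sp.diags(inv_deg) @ adj
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * (P.T @ r + r[dangling].sum() / n) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# toy 4-page web graph
adj = sp.csr_matrix(np.array([[0, 1, 1, 0],
                              [0, 0, 1, 0],
                              [1, 0, 0, 1],
                              [0, 0, 1, 0]], dtype=float))
scores = pagerank(adj)
```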

  1. Infrasonic observations of large scale HE events

    SciTech Connect

    Whitaker, R.W.; Mutschlecner, J.P.; Davidson, M.B.; Noel, S.D.

    1990-01-01

    The Los Alamos Infrasound Program has been operating since about mid-1982, making routine measurements of low frequency atmospheric acoustic propagation. Generally, we work between 0.1 Hz to 10 Hz; however, much of our work is concerned with the narrower range of 0.5 to 5.0 Hz. Two permanent stations, St. George, UT, and Los Alamos, NM, have been operational since 1983, collecting data 24 hours a day. This discussion will concentrate on measurements of large, high explosive (HE) events at ranges of 250 km to 5330 km. Because the equipment is well suited for mobile deployments, it can easily establish temporary observing sites for special events. The measurements in this report are from our permanent sites, as well as from various temporary sites. In this short report will not give detailed data from all sites for all events, but rather will present a few observations that are typical of the full data set. The Defense Nuclear Agency sponsors these large explosive tests as part of their program to study airblast effects. A wide variety of experiments are fielded near the explosive by numerous Department of Defense (DOD) services and agencies. This measurement program is independent of this work; use is made of these tests as energetic known sources, which can be measured at large distances. Ammonium nitrate and fuel oil (ANFO) is the specific explosive used by DNA in these tests. 6 refs., 6 figs.

  2. Large scale surface heat fluxes. [through oceans

    NASA Technical Reports Server (NTRS)

    Sarachik, E. S.

    1984-01-01

    The heat flux through the ocean surface, Q, is the sum of the net radiation at the surface, the latent heat flux into the atmosphere, and the sensible heat flux into the atmosphere (all fluxes positive upwards). A review is presented of the geographical distribution of Q and its constituents, and the current accuracy of measuring Q by ground based measurements (both directly and by 'bulk formulae') is assessed. The relation of Q to changes of oceanic heat content, heat flux, and SST is examined and for each of these processes, the accuracy needed for Q is discussed. The needed accuracy for Q varies from process to process, varies geographically, and varies with the time and space scale considered.
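
    The decomposition of Q stated above, together with the 'bulk formulae' it refers to, can be written schematically as follows. The transfer coefficients and symbols are standard bulk-aerodynamic conventions assumed here, not definitions given in the abstract.

```latex
% Surface heat flux decomposition (all fluxes positive upwards) and standard
% bulk-aerodynamic formulae assumed for the turbulent components.
\begin{align}
  Q   &= Q_R + Q_E + Q_H, \\
  Q_E &= \rho_a L_v C_E\, U\, (q_s - q_a), &
  Q_H &= \rho_a c_p C_H\, U\, (T_s - T_a).
\end{align}
```

    Here Q_R is the net surface radiation, U the near-surface wind speed, q_s, q_a and T_s, T_a the specific humidities and temperatures at the sea surface and in the near-surface air, and C_E, C_H empirical transfer coefficients.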

  3. Large-scale motions in a plane wall jet

    NASA Astrophysics Data System (ADS)

    Gnanamanickam, Ebenezer; Jonathan, Latim; Shibani, Bhatt

    2015-11-01

    The dynamic significance of large-scale motions in turbulent boundary layers has been the focus of several recent studies, primarily focusing on canonical flows - zero pressure gradient boundary layers, flows within pipes and channels. This work presents an investigation into the large-scale motions in a boundary layer that is used as the prototypical flow field for flows with large-scale mixing and reactions, the plane wall jet. An experimental investigation is carried out in a plane wall jet facility designed to operate at friction Reynolds numbers Reτ > 1000, which allows for the development of a significant logarithmic region. The streamwise turbulent intensity across the boundary layer is decomposed into small-scale (less than one integral length-scale δ) and large-scale components. The small-scale energy has a peak in the near-wall region associated with the near-wall turbulent cycle, as in canonical boundary layers. However, the large-scale eddies are the dominant eddies, having significantly higher energy than the small scales across almost the entire boundary layer, even at the low to moderate Reynolds numbers under consideration. The large scales also appear to amplitude- and frequency-modulate the smaller scales across the entire boundary layer.

  4. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  5. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
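
    The following Python sketch is not NeuroCa itself (which is MATLAB-based), but illustrates one step such a pipeline automates: detecting candidate calcium spikes in a single-neuron dF/F trace by thresholding against a robust noise estimate.

```python
# A minimal sketch (not NeuroCa's actual algorithm) of calcium spike detection:
# threshold a dF/F trace at a multiple of a robust (MAD-based) noise estimate.
import numpy as np

def detect_calcium_spikes(dff, frame_rate_hz, threshold_sigmas=3.0, min_separation_s=0.5):
    """Return frame indices where dF/F crosses an upward threshold.

    dff              : 1-D array of dF/F values for one neuron
    threshold_sigmas : threshold as a multiple of the MAD-based noise sigma
    min_separation_s : merge crossings closer than this into one event
    """
    dff = np.asarray(dff, dtype=float)
    noise_sigma = 1.4826 * np.median(np.abs(dff - np.median(dff)))  # MAD -> sigma
    threshold = np.median(dff) + threshold_sigmas * noise_sigma
    above = dff > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # upward crossings
    min_gap = int(min_separation_s * frame_rate_hz)
    events, last = [], -min_gap - 1
    for idx in onsets:
        if idx - last > min_gap:
            events.append(idx)
            last = idx
    return np.array(events, dtype=int)

# Usage on a synthetic trace: baseline noise plus two exponential transients.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.02, 600)
trace[100:130] += np.exp(-np.arange(30) / 10.0) * 0.5
trace[400:430] += np.exp(-np.arange(30) / 10.0) * 0.4
print(detect_calcium_spikes(trace, frame_rate_hz=20.0))  # events near frames 100 and 400 (plus any chance noise crossings)
```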

  6. Large-scale GW software development

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.
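
    The eigendecomposition idea mentioned above can be illustrated with a small, self-contained NumPy sketch; this is not the OpenAtom implementation, only the generic low-rank construction it alludes to, applied to a symmetric matrix close to the identity.

```python
# Illustrative sketch (not the OpenAtom GW code): eigendecompose a symmetric
# dielectric-like matrix and keep only the modes deviating most from the identity,
# giving a low-rank approximation that is cheaper to store and apply.
import numpy as np

def low_rank_dielectric(eps, keep_fraction=0.1):
    """Return (eigenvalues, eigenvectors) of the dominant modes of a symmetric matrix eps."""
    eigvals, eigvecs = np.linalg.eigh(eps)           # full spectrum, ascending
    n_keep = max(1, int(keep_fraction * eps.shape[0]))
    order = np.argsort(np.abs(eigvals - 1.0))[::-1]  # modes deviating most from identity
    sel = order[:n_keep]
    return eigvals[sel], eigvecs[:, sel]

def apply_low_rank(eigvals, eigvecs, vector):
    """Apply eps ~ I + sum_k (lambda_k - 1) v_k v_k^T to a vector."""
    coeffs = eigvecs.T @ vector
    return vector + eigvecs @ ((eigvals - 1.0) * coeffs)

# Usage: a random symmetric "dielectric-like" matrix close to the identity.
rng = np.random.default_rng(1)
n = 200
perturbation = rng.normal(size=(n, n)) / np.sqrt(n)
eps = np.eye(n) + 0.1 * (perturbation + perturbation.T) / 2.0
vals, vecs = low_rank_dielectric(eps, keep_fraction=0.1)
x = rng.normal(size=n)
print(np.linalg.norm(apply_low_rank(vals, vecs, x) - eps @ x) / np.linalg.norm(eps @ x))
```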

  7. Stochastic pattern transitions in large scale swarms

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research

  8. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed of on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous

  9. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Cowlard, Adam J.; Rouvreau, Sebastien; Toth, Balazs; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  10. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  11. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  12. Large-Scale Structures of Planetary Systems

    NASA Astrophysics Data System (ADS)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  13. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
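
    As a rough illustration of the 2DFTM idea, the following Python sketch computes the magnitude of the 2-D Fourier transform of fixed-length chroma patches; the patch length, normalization and aggregation choices here are assumptions and may differ from those in the thesis.

```python
# A minimal sketch, assuming a chromagram is given as a (12, n_frames) array, of a
# 2DFTM-style feature: the magnitude of the 2-D FFT of fixed-length chroma patches.
# The magnitude discards phase, making the feature insensitive to key transposition
# and time offset; details here are illustrative only.
import numpy as np

def two_d_ftm(chromagram, patch_len=75):
    """Median 2-D FFT magnitude over consecutive chroma patches."""
    chroma = np.asarray(chromagram, dtype=float)
    n_patches = chroma.shape[1] // patch_len
    feats = []
    for i in range(n_patches):
        patch = chroma[:, i * patch_len:(i + 1) * patch_len]
        feats.append(np.abs(np.fft.fft2(patch)))  # magnitude only
    return np.median(np.stack(feats), axis=0).ravel()

# Usage on a synthetic chromagram: 12 pitch classes x 600 frames.
rng = np.random.default_rng(2)
chroma = rng.random((12, 600))
feature = two_d_ftm(chroma)
print(feature.shape)  # (12 * 75,) = (900,)
```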

  14. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    EPA Science Inventory

    The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  15. Using Large-Scale Assessment Scores to Determine Student Grades

    ERIC Educational Resources Information Center

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  16. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  17. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  18. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper depicts a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: an arbitrary number of sensors, support for multi-component gas mixtures and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear method (spline based). The data-generation toolbox is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
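
    A minimal Python sketch of the same idea (the actual toolbox is written in R and its model details differ) generates synthetic responses from per-sensor sensitivities plus additive drift and Gaussian noise; all parameter values below are illustrative.

```python
# A small sketch (not the R toolbox's actual model) of a virtual gas sensor array:
# each sensor gets a linear sensitivity to each analyte, plus slow additive drift
# and Gaussian noise. All parameter values are illustrative.
import numpy as np

def simulate_sensor_array(concentrations, n_sensors=17, drift_per_sample=1e-3,
                          noise_sigma=0.05, seed=0):
    """concentrations: (n_samples, n_analytes) array; returns (n_samples, n_sensors) responses."""
    rng = np.random.default_rng(seed)
    n_samples, n_analytes = concentrations.shape
    sensitivity = rng.uniform(0.2, 1.0, size=(n_analytes, n_sensors))  # sensor diversity
    baseline = rng.uniform(0.0, 0.5, size=n_sensors)
    drift = drift_per_sample * np.arange(n_samples)[:, None] * rng.uniform(0.5, 1.5, n_sensors)
    noise = rng.normal(0.0, noise_sigma, size=(n_samples, n_sensors))
    return baseline + concentrations @ sensitivity + drift + noise

# Usage: 1000 samples of a 3-analyte mixture measured by 17 virtual sensors.
rng = np.random.default_rng(1)
conc = rng.uniform(0.0, 1.0, size=(1000, 3))
responses = simulate_sensor_array(conc)
print(responses.shape)  # (1000, 17)
```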

  19. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems are used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  20. Software for large scale tracking studies

    SciTech Connect

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references.

  1. Predictive Mechanical Characterization of Macro-Molecular Material Chemistry Structures of Cement Paste at Nano Scale - Two-phase Macro-Molecular Structures of Calcium Silicate Hydrate, Tri-Calcium Silicate, Di-Calcium Silicate and Calcium Hydroxide

    NASA Astrophysics Data System (ADS)

    Padilla Espinosa, Ingrid Marcela

    Concrete is a hierarchical composite material with a random structure over a wide range of length scales. At the submicron length scale the main component of concrete is cement paste, formed by the reaction of Portland cement clinkers and water. Cement paste acts as a binding matrix for the other components and is responsible for the strength of concrete. The cement paste microstructure contains voids and hydrated and unhydrated cement phases. The main crystalline phases of unhydrated cement are tri-calcium silicate (C3S) and di-calcium silicate (C2S), and of hydrated cement are calcium silicate hydrate (CSH) and calcium hydroxide (CH). Although efforts have been made to comprehend the chemical and physical nature of cement paste, studies at the molecular level have primarily been focused on individual components. The present research focuses on developing a method to model and analyze, at the molecular level, the two-phase combination of hydrated and unhydrated phases of cement paste as macromolecular systems. Computational molecular modeling could help in understanding the influence of the phase interactions on the material properties and mechanical performance of cement paste. The present work also strives to create a framework for molecular-level models suitable for better comparison with small-length-scale experimental methods, in which the sample sizes involve mixtures of different hydrated and unhydrated crystalline phases of cement paste. Two approaches based on two-phase cement paste macromolecular structures are investigated: one involving admixed molecular phases, and the second involving a cluster of two molecular phases. The mechanical properties of two-phase macromolecular systems of cement paste consisting of the key hydrated phase CSH and the unhydrated phases C3S or C2S, as well as CSH with the second hydrated phase CH, were calculated. It was found that these cement paste two-phase macromolecular systems predicted an isotropic material behavior. Also

  2. Links between small-scale dynamics and large-scale averages and its implication to large-scale hydrology

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2012-04-01

    Changes to the hydrological cycle under a changing climate challenge our understanding of the interaction between hydrology and climate at various spatial and temporal scales. Traditional understanding of the climate-hydrology interaction was developed under a stationary climate and may not adequately summarize the interactions in a transient state when the climate is changing; for instance, opposite long-term temporal trends of precipitation and discharge have been observed in parts of the world, as a result of significant warming and the nonlinear nature of the climate and hydrology system. The patterns of internal climate variability, ranging from monthly to multi-centennial time scales, largely determine the past and present climate. The response of these patterns of variability to human-induced climate change will determine much of the regional nature of climate change in the future. Therefore, understanding the basic patterns of variability is of vital importance for climate and hydrological modelers. This work showed that, at the scale of large river basins or sub-continents, the temporal variation of climatic variables ranging from daily to inter-annual could be well represented by multiple sets, each consisting of a limited number of points (when observations are used) or pixels (when gridded datasets are used), covering a small portion of the total domain area. Combined with hydrological response units, which divide the heterogeneity of the land surface into a limited number of categories according to similarity in hydrological behavior, one could describe the climate-hydrology interaction and its changes over a large domain with multiple small subsets of the domain area. Those points (when observations are used), or pixels (when gridded data are used), represent different patterns of the climate-hydrology interaction, and contribute uniquely to an averaged dynamic of the entire domain. Statistical methods were developed to identify the minimum number of points or
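
    The abstract does not specify the statistical method used to select the representative points, so the Python sketch below illustrates one plausible approach: clustering pixel time series with k-means and letting the pixel nearest each cluster centroid stand in for the cluster, with cluster-size weights.

```python
# Illustrative sketch only (the study's actual method is not specified here):
# pick a small set of representative pixels via k-means so that their weighted
# mean approximates the domain-average time series.
import numpy as np
from sklearn.cluster import KMeans

def representative_pixels(series, n_points=10, seed=0):
    """series: (n_pixels, n_time) array. Return representative pixel indices and weights."""
    km = KMeans(n_clusters=n_points, n_init=10, random_state=seed).fit(series)
    reps, weights = [], []
    for k in range(n_points):
        members = np.flatnonzero(km.labels_ == k)
        centroid = km.cluster_centers_[k]
        # the pixel closest to the cluster centroid represents the cluster
        reps.append(members[np.argmin(np.linalg.norm(series[members] - centroid, axis=1))])
        weights.append(len(members) / series.shape[0])
    return np.array(reps), np.array(weights)

# Usage on synthetic data: 5000 pixels, 365 daily values, two regional regimes.
rng = np.random.default_rng(0)
t = np.arange(365)
regimes = (np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365))
series = np.vstack([regimes[i % 2] + rng.normal(0, 0.3, 365) for i in range(5000)])
idx, w = representative_pixels(series, n_points=6)
approx = (w[:, None] * series[idx]).sum(axis=0)
print(np.corrcoef(approx, series.mean(axis=0))[0, 1])  # close to 1
```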

  3. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean

  4. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations. In particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. In addition, the data generated can be used to compute various turbulence quantities such as mean velocities

  5. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for the large-scale landslide distribution is also derived. The equation is validated by applying it to another area. For this new area, the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability explains > 65% of the existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
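
    A hedged Python sketch of the general workflow follows: fit a logistic regression of landslide presence on geomorphological predictors in one area and evaluate the resulting distribution probability in an independent area via the area under the ROC curve. The predictors and the synthetic data are illustrative, not the study's variables.

```python
# A hedged sketch of a logistic-regression landslide distribution model: calibrate in
# one area, validate in another via ROC AUC. Predictor names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_area(n_cells):
    """Synthetic grid cells with slope (deg), local relief (m) and a binary landslide label."""
    slope = rng.uniform(5, 50, n_cells)
    relief = rng.uniform(100, 2000, n_cells)
    logit = -6.0 + 0.08 * slope + 0.002 * relief        # assumed 'true' susceptibility model
    label = rng.random(n_cells) < 1 / (1 + np.exp(-logit))
    return np.column_stack([slope, relief]), label.astype(int)

X_train, y_train = synthetic_area(5000)   # calibration area
X_test, y_test = synthetic_area(3000)     # independent validation area

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
prob = model.predict_proba(X_test)[:, 1]  # landslide distribution probability per cell
print(f"AUC in the validation area: {roc_auc_score(y_test, prob):.3f}")
print(f"share of observed landslides with probability > 0.5: "
      f"{(prob[y_test == 1] > 0.5).mean():.2f}")
```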

  6. Dynamic scaling and large scale effects in turbulence in compressible stratified fluid

    NASA Astrophysics Data System (ADS)

    Pharasi, Hirdesh K.; Bhattacharjee, Jayanta K.

    2016-01-01

    We consider the propagation of sound in a turbulent fluid which is confined between two horizontal parallel plates, maintained at different temperatures. In the homogeneous fluid, Staroselsky et al. had predicted a divergent sound speed at large length scales. Here we find a divergent sound speed and a vanishing expansion coefficient at large length scales. Dispersion relation and the question of scale invariance at large distance scales lead to these results.

  7. Calcium carbonate overdose

    MedlinePlus

    Tums overdose; Calcium overdose ... Calcium carbonate can be dangerous in large amounts. ... Some products that contain calcium carbonate are certain: ... and mineral supplements Other products may also contain calcium ...

  8. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  9. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  10. Serum Calcium Increase Correlates With Worsening of Lipid Profile: An Observational Study on a Large Cohort From South Italy.

    PubMed

    Gallo, Luigia; Faniello, Maria C; Canino, Giovanni; Tripolino, Cesare; Gnasso, Agostino; Cuda, Giovanni; Costanzo, Francesco S; Irace, Concetta

    2016-02-01

    Despite the well-documented role of calcium in cell metabolism, its role in the development of cardiovascular disease is still under heavy debate. Several studies suggest that calcium supplementation might be associated with an increased risk of coronary heart disease, whereas others underline a significant effect on lowering high blood pressure and hyperlipidemia. The purpose of this study was to investigate, in a large non-selected cohort from South Italy, whether serum calcium levels correlate with lipid values and can therefore be linked to higher individual cardiovascular risk. Eight thousand six hundred ten outpatients referred to the Laboratory of Clinical Biochemistry, University of Magna Græcia, Catanzaro, Italy from January 2012 to December 2013 for routine blood tests were enrolled in the study. Total, HDL-, LDL-, and non-HDL cholesterol, triglycerides, and calcium were determined with standard methods. We observed a significant association between total cholesterol, LDL-cholesterol, HDL-cholesterol, non-HDL cholesterol, triglycerides, and serum calcium in men and postmenopausal women. Interestingly, in premenopausal women, we only found a direct correlation between serum calcium, total cholesterol, and HDL-cholesterol. Calcium increased significantly with increasing total cholesterol and triglycerides in men and postmenopausal women. Our results confirm that a progressive increase of serum calcium level correlates with worsening of the lipid profile in our study population. Therefore, we suggest that greater caution should be used in calcium supplement prescription, particularly in men and in women undergoing menopause, in whom an increase in serum lipids is already known to be associated with a higher cardiovascular risk. PMID:26937904
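
    The kind of analysis described can be sketched in Python on synthetic data: Pearson correlations between serum calcium and each lipid variable, computed separately per group. Column names and values below are invented for illustration.

```python
# Sketch of a per-group correlation analysis on synthetic data (not the study's data):
# Pearson correlation of serum calcium with each lipid variable, by group.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "group": rng.choice(["men", "pre_menopause", "post_menopause"], size=n),
    "calcium_mg_dl": rng.normal(9.5, 0.4, n),
})
# Synthetic lipids weakly coupled to calcium so the example shows a positive correlation.
df["total_chol_mg_dl"] = 180 + 25 * (df["calcium_mg_dl"] - 9.5) + rng.normal(0, 30, n)
df["triglycerides_mg_dl"] = 130 + 20 * (df["calcium_mg_dl"] - 9.5) + rng.normal(0, 50, n)

for group, sub in df.groupby("group"):
    for lipid in ["total_chol_mg_dl", "triglycerides_mg_dl"]:
        r, p = pearsonr(sub["calcium_mg_dl"], sub[lipid])
        print(f"{group:15s} calcium vs {lipid:22s} r={r:+.2f}  p={p:.1e}")
```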

  11. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233, G. Aburjania.

  12. Large-scale convective instability in an electroconducting medium with small-scale helicity

    SciTech Connect

    Kopp, M. I.; Tur, A. V.; Yanovsky, V. V.

    2015-04-15

    A large-scale instability occurring in a stratified conducting medium with small-scale helicity of the velocity field and magnetic fields is detected using an asymptotic many-scale method. Such a helicity is sustained by small external sources for small Reynolds numbers. Two regimes of instability with zero and nonzero frequencies are detected. The criteria for the occurrence of large-scale instability in such a medium are formulated.

  13. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number, ReM. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the ReM-dependent quenching of B̄/B_rms. These results are obtained for 1024^3 DNS with magnetic Prandtl numbers of PrM = 0.1 and 10. For PrM = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For PrM = 10, B̄/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  14. Large scale suppression of scalar power on a spatial condensation

    NASA Astrophysics Data System (ADS)

    Kouwn, Seyen; Kwon, O.-Kab; Oh, Phillial

    2015-03-01

    We consider a deformed single-field inflation model in terms of three SO(3) symmetric moduli fields. We find that spatially linear solutions for the moduli fields induce a phase transition during the early stage of the inflation and the suppression of scalar power spectrum at large scales. This suppression can be an origin of anomalies for large-scale perturbation modes in the cosmological observation.

  15. Interpretation of large-scale deviations from the Hubble flow

    NASA Astrophysics Data System (ADS)

    Grinstein, B.; Politzer, H. David; Rey, S.-J.; Wise, Mark B.

    1987-03-01

    The theoretical expectation for large-scale streaming velocities relative to the Hubble flow is expressed in terms of statistical correlation functions. Only for objects that trace the mass would these velocities have a simple cosmological interpretation. If some biasing affects the objects' formation, then nonlinear gravitational evolution is essential to predicting the expected large-scale velocities, which also depend on the nature of the biasing.

  16. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  17. Large-scale V/STOL testing. [in wind tunnels

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1977-01-01

    Several facets of large-scale testing of V/STOL aircraft configurations are discussed, with particular emphasis on test experience in the Ames 40- by 80-foot wind tunnel. Examples of powered-lift test programs are presented in order to illustrate tradeoffs confronting the planner of V/STOL test programs. It is indicated that large-scale V/STOL wind-tunnel testing can sometimes compete with small-scale testing in the effort required (overall test time) and program costs because a number of different tests can be conducted with a single large-scale model where several small-scale models would be required. The benefits of high or full-scale Reynolds numbers, more detailed configuration simulation, and a greater number and variety of onboard measurements increase rapidly with scale. Planning must be more detailed at large scale in order to balance the increased costs, as the number of measurements and model configuration variables grows, against the benefit of the larger amount of information coming out of one test.

  18. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  19. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
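
    The scaling question above is often framed through the classical regression-rate correlation r_dot = a*G_ox^n. The Python sketch below adds an illustrative port-diameter factor D^m with m < 0 to mimic the reported trend of lower burn rate at larger port size; the exponent values are placeholders, not data from this chapter.

```python
# A hedged sketch of the classical hybrid regression-rate correlation r_dot = a * G_ox**n,
# extended with an illustrative empirical port-diameter factor D**m (m < 0) to mimic the
# trend of lower burn rate at larger port size for the same oxidizer flux.
# The coefficient and exponent values below are placeholders, not data from the chapter.

def regression_rate(g_ox, port_diameter, a=0.1, n=0.6, m=-0.2):
    """Fuel regression rate (arbitrary units) vs oxidizer mass flux G_ox and port diameter D."""
    return a * g_ox**n * port_diameter**m

# Same oxidizer flux, two port sizes: the larger port burns slower under this fit.
for d in (0.05, 0.5):  # metres: small lab motor port vs large prototype port
    print(f"D = {d:4.2f} m -> r_dot = {regression_rate(g_ox=100.0, port_diameter=d):.3f}")
```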

  20. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  1. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects. PMID:26551120
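
    In standard mean-field notation (conventions and coefficient names vary across the literature), this kind of effect enters through an off-diagonal component of the turbulent resistivity tensor, sketched below; the equations are the generic mean-field framework, not the authors' specific derivation.

```latex
% Generic mean-field induction equation and turbulent EMF closure (conventions vary);
% a shear-current-type effect appears as an off-diagonal resistivity coefficient.
\begin{align}
  \partial_t \overline{\boldsymbol{B}}
    &= \nabla\times\left(\overline{\boldsymbol{U}}\times\overline{\boldsymbol{B}}
       + \boldsymbol{\mathcal{E}}\right) + \eta\,\nabla^2\overline{\boldsymbol{B}},\\
  \mathcal{E}_i &= \alpha_{ij}\,\overline{B}_j
       - \eta_{ij}\,\bigl(\nabla\times\overline{\boldsymbol{B}}\bigr)_j .
\end{align}
% With a mean shear flow \overline{\boldsymbol{U}} = S x\,\hat{\boldsymbol{y}} and
% \alpha_{ij} = 0 (no net helicity), a suitable sign of the off-diagonal component
% \eta_{yx} couples \overline{B}_x and \overline{B}_y and permits exponential growth
% of the large-scale field.
```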

  2. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGESBeta

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  3. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    SciTech Connect

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  4. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  5. Large-scale ER-damper for seismic protection

    NASA Astrophysics Data System (ADS)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled- up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  6. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    PubMed

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  7. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  8. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  9. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  10. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  11. Unsaturated Hydraulic Conductivity for Evaporation in Large scale Heterogeneous Soils

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2014-12-01

    In this study we aim to provide practical guidelines on how the commonly used simple averaging schemes (arithmetic, geometric, or harmonic mean) perform in simulating large scale evaporation in a large scale heterogeneous landscape. Previous studies on hydraulic property upscaling, focusing on steady state flux exchanges, showed that an effective hydraulic property is usually more difficult to define for evaporation. This study focuses on upscaling hydraulic properties of large scale transient evaporation dynamics using the idea of the stream tube approach. Specifically, the two main objectives are: (1) to determine whether the three simple averaging schemes (i.e., arithmetic, geometric and harmonic means) of hydraulic parameters are appropriate for representing large scale evaporation processes, and (2) to examine how the applicability of these simple averaging schemes depends on the time scale of the evaporation processes in heterogeneous soils. Multiple realizations of local evaporation processes are carried out using the HYDRUS-1D computational code (Simunek et al., 1998). The three averaging schemes of soil hydraulic parameters are used to simulate the cumulative flux exchange, which is then compared with the large scale average cumulative flux. The sensitivity of the relative errors to the time frame of the evaporation processes is also discussed.
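
    The three averaging schemes named above are straightforward to compute. The following minimal sketch (assuming an invented set of fine-scale hydraulic parameter values, not data from the study) contrasts the arithmetic, geometric and harmonic means that would be fed to the upscaled simulations.

```python
import numpy as np

# Hypothetical fine-scale hydraulic parameter values (e.g. saturated
# conductivities in cm/day) for one stream-tube ensemble; illustrative only.
Ks = np.array([5.0, 12.0, 0.8, 30.0, 2.5])

arithmetic_mean = Ks.mean()
geometric_mean = np.exp(np.log(Ks).mean())
harmonic_mean = Ks.size / np.sum(1.0 / Ks)

print(f"arithmetic: {arithmetic_mean:.2f}")
print(f"geometric:  {geometric_mean:.2f}")
print(f"harmonic:   {harmonic_mean:.2f}")
# For positive, non-identical values: harmonic <= geometric <= arithmetic.
```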

  12. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  13. Large-scale flow generation in turbulent convection

    PubMed Central

    Krishnamurti, Ruby; Howard, Louis N.

    1981-01-01

    In a horizontal layer of fluid heated from below and cooled from above, cellular convection with horizontal length scale comparable to the layer depth occurs for small enough values of the Rayleigh number. As the Rayleigh number is increased, cellular flow disappears and is replaced by a random array of transient plumes. Upon further increase, these plumes drift in one direction near the bottom and in the opposite direction near the top of the layer with the axes of plumes tilted in such a way that horizontal momentum is transported upward via the Reynolds stress. With the onset of this large-scale flow, the largest scale of motion has increased from that comparable to the layer depth to a scale comparable to the layer width. The conditions for occurrence and determination of the direction of this large-scale circulation are described. PMID:16592996

  14. Sugar alcohols enhance calcium transport from rat small and large intestine epithelium in vitro.

    PubMed

    Mineo, Hitoshi; Hara, Hiroshi; Tomita, Fusao

    2002-06-01

    We compared the effect of a variety of sugar alcohols on calcium absorption from the rat small and large intestine in vitro. An Ussing chamber technique was used to determine the net transport of Ca across the epithelium isolated from the jejunum, ileum, cecum, and colon of rats. The concentration of Ca in the serosal and mucosal Tris buffer solution was 1.25 mM and 10 mM, respectively. The Ca concentration in the serosal medium was determined after incubation for 30 min and the net Ca absorption was evaluated. The addition of 0.1-200 mM erythritol, xylitol, sorbitol, maltitol, palatinit, or lactitol to the mucosal medium affected net Ca absorption in the intestinal preparations. Differences in Ca transport were observed between portions of the intestine, but not between sugar alcohols tested. We concluded that sugar alcohols directly affect the epithelial tissue and promote Ca absorption from the small and large intestine in vitro. PMID:12064809

  15. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large scale geoscience phenomena, including large scale atmosphere change, large scale ocean change, large scale land surface dynamic change, solid earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment for Moon-based Earth observation; the Moon-based Earth observation platform; and a fundamental scientific framework for Moon-based Earth observation.

  16. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
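
    The role of the prototypes as a low-rank approximation of the kernel matrix can be illustrated with a generic Nystrom-style sketch; this is not the PVM algorithm itself, and the data, kernel and prototype-selection rule below are invented for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                         # data points
prototypes = X[rng.choice(500, 30, replace=False)]    # 30 prototype vectors

K_np = rbf_kernel(X, prototypes)            # n x m cross-kernel
K_pp = rbf_kernel(prototypes, prototypes)   # m x m prototype kernel

# Low-rank approximation of the full n x n kernel matrix, built only from
# kernel evaluations against the prototypes.
K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T

K_full = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative Frobenius error: {rel_err:.3f}")
```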

  17. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  18. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  19. Prostaglandin F2 alpha-induced calcium transient in ovine large luteal cells: II. Modulation of the transient and resting cytosolic free calcium alters progesterone secretion.

    PubMed

    Wegner, J A; Martinez-Zaguilan, R; Gillies, R J; Hoyer, P B

    1991-02-01

    A previous study demonstrated that prostaglandin F2 alpha (PGF2 alpha) stimulates a transient increase in cytosolic free Ca2+ levels ([Ca2+]i) in ovine large luteal cells. In the present study, the magnitude of the PGF2 alpha (0.5 microM)-induced calcium transient in Hanks' medium (87 +/- 2 nM increase above resting levels) was reduced (P less than 0.05) but not completely eliminated in fura-2 loaded large luteal cells incubated in Ca2(+)-free or phosphate- and carbonate-free medium (10 +/- 1 nM, 32 +/- 6 nM, above resting levels; respectively). Preincubation for 2 min with 1 mM LaCl3 (calcium antagonist) eliminated the PGF2 alpha-induced calcium transient. The inhibitory effect of PGF2 alpha on secretion of progesterone was reduced in Ca2(+)-free medium or medium plus LaCl3. Resting [Ca2+]i levels and basal secretion of progesterone were both reduced (P less than 0.05) in large cells incubated in Ca2(+)-free medium (27 +/- 4 nM; 70 +/- 6% control, respectively) or with 5 microM 5,5'-dimethyl bis-(O-aminophenoxy)ethane-N,N,N'N'-tetraacetic acid (40 +/- 2 nM; 49 +/- 1% control; respectively). In addition, secretion of progesterone was inhibited (P less than 0.05) by conditions that increased (P less than 0.05) [Ca2+]i; that is LaCl3 ([Ca2+]i, 120 +/- 17 nM; progesterone, 82 +/- 8% control) and PGF2 alpha ([Ca2+]i, 102 +/- 10 nM; progesterone, 82 +/- 3% control). In small luteal cells, resting [Ca2+]i levels and secretion of progesterone were reduced by incubation in Ca2(+)-free Hanks ([Ca2+]i, 28 +/- 2 nM; progesterone, 71 +/- 6% control), however, neither LaCl3 nor PGF2 alpha increased [Ca2+]i levels or inhibited secretion of progesterone. The findings presented here provide evidence that extracellular as well as intracellular calcium contribute to the PGF2 alpha-induced [Ca2+]i transient in large cells. Furthermore, whereas an adequate level of [Ca2+]i is required to support progesterone production in both small and large cells, optimal progesterone production in

  20. Over-driven control for large-scale MR dampers

    NASA Astrophysics Data System (ADS)

    Friedman, A. J.; Dyke, S. J.; Phillips, B. M.

    2013-04-01

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force-response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications.
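
    The over-driving/back-driving idea can be caricatured as boosting the commanded voltage in proportion to the requested change, so that the lagging force response reaches its target sooner. The sketch below is only an illustration of that idea under assumed gains and limits; it is not the controller proposed in the paper.

```python
import numpy as np

def overdriven_command(v_desired, v_prev, boost_gain=4.0, v_max=10.0):
    """Return an over-driven (or back-driven) voltage command.

    v_desired : voltage a conventional controller would request now
    v_prev    : previously applied command
    The command is boosted in proportion to the requested change and then
    saturated, so the lagging damper force approaches its target sooner.
    The gain and saturation limits are illustrative, not the paper's values.
    """
    v_cmd = v_desired + boost_gain * (v_desired - v_prev)
    return float(np.clip(v_cmd, 0.0, v_max))

# A step up from 2 V to 6 V is over-driven to the maximum voltage, while a
# step down from 6 V to 2 V is back-driven to zero:
print(overdriven_command(6.0, 2.0))   # 10.0
print(overdriven_command(2.0, 6.0))   # 0.0
```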

  1. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  2. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about potential hazards to the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  3. Effects of small scale energy injection on large scales in turbulent reaction flows

    NASA Astrophysics Data System (ADS)

    Xuan, Yuan

    2014-11-01

    Turbulence causes the generation of eddies of various length scales. In turbulent non-reacting flows, most of the kinetic energy is contained in large scale turbulent structures and dissipated at small scales. This energy cascade process from large scales to small scales provides the foundation for many turbulence models, especially for Large Eddy Simulations. However, in turbulent reacting flows, chemical energy is converted locally to heat and therefore deposits energy at the smallest scales. As such, effects of small scale energy injection due to combustion on large scale turbulent motion may become important. These effects are investigated in the case of auto-ignition under homogeneous isotropic turbulence. The impact of small scale heat release is examined by comparing various turbulent statistics (e.g. energy spectrum, two-point correlation functions, and structure functions) in the reacting case to the non-reacting case. Emphasis is placed on the identification of the most relevant turbulent quantities in reflecting such small-large scale interactions.
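
    One of the turbulent statistics listed above, the energy spectrum, is obtained by shell-averaging the velocity field in wavenumber space. The sketch below is a generic diagnostic of this kind for a triply periodic field (not the authors' post-processing code) and is exercised on a random test field.

```python
import numpy as np

def energy_spectrum(u, v, w):
    """Shell-averaged kinetic energy spectrum E(k) of a triply periodic
    velocity field on an N^3 grid (a generic diagnostic, not the study's code)."""
    n = u.shape[0]
    uh, vh, wh = (np.fft.fftn(f) / n**3 for f in (u, v, w))
    e3d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)

    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)

    kmax = n // 2
    E = np.zeros(kmax + 1)
    mask = shells <= kmax
    np.add.at(E, shells[mask], e3d[mask])             # bin energy into shells
    return np.arange(kmax + 1), E

# Usage with a random (non-turbulent) test field:
rng = np.random.default_rng(1)
u, v, w = (rng.normal(size=(64, 64, 64)) for _ in range(3))
wavenumbers, E = energy_spectrum(u, v, w)
print(E[:5])
```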

  4. Response of Tradewind Cumuli to Large-Scale Processes.

    NASA Astrophysics Data System (ADS)

    Soong, S.-T.; Ogura, Y.

    1980-09-01

    The two-dimensional slab-symmetric numerical cloud model used by Soong and Ogura (1973) for studying the evolution of an isolated cumulus cloud is extended to investigate the statistical properties of cumulus clouds which would be generated under a given large-scale forcing composed of the horizontal advection of temperature and water vapor mixing ratio, vertical velocity, sea surface temperature and radiative cooling. Random disturbances of small amplitude are introduced into the model at low levels to provide random motion for cloud formation. The model is applied to a case of suppressed weather conditions during BOMEX for the period 22-23 June 1969 when a nearly steady state prevailed. The composited temperature and mixing ratio profiles of these two days are used as initial conditions and the time-independent large-scale forcing terms estimated from the observations are applied to the model. The result of numerical integration shows that a number of small clouds start developing after 1 h. Some of them decay quickly, but some of them develop and reach the tradewind inversion. After a few hours of simulation, the vertical profiles of the horizontally averaged temperature and moisture are found to deviate only slightly from the observed profiles, indicating that the large-scale effect and the feedback effects of clouds on temperature and mixing ratio reach an equilibrium state. The three major components of the cloud feedback effect, i.e., condensation, evaporation and vertical fluxes associated with the clouds, are determined from the model output. The vertical profiles of vertical heat and moisture fluxes in the subcloud layer in the model are found to be in general agreement with the observations. Sensitivity tests of the model are made for different magnitudes of the large-scale vertical velocity. The most striking result is that the temperature and humidity in the cloud layer below the inversion do not change significantly in spite of a relatively large

  5. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  6. Emergence of large cliques in random scale-free networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra; Marsili, Matteo

    2006-05-01

    In a network, cliques are fully connected subgraphs that reveal the tight communities present in it. Cliques of size c > 3 are present in random Erdös and Renyi graphs only in the limit of diverging average connectivity. Starting from the finding that real scale-free graphs have large cliques, we study the clique number in uncorrelated scale-free networks, finding both upper and lower bounds. Interestingly, we find that in scale-free networks large cliques appear also when the average degree is finite, i.e. even for networks with power law degree distribution exponents γ ∈ (2,3). Moreover, as long as γ < 3, scale-free networks have a maximal clique which diverges with the system size.
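
    Measuring the clique number of a synthetic scale-free graph is straightforward with standard tools. The sketch below uses a Barabasi-Albert graph, whose degree exponent is about 3 rather than the γ ∈ (2,3) regime emphasized above, so it only illustrates how the clique number is computed, not the paper's result.

```python
import networkx as nx

# Generate a scale-free-like graph. The Barabasi-Albert model has a degree
# exponent of about 3; reproducing the gamma-in-(2,3) regime would require a
# different generator (e.g. a configuration model with a prescribed
# power-law degree sequence), so this is illustrative only.
G = nx.barabasi_albert_graph(n=2000, m=4, seed=42)

# The clique number is the size of the largest fully connected subgraph;
# nx.find_cliques enumerates the maximal cliques.
clique_number = max(len(c) for c in nx.find_cliques(G))
print("clique number:", clique_number)
```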

  7. Do Large-Scale Topological Features Correlate with Flare Properties?

    NASA Astrophysics Data System (ADS)

    DeRosa, Marc L.; Barnes, Graham

    2016-05-01

    In this study, we aim to identify whether the presence or absence of particular topological features in the large-scale coronal magnetic field are correlated with whether a flare is confined or eruptive. To this end, we first determine the locations of null points, spine lines, and separatrix surfaces within the potential fields associated with the locations of several strong flares from the current and previous sunspot cycles. We then validate the topological skeletons against large-scale features in observations, such as the locations of streamers and pseudostreamers in coronagraph images. Finally, we characterize the topological environment in the vicinity of the flaring active regions and identify the trends involving their large-scale topologies and the properties of the associated flares.

  8. Coupling between convection and large-scale circulation

    NASA Astrophysics Data System (ADS)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is the moist static energy or, in budget form, the gross moist stability. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.
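
    For reference, moist static energy is the sum of sensible, potential and latent contributions. A minimal sketch with standard approximate constants follows; the input values in the example are illustrative and not taken from the study.

```python
def moist_static_energy(T, z, q):
    """Moist static energy h = cp*T + g*z + Lv*q in J/kg.

    T : air temperature (K), z : height (m), q : specific humidity (kg/kg).
    Constants are standard approximate values.
    """
    cp = 1004.0   # J kg-1 K-1, specific heat of dry air at constant pressure
    g = 9.81      # m s-2, gravitational acceleration
    Lv = 2.5e6    # J kg-1, latent heat of vaporization
    return cp * T + g * z + Lv * q

# Example: warm, moist near-surface air (illustrative values)
print(moist_static_energy(T=300.0, z=10.0, q=0.018))  # ~3.46e5 J/kg
```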

  9. Large-scale current systems in the dayside Venus ionosphere

    NASA Technical Reports Server (NTRS)

    Luhmann, J. G.; Elphic, R. C.; Brace, L. H.

    1981-01-01

    The occasional observation of large-scale horizontal magnetic fields within the dayside ionosphere of Venus by the flux gate magnetometer on the Pioneer Venus orbiter suggests the presence of large-scale current systems. Using the measured altitude profiles of the magnetic field and the electron density and temperature, together with the previously reported neutral atmosphere density and composition, it is found that the local ionosphere can be described at these times by a simple steady state model which treats the unobserved quantities, such as the electric field, as parameters. When the model is appropriate, the altitude profiles of the ion and electron velocities and the currents along the satellite trajectory can be inferred. These results elucidate the configurations and sources of the ionospheric current systems which produce the observed large-scale magnetic fields, and in particular illustrate the effect of ion-neutral coupling in the determination of the current system at low altitudes.

  10. Regulation of large conductance calcium- and voltage-activated potassium (BK) channels by S-palmitoylation.

    PubMed

    Shipston, Michael J

    2013-02-01

    BK (large conductance calcium- and voltage-activated potassium) channels are important determinants of physiological control in the nervous, endocrine and vascular systems with channel dysfunction associated with major disorders ranging from epilepsy to hypertension and obesity. Thus the mechanisms that control channel surface expression and/or activity are important determinants of their (patho)physiological function. BK channels are S-acylated (palmitoylated) at two distinct sites within the N- and C-terminus of the pore-forming α-subunit. Palmitoylation of the N-terminus controls channel trafficking and surface expression whereas palmitoylation of the C-terminal domain determines regulation of channel activity by AGC-family protein kinases. Recent studies are beginning to reveal mechanistic insights into how palmitoylation controls channel trafficking and cross-talk with phosphorylation-dependent signalling pathways. Intriguingly, each site of palmitoylation is regulated by distinct zDHHCs (palmitoyl acyltransferases) and APTs (acyl thioesterases). This supports the idea that different mechanisms may control the substrate specificity of zDHHCs and APTs even within the same target protein. As palmitoylation is dynamically regulated, this fundamental post-translational modification represents an important determinant of BK channel physiology in health and disease. PMID:23356260

  11. A study on high strength concrete prepared with large volumes of low calcium fly ash

    SciTech Connect

    Poon, C.S.; Lam, L.; Wong, Y.L.

    2000-03-01

    This paper presents the results of a laboratory study on high strength concrete prepared with large volumes of low calcium fly ash. The parameters studied included compressive strength, heat of hydration, chloride diffusivity, degree of hydration, and pore structures of fly ash/cement concrete and corresponding pastes. The experimental results showed that concrete with a 28-day compressive strength of 80 MPa could be obtained with a water-to-binder (w/b) ratio of 0.24, with a fly ash content of 45%. Such concrete has lower heat of hydration and chloride diffusivity than the equivalent plain cement concrete or concrete prepared with lower fly ash contents. The test results showed that at lower w/b ratios, the contribution to strength by the fly ash was higher than in the mixes prepared with higher w/b ratios. The study also quantified the reaction rates of cement and fly ash in the cementitious materials. The results demonstrated the dual effects of fly ash in concrete: (1) acting as a micro-aggregate and (2) acting as a pozzolana. It was also noted that the strength contribution of fly ash in concrete was better than in the equivalent cement/fly ash pastes, suggesting that the fly ash had improved the interfacial bond between the paste and the aggregates in the concrete. Such an improvement was also reflected in the results of the mercury intrusion porosimetry (MIP) test.
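
    The mix proportions implied by the reported ratios are simple to back out. In the sketch below the total binder content is an assumed illustrative value, not a figure from the paper.

```python
# Mix-proportion arithmetic for a high-volume fly ash concrete with
# w/b = 0.24 and 45% of the binder replaced by low-calcium fly ash.
# The total binder content (500 kg/m^3) is an assumed illustrative value.
binder_total = 500.0            # kg per m^3 of concrete (assumed)
wb_ratio = 0.24
fly_ash_fraction = 0.45

water = wb_ratio * binder_total
fly_ash = fly_ash_fraction * binder_total
cement = binder_total - fly_ash

print(f"water: {water:.0f} kg/m^3, cement: {cement:.0f} kg/m^3, "
      f"fly ash: {fly_ash:.0f} kg/m^3")
```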

  12. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  13. Calcium Isolation from Large-Volume Human Urine Samples for 41Ca Analysis by Accelerator Mass Spectrometry

    PubMed Central

    Miller, James J; Hui, Susanta K; Jackson, George S; Clark, Sara P; Einstein, Jane; Weaver, Connie M; Bhattacharyya, Maryka H

    2013-01-01

    Calcium oxalate precipitation is the first step in preparation of biological samples for 41Ca analysis by accelerator mass spectrometry. A simplified protocol for large-volume human urine samples was characterized, with statistically significant increases in ion current and decreases in interference. This large-volume assay minimizes cost and effort and maximizes time after 41Ca administration during which human samples, collected over a lifetime, provide 41Ca:Ca ratios that are significantly above background. PMID:23672965

  14. Decomposition and coordination of large-scale operations optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Ruoyu

    Nowadays, highly integrated manufacturing has resulted in more and more large-scale industrial operations. As one of the most effective strategies to ensure high-level operations in modern industry, large-scale engineering optimization has garnered a great amount of interest from academic scholars and industrial practitioners. Large-scale optimization problems frequently occur in industrial applications, and many of them naturally present special structure or can be transformed to take special structure. Some decomposition and coordination methods have the potential to solve these problems at a reasonable speed. This thesis focuses on three classes of large-scale optimization problems: linear programming, quadratic programming, and mixed-integer programming problems. The main contributions include the design of structural complexity analysis for investigating scaling behavior and computational efficiency of decomposition strategies, novel coordination techniques and algorithms to improve the convergence behavior of decomposition and coordination methods, as well as the development of a decentralized optimization framework which embeds the decomposition strategies in a distributed computing environment. The complexity study can provide fundamental guidelines for practical applications of the decomposition and coordination methods. In this thesis, several case studies illustrate the viability of the proposed decentralized optimization techniques for real industrial applications. A pulp mill benchmark problem is used to investigate the applicability of the LP/QP decentralized optimization strategies, while a truck allocation problem in the decision support of mining operations is used to study the MILP decentralized optimization strategies.
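
    As a toy illustration of price-based decomposition and coordination (not one of the algorithms developed in the thesis), the sketch below solves a small separable quadratic problem with one coupling constraint: each subproblem is solved independently for a given price, and a coordinator updates the price by projected subgradient ascent. All data are invented.

```python
# Toy separable QP with one coupling constraint, solved by dual decomposition:
#   minimize  0.5*(x1 - a1)^2 + 0.5*(x2 - a2)^2
#   s.t.      x1 >= 0, x2 >= 0, x1 + x2 <= R        (coupling constraint)
# The coupling constraint is priced with a multiplier lam >= 0; each
# subproblem then has the closed-form solution xi = max(ai - lam, 0), and a
# coordinator updates lam by projected subgradient ascent.
a1, a2, R = 4.0, 3.0, 5.0

lam = 0.0
for it in range(200):
    x1 = max(a1 - lam, 0.0)          # subproblem 1, solved independently
    x2 = max(a2 - lam, 0.0)          # subproblem 2, solved independently
    lam = max(0.0, lam + 0.1 * (x1 + x2 - R))   # coordination (price update)

print(f"x1={x1:.3f}, x2={x2:.3f}, lam={lam:.3f}")
# Expected optimum for these data: x1=3, x2=2, lam=1 (resource fully used).
```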

  15. Large-scale drift and Rossby wave turbulence

    NASA Astrophysics Data System (ADS)

    Harper, K. L.; Nazarenko, S. V.

    2016-08-01

    We study drift/Rossby wave turbulence described by the large-scale limit of the Charney–Hasegawa–Mima equation. We define the zonal and meridional regions as $Z := \{\mathbf{k} : |k_y| > \sqrt{3}\,k_x\}$ and $M := \{\mathbf{k} : |k_y| < \sqrt{3}\,k_x\}$ respectively, where $\mathbf{k} = (k_x, k_y)$ is in a plane perpendicular to the magnetic field such that $k_x$ is along the isopycnals and $k_y$ is along the plasma density gradient. We prove that the only types of resonant triads allowed are $M \leftrightarrow M + Z$ and $Z \leftrightarrow Z + Z$. Therefore, if the spectrum of weak large-scale drift/Rossby turbulence is initially in Z it will remain in Z indefinitely. We present a generalised Fjørtoft’s argument to find transfer directions for the quadratic invariants in the two-dimensional $\mathbf{k}$-space. Using direct numerical simulations, we test and confirm our theoretical predictions for weak large-scale drift/Rossby turbulence, and establish qualitative differences with cases when turbulence is strong. We demonstrate that the qualitative features of the large-scale limit survive when the typical turbulent scale is only moderately greater than the Larmor/Rossby radius.
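
    The zonal/meridional partition defined above is easy to evaluate for any wavevector; the short sketch below simply applies the two inequalities (the sample wavevectors are arbitrary).

```python
import math

def sector(kx, ky):
    """Classify a wavevector into the zonal (Z) or meridional (M) sector
    using the definitions above: Z if |ky| > sqrt(3)*kx, M if |ky| < sqrt(3)*kx."""
    if abs(ky) > math.sqrt(3) * kx:
        return "Z"
    if abs(ky) < math.sqrt(3) * kx:
        return "M"
    return "boundary"

print(sector(1.0, 2.0))   # Z  (2 > 1.732)
print(sector(2.0, 1.0))   # M  (1 < 3.464)
```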

  16. Upscaling of elastic properties for large scale geomechanical simulations

    NASA Astrophysics Data System (ADS)

    Chalon, F.; Mainguy, M.; Longuemare, P.; Lemonnier, P.

    2004-09-01

    Large scale geomechanical simulations are being increasingly used to model the compaction of stress dependent reservoirs, predict the long term integrity of underground radioactive waste disposals, and analyse the viability of hot-dry rock geothermal sites. These large scale simulations require the definition of homogeneous mechanical properties for each geomechanical cell whereas the rock properties are expected to vary at a smaller scale. Therefore, this paper proposes a new methodology that makes it possible to define the equivalent mechanical properties of the geomechanical cells using the fine scale information given in the geological model. This methodology is implemented on a synthetic reservoir case and two upscaling procedures providing the effective elastic properties of Hooke's law are tested. The first upscaling procedure is an analytical method for a perfectly stratified rock mass, whereas the second procedure computes lower and upper bounds of the equivalent properties with no assumption on the small scale heterogeneity distribution. Both procedures are applied to one geomechanical cell extracted from the reservoir structure. The results show that the analytical and numerical upscaling procedures provide accurate estimations of the effective parameters. Furthermore, a large scale simulation using the homogenized properties of each geomechanical cell calculated with the analytical method demonstrates that the overall behaviour of the reservoir structure is well reproduced for two different loading cases.
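
    As a minimal illustration of upscaling bounds for a layered cell (not the paper's exact analytical or numerical procedures), the sketch below computes the arithmetic (Voigt, iso-strain) and harmonic (Reuss, iso-stress) averages of layer moduli; the layer thicknesses and moduli are invented.

```python
import numpy as np

# Hypothetical fine-scale layers within one geomechanical cell:
# thickness (m) and Young's modulus (GPa). Values are illustrative.
thickness = np.array([2.0, 0.5, 3.0, 1.5])
E = np.array([15.0, 40.0, 8.0, 25.0])

w = thickness / thickness.sum()          # volume (thickness) fractions

E_voigt = np.sum(w * E)                  # arithmetic (upper, iso-strain) bound
E_reuss = 1.0 / np.sum(w / E)            # harmonic (lower, iso-stress) bound

print(f"Voigt upper bound: {E_voigt:.1f} GPa")
print(f"Reuss lower bound: {E_reuss:.1f} GPa")
```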

  17. Lateral stirring of large-scale tracer fields by altimetry

    NASA Astrophysics Data System (ADS)

    Dencausse, Guillaume; Morrow, Rosemary; Rogé, Marine; Fleury, Sara

    2014-01-01

    Ocean surface fronts and filaments have a strong impact on the global ocean circulation and biogeochemistry. Surface Lagrangian advection with time-evolving altimetric geostrophic velocities can be used to simulate the submesoscale front and filament structures in large-scale tracer fields. We study this technique in the Southern Ocean region south of Tasmania, a domain marked by strong meso- to submesoscale features such as the fronts of the Antarctic Circumpolar Current (ACC). Starting with large-scale surface tracer fields that we stir with altimetric velocities, we determine `advected' fields which compare well with high-resolution in situ or satellite tracer data. We find that fine scales are best represented in a statistical sense after an optimal advection time of ˜2 weeks, with enhanced signatures of the ACC fronts and better spectral energy. The technique works best in moderate to high EKE regions where lateral advection dominates. This technique may be used to infer the distribution of unresolved small scales in any physical or biogeochemical surface tracer that is dominated by lateral advection. Submesoscale dynamics also impact the subsurface of the ocean, and the Lagrangian advection at depth shows promising results. Finally, we show that climatological tracer fields computed from the advected large-scale fields display improved fine-scale mean features, such as the ACC fronts, which can be useful in the context of ocean modelling.
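
    Surface Lagrangian advection amounts to integrating dx/dt = u(x, t) for a set of tracer particles. The sketch below shows one Runge-Kutta step with a stand-in analytic velocity field in place of the time-evolving altimetric geostrophic velocities used in the study.

```python
import numpy as np

def advect(positions, velocity, t, dt):
    """One RK4 step of Lagrangian advection of tracer particles.

    positions : (n, 2) array of particle coordinates (m)
    velocity  : callable velocity(positions, t) -> (n, 2) array (m/s)
    In practice the velocity would be interpolated from gridded altimetric
    geostrophic currents; here it is any callable field.
    """
    k1 = velocity(positions, t)
    k2 = velocity(positions + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(positions + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(positions + dt * k3, t + dt)
    return positions + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: a steady solid-body rotation as a stand-in velocity field.
def rotation(p, t, omega=1e-5):
    return omega * np.column_stack([-p[:, 1], p[:, 0]])

pts = np.array([[1.0e5, 0.0], [2.0e5, 0.0]])
for _ in range(10):
    pts = advect(pts, rotation, t=0.0, dt=3600.0)
print(pts)
```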

  18. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  19. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as a function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.

  20. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  1. Monochromatic waves induced by large-scale parametric forcing.

    PubMed

    Nepomnyashchy, A; Abarzhi, S I

    2010-03-01

    We study the formation and stability of monochromatic waves induced by large-scale modulations in the framework of the complex Ginzburg-Landau equation with parametric nonresonant forcing dependent on the spatial coordinate. In the limiting case of forcing with very large characteristic length scale, analytical solutions for the equation are found and conditions of their existence are outlined. Stability analysis indicates that the interval of existence of a monochromatic wave can contain a subinterval where the wave is stable. We discuss potential applications of the model in rheology, fluid dynamics, and optics. PMID:20365907

  2. Large-scale liquid scintillation detectors for solar neutrinos

    NASA Astrophysics Data System (ADS)

    Benziger, Jay B.; Calaprice, Frank P.

    2016-04-01

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed.

  3. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  5. Generating Large-Scale Longitudinal Data Resources for Aging Research

    PubMed Central

    Hofer, Scott M.

    2011-01-01

    Objectives. The need for large studies and the types of large-scale data resources (LSDRs) are discussed along with their general scientific utility, role in aging research, and affordability. The diversification of approaches to large-scale data resourcing is described in order to facilitate their use in aging research. Methods. The need for LSDRs is discussed in terms of (a) large sample size; (b) longitudinal design; (c) as platforms for additional investigator-initiated research projects; and (d) broad-based access to core genetic, biological, and phenotypic data. Discussion. It is concluded that a “lite-touch, lo-tech, lo-cost” approach to LSDRs is a viable strategy for the development of LSDRs and would enhance the likelihood of LSDRs being established which are dedicated to the wide range of important aging-related issues. PMID:21743049

  6. Large-scale quantification of CVD graphene surface coverage

    NASA Astrophysics Data System (ADS)

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-02-01

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process.

  7. Calcium mobilization from fish scales is mediated by parathyroid hormone related protein via the parathyroid hormone type 1 receptor.

    PubMed

    Rotllant, J; Redruello, B; Guerreiro, P M; Fernandes, H; Canario, A V M; Power, D M

    2005-12-15

    The scales of bony fish represent a significant reservoir of calcium but little is known about their contribution, as well as of bone, to calcium balance and how calcium deposition and mobilization are regulated in calcified tissues. In the present study we report the action of parathyroid hormone-related protein (PTHrP) on calcium mobilization from sea bream (Sparus auratus) scales in an in vitro bioassay. Ligand binding studies of piscine 125I-(1-35(tyr))PTHrP to the membrane fraction of isolated sea bream scales revealed the existence of a single PTH receptor (PTHR) type. RT-PCR of fish scale cDNA using specific primers for two receptor types found in teleosts, PTH1R and PTH3R, showed expression only of PTH1R. The signalling mechanisms mediating binding of the N-terminal amino acid region of PTHrP were investigated. A synthetic peptide (10^-8 M) based on the N-terminal 1-34 amino acid residues of Fugu rubripes PTHrP strongly stimulated cAMP synthesis and [3H]myo-inositol incorporation in sea bream scales. However, peptides (10^-8 M) with N-terminal deletions, such as (2-34), (3-34) and (7-34)PTHrP, were defective in stimulating cAMP production but stimulated [3H]myo-inositol incorporation. (1-34)PTHrP induced significant osteoclastic activity in scale tissue as indicated by its stimulation of tartrate-resistant acid phosphatase. In contrast, (7-34)PTHrP failed to stimulate the activity of this enzyme. This activity could also be abolished by the adenylyl cyclase inhibitor SQ-22536, but not by the phospholipase C inhibitor U-73122. The results of the study indicate that one mechanism through which N-terminal (1-34)PTHrP stimulates osteoclastic activity of sea bream scales is through PTH1R and via the cAMP/AC intracellular signalling pathway. It appears, therefore, that fish scales can act as calcium stores and that (1-34)PTHrP regulates calcium mobilization from them; it remains to be established if this mechanism contributes to calcium homeostasis in vivo

  8. Large-scale smart passive system for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Jung, Hyung-Jo; Jang, Dong-Doo; Lee, Heon-Jae; Cho, Sang-Won

    2008-03-01

    The smart passive system consisting of a magnetorheological (MR) damper and an electromagnetic induction (EMI) part has been recently proposed. An EMI part can generate the input current for an MR damper from the vibration of a structure according to Faraday's law of electromagnetic induction. The control performance of the smart passive system has been demonstrated mainly by numerical simulations. It was verified from the numerical results that the system could be effective in reducing the structural responses of civil engineering structures such as buildings and bridges. On the other hand, the experimental validation of the system has not yet been sufficiently conducted. In this paper, the feasibility of applying the smart passive system to real-scale structures is investigated. To do this, a large-scale smart passive system is designed, manufactured, and tested. The system consists of the large-capacity MR damper, which has a maximum force level of approximately +/-10,000 N, a maximum stroke level of +/-35 mm and a maximum current level of 3 A, and the large-scale EMI part, which is designed to generate sufficient induced current for the damper. The applicability of the smart passive system to large real-scale structures is examined through a series of shaking table tests. The magnitudes of the induced current of the EMI part under various sinusoidal excitation inputs are measured. According to the test results, the large-scale EMI part shows the possibility that it could generate sufficient current or power for changing the damping characteristics of the large-capacity MR damper.
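
    The EMI part's role can be illustrated with a rough Faraday's-law estimate of the induced voltage and current; all numbers below are invented for illustration and are not the specifications of the tested large-scale system.

```python
# Rough Faraday's-law estimate of the current an electromagnetic induction
# (EMI) device could feed to an MR damper. All numbers are invented for
# illustration; they are not the specifications of the tested system.
N = 400        # coil turns
B = 0.8        # magnetic flux density across the coil (T)
L = 0.05       # effective conductor length per turn in the field (m)
v = 0.2        # relative velocity between magnet and coil (m/s)
R_coil = 3.0   # coil resistance (ohm)

emf = N * B * L * v          # induced voltage, e = N*B*l*v
current = emf / R_coil       # current delivered to the damper coil
print(f"induced EMF: {emf:.2f} V, current: {current:.2f} A")
```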

  9. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.

  10. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  11. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.

  12. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly the energy fluxes and shell-to-shell energy transfers, in a small-scale dynamo (SSD) and a large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024^3 grid. For the SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For the LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy flux, similar to the SSD.

  13. Turbulent large-scale structure effects on wake meandering

    NASA Astrophysics Data System (ADS)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability, since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This allows the simulation of large-scale turbulent structures (larger than the computational domain), leading to an efficient technique for simulating wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While the temporal resolution of the experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content (or large-scale turbulent structures) is…
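
    Since the abstract names the classical Smagorinsky sub-grid model, a compact reminder of what that closure computes may help: the sub-grid eddy viscosity is nu_t = (Cs * Delta)^2 * |S|, where |S| = sqrt(2 S_ij S_ij) is built from the resolved strain rate. The Python sketch below evaluates this on a gridded velocity field; the grid, the filter width, the random test data and Cs = 0.17 are assumptions for illustration, not values from the OpenFOAM setup described here.

      # Hypothetical sketch of the classical Smagorinsky eddy viscosity
      # nu_t = (Cs * Delta)^2 * sqrt(2 S_ij S_ij) on a uniform 3-D grid.
      import numpy as np

      Cs, dx = 0.17, 2.0                        # Smagorinsky constant and grid spacing (assumed)
      N = 32
      rng = np.random.default_rng(2)
      u = rng.standard_normal((3, N, N, N))     # resolved velocity components (placeholder)

      # Velocity-gradient tensor dudx[i, j] = d u_i / d x_j via central differences.
      dudx = np.empty((3, 3, N, N, N))
      for i in range(3):
          for j in range(3):
              dudx[i, j] = np.gradient(u[i], dx, axis=j)

      S = 0.5 * (dudx + dudx.transpose(1, 0, 2, 3, 4))     # resolved strain-rate tensor
      S_mag = np.sqrt(2.0 * np.sum(S**2, axis=(0, 1)))     # |S| = sqrt(2 S_ij S_ij)
      nu_t = (Cs * dx) ** 2 * S_mag                        # sub-grid eddy viscosity field

      print("mean sub-grid viscosity:", nu_t.mean())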

  14. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  15. Individual Skill Differences and Large-Scale Environmental Learning

    ERIC Educational Resources Information Center

    Fields, Alexa W.; Shelton, Amy L.

    2006-01-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited…

  16. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  17. Large Scale Field Campaign Contributions to Soil Moisture Remote Sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large-scale field experiments have been an essential component of soil moisture remote sensing for over two decades. They have provided test beds for both the technology and science necessary to develop and refine satellite mission concepts. The high degree of spatial variability of soil moisture an...

  18. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  19. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  20. Ecosystem resilience despite large-scale altered hydro climatic conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  1. The Cosmology Large Angular Scale Surveyor (CLASS) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Chuss, David T.; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Colazo, Felipe; Crowe, Erik; Denis, Kevin L.; Dunner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Gothe, Dominik; Halpern, Mark; Harrington, Kathleen; Hilton, Gene; Hinshaw, Gary F.; Huang, Caroline; Irwin, Kent; Jones, Glenn; Karakla, John; Kogut, Alan J.; Larson, David; Limon, Michele; Lowry, Lindsay; Marriage, Tobias; Mehrle, Nicholas; Stevenson, Thomas; Miller, Nathan J.; Moseley, Samuel H.; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

    We describe the instrument architecture of the Johns Hopkins University-led CLASS instrument, a ground-based cosmic microwave background (CMB) polarimeter that will measure the large-scale polarization of the CMB in several frequency bands to search for evidence of inflation.

  2. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  3. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
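
    One common family of "weighting functions which transform the space" is pointwise mutual information applied to a word-by-context co-occurrence matrix. The sketch below shows the positive-PMI variant often used in distributional models; the tiny count matrix is made up for illustration and is not data from this dissertation.

      # Hypothetical sketch: positive pointwise mutual information (PPMI) weighting of a
      # small word-by-context co-occurrence matrix, a common transform in distributional models.
      import numpy as np

      # Rows = target words, columns = context words; counts are made up for illustration.
      counts = np.array([[12.0, 3.0, 0.0],
                         [ 2.0, 9.0, 1.0],
                         [ 0.0, 1.0, 7.0]])

      total = counts.sum()
      p_joint = counts / total                       # P(word, context)
      p_word = p_joint.sum(axis=1, keepdims=True)    # P(word)
      p_ctx = p_joint.sum(axis=0, keepdims=True)     # P(context)

      with np.errstate(divide="ignore", invalid="ignore"):
          pmi = np.log2(p_joint / (p_word * p_ctx))
      ppmi = np.where(np.isfinite(pmi) & (pmi > 0.0), pmi, 0.0)   # keep only positive PMI

      print(np.round(ppmi, 2))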

  4. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  5. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    The objective is to develop robust, efficient Newton iterative methods for general large-scale problems, well suited to discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods.
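
    A minimal sketch of the Newton-iterative (inexact Newton, or Newton-Krylov) idea described here pairs an outer Newton loop with an inner iterative linear solve. In the Python sketch below, each Newton step solves J(x) s = -F(x) approximately with GMRES using a matrix-free, finite-difference Jacobian-vector product; the test system, tolerances, and difference step are assumptions for illustration, not the authors' methods.

      # Hypothetical Newton-Krylov sketch: each Newton step solves J(x) s = -F(x)
      # approximately with GMRES, using a finite-difference Jacobian-vector product.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def F(x):
          """A small nonlinear test system (illustrative, not from the report)."""
          return np.array([x[0]**2 + x[1] - 3.0,
                           x[0] + x[1]**2 - 5.0])

      def newton_gmres(x0, tol=1e-8, max_newton=20, fd_eps=1e-7):
          x = np.asarray(x0, dtype=float)
          for _ in range(max_newton):
              Fx = F(x)
              if np.linalg.norm(Fx) < tol:
                  break
              # Matrix-free J(x) v approximated by (F(x + eps*v) - F(x)) / eps.
              def jac_vec(v):
                  return (F(x + fd_eps * v) - Fx) / fd_eps
              J = LinearOperator((x.size, x.size), matvec=jac_vec)
              step, info = gmres(J, -Fx)            # inner iterative linear solve
              if info != 0:
                  raise RuntimeError("GMRES did not converge")
              x = x + step
          return x

      print(newton_gmres([1.0, 1.0]))               # converges to x = (1, 2)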

  6. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  7. Large-scale search for dark-matter axions

    SciTech Connect

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J.; McBride, J.; Peng, H.; Rosenberg, L.J.; Xin, H.; Laveigne, J.; Sikivie, P.; Sullivan, N.S.; Tanner, D.B.; Moltz, D.M.; Powell, J.; Clarke, J.; Nezrick, F.A.; Turner, M.S.; Golubev, N.A.; Kravchuk, L.V.

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.
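
    A quick way to see what the cavity is tuned to: a halo axion of mass m_a converts into a photon of frequency f = m_a c^2 / h, so micro-eV axion masses correspond to GHz microwave resonances. The sketch below does that arithmetic; the sample masses are illustrative, not experimental values from this record.

      # Hypothetical conversion sketch: axion rest-mass energy <-> microwave photon frequency.
      # f = m_a c^2 / h; a few micro-eV corresponds to cavity resonances in the GHz range.
      PLANCK_H_EV_S = 4.135667696e-15      # Planck constant in eV*s

      def axion_mass_to_frequency_ghz(mass_uev):
          """Photon frequency (GHz) for an axion of rest-mass energy mass_uev micro-eV."""
          return mass_uev * 1e-6 / PLANCK_H_EV_S / 1e9

      for m in (2.0, 4.0, 10.0):           # illustrative masses in micro-eV
          print(f"{m:5.1f} micro-eV  ->  {axion_mass_to_frequency_ghz(m):6.2f} GHz")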

  8. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  9. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie and mark a "second generation" relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  10. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…